A Note on the Reward Function for PHD Filters with Sensor Control

Authors: Branko Ristic*; Ba-Ngu Vo; Daniel Clark
Source: IEEE Transactions on Aerospace and Electronic Systems, 2011, 47(2): 1521-1529.
DOI:10.1109/TAES.2011.5751278

Abstract

The context is sensor control for multi-object Bayes filtering in the framework of partially observed Markov decision processes (POMDPs). The current information state is represented by the multi-object probability density function (pdf), while the reward function associated with each sensor control (action) is the information gain measured by the alpha or Rényi divergence. Assuming that both the predicted and updated states can be represented by independent, identically distributed (IID) cluster random finite sets (RFSs) or, as a special case, Poisson RFSs, this work derives the analytic expressions of the corresponding Rényi-divergence-based information gains. An implementation of the Rényi divergence via the sequential Monte Carlo (SMC) method is presented. The performance of the proposed reward function is demonstrated by a numerical example, where a moving range-only sensor is controlled to estimate the number and the states of several moving objects using the PHD filter.

  • Publication date: 2011-04
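
The Poisson-RFS special case mentioned in the abstract admits a closed-form reward. As a rough illustration only (not the paper's derivation), the sketch below assumes the standard Rényi divergence between two Poisson RFSs with predicted intensity v0 and updated intensity v1, D_alpha = 1/(alpha-1) * ∫ [ v1(x)^alpha * v0(x)^(1-alpha) - alpha*v1(x) - (1-alpha)*v0(x) ] dx, and approximates the integral with particles that represent both intensities on a common support, as in an SMC-PHD filter. All names (renyi_reward, w_pred, w_upd) are illustrative, not taken from the paper.

import numpy as np

# Illustrative sketch: Renyi-divergence reward for a particle (SMC) PHD filter,
# assuming the Poisson-RFS special case and a common particle support for the
# predicted and updated intensities. Names are hypothetical, not from the paper.
def renyi_reward(w_pred, w_upd, alpha=0.5):
    """Approximate D_alpha between updated (v1) and predicted (v0) Poisson PHDs.

    Uses D_alpha = 1/(alpha-1) * integral( v1^alpha * v0^(1-alpha)
                                           - alpha*v1 - (1-alpha)*v0 ) dx,
    estimating the cross term as sum_i w_pred_i * (w_upd_i / w_pred_i)^alpha.
    """
    if alpha <= 0.0 or alpha == 1.0:
        raise ValueError("alpha must be positive and different from 1")
    ratio = np.where(w_pred > 0, w_upd / np.maximum(w_pred, 1e-300), 0.0)
    cross = np.sum(w_pred * ratio ** alpha)   # ~ integral of v1^a * v0^(1-a)
    n_upd = np.sum(w_upd)                     # expected cardinality, updated PHD
    n_pred = np.sum(w_pred)                   # expected cardinality, predicted PHD
    return (cross - alpha * n_upd - (1.0 - alpha) * n_pred) / (alpha - 1.0)

# Toy usage: score one hypothetical measurement-updated PHD against the prediction.
w_pred = np.full(1000, 2.0 / 1000)                  # predicted PHD, ~2 expected objects
w_upd = w_pred * np.random.gamma(2.0, 0.5, 1000)    # stand-in for PHD-updated weights
print(renyi_reward(w_pred, w_upd, alpha=0.5))

In the sensor-control setting described in the abstract, a reward of this kind would typically be evaluated for each candidate sensor action (averaged over predicted measurement sets) and the action with the largest expected information gain selected.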