Abstract

A recently proposed sparse coding based Fisher vector extends the traditional GMM based Fisher vector with a sparse term. Our experiments revealed that the addition of this sparse term alone significantly outperforms the GMM based Fisher vector, yielding improvements of almost 20% on small datasets (15-Scene, Caltech-10) and up to 5% on medium sized datasets (MIT-67). In the original work, the sparse coding based Fisher vector requires an off-the-shelf sparse coding solver. From a statistical perspective, such a solver may appear as a black box. A more elegant approach is to learn the sparse codes with a probabilistic model. We propose such a model, the sparse coding based GMM, which differs from the GMM by an additional hidden variable for the sparse coefficients. The prior on the sparse term is assumed Gaussian for tractability. Inference is performed by iteratively computing a set of closed-form updates obtained via a variational method. Experimental results on several well-cited datasets show that our probabilistic solver achieves learning performance on par with an off-the-shelf solver as far as the sparse coding based Fisher vector is concerned.
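For readers unfamiliar with the baseline the paper extends, the sketch below illustrates the standard GMM based Fisher vector encoding (gradient statistics with respect to the means and diagonal variances, followed by power and L2 normalisation). This is a minimal NumPy illustration of the conventional encoding, not the authors' sparse coding variant; the function name and the Gaussian parameters used in the demo are assumptions for illustration.

```python
import numpy as np

def fisher_vector(X, w, mu, sigma2):
    """GMM based Fisher vector of descriptors X (illustrative sketch).

    X: (N, D) local descriptors; w: (K,) mixture weights;
    mu: (K, D) component means; sigma2: (K, D) diagonal variances.
    Returns the concatenated mean- and variance-gradient vector, (2*K*D,).
    """
    N, _ = X.shape
    # Log-density of each descriptor under each diagonal Gaussian component.
    diff = X[:, None, :] - mu[None, :, :]                       # (N, K, D)
    log_p = (-0.5 * np.sum(diff**2 / sigma2, axis=2)
             - 0.5 * np.sum(np.log(2 * np.pi * sigma2), axis=1)
             + np.log(w))                                       # (N, K)
    # Soft-assignment posteriors gamma_{nk} (log-sum-exp for stability).
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Normalised gradient statistics w.r.t. means and variances.
    g_mu = (gamma[:, :, None] * diff / np.sqrt(sigma2)).sum(0) \
           / (N * np.sqrt(w)[:, None])
    g_sig = (gamma[:, :, None] * (diff**2 / sigma2 - 1.0)).sum(0) \
            / (N * np.sqrt(2.0 * w)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_sig.ravel()])
    # Standard post-processing: power normalisation, then L2 normalisation.
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

# Demo on synthetic descriptors with an assumed 2-component GMM.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
w = np.array([0.5, 0.5])
mu = rng.normal(size=(2, 8))
sigma2 = np.ones((2, 8))
fv = fisher_vector(X, w, mu, sigma2)
print(fv.shape)  # (32,) = 2 * K * D
```

The sparse coding variant studied in the paper replaces the soft posteriors with codes from a sparse coding solver; the probabilistic model proposed here learns those codes via closed-form variational updates instead of an off-the-shelf optimiser.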

  • Publication date: 2017-1
  • Affiliation: Nanyang Institute of Technology (南阳理工学院)