Abstract

The sparse coding hypothesis has generated significant interest in the computational and theoretical neuroscience communities, but there remain open questions about the exact quantitative form of the sparsity penalty and the implementation of such a coding rule in neurally plausible architectures. The main contribution of this work is to show that a wide variety of sparsity-based probabilistic inference problems proposed in the signal processing and statistics literatures can be implemented exactly in the common network architecture known as the locally competitive algorithm (LCA). Among the cost functions we examine are approximate ℓp norms (0 ≤ p ≤ 2), modified ℓp norms, block-ℓ1 norms, and reweighted algorithms. Of particular interest is that we show significantly increased performance in reweighted ℓ1 algorithms by inferring all parameters jointly in a dynamical system rather than using an iterative approach native to digital computational architectures.
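To make the LCA concrete, here is a minimal numerical sketch of its ℓ1 (soft-threshold) special case, assuming the standard LCA dynamics τu̇ = b − u − (ΦᵀΦ − I)a with output a = T_λ(u); the function names, step size, and test dictionary below are illustrative choices, not the paper's, and the other penalties discussed in the abstract would correspond to swapping in different thresholding nonlinearities.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-thresholding nonlinearity T_lam (the l1 case).
    Other sparsity penalties correspond to other threshold functions."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_l1(Phi, y, lam=0.1, dt=0.05, n_steps=500):
    """Minimal LCA sketch for min_a 0.5*||y - Phi a||_2^2 + lam*||a||_1."""
    N = Phi.shape[1]
    u = np.zeros(N)                  # internal (membrane) states
    b = Phi.T @ y                    # feedforward driving input
    G = Phi.T @ Phi - np.eye(N)      # lateral inhibition weights
    for _ in range(n_steps):
        a = soft_threshold(u, lam)   # currently active coefficients
        u += dt * (b - u - G @ a)    # Euler step of the LCA dynamics
    return soft_threshold(u, lam)

# Usage: recover a 5-sparse signal from an overcomplete dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)   # unit-norm dictionary columns
a_true = np.zeros(128)
a_true[rng.choice(128, 5, replace=False)] = 1.0
y = Phi @ a_true
a_hat = lca_l1(Phi, y, lam=0.05)
print("nonzeros recovered:", np.count_nonzero(np.abs(a_hat) > 1e-3))
```

In the reweighted ℓ1 setting highlighted in the abstract, the per-coefficient thresholds λ would themselves evolve as additional state variables of the dynamical system rather than being updated in an outer iterative loop, which is the joint-inference scheme the abstract credits for the performance gain.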

  • Publication date: 2012-12