Sparse coding models can exhibit decreasing sparseness while learning sparse codes for natural images.

Zylberberg J, DeWeese MR - PLoS Comput. Biol. (2013)

Bottom Line: Intuitively, this is expected to result in sparser network activity over time. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.

Affiliation: Department of Physics, University of California, Berkeley, Berkeley, California, United States of America.

ABSTRACT
The sparse coding hypothesis has enjoyed much success in predicting response properties of simple cells in primary visual cortex (V1) based solely on the statistics of natural scenes. In typical sparse coding models, model neuron activities and receptive fields are optimized to accurately represent input stimuli using the least amount of neural activity. As these networks develop to represent a given class of stimulus, the receptive fields are refined so that they capture the most important stimulus features. Intuitively, this is expected to result in sparser network activity over time. Recent experiments, however, show that stimulus-evoked activity in ferret V1 becomes less sparse during development, presenting an apparent challenge to the sparse coding hypothesis. Here we demonstrate that some sparse coding models, such as those employing homeostatic mechanisms on neural firing rates, can exhibit decreasing sparseness during learning, while still achieving good agreement with mature V1 receptive field shapes and a reasonably sparse mature network state. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.
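
The "accurate representation with the least activity" trade-off at the heart of such models can be made concrete with the standard sparse coding cost function, E = 0.5*||x - Phi*a||^2 + lambda * sum_i |a_i|, in which a stimulus x is reconstructed from basis functions Phi weighted by sparse coefficients a. The Python sketch below is only an illustration of that objective, with ISTA-style inference and a plain gradient step on the bases; the dimensions, learning rate, and sparsity penalty are arbitrary choices for illustration, not the parameters used by SparseNet or SAILnet.

# Minimal sparse-coding sketch (NumPy): minimize
#   E = 0.5*||x - Phi @ a||^2 + lam * sum(|a|)
# over coefficients a (inference) and basis functions Phi (learning).
import numpy as np

rng = np.random.default_rng(0)
patch_dim, n_units, n_patches = 64, 128, 500       # e.g. 8x8 pixel patches; sizes are illustrative
X = rng.standard_normal((patch_dim, n_patches))    # stand-in for whitened natural-image patches
Phi = rng.standard_normal((patch_dim, n_units))
Phi /= np.linalg.norm(Phi, axis=0)                 # unit-norm basis functions
lam, eta = 0.1, 0.02                               # sparsity penalty and learning rate (arbitrary)

def infer(x, Phi, lam, n_steps=100):
    """ISTA: iterative soft-thresholding to find sparse coefficients a."""
    L = np.linalg.norm(Phi, 2) ** 2                # Lipschitz constant of the smooth term
    a = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        a -= (Phi.T @ (Phi @ a - x)) / L           # gradient step on the reconstruction term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold (sparsity)
    return a

for t in range(200):                               # dictionary-learning sweeps
    x = X[:, rng.integers(n_patches)]
    a = infer(x, Phi, lam)
    Phi += eta * np.outer(x - Phi @ a, a)          # gradient step on the reconstruction error
    Phi /= np.maximum(np.linalg.norm(Phi, axis=0), 1e-12)  # keep basis functions unit norm

A homeostatic variant such as SAILnet additionally adjusts each unit's firing threshold so that its firing rate stays near a target value; this firing-rate homeostasis is the kind of mechanism the paper identifies as capable of producing decreasing sparseness during learning.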

pcbi-1003182-g005: SparseNet can also display either increasing or decreasing sparseness during learning. To check that our conclusions apply to other models besides SAILnet, we performed simulations with the publicly available SparseNet code of Olshausen and Field [3], [21]. (A) When the basis functions are initialized with large-amplitude white noise (see text for details), the sparseness increases over time, contrary to the ferret data shown in Fig. 1. (B) However, when the bases are initialized with small-amplitude white noise, the sparseness decreases over time.

Mentions: We begin by initializing these basis functions with Gaussian white noise of variance , so that the bases have norms of approximately . In this case, sparseness increases over time (Fig. 5a) and the basis amplitudes decrease: the mean norm of these bases is approximately 0.5 once the model converges, after the training period.
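
The panels described above compare runs that differ only in the amplitude of the white-noise initialization of the basis functions and in how population sparseness evolves afterwards. The sketch below only illustrates the two quantities being tracked, the basis-function norm and a population-sparseness index; the Treves-Rolls/Vinje-Gallant form of the index and the two initialization scales are assumptions for illustration, which may differ from the measure and values used in the paper, and the sketch does not reproduce the learning dynamics themselves.

# Sketch of the two quantities the figure tracks: the norm of white-noise-initialized
# basis functions and a population-sparseness index of the network responses.
import numpy as np

rng = np.random.default_rng(1)

def population_sparseness(a, eps=1e-12):
    """Treves-Rolls / Vinje-Gallant-style index: near 0 when activity is spread
    evenly across units, approaching 1 when a few units carry most of it."""
    a = np.abs(np.asarray(a, dtype=float))
    n = a.size
    return (1.0 - a.mean() ** 2 / ((a ** 2).mean() + eps)) / (1.0 - 1.0 / n)

patch_dim, n_units = 64, 128                       # illustrative sizes

for label, scale in (("large-amplitude", 1.0), ("small-amplitude", 0.01)):
    # White-noise initialization of the bases; the scales are stand-ins, not the paper's values.
    Phi = scale * rng.standard_normal((patch_dim, n_units))
    print(f"{label} init: mean basis norm = {np.linalg.norm(Phi, axis=0).mean():.3f}")

# Over learning, sparseness would then be tracked by averaging the index over many stimuli:
responses = np.abs(rng.standard_normal((200, n_units)))   # placeholder response matrix
print(f"mean population sparseness: {np.mean([population_sparseness(r) for r in responses]):.3f}")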

