Sparse coding models can exhibit decreasing sparseness while learning sparse codes for natural images.

Zylberberg J, DeWeese MR - PLoS Comput. Biol. (2013)

Bottom Line: Intuitively, this is expected to result in sparser network activity over time. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.


Affiliation: Department of Physics, University of California, Berkeley, Berkeley, California, United States of America.

ABSTRACT
The sparse coding hypothesis has enjoyed much success in predicting response properties of simple cells in primary visual cortex (V1) based solely on the statistics of natural scenes. In typical sparse coding models, model neuron activities and receptive fields are optimized to accurately represent input stimuli using the least amount of neural activity. As these networks develop to represent a given class of stimulus, the receptive fields are refined so that they capture the most important stimulus features. Intuitively, this is expected to result in sparser network activity over time. Recent experiments, however, show that stimulus-evoked activity in ferret V1 becomes less sparse during development, presenting an apparent challenge to the sparse coding hypothesis. Here we demonstrate that some sparse coding models, such as those employing homeostatic mechanisms on neural firing rates, can exhibit decreasing sparseness during learning, while still achieving good agreement with mature V1 receptive field shapes and a reasonably sparse mature network state. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.
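For readers less familiar with the framework, a typical sparse coding model of the kind described above trades reconstruction fidelity against total neural activity. Below is a minimal sketch of a generic Olshausen-and-Field-style objective in Python; it is not the authors' exact model, and the variable names and the penalty weight lam are illustrative assumptions.

import numpy as np

def sparse_coding_energy(x, Phi, a, lam=0.1):
    """Reconstruction error plus an L1 penalty on neural activities.

    x   : (n_pixels,) whitened image patch
    Phi : (n_pixels, n_neurons) dictionary of receptive fields
    a   : (n_neurons,) neural activities
    lam : fidelity/sparseness trade-off (assumed value)
    """
    reconstruction_error = np.sum((x - Phi @ a) ** 2)
    sparseness_penalty = lam * np.sum(np.abs(a))
    return reconstruction_error + sparseness_penalty

Learning then alternates between inferring the activities a that minimize this energy for each patch and updating the dictionary Phi, which is how the receptive fields become refined over training.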


Figure 4 (pcbi-1003182-g004): For less sparse initial conditions, SAILnet multi-unit sparseness measures increase during training. A SAILnet simulation was performed in which the RFs were initially randomized, and the recurrent inhibitory connection strengths and firing thresholds were initialized with random numbers that were smaller than for the simulation described in Fig. 3 (see Methods section for details). (A) The initial RFs are shown for 196 randomly selected model neurons. As in Fig. 3, each box on the grid depicts the RF of one neuron, with lighter tones corresponding to positive pixel values and darker tones corresponding to negative values. (B) After training with natural images, these same SAILnet neurons have oriented, localized RFs. (C) All three of our multi-unit sparseness measures increase during the training period. Aside from the initial conditions, the network used to generate these data was identical to the one from Fig. 3: both networks have the same learning rates, the same number of neurons, the same target mean firing rate, and are trained on the same database of whitened natural images.
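The three multi-unit sparseness measures themselves are not reproduced on this page. As an illustrative stand-in (not necessarily one of the paper's three measures), the Treves-Rolls/Vinje-Gallant population sparseness index is a common choice in this literature:

import numpy as np

def population_sparseness(rates):
    """Treves-Rolls/Vinje-Gallant sparseness of a population response.

    rates : (n_neurons,) nonnegative firing rates for one stimulus;
            assumes more than one neuron.
    Returns a value in [0, 1]: 1 when a single unit is active
    (maximally sparse), 0 when all units are equally active.
    """
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    sq_mean = np.mean(rates ** 2)
    if sq_mean == 0.0:
        return 0.0  # silent population; treat as zero sparseness
    return (1.0 - rates.mean() ** 2 / sq_mean) / (1.0 - 1.0 / n)

Tracking an index of this kind across training epochs is what panel (C) plots: for the low-threshold, weakly inhibited initial conditions shown here it rises, whereas in the Fig. 3 simulation it falls.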

Mentions: In this case, the relatively low initial firing thresholds and the relatively weak lateral inhibition leave the initial network state less sparse than the final (equilibrium) state, so sparseness increases over time (Fig. 4).
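This homeostatic picture can be made concrete. In SAILnet (following Zylberberg, Murphy, and DeWeese, 2011), each neuron's firing threshold and its inhibitory connections adapt until the neuron fires near a target rate, so the equilibrium sparseness is set by that target rather than by the initial conditions. The sketch below shows homeostatic update rules of this style; the learning rates gamma and beta and the function name are assumptions for illustration, not the paper's exact parameter values.

import numpy as np

def homeostatic_updates(theta, W, spike_counts, p, gamma=0.01, beta=0.001):
    """One homeostatic update of thresholds and lateral inhibition.

    theta        : (n,) firing thresholds
    W            : (n, n) recurrent inhibitory weights
    spike_counts : (n,) spike counts for the current stimulus
    p            : target mean spike count per stimulus
    """
    n_i = np.asarray(spike_counts, dtype=float)
    # Thresholds rise for units firing above target and fall otherwise,
    # pushing every unit toward the target rate p.
    theta = theta + gamma * (n_i - p)
    # Inhibition strengthens between units that fire together more often
    # than independent units firing at rate p would.
    W = W + beta * (np.outer(n_i, n_i) - p ** 2)
    np.fill_diagonal(W, 0.0)   # no self-inhibition
    W = np.maximum(W, 0.0)     # inhibitory weights stay nonnegative
    return theta, W

Because these rules drive the network toward the same equilibrium from either direction, a network initialized less sparse than that equilibrium (low thresholds, weak inhibition, as here) grows sparser during training, while one initialized sparser than the equilibrium, as in Fig. 3, grows less sparse.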

