Sparse coding models can exhibit decreasing sparseness while learning sparse codes for natural images.

Zylberberg J, DeWeese MR - PLoS Comput. Biol. (2013)

Bottom Line: Intuitively, this is expected to result in sparser network activity over time. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.


Affiliation: Department of Physics, University of California, Berkeley, Berkeley, California, United States of America.

ABSTRACT
The sparse coding hypothesis has enjoyed much success in predicting response properties of simple cells in primary visual cortex (V1) based solely on the statistics of natural scenes. In typical sparse coding models, model neuron activities and receptive fields are optimized to accurately represent input stimuli using the least amount of neural activity. As these networks develop to represent a given class of stimulus, the receptive fields are refined so that they capture the most important stimulus features. Intuitively, this is expected to result in sparser network activity over time. Recent experiments, however, show that stimulus-evoked activity in ferret V1 becomes less sparse during development, presenting an apparent challenge to the sparse coding hypothesis. Here we demonstrate that some sparse coding models, such as those employing homeostatic mechanisms on neural firing rates, can exhibit decreasing sparseness during learning, while still achieving good agreement with mature V1 receptive field shapes and a reasonably sparse mature network state. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.
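The abstract's description of "accurately representing input stimuli using the least amount of neural activity" is usually formalized as an energy function trading off reconstruction error against a sparseness penalty on activities. The sketch below, in the style of Olshausen and Field, is a hedged illustration; the variable names, the L1 penalty, and the dimensions are assumptions, not the authors' exact model.

```python
import numpy as np

def sparse_coding_energy(x, phi, a, lam=0.1):
    """Energy = reconstruction error + sparseness penalty on activities a.

    x   : flattened image patch (n_pixels,)
    phi : receptive fields as columns (n_pixels, n_neurons)
    a   : neural activities (n_neurons,)
    lam : weight of the sparseness penalty
    """
    reconstruction = phi @ a                # image estimate from active RFs
    error = np.sum((x - reconstruction) ** 2)
    penalty = lam * np.sum(np.abs(a))       # L1 penalty favors few active neurons
    return error + penalty

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                 # a 64-pixel patch (flattened)
phi = rng.standard_normal((64, 128))        # 128 overcomplete receptive fields
a_dense = rng.standard_normal(128)

# With an all-zero code there is no penalty, but reconstruction is poor:
# the energy reduces to ||x||^2.
assert np.isclose(sparse_coding_energy(x, phi, np.zeros(128)), np.sum(x ** 2))
```

Learning then minimizes this energy over both the activities and the receptive fields, which is what drives the RF refinement described in the abstract.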



pcbi-1003182-g001: V1 developmental data appear to challenge the canonical sparse coding models. Multi-unit activity in primary visual cortex (V1) of awake young ferrets watching natural movies shows decreasing sparseness over time. The sparseness metrics shown in this figure are defined in the results section of this paper, and the data are courtesy of Pietro Berkes [14], [15]. The plot has a logarithmic horizontal axis. By contrast, one expects that, in sparse coding models, the sparseness should increase over time. This point was emphasized in recent work [14]. In this paper, we show that, in sparse coding models, sparseness can actually decrease during the learning process, so the data shown here cannot rule out sparse coding as a theory of sensory coding.
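The exact sparseness metrics are defined in the paper's results section; as a hedged stand-in, the snippet below computes a widely used population sparseness index in the Treves-Rolls style. It is an illustrative assumption, not necessarily the metric plotted in the figure.

```python
import numpy as np

def activity_sparseness(rates):
    """Treves-Rolls-style index: 1 - mean(r)^2 / mean(r^2).

    Near 1 when only a few units are active (sparse),
    near 0 when all units fire equally (dense).
    """
    r = np.asarray(rates, dtype=float)
    return 1.0 - (r.mean() ** 2) / np.mean(r ** 2)

dense = np.ones(100)                        # all units equally active
sparse = np.zeros(100)
sparse[0] = 1.0                             # a single active unit
print(activity_sparseness(dense))           # → 0.0
print(activity_sparseness(sparse))          # close to 1
```

Under an index like this, the ferret data in the figure would show the value falling with age, which is the trend the paper sets out to reconcile with sparse coding.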



Mentions: In simulating the development of a sparse coding model, one typically [1], [3] initializes the receptive fields (RFs) with random white noise, so as not to bias the shapes of the RFs learned by the network, and then presents the network with natural images, in response to which the RFs are modified. As the model (e.g., [1], [3], [13]) adapts to the stimuli, neurons gradually learn features that allow for a better encoding of the stimuli, so the sparseness is expected to increase over time. This point was emphasized in recent work [14]. Physiology experiments, however, show something different in the developing visual cortex. Recently, Berkes and colleagues measured multi-unit V1 activity in awake young ferrets viewing natural movies and found that, as the animals matured, their stimulus-driven V1 activity became less sparse [14], [15] (Fig. 1).
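The development simulation just described can be sketched as a simple loop: white-noise initial RFs, sparse inference on each stimulus, then a small RF update. The inference step (a few iterative soft-thresholding steps) and the Hebbian-like update below are illustrative assumptions, not the specific learning rules of refs. [1], [3], or [13].

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_neurons, lam, eta = 64, 128, 0.1, 0.01

phi = rng.standard_normal((n_pixels, n_neurons))   # white-noise initial RFs
phi /= np.linalg.norm(phi, axis=0)                 # unit-norm columns

def infer(x, phi, n_steps=50, step=0.05):
    """Find sparse activities a by iterative soft-thresholding (ISTA-style)."""
    a = np.zeros(phi.shape[1])
    for _ in range(n_steps):
        grad = phi.T @ (phi @ a - x)               # gradient of reconstruction error
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # shrink to zero
    return a

for _ in range(100):                               # "development": present stimuli
    x = rng.standard_normal(n_pixels)              # stand-in for a natural-image patch
    a = infer(x, phi)
    phi += eta * np.outer(x - phi @ a, a)          # nudge RFs to reduce error
    phi /= np.linalg.norm(phi, axis=0)             # keep RF norms fixed
```

With real whitened natural-image patches instead of the random stand-in, loops of this shape are what produce the localized, oriented RFs that sparse coding models are known for.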

