Developmental self-construction and -configuration of functional neocortical neuronal networks.

Bauer R, Zubler F, Pfister S, Hauri A, Pfeiffer M, Muir DR, Douglas RJ - PLoS Comput. Biol. (2014)

Bottom Line: Here we make some steps toward such understanding by demonstrating through detailed simulations how a competitive co-operative ('winner-take-all', WTA) network architecture can arise by development from a single precursor cell. This precursor is granted a simplified gene regulatory network that directs cell mitosis, differentiation, migration, neurite outgrowth and synaptogenesis. We demonstrate how this autonomous genetically directed developmental sequence can give rise to self-calibrated WTA networks, and compare our simulation results with biological data.

View Article: PubMed Central - PubMed

Affiliation: Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland; School of Computing Science, Newcastle University, Newcastle upon Tyne, United Kingdom.

ABSTRACT
The prenatal development of neural circuits must provide sufficient configuration to support at least a set of core postnatal behaviors. Although knowledge of various genetic and cellular aspects of development is accumulating rapidly, there is less systematic understanding of how these various processes play together in order to construct such functional networks. Here we make some steps toward such understanding by demonstrating through detailed simulations how a competitive co-operative ('winner-take-all', WTA) network architecture can arise by development from a single precursor cell. This precursor is granted a simplified gene regulatory network that directs cell mitosis, differentiation, migration, neurite outgrowth and synaptogenesis. Once initial axonal connection patterns are established, their synaptic weights undergo homeostatic unsupervised learning that is shaped by wave-like input patterns. We demonstrate how this autonomous genetically directed developmental sequence can give rise to self-calibrated WTA networks, and compare our simulation results with biological data.
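The "wave-like input patterns" mentioned above are, as described for Fig. 6, continuously and periodically moving hills of activity over the input population. The following is a minimal sketch of such a stimulus; the array size, hill width, drift speed and amplitude are illustrative choices for this sketch, not values taken from the paper's simulations.

import numpy as np

# Illustrative stimulus generator: a hill of activity that drifts periodically
# around a 1-D array of input neurons (wrap-around boundary assumed here).
n_inputs = 100     # number of input neurons (illustrative)
width    = 6.0     # hill width, in units of neuron index
speed    = 0.5     # neurons traversed per time step
idx      = np.arange(n_inputs)

def moving_hill(t, amplitude=1.0):
    """Input pattern at time step t: a Gaussian hill whose centre drifts around the array."""
    centre = (speed * t) % n_inputs
    d = np.minimum(np.abs(idx - centre), n_inputs - np.abs(idx - centre))
    return amplitude * np.exp(-d**2 / (2 * width**2))

# One input pattern per time step; each row would be presented to the network in turn.
stimuli = np.stack([moving_hill(t) for t in range(1000)])
print(stimuli.shape)   # (1000, 100)

Patterns of this kind carry the neighborhood structure that, according to the results below, the unsupervised learning phase imprints onto the recurrent weights.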


pcbi-1003994-g006: Winner-take-all functionality. (A) Weight matrix of 117 excitatory neurons in a WTA network. After learning, the network exhibits a 1-dimensional neighborhood topology, as shown by the predominantly strong weights around the diagonal. This topology mirrors the neighborhood relationship of the input stimuli, which are continuously and periodically moving hills of activity. Only the excitatory connections are shown here, because the inhibitory neurons do not integrate into the neighborhood topology (see text). (B) Demonstration of WTA functionality on the network connectivity shown in (A). Neurons are ordered here such that adjacent neurons connect most strongly. The input to the network (top row) has a hill shape, with added noise. The network response (middle row) is a de-noised version of the input, with the bump in the same location. The neuronal gain (bottom row) is high for neurons receiving the strongest input, and low (or zero) for neurons distant from the main input to the network. The dashed horizontal line indicates a gain of 1. (C) Activity of a winning neuron (blue, solid) during presentation of its feedforward input (blue, dashed), in the same simulation as shown in (B). Recurrent connectivity amplifies the response of the neuron for the duration of the stimulus. In contrast, a losing neuron (green, solid) receives non-zero feedforward input (green, dashed), but is suppressed due to the WTA functionality of the network. (D) Response of the same network to a different feedforward input. The recovery of a bump-shaped activity can occur anywhere in the network topology.
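As a rough, self-contained illustration of the behaviour described in panels (B)-(D), the sketch below simulates a rate-based network with local recurrent excitation and a shared inhibitory feedback, driven by a static noisy hill of feedforward input. The network size (117) matches panel (A); the ring topology, Gaussian excitatory kernel, rectified-linear units and all weights and time constants are assumptions of this sketch, not parameters of the paper's developmental simulations, in which the recurrent weights are grown and learned rather than set by hand.

import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters (chosen for this sketch, not taken from the paper) ---
N     = 117        # number of excitatory neurons, matching Fig. 6A
tau   = 10.0       # rate time constant (arbitrary units)
dt    = 1.0        # integration step
w_exc = 0.9        # total strength of local recurrent excitation per neuron
w_inh = 0.6        # strength of the shared inhibitory feedback

idx  = np.arange(N)
dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                  N - np.abs(idx[:, None] - idx[None, :]))      # wrap-around distance
W = np.exp(-dist**2 / (2 * 3.0**2))                             # local excitatory kernel
np.fill_diagonal(W, 0.0)
W = w_exc * W / W.sum(axis=1, keepdims=True)                    # each row sums to w_exc

def hill_input(centre, width=6.0, amplitude=1.0, noise=0.2):
    """Noisy hill ('bump') of feedforward input centred on one neuron (cf. panel B)."""
    d = np.minimum(np.abs(idx - centre), N - np.abs(idx - centre))
    return amplitude * np.exp(-d**2 / (2 * width**2)) + noise * rng.random(N)

def relax(ff, steps=500):
    """Run the rate dynamics to steady state under a fixed feedforward input."""
    r = np.zeros(N)
    for _ in range(steps):
        drive = ff + W @ r - w_inh * r.mean()                   # local excitation minus shared inhibition
        r += dt / tau * (-r + np.maximum(drive, 0.0))           # rectified-linear rate units
    return r

ff = hill_input(centre=40)
r  = relax(ff)
gain = r / np.maximum(ff, 1e-9)
print("peak response at neuron", int(r.argmax()))
print("gain under the bump: %.1f   gain far from the bump: %.2f"
      % (gain[40], gain[(40 + N // 2) % N]))

In this reduced picture, the "gain" of panel (B) corresponds to the ratio of steady-state response to feedforward input: neurons under the bump are amplified above 1 by recurrent excitation, while the shared inhibition drives the response, and hence the gain, of distant neurons to zero, as for the losing neuron in panel (C).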

Mentions: As a consequence of the synaptic learning in the second developmental phase, the network learns the topology of its inputs. Neurons that are excited by a common input become more strongly connected with one another. Because of the competition inherent to the BCM learning rule, excitatory neurons become progressively more strongly connected to only particular input neurons (those which evoke their strongest response), while decreasing their affinity to the others. Fig. 6A shows that the final functional connectivity of excitatory neurons indeed exhibits a strong neighborhood relationship: the connection weights are strongest around the diagonal, so that neurons which are neighbors in the input topology end up close to one another in weight space, while non-neighbors end up distant. This connection topology reflects the (1-dimensional) topology of the input patterns.
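The competition attributed here to the BCM rule can be illustrated with a minimal single-neuron sketch. The update below uses the standard Bienenstock-Cooper-Munro form, with a modification threshold that slides toward the running average of the squared response; the two disjoint input groups, the learning constants and the stimulus amplitudes are illustrative assumptions, not the wave-like stimuli or parameters used in the paper.

import numpy as np

rng = np.random.default_rng(1)

# Two disjoint groups of input neurons; group A drives the cell slightly more
# strongly than group B (illustrative stimuli, not the paper's wave input).
n_in      = 20
pattern_A = np.r_[np.ones(10), np.zeros(10)] * 1.0
pattern_B = np.r_[np.zeros(10), np.ones(10)] * 0.8

w     = rng.uniform(0.08, 0.12, n_in)    # initial synaptic weights
theta = 0.5                              # sliding modification threshold
eta, tau_theta = 1e-3, 100.0             # learning rate, threshold time constant

for step in range(20000):
    x = pattern_A if rng.random() < 0.5 else pattern_B
    x = x + 0.05 * rng.standard_normal(n_in)        # input noise
    y = max(float(w @ x), 0.0)                      # rectified postsynaptic rate

    # BCM: potentiate when y > theta, depress when y < theta; the threshold
    # slides with the running average of y**2, which enforces competition.
    w += eta * x * y * (y - theta)
    w  = np.clip(w, 0.0, None)
    theta += (y**2 - theta) / tau_theta

print("mean weight from group A: %.2f" % w[:10].mean())
print("mean weight from group B: %.2f" % w[10:].mean())

Because the threshold rises with the neuron's overall activity, only the input group that evokes the strongest response stays above it; weights from the other group fall below threshold and decay toward zero, which is the "decreasing their affinity to the others" described above. Applied across the population, this is what drives each excitatory neuron to connect strongly only to its preferred neighborhood of inputs.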

