A neural mechanism for background information-gated learning based on axonal-dendritic overlaps.

Mainetti M, Ascoli GA - PLoS Comput. Biol. (2015)

Bottom Line: The simplest instantiation encodes each concept by single neurons. Results are then generalized to cell assemblies. The proposed mechanism results in learning real associations better than spurious co-occurrences, providing definitive cognitive advantages.


Affiliation: Krasnow Institute for Advanced Study, George Mason University, Fairfax, Virginia, United States of America.

ABSTRACT
Experiencing certain events triggers the acquisition of new memories. Although necessary, however, actual experience is not sufficient for memory formation. One-trial learning is also gated by knowledge of appropriate background information to make sense of the experienced occurrence. Strong neurobiological evidence suggests that long-term memory storage involves formation of new synapses. On the short time scale, this form of structural plasticity requires that the axon of the pre-synaptic neuron be physically proximal to the dendrite of the post-synaptic neuron. We surmise that such "axonal-dendritic overlap" (ADO) constitutes the neural correlate of background information-gated (BIG) learning. The hypothesis is based on a fundamental neuroanatomical constraint: an axon must pass close to the dendrites that are near other neurons it contacts. The topographic organization of the mammalian cortex ensures that nearby neurons encode related information. Using neural network simulations, we demonstrate that ADO is a suitable mechanism for BIG learning. We model knowledge as associations between terms, concepts or indivisible units of thought via directed graphs. The simplest instantiation encodes each concept by single neurons. Results are then generalized to cell assemblies. The proposed mechanism results in learning real associations better than spurious co-occurrences, providing definitive cognitive advantages.
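The gating rule stated above admits a simple single-neuron reading: a new association from neuron a to neuron b can be stored only if a already contacts some neuron c whose physical neighbours (which, by topographic organization, encode information related to c) include b. The following Python sketch illustrates that reading; the dictionaries known and related, the function name ado_learnable, and the toy concepts are illustrative assumptions, not the authors' implementation.

```python
def ado_learnable(known, related, a, b):
    """BIG ADO gate, single-neuron reading of the abstract (sketch).

    known   : dict mapping each neuron to the set of neurons it already
              synapses onto (the background knowledge graph).
    related : dict mapping each neuron to the set of physically nearby
              neurons, i.e. neurons encoding related information.
    Returns True if a's axon plausibly overlaps b's dendrite: a must
    already contact some neuron c located near b.
    """
    return any(b in related.get(c, set()) for c in known.get(a, set()))

# Toy example: 'dog' already links to 'bark', and 'bark' is related to
# 'bite'; so one-trial learning of dog -> bite is gated open, while
# dog -> algebra is not.
known = {"dog": {"bark"}}
related = {"bark": {"bite", "howl"}}
print(ado_learnable(known, related, "dog", "bite"))     # True
print(ado_learnable(known, related, "dog", "algebra"))  # False
```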


Figure 4 (pcbi.1004155.g004). Generalization of ADO to biologically realistic mechanisms.

A. BIG learning with cell assemblies in small-world graphs of different connectivity: ratios between the percentages of associations learned in the novice vs. expert domain (bottom surface) and for spurious vs. real co-occurrences (top surface) with varying graph degrees and rewiring probabilities, using a cell assembly representation of Watts-Strogatz graphs. Lower rewiring probabilities and, to some extent, higher degrees improve the ability to discriminate real from spurious co-occurrences. These conditions correspond to highly clustered (as opposed to fully random) graphs. The ability to learn new associations within the domain of expertise remains more than double that in a novice domain.

B. Robustness of the BIG ADO mechanism: ratios between the percentages of associations learned in the novice vs. expert domain with a cell assembly representation of Watts-Strogatz graphs when varying (typically one at a time) several model parameters. The full ordinate scale is used to allow comparison with panel C, but the same data are also expanded in the inset to emphasize the invariance of the results (error bars: standard deviation). All parameter values are reported in the table legend below the plot (default values in bold). The parameters and their abbreviations are:

- N: number of nodes in the Watts-Strogatz graph; varying N also changes the graph degree d (kept at 2% of N) and the number of pre-training associations (N×d/4, i.e., one half of the pool of available associations);
- Nn: number of neurons in the network; S: cell assembly size; Nn was also varied together with S (SNn) so as to keep their ratio constant at 200;
- AT: activation threshold, the fraction of neurons in the cell assembly that must be synchronously active in order to "identify" the graph node represented by that assembly;
- FT: firing threshold, the proportion of presynaptic neurons required to fire in order to activate a postsynaptic neuron;
- ML: matrix load, the constant fraction of presynaptic neurons connected to each postsynaptic neuron in the cell assembly learning model;
- PL: proximity load, the (top) fraction of axonal-dendritic overlaps throughout the network that are considered potential synapses (see also S1 Text 2.4).

C. Optimal conditions for one-trial learning of real but not spurious associations: ratios between the percentages of associations learned for spurious vs. real co-occurrences with a cell assembly representation of Watts-Strogatz graphs when varying the same model parameters as in panel B. The most tunable parameters are the firing threshold (neuronal excitability) and the proximity load (strength of the BIG ADO filter; see S1 Text 2.4).
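To make the simulation setup concrete, here is a minimal Python sketch (using the networkx library) of the knowledge-graph construction described in the caption. The specific values N = 1000 and p = 0.1 are illustrative assumptions; the caption fixes only the relations d = 2% of N and N×d/4 pre-training associations (half of the N×d/2 available edges).

```python
import random
import networkx as nx

# Illustrative defaults: the caption specifies only the relations
# d = 2% of N and #pre-training associations = N*d/4.
N = 1000                  # nodes (concepts) in the knowledge graph
d = int(0.02 * N)         # graph degree, kept at 2% of N
p = 0.1                   # rewiring probability (varied in panel A)

# Watts-Strogatz small-world graph: a ring lattice of degree d whose
# edges are rewired with probability p (low p = highly clustered).
G = nx.watts_strogatz_graph(N, d, p, seed=42)

# The pool of available associations is the N*d/2 edges; half of them
# (N*d/4) serve as pre-training, the rest as candidate new associations.
edges = list(G.edges())
random.seed(42)
random.shuffle(edges)
pretrain, novel = edges[: N * d // 4], edges[N * d // 4:]
print(f"{len(pretrain)} pre-training / {len(novel)} novel associations")
```

Panel A corresponds to sweeping p and d over such graphs; the low-p, highly clustered regime is where real and spurious co-occurrences are best discriminated.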

Mentions: Neural network simulations with realistic cell assemblies (Fig. 4) implemented the Zip Net model [20], a computational enhancement of classic Associative Nets [21] that ensures optimal Bayesian learning [22]. Briefly, learning the association between two concepts A and B, represented respectively by neurons a1, a2, …, aS and b1, b2, …, bS, entails strengthening (or forming) synapses between co-active neurons and weakening or eliminating those between active and inactive neurons. Specifically, in the "incidence" matrix M, with rows and columns respectively representing pre- and post-synaptic neurons, the entries in the bj columns of all ai rows are increased, while the remaining entries are decreased by an appropriate amount to keep the total synaptic input constant (S1 Text 2.3).
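As a concrete illustration of this update, here is a minimal numpy sketch. The function name associate, the step size eps, and the uniform spreading of the decrement over inactive presynaptic rows are assumptions made for the example; the text specifies only that increments in the active rows of each bj column are offset so that each postsynaptic neuron's total synaptic input stays constant.

```python
import numpy as np

def associate(M, pre_idx, post_idx, eps=0.1):
    """One-trial Zip-Net-style update (illustrative sketch).

    M        : (N_pre, N_post) incidence matrix; rows are presynaptic,
               columns postsynaptic neurons.
    pre_idx  : indices of the active presynaptic assembly (concept A).
    post_idx : indices of the active postsynaptic assembly (concept B).
    eps      : potentiation step; an assumed parameter, not from the paper.
    """
    M = M.copy()
    inactive = np.setdiff1d(np.arange(M.shape[0]), pre_idx)
    for j in post_idx:
        # Strengthen synapses from every active presynaptic neuron
        # (the ai rows) onto the active postsynaptic neuron bj ...
        M[pre_idx, j] += eps
        # ... and weaken synapses from inactive presynaptic neurons by a
        # matching total, so the column sum (total synaptic input to bj)
        # is unchanged.
        M[inactive, j] -= eps * len(pre_idx) / len(inactive)
    return M

# Toy check that the update conserves each neuron's total synaptic input.
rng = np.random.default_rng(0)
M = rng.random((1000, 1000))
A = rng.choice(1000, size=5, replace=False)   # assembly encoding concept A
B = rng.choice(1000, size=5, replace=False)   # assembly encoding concept B
M2 = associate(M, A, B)
assert np.allclose(M.sum(axis=0), M2.sum(axis=0))
```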

