A neural mechanism for background information-gated learning based on axonal-dendritic overlaps.

Mainetti M, Ascoli GA - PLoS Comput. Biol. (2015)

Bottom Line: The simplest instantiation encodes each concept by single neurons. Results are then generalized to cell assemblies. The proposed mechanism results in learning real associations better than spurious co-occurrences, providing definitive cognitive advantages.

View Article: PubMed Central - PubMed

Affiliation: Krasnow Institute for Advanced Study, George Mason University, Fairfax, Virginia, United States of America.

ABSTRACT
Experiencing certain events triggers the acquisition of new memories. Although necessary, however, actual experience is not sufficient for memory formation. One-trial learning is also gated by knowledge of appropriate background information to make sense of the experienced occurrence. Strong neurobiological evidence suggests that long-term memory storage involves formation of new synapses. On the short time scale, this form of structural plasticity requires that the axon of the pre-synaptic neuron be physically proximal to the dendrite of the post-synaptic neuron. We surmise that such "axonal-dendritic overlap" (ADO) constitutes the neural correlate of background information-gated (BIG) learning. The hypothesis is based on a fundamental neuroanatomical constraint: an axon must pass close to the dendrites that are near other neurons it contacts. The topographic organization of the mammalian cortex ensures that nearby neurons encode related information. Using neural network simulations, we demonstrate that ADO is a suitable mechanism for BIG learning. We model knowledge as associations between terms, concepts or indivisible units of thought via directed graphs. The simplest instantiation encodes each concept by single neurons. Results are then generalized to cell assemblies. The proposed mechanism results in learning real associations better than spurious co-occurrences, providing definitive cognitive advantages.
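The gating rule outlined in the abstract can be sketched in code. Below is a minimal, hypothetical reading of the grandmother-cell case (one neuron per concept, ADO threshold of 1): a new pre-to-post synapse can form only if the pre neuron's axon already contacts at least one neuron associated with the post neuron, since topographic organization places related neurons' dendrites nearby. The function and variable names are illustrative, not taken from the authors' code.

```python
def big_ado_learn(known, pre, post, threshold=1):
    """Hedged sketch of the BIG ADO gating rule (grandmother-cell case).

    `known` is a set of directed associations (pre, post). A newly
    co-occurring pair (pre, post) is stored only if pre's axon already
    overlaps dendrites near post, proxied here by pre contacting at
    least `threshold` neurons that are already associated with post.
    """
    # Neurons "near" post: anything already linked to post in either direction.
    near_post = ({a for (a, b) in known if b == post}
                 | {b for (a, b) in known if a == post})
    # Count existing contacts from pre onto that neighborhood.
    overlap = sum(1 for (a, b) in known if a == pre and b in near_post)
    if overlap >= threshold:
        known.add((pre, post))  # background info present: learn in one trial
        return True
    return False  # no axonal-dendritic overlap: co-occurrence is not stored
```

On the paper's own example, a network that already associates "buzzing" with "fly" and "fly" with "beetle" would accept the real pair "buzzing beetle" but reject the spurious "buzzing grapefruit", for which no background association exists.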

pcbi.1004155.g003: The cognitive value of BIG computations. A. BIG ADO in generic co-occurrence graphs: Simplified representation of the Watts-Strogatz graph-based model. During pre-training, half of the associations the network learns (solid lines) correspond to edges terminating in 20% of the nodes (black: “domain of expertise”). The other half is sampled from the remaining 80% of the graph (gray: novice domain). After pre-training, the ability to learn new (dashed) associations is tested both within and outside the domain of expertise. If two or more pairs of nodes are co-activated at once, spurious associations (dotted) could be learned across the pairs. B. BIG learning in small-world graphs: Differential ability of the pre-trained network to acquire new associations within (72.1±2.3%) and outside (3.9±0.4%) the domain of expertise. C. Differentiating real from spurious associations: To assess the ability to learn real versus spurious associations in Watts-Strogatz graphs, pairs of new co-occurrences were presented, such as “buzzing beetle” and “buzzing grapefruit” (as if seeing/hearing a buzzing beetle while eating a grapefruit). The former is real (it belongs to the Watts-Strogatz graph), while the latter is spurious. Almost 13% of real associations were learned, both within and outside the domain of expertise (black and gray lines in Fig. 3A), as opposed to less than 2% of spurious associations (dotted line in Fig. 3A).

Mentions: To test the BIG ADO learning rule in more broadly applicable cases than noun-adjective associations, we generated small-world graphs adapting the algorithm of Watts and Strogatz [19]. Specifically, unless otherwise noted, Watts-Strogatz graphs were initially produced with degree 20 and 10% rewiring probability. Next, a random direction was selected for 90% of the edges, while the remaining 10% was made bidirectional. A random 20% of the nodes, along with all their incoming edges, were then labeled as belonging to the agent’s area of expertise. In the pre-training phase, networks were wired with a random set of edges of the graph, with the constraint that half of them must belong to the area of expertise, unless otherwise specified. The resulting connectivity consisted of a sub-graph of the initial graph, whose nodes in the area of expertise had higher average degree than those outside the agent’s expertise. In the “grandmother cell” implementation (Fig. 3), the BIG ADO threshold was set at 1. When the size of the graph (N) was varied to assess the robustness of the BIG ADO findings with respect to the parameter space, the degree (d) and the number of associations (edges) used to pre-train the network (T) also varied as d = N/50 and T = N×d/4, in order to keep the fraction of associations learned during pre-training constant.
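The graph construction just described can be sketched as follows. This is a hedged reconstruction using only the stated parameters (degree 20, 10% rewiring probability, 90% of edges randomly directed and 10% made bidirectional, 20% of nodes labeled as the domain of expertise, and half of the pre-training edges terminating there). The ring-lattice generator is a standard Watts-Strogatz implementation; all names are illustrative, not the authors' code.

```python
import random

def watts_strogatz_edges(n, k, p, rng):
    """Ring lattice of n nodes, each joined to its k nearest neighbors
    (k/2 per side); each edge is rewired to a random node with probability p."""
    edges = set()
    for u in range(n):
        for j in range(1, k // 2 + 1):
            v = (u + j) % n
            if rng.random() < p:  # rewire this edge
                v = rng.randrange(n)
                while v == u or (u, v) in edges or (v, u) in edges:
                    v = rng.randrange(n)
            edges.add((u, v))
    return edges

def build_association_graph(n=1000, k=20, p=0.1, rng=None):
    """Directed co-occurrence graph: 10% of edges become bidirectional,
    the rest get a random direction; a random 20% of nodes form the
    agent's domain of expertise."""
    rng = rng or random.Random(0)
    directed = set()
    for (u, v) in watts_strogatz_edges(n, k, p, rng):
        if rng.random() < 0.10:           # bidirectional edge
            directed.add((u, v))
            directed.add((v, u))
        elif rng.random() < 0.5:          # random direction
            directed.add((u, v))
        else:
            directed.add((v, u))
    expertise = set(rng.sample(range(n), n // 5))
    return directed, expertise

def pretrain_sample(directed, expertise, T, rng):
    """Sample T pre-training associations, half of them (where available)
    terminating inside the domain of expertise."""
    inside = sorted(e for e in directed if e[1] in expertise)
    outside = sorted(e for e in directed if e[1] not in expertise)
    n_in = min(T // 2, len(inside))  # cap at the edges actually available
    return rng.sample(inside, n_in) + rng.sample(outside, T - n_in)
```

Under the paper's scaling, one would set d = N/50 and T = N*d/4 when varying the graph size N, so that the fraction of associations learned during pre-training stays constant.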

