A neural mechanism for background information-gated learning based on axonal-dendritic overlaps.

Mainetti M, Ascoli GA - PLoS Comput. Biol. (2015)

Bottom Line: The simplest instantiation encodes each concept by single neurons. Results are then generalized to cell assemblies. The proposed mechanism results in learning real associations better than spurious co-occurrences, providing definitive cognitive advantages.


Affiliation: Krasnow Institute for Advanced Study, George Mason University, Fairfax, Virginia, United States of America.

ABSTRACT
Experiencing certain events triggers the acquisition of new memories. Although necessary, however, actual experience is not sufficient for memory formation. One-trial learning is also gated by knowledge of appropriate background information to make sense of the experienced occurrence. Strong neurobiological evidence suggests that long-term memory storage involves formation of new synapses. On the short time scale, this form of structural plasticity requires that the axon of the pre-synaptic neuron be physically proximal to the dendrite of the post-synaptic neuron. We surmise that such "axonal-dendritic overlap" (ADO) constitutes the neural correlate of background information-gated (BIG) learning. The hypothesis is based on a fundamental neuroanatomical constraint: an axon must pass close to the dendrites that are near other neurons it contacts. The topographic organization of the mammalian cortex ensures that nearby neurons encode related information. Using neural network simulations, we demonstrate that ADO is a suitable mechanism for BIG learning. We model knowledge as associations between terms, concepts or indivisible units of thought via directed graphs. The simplest instantiation encodes each concept by single neurons. Results are then generalized to cell assemblies. The proposed mechanism results in learning real associations better than spurious co-occurrences, providing definitive cognitive advantages.
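The gating idea in the abstract can be sketched as a graph rule: a newly presented association between two concepts is adopted only if the concepts are already close in the existing knowledge graph, mirroring the requirement that the pre-synaptic axon already pass near the post-synaptic dendrite. Below is a minimal single-neuron ("grandmother cell") sketch in Python. It assumes graph distance as the proximity measure; the paper's actual proximity function is given in its equation 1 and is not reproduced here, so this is an illustration of the gating principle, not the authors' exact rule.

```python
from collections import deque

def graph_distance(adj, src, dst, cap):
    """Breadth-first-search distance from src to dst in the directed
    knowledge graph `adj` (dict: node -> set of successors), cut off
    at `cap` to keep the search local."""
    if src == dst:
        return 0
    seen = {src}
    frontier = deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d >= cap:
            continue
        for nbr in adj.get(node, ()):
            if nbr == dst:
                return d + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return cap + 1  # farther than the cutoff

def big_ado_learn(adj, pre, post, theta):
    """Adopt the new association pre -> post only if `post` already
    lies within graph distance `theta` of `pre` (the axonal-dendritic
    overlap proxy); otherwise the co-occurrence is not stored."""
    if graph_distance(adj, pre, post, theta) <= theta:
        adj.setdefault(pre, set()).add(post)
        return True
    return False
```

A network that already links "dog" to "furry" and "furry" to "cat" (background knowledge) would accept a new dog-cat association under a small θ, while an unrelated pairing such as dog-spoon would be rejected: experience alone is not sufficient without supporting background structure.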

Fig. 2 (pcbi.1004155.g002): Word association with grandmother neurons. A. Adjective-noun associations in different domains of expertise: portion of the bipartite association graph extracted from Wikipedia, based on adjective-pairing frequency, for animal (red) and object (blue) nouns. Arrows represent associations learned during pre-training (solid lines) as well as those present in the bipartite graph but not used for pre-training (dotted lines). This example illustrates greater pre-training with animal associations ("animal expert"). Consequently, this network is more likely to acquire newly presented associations from the animal class (yellow highlight) than from the object class (orange highlight). B. Background information-gated learning in the word graph: proportion of newly acquired associations in the bipartite association graph. Networks were pre-trained with half of the edges, varying the degree of expertise from highly specialized (top row: 40% animal edges and 10% object edges, or vice versa) to mildly specialized (middle: 30%-20% animal-object edges, or vice versa) to not specialized (bottom: 25%-25%). A third network was pre-trained with the same proportions of two arbitrary subsets of edges in a random equivalent bipartite graph. The expert groups (left-to-right pairs in each row: animal, object, random) always outperformed the "novice" groups (object, animal, random). The improved learning for animals relative to the object (and random) cases is due to intrinsic background information (see text).
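The pre-training proportions in panel B (e.g. 40% animal / 10% object edges, together covering half of the graph) amount to a stratified sample of the edge set. The sketch below assumes the percentages are fractions of the total edge count and that a hypothetical `class_of` mapping assigns each noun to its class; the exact sampling protocol is described in the paper's S1 Text.

```python
import random

def pretrain_split(edges, class_of, frac_by_class, seed=0):
    """Stratified sample of pre-training edges: for each class, draw
    round(frac * len(edges)) edges whose noun belongs to that class.
    `edges` is a list of (noun, adjective) pairs."""
    rng = random.Random(seed)
    total = len(edges)
    chosen = []
    for cls, frac in frac_by_class.items():
        pool = [e for e in edges if class_of[e[0]] == cls]
        k = min(round(frac * total), len(pool))
        chosen.extend(rng.sample(pool, k))
    return chosen
```

With `frac_by_class={"animal": 0.40, "object": 0.10}` this reproduces the top-row "animal expert" condition, while `{"animal": 0.25, "object": 0.25}` gives the unspecialized control; in every case half of the edges are reserved for testing.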


Mentions: The dataset of word associations used in the first test of the BIG ADO learning rule (Fig. 2A-B) was derived from a compilation of noun/adjective pairings in Wikipedia. In its original form, it consisted of 32 million adjective-modified nouns (http://wiki.ims.uni-stuttgart.de/extern/WordGraph). After identifying nouns corresponding to animals and household objects, we pruned infrequent adjectives and removed ambiguous terms (see S1 Text 2.1 for the exact protocol). The resulting bipartite graph consisted of 50 animal nouns, 50 household-object nouns, 285 adjectives, and 2,682 edges (1,324 for animals and 1,358 for objects). Next, two networks were pre-trained by connecting half of the noun-adjective pairs from the graph. One network associated more edges pertaining to animal nodes (becoming an animal expert and object novice), while the other associated more edges pertaining to object nodes (object expert, animal novice). The degree of specialization was varied by changing the ratio of animal to object pairs learned during pre-training. Learning was then tested on the other half of the noun-adjective pairs using the BIG ADO rule with a proximity threshold (θ in equation 1) of 6. In the random equivalent graphs, edges between 100 "noun" nodes and 285 "adjective" nodes were generated stochastically, preserving both the noun and adjective degree distributions of the word graph. In this "control" condition, networks were pre-trained with expertise on one arbitrary subset of nodes.
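A degree-preserving random bipartite graph of the kind used for the control condition can be generated by stub matching (a bipartite configuration-model variant): each noun contributes as many stubs as its degree, each adjective likewise, and the two shuffled stub lists are zipped into edges. This is a sketch under that assumption; the paper's exact randomization procedure is described in its supplement. Collapsing parallel edges into a set means realized degrees can fall slightly below their targets.

```python
import random

def random_bipartite(noun_degrees, adj_degrees, seed=0):
    """Degree-preserving random bipartite graph via stub matching.
    `noun_degrees` and `adj_degrees` map node -> target degree and
    must have equal degree sums. Parallel edges are collapsed, so
    realized degrees are upper-bounded by the targets."""
    rng = random.Random(seed)
    noun_stubs = [n for n, d in noun_degrees.items() for _ in range(d)]
    adj_stubs = [a for a, d in adj_degrees.items() for _ in range(d)]
    assert len(noun_stubs) == len(adj_stubs), "degree sums must match"
    rng.shuffle(noun_stubs)
    rng.shuffle(adj_stubs)
    # Pair the i-th noun stub with the i-th adjective stub.
    return {(n, a) for n, a in zip(noun_stubs, adj_stubs)}
```

Because only the degree sequences are preserved, any class structure in the resulting graph is arbitrary, which is exactly what makes it a control for the intrinsic background information present in the real word graph.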

