Internal representation of task rules by recurrent dynamics: the importance of the diversity of neural responses.

Rigotti M, Ben Dayan Rubin D, Wang XJ, Fusi S - Front Comput Neurosci (2010)

Bottom Line: Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.


Affiliation: Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA.

ABSTRACT
Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.
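The mixed-selectivity argument in the abstract lends itself to a compact numerical illustration. The following is a minimal sketch, not the paper's implementation: it assumes binary activity patterns, a single randomly connected neuron (RCN) modeled as a threshold unit, and a threshold set so the unit fires for about half of all state-event conjunctions (f = 1/2); all sizes and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

N_rec, N_in = 200, 50   # recurrent and sensory neurons (illustrative sizes)
m, n_events = 4, 4      # mental states and sensory events

# Random binary activity patterns for mental states and sensory events.
states = rng.integers(0, 2, size=(m, N_rec))
events = rng.integers(0, 2, size=(n_events, N_in))

# One RCN with random synaptic strengths onto both populations.
w_rec = rng.normal(size=N_rec)
w_in = rng.normal(size=N_in)

# Total drive for every (mental state, sensory event) conjunction.
drive = states @ w_rec[:, None] + (events @ w_in)[None, :]

# Threshold at the median drive, so the RCN is active for roughly
# half of the conjunctions (dense coding, f = 1/2).
response = (drive > np.median(drive)).astype(int)

print(response)  # rows: mental states, columns: events
# Rows generally differ: the RCN's response to a given event depends
# on the current mental state, i.e., it shows mixed selectivity.
```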

No MeSH data available.



Figure 5: (A) Distributed/dense representations are the most efficient: total number of neurons (recurrent network neurons + RCNs) needed to implement r = m transitions between m random attractor states (internal mental states), as a function of f, the average fraction of inputs that activate each individual RCN. The minimal value is attained at f = 1/2. The three curves correspond to three different numbers of mental states, m = 5, 10, 20. The RCNs make up 4/5 of the total number of neurons. (B) Total number of neurons needed to implement m random mental states and r transitions randomly chosen between mental states, with f = 1/2. The number of needed neurons grows linearly with m. Different curves correspond to different ratios between r and m. The size of the basin of attraction is at least ρB = 0.03 (i.e., all patterns with an overlap larger than o = 1 − 2ρB = 0.94 with the attractor are required to relax back into the attractor). (C) The size of the basins of attraction increases with the number of RCNs. The quality of retrieval (the fraction of cases in which the network dynamics flows to the correct attractor) is plotted against the distance between the initial pattern of activity and the attractor, that is, the maximal level of degradation the network can tolerate while still retrieving the attractor. The four curves correspond to four different numbers of RCNs. In all these simulations the number of recurrent neurons was kept fixed at N = 220, with m = r = 48. (D) Same as (B), but for larger basins of attraction, ρB = 0.10.
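For orientation, the retrieval-quality measurement in panel (C) and the overlap bound o = 1 − 2ρB quoted in (B) can be mimicked with a bare-bones Hopfield-style network. This is a sketch under simplifying assumptions (±1 units, standard Hebbian weights, synchronous updates, and no RCN layer, which is why far fewer than the paper's 48 patterns are stored); it is not the paper's network. Note that flipping a fraction ρ of ±1 units reduces the normalized overlap by exactly 2ρ, which is where o = 1 − 2ρB comes from.

```python
import numpy as np

rng = np.random.default_rng(1)

N, m = 220, 10                        # neurons; m kept well below 48 (no RCNs here)
patterns = rng.choice([-1, 1], size=(m, N))
W = (patterns.T @ patterns) / N       # Hebbian weights
np.fill_diagonal(W, 0.0)

def retrieval_quality(rho, trials=200, steps=20):
    """Fraction of trials in which a pattern degraded by flipping a
    fraction rho of its units relaxes back to the stored attractor.
    The initial overlap is 1 - 2*rho, matching the caption's formula."""
    hits = 0
    for _ in range(trials):
        p = patterns[rng.integers(m)]
        s = p.copy()
        flip = rng.choice(N, size=int(rho * N), replace=False)
        s[flip] *= -1
        for _ in range(steps):        # synchronous deterministic updates
            s = np.where(W @ s >= 0, 1, -1)
        hits += (s @ p) / N > 0.99
    return hits / trials

for rho in (0.05, 0.15, 0.25, 0.35):
    print(f"rho = {rho:.2f}  quality = {retrieval_quality(rho):.2f}")
```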

Mentions: Figure 5A shows the required total number of neurons (recurrent neurons and RCNs) as a function of the coding level f of the RCNs, obtained by varying the number of neurons while keeping the RCNs four times as numerous as the recurrent neurons. The results are shown for r = m transitions and three different numbers of contexts, m = 5, 10, 20. Consistent with the estimates, plotted in Figure 3, of the probability that an RCN solves a single context-dependence problem, the minimal number of required neurons occurs for dense RCN patterns of activity (f = 1/2). With f = 1/2, we examined in Figure 5B how the minimal number of needed neurons depends on the task complexity, in particular on the number of mental states m and transitions r. Notice that for the curves in Figure 5B labeled with r > m, the same event drives more than one transition, which is what typically happens in context-dependent tasks. The total number of neurons needed to implement the scheme of mental states and event-driven transitions, while keeping the size of the basins of attraction constant, increases linearly with m, and the slope turns out to be approximately proportional to the ratio r/m, the number of contexts in which each event can appear. In other words, the number of needed neurons increases linearly with the total number of conditions to be imposed: the stability of the mental states plus the event-driven transitions. This favorable scaling indicates that highly complicated schemes of attractor states and transitions can be implemented in a biological network with a relatively small number of neurons.
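Read literally, the scaling statement says the neuron count is roughly linear in the number of conditions: m stability conditions plus r transition conditions. The snippet below is a back-of-envelope sketch of that reading; the per-condition constant c is an assumed placeholder, not a value fitted to Figure 5B.

```python
# Back-of-envelope reading of the Figure 5B scaling: neurons needed
# grow linearly with the number of conditions to satisfy, i.e. the
# m stability conditions plus the r transition conditions.

def needed_neurons(m, r, c=60):
    """Hypothetical linear cost model; c (neurons per condition)
    is an assumed constant, not taken from the paper."""
    return c * (m + r)

# The slope in m is c * (1 + r/m), i.e. approximately proportional
# to r/m when r >> m, as stated in the text.
for ratio in (1, 2, 4):
    print(f"r/m = {ratio}:",
          [needed_neurons(m, ratio * m) for m in (5, 10, 20)])
```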

