Robust timing and motor patterns by taming chaos in recurrent neural networks.

Laje R, Buonomano DV - Nat. Neurosci. (2013)

Bottom Line: We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories.

View Article: PubMed Central - PubMed

Affiliation: Department of Neurobiology, University of California, Los Angeles, California, USA.

ABSTRACT
The brain's ability to tell time and produce complex spatiotemporal motor patterns is critical for anticipating the next ring of a telephone or playing a musical instrument. One class of models proposes that these abilities emerge from dynamically changing patterns of neural activity generated in recurrent neural networks. However, the relevant dynamic regimes of recurrent networks are highly sensitive to noise; that is, chaotic. We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories. These stable patterns function as 'dynamic attractors' and provide a feature that is characteristic of biological systems: the ability to 'return' to the pattern being generated in the face of perturbations.
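The class of model described in the abstract is a firing-rate recurrent network whose gain places it in the chaotic regime. A minimal sketch of such a network is below; the parameter names, the Euler integration step, and the initialization are illustrative assumptions, not the paper's exact implementation (see the Online Methods of the original article for that).

```python
import numpy as np

def simulate_rnn(N=800, g=1.8, T=2.0, dt=0.001, tau=0.01,
                 noise_sd=0.0, seed=0):
    """Simulate a tanh firing-rate recurrent network (hedged sketch).

    Returns an array of unit firing rates with shape (timesteps, N).
    With gain g > 1 the autonomous network is in the chaotic regime.
    """
    rng = np.random.default_rng(seed)
    # Recurrent weights scaled by g / sqrt(N), the standard convention
    # for this model class
    W = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 0.5, size=N)  # unit activations (synaptic currents)
    steps = int(T / dt)
    rates = np.empty((steps, N))
    for t in range(steps):
        r = np.tanh(x)                          # firing rates
        noise = noise_sd * rng.normal(size=N)   # continuous noise current
        # Euler step of tau * dx/dt = -x + W r + noise
        x = x + (dt / tau) * (-x + W @ r + noise)
        rates[t] = r
    return rates
```

Running the same network twice from slightly different initial conditions (or with noise) and watching the trajectories diverge is a quick way to see the chaotic sensitivity the abstract refers to.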


Figure 5: Robustness against noise. A: Activity of three sample units in the recurrent network at three different levels of noise. Blue: “template” trajectory (no noise); red: “test” trajectory (continuous noise in each unit). The standard deviation of the noise current I0 was 0.001, 0.1, and 1.0 (top to bottom panels); noise amplitude as a fraction of the total absolute incoming synaptic weight to each unit, averaged across units, was 0.007%, 0.7%, and 7%, respectively. B: Average data from 10 different networks. Performance was measured as the average Pearson correlation coefficient between the template (blue) and test (red) trajectories for each condition (after Fisher transformation), mean ± SEM across networks.

Mentions: We next examined two critical issues relating to the stability and dynamics of the trained recurrent networks. First, we performed a parametric noise analysis to quantitatively characterize the response of the trained networks in the presence of high levels of noise. To this end, different levels of noise were continuously injected into all 800 units of the recurrent network. Second, we examined whether training specifically altered the noise sensitivity of the trajectory elicited by the trained input, or whether training produced global changes in all network trajectories. This question can be seen as addressing whether learning (creating locally stable trajectories) was stimulus-specific. Each of 10 different networks (N=800, g=1.8) was stimulated with two different 50-ms inputs. The neural trajectory produced by Input 1 (In1) served as the “innate” training target (duration of 2 s) for recurrent plasticity, while the trajectory triggered by the second input (In2) served as a “control” to determine the effect of training on untrained trajectories. After training, performance was quantified as the correlation, within the 2-s window, between the trajectories elicited in the presence of noise and the trajectory in the absence of noise (“reproducibility”; see Online Methods). After training, the activity patterns in the recurrent units were very similar in the absence and in the presence of continuous noise at levels of 0.001 and 0.1, but not 1.0 (Fig. 5A). Average data (Fig. 5B) show that for noise amplitudes up to 0.1, performance in response to In1 was essentially perfect. In these simulations the recurrent networks were trained for 20 trials (in the presence of noise with amplitude 0.001). Reproducibility was not significantly better with 30 training trials (Fig. S3), but sensitivity to noise can be decreased further by training in the presence of more noise for more trials (e.g., Fig. 4 and Supplementary Figs. S1 and S3).
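The reproducibility measure above is a per-unit Pearson correlation between template and test trajectories, Fisher-transformed before averaging. A minimal sketch of one plausible reading of that computation is below; the exact order of averaging (across units, conditions, and networks) in the paper may differ, so treat this as illustrative.

```python
import numpy as np

def reproducibility(template, test):
    """Correlation-based reproducibility (hedged sketch).

    template, test: arrays of shape (timesteps, units), e.g. the
    no-noise and noisy trajectories of the same network.
    Per-unit Pearson r values are Fisher z-transformed, averaged,
    and the mean is transformed back to an r value.
    """
    zs = []
    for u in range(template.shape[1]):
        r = np.corrcoef(template[:, u], test[:, u])[0, 1]
        # Clip to keep arctanh finite when r is exactly +/-1
        zs.append(np.arctanh(np.clip(r, -0.999999, 0.999999)))
    return np.tanh(np.mean(zs))  # back-transform the mean z to an r
```

With a template and a noisy test run from the same trained network, values near 1 indicate that the perturbed trajectory tracks the template, which is the behavior Figure 5B quantifies.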

