Robust timing and motor patterns by taming chaos in recurrent neural networks.

Laje R, Buonomano DV - Nat. Neurosci. (2013)

Bottom Line: We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories.


Affiliation: Department of Neurobiology, University of California, Los Angeles, California, USA.

ABSTRACT
The brain's ability to tell time and produce complex spatiotemporal motor patterns is critical for anticipating the next ring of a telephone or playing a musical instrument. One class of models proposes that these abilities emerge from dynamically changing patterns of neural activity generated in recurrent neural networks. However, the relevant dynamic regimes of recurrent networks are highly sensitive to noise; that is, chaotic. We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories. These stable patterns function as 'dynamic attractors' and provide a feature that is characteristic of biological systems: the ability to 'return' to the pattern being generated in the face of perturbations.
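The recurrent network class the abstract refers to is the standard firing-rate formulation (tanh units with sparse random recurrent weights whose strength is set by the gain g). The sketch below is a minimal, illustrative implementation under the parameters reported later in this page (N=800, g=1.5, pc=0.25); the function name, integration scheme, and noise scaling are assumptions for illustration, not the authors' exact code.

```python
import numpy as np

def simulate_rate_network(N=800, g=1.5, p_c=0.25, T=2000, dt=1.0, tau=10.0,
                          noise_sd=0.0, seed=0):
    """Euler-integrate a random recurrent rate network:
        tau * dx/dt = -x + W r + noise,   r = tanh(x).
    Returns the (T, N) array of unit firing rates over time."""
    rng = np.random.default_rng(seed)
    # Sparse random recurrent weights: each connection exists with
    # probability p_c; nonzero weights ~ N(0, g^2 / (p_c * N)).
    mask = rng.random((N, N)) < p_c
    W = mask * rng.normal(0.0, g / np.sqrt(p_c * N), size=(N, N))
    x = rng.normal(0.0, 0.5, size=N)      # random initial condition
    rates = np.empty((T, N))
    for t in range(T):
        r = np.tanh(x)
        rates[t] = r
        noise = noise_sd * rng.normal(size=N) * np.sqrt(dt / tau)
        x += (dt / tau) * (-x + W @ r) + noise
    return rates
```

With g > 1 this regime is chaotic: two runs from slightly different initial conditions diverge, which is the noise sensitivity the abstract describes.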


Figure 4: Innate training decreases the neural variance and results in Weber-like timing. A: (Top panel) Time traces of three sample units over two different trials (blue and red) (N=800, g=1.5, pc=0.25, 1.3 s training window). Gaussian noise with a standard deviation of I0=1.5 was continuously injected into all recurrent units. As in Figs. 1 and 3, the output unit was trained to generate a timed pulse (1000 ms after the onset of the 50 ms input pulse, middle panel). The lower panel shows the neural variance. The variance of each unit was calculated over 8 trials and then averaged over all 800 units. There was a sharp decrease in variance produced by the onset of the stimulus, which persisted over many seconds before gradually ramping back up to baseline (not shown). The dashed line shows the neural variance before training: because the input "clamps" network activity, stimulus onset also produced a decrease in the variance, but it rapidly increased after stimulus offset. The mean SD across units at the onset of the input pulse was 0.037 and 0.024 before and after training, respectively. B: Example of two simulations in which the output unit was trained to produce events at 250, 500, 750, 1000, and 1250 ms (upper panels). Variance across trials was estimated by calculating the time of the peak of each response. The relationship between variance and t² was well fit by a linear function (lower panels). I0=1.0.
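The panel B analysis (generalized Weber's law: the variance of produced times grows linearly with t²) can be sketched as a simple regression. `weber_fit` and the synthetic data in the test are illustrative stand-ins, not the authors' code.

```python
import numpy as np

def weber_fit(peak_times_by_target):
    """peak_times_by_target: dict mapping each target interval t (ms)
    to an array of per-trial peak times of the output response.
    Fits  variance ~= a * t^2 + b  (generalized Weber's law) and
    returns (slope, intercept, targets, variances)."""
    targets = np.array(sorted(peak_times_by_target))
    variances = np.array([np.var(peak_times_by_target[t], ddof=1)
                          for t in targets])
    # Linear least-squares fit of variance against t^2.
    a, b = np.polyfit(targets.astype(float) ** 2, variances, 1)
    return a, b, targets, variances
```

If timing errors scale with the interval (SD = k·t, a Weber fraction k), the fitted slope approximates k², which is the linear variance-vs-t² relationship reported in the lower panels.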

Mentions: Implicit in the findings above is that after training there are different types of dynamics within the same network: while ongoing activity (or trajectories triggered by untrained inputs) continues to produce chaotic trajectories, the trained trajectories exhibit locally stable patterns of activity. Recent experimental studies have also revealed different types of dynamics within the same network. For example, it has been shown that cross-trial variability of neural activity is "quenched" in response to stimulus onset (ref. 34); that is, the variability of neural "ongoing" or "background" activity is significantly larger than that observed after a stimulus or during a behavioral task. We thus quantified the cross-trial variance before and after the brief 50-ms input in the trained and untrained networks. Additionally, to "push the envelope" in terms of how much noise the network can handle, we increased the noise levels during training and testing (as well as the number of training trials). The variance was calculated over 8 test trials for each of the 800 units over a time period starting 500 ms before the stimulus. The target delay was 1000 ms (and the training window was 1300 ms). The sample "firing rates" of three units in Fig. 4A (upper panel) show that in the presence of continuous, very high levels of noise (I0=1.5) each of the recurrent units exhibits significant jitter, reminiscent of the membrane voltage fluctuations observed in vivo, resulting in a high cross-trial variance before stimulation (t<0). Nevertheless, in response to the input, the trained network was still able to robustly generate an appropriately timed output (Fig. 4A, middle panel). And, as expected, this robustness reflects a dramatic decrease in the variance of the activity after the stimulus onset (Fig. 4A, lower panel).
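The cross-trial variance measure described here (variance across the 8 test trials for each unit at each time point, then averaged over the 800 units) reduces to one line on a trials × time × units activity array. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def cross_trial_variance(activity):
    """activity: array of shape (n_trials, n_time, n_units).
    For each unit and time point, take the variance across trials,
    then average over units to get a single (n_time,) variance trace."""
    return activity.var(axis=0, ddof=1).mean(axis=1)
```

Variance quenching then shows up directly in this trace: before the stimulus, trials are unrelated and the trace is high; after the stimulus, the trained network collapses onto the same locally stable trajectory on every trial and the trace drops sharply.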

