Robust timing and motor patterns by taming chaos in recurrent neural networks.

Laje R, Buonomano DV - Nat. Neurosci. (2013)

Bottom Line: We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories.


Affiliation: Department of Neurobiology, University of California, Los Angeles, California, USA.

ABSTRACT
The brain's ability to tell time and produce complex spatiotemporal motor patterns is critical for anticipating the next ring of a telephone or playing a musical instrument. One class of models proposes that these abilities emerge from dynamically changing patterns of neural activity generated in recurrent neural networks. However, the relevant dynamic regimes of recurrent networks are highly sensitive to noise; that is, chaotic. We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories. These stable patterns function as 'dynamic attractors' and provide a feature that is characteristic of biological systems: the ability to 'return' to the pattern being generated in the face of perturbations.
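The abstract refers to a firing rate model with tuned recurrent connections. As a point of reference, the standard rate-network equations this class of model builds on can be sketched as follows; the parameter values here (network size, gain, time constant, integration step) are illustrative choices for a minimal demonstration, not the ones used in the paper.

```python
import numpy as np

# Minimal firing-rate recurrent network sketch:
#   tau * dx/dt = -x + W @ tanh(x) + noise
# With random Gaussian weights scaled by gain g, such networks are
# typically chaotic for g > 1. All parameters below are illustrative.

rng = np.random.default_rng(0)
N, g, tau, dt = 200, 1.5, 0.01, 0.001  # units, gain, time constant (s), step (s)

W = g * rng.standard_normal((N, N)) / np.sqrt(N)  # recurrent weight matrix

def simulate(x0, T, noise_sd=0.0, seed=1):
    """Euler-integrate the rate network for T seconds; return the rate trajectory."""
    rng_sim = np.random.default_rng(seed)
    x = x0.copy()
    steps = int(T / dt)
    traj = np.empty((steps, N))
    for t in range(steps):
        r = np.tanh(x)                                   # firing rates
        noise = noise_sd * np.sqrt(dt) * rng_sim.standard_normal(N)
        x += (dt / tau) * (-x + W @ r) + noise
        traj[t] = np.tanh(x)
    return traj

x0 = rng.standard_normal(N)
traj = simulate(x0, T=1.0)  # 1 s of activity, shape (1000, 200)
```

Training in the paper's sense would then modify entries of `W` so that a chosen trajectory becomes locally stable; the sketch above only sets up the untrained dynamics.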


Figure 6: Suppression of chaos. A: Average logarithmic distance between the original and perturbed trajectories for each of ten networks, for the trajectories triggered by Input1 (the trained input) before and after training. A straight portion with a positive slope indicates chaotic dynamics; the value of the slope is the estimate of the largest Lyapunov exponent (λ). After training, the original and perturbed trajectories no longer diverge (except for one network). B: The pre-training trajectories triggered by both inputs displayed positive λ, indicative of chaotic dynamics (Input1: λ = 7.12 ± 0.35, mean ± SEM across the ten networks, significantly different from zero, t-test p = 10⁻⁸; Input2: λ = 7.29 ± 0.45, p = 4×10⁻⁸; all reported λ values have units of 1/s). After training, the trajectory triggered by Input1 was locally stable, as indicated by a non-positive mean λ (λ = 0.05 ± 0.45, p = 0.90); Input2, however, still produced diverging trajectories, as evidenced by a λ significantly above zero (λ = 3.05 ± 0.70, p = 0.0016). After training, the trajectories outside the trained window had a positive mean λ in response to both inputs (Input1: λ = 2.75 ± 0.70, p = 0.0035; Input2: λ = 2.27 ± 0.60, p = 0.0039), with some networks displaying chaotic activity (8/10) and some entering limit cycles (2/10). The interaction effect was significant (F(2,18) = 20.7, p = 2×10⁻⁵; 2×3 two-way repeated-measures ANOVA with factors "Input" and "Training"). In addition to this stimulus-specific effect of training, there was a global, nonspecific effect of decreased trajectory divergence after training, reflected in a lower though still positive λ for post-training Input2 and for both inputs outside the trained window.

Mentions: Training on the In1 trajectory also improved the reproducibility of In2, but despite this improvement there was a fundamental difference between the trained and untrained trajectories. The increased reproducibility of both patterns does not imply that either of them is no longer chaotic; it only quantifies how much two trajectories diverge within a 2-second window in response to different levels of noise. Thus, to formally characterize the behavior of the networks before and after training, we quantified the divergence of trajectories by estimating the largest Lyapunov exponent (λ), which measures the rate of separation of two nearby points in state space and is a standard way to determine whether a dynamical system is chaotic. For each of the ten networks, λ was numerically estimated for the trajectories elicited by In1 and In2, both before and after training (Fig. 6), and both within and outside the training window. Before training, both trajectories exhibited positive exponents, indicative of exponential divergence and thus chaotic dynamics. After training, the mean λ across networks for In1 was not significantly different from zero, suggestive of local stability. The mean λ for In2 also decreased, but remained above zero (10/10 networks). The dynamics in response to both inputs outside the training window (between t = 8 s and t = 10 s) were either chaotic (8/10 networks) or settled into stable limit cycles (2/10). Which of these regimes occurred depended in part on the initial structure of the network and the extent of training: lower initial values of λ and/or more training loops were more likely to lead to a limit cycle (not shown). Importantly, a 2×3 two-way ANOVA with repeated measures (factors "Input" and "Training") showed a significant interaction effect (F(2,18) = 20.7, p = 2×10⁻⁵), meaning that training affected λ differently for the two inputs.
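The λ-estimation procedure described above (perturb the initial state, track the logarithmic distance between the two trajectories, and fit the slope of the straight early portion) can be sketched as below. The network, perturbation size, and fit window here are illustrative assumptions for a minimal untrained chaotic rate network, not the paper's actual settings.

```python
import numpy as np

# Sketch of the largest-Lyapunov-exponent estimate from the text:
# run the network twice from states differing by a tiny perturbation,
# take the log Euclidean distance between trajectories over time, and
# fit a line to the early, pre-saturation segment. The slope (in 1/s)
# is the lambda estimate; a positive slope indicates chaotic divergence.
# All parameter values are illustrative.

rng = np.random.default_rng(0)
N, g, tau, dt = 200, 1.5, 0.01, 0.001
W = g * rng.standard_normal((N, N)) / np.sqrt(N)  # g > 1: chaotic regime

def run(x0, steps):
    """Deterministic Euler integration of tau*dx/dt = -x + W @ tanh(x)."""
    x = x0.copy()
    traj = np.empty((steps, N))
    for t in range(steps):
        x += (dt / tau) * (-x + W @ np.tanh(x))
        traj[t] = x
    return traj

steps = 2000                        # a 2-second window, as in the analysis
x0 = rng.standard_normal(N)
a = run(x0, steps)
b = run(x0 + 1e-6 * rng.standard_normal(N), steps)  # tiny perturbation

logdist = np.log(np.linalg.norm(a - b, axis=1))     # log distance vs. time
t = dt * np.arange(steps)

fit = slice(0, 300)                 # fit only the straight early portion
lam = np.polyfit(t[fit], logdist[fit], 1)[0]        # slope ~ lambda (1/s)
print(f"estimated lambda = {lam:.1f} 1/s")
```

Saturation matters: once the two trajectories are as far apart as any two random states on the attractor, the log distance flattens, so the fit must be restricted to the initial linear segment, which is why the figure legend emphasizes the "straight portion with a positive slope".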
