Robust timing and motor patterns by taming chaos in recurrent neural networks.

Laje R, Buonomano DV - Nat. Neurosci. (2013)

Bottom Line: We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories.


Affiliation: Department of Neurobiology, University of California, Los Angeles, California, USA.

ABSTRACT
The brain's ability to tell time and produce complex spatiotemporal motor patterns is critical for anticipating the next ring of a telephone or playing a musical instrument. One class of models proposes that these abilities emerge from dynamically changing patterns of neural activity generated in recurrent neural networks. However, the relevant dynamic regimes of recurrent networks are highly sensitive to noise; that is, chaotic. We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories. These stable patterns function as 'dynamic attractors' and provide a feature that is characteristic of biological systems: the ability to 'return' to the pattern being generated in the face of perturbations.


Figure 1: Complexity without chaos. A: A random recurrent network (left panel) in the chaotic regime is stimulated by a brief input pulse (small black rectangle at t=0 in the right panel) to produce a complex pattern of activity in the absence of noise. The right panel shows a color-coded raster plot of the activity of 100 of the 800 recurrent units; color-coded activity ranges from −1 (blue) to 1 (red). B: Time series of three sample recurrent units (top panel) and the output unit (bottom panel). In the pre-training condition (left), the blue traces comprise the innate trajectory subsequently used for training. The divergence of the blue and red lines demonstrates that two different initial conditions (before the input) lead to diverging trajectories before training, even in the absence of ongoing noise. After training (right), however, the time series are reproducible within the trained window (2.25 s; shaded area): despite different initial conditions, the blue and red lines trace very similar paths, while still diverging outside the trained window. The output unit was trained to "pulse" after 2 s. C: Five different runs of the network above, perturbed with a 10-ms pulse at t=0.5 s (dashed line) from an additional input unit randomly connected to the recurrent network. The trained network (right) robustly reproduces the trained trajectory, recovering from the perturbation and producing the timed response of the output unit at t=2 s.

Mentions: The network studied consists of randomly connected nonlinear firing rate units26,28–29. In these networks the connectivity is represented by a recurrent weight matrix WRec drawn from a normal distribution with a mean of zero and a standard deviation scaled by a "gain" parameter g. For large networks, values of g>1 generate increasingly complex and chaotic patterns of self-sustained activity26. In all simulations presented here the networks are in this "high-gain" chaotic regime (g ≥ 1.5)26,30. Figure 1A provides an example of a random recurrent network (RRN) in which all the recurrent units connect to a single output unit. The network contained 800 units, each with a sigmoidal activation function and a time constant of 10 ms (see Online Methods). By adjusting the synaptic weights onto the output unit, the network can be trained to produce some desired computation, such as a timed response or a complex motor output10–12,28 (see below). The network is spontaneously active (i.e., it has self-sustaining activity), and an external input at t=0 ms (50 ms duration) temporarily kicks the network into a delimited volume of state space, which can be defined as the starting point of a neural trajectory. Across trials, even in the absence of continuous noise (omitted here for illustrative purposes), different initial conditions result in a divergence of the trajectories, as illustrated in Figure 1B (Pre-training) by the "firing rates" of three sample units. This divergence renders the network useless from a computational perspective because its patterns cannot be reproduced across trials. One approach to overcoming this problem has been to use tuned feedback to control the dynamics of the network28–29. An alternative approach would be to alter the weights of the RRN itself in order to decrease its sensitivity to noise; this approach, however, has been limited by the challenges inherent in changing the weights of recurrent networks. Specifically, because all weights are "used" throughout the trajectory, plasticity tends to dramatically alter the network dynamics, produce bifurcations, and fail to converge31.
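To make the model concrete, the following NumPy sketch simulates a network of this type. It assumes the standard rate formulation used in this literature, tau·dx/dt = −x + WRec·tanh(x) + WIn·u(t). The values N = 800, tau = 10 ms, g = 1.5, and the 50-ms input pulse are taken from the text; the dense connectivity, the input amplitude, and the offline least-squares readout fit are illustrative assumptions, not the paper's actual training procedure.

# Minimal sketch of the random recurrent firing-rate network described above.
# Assumed dynamics:  tau * dx/dt = -x + WRec @ tanh(x) + WIn * u(t)
import numpy as np

N, tau, g, dt, T = 800, 10.0, 1.5, 1.0, 3000   # units, ms, gain, step (ms), 3-s trial
rng = np.random.default_rng(1)

# Recurrent weights: zero mean, std g/sqrt(N) (dense matrix assumed), so the
# spectral radius is ~g and g > 1 yields chaotic, self-sustaining activity.
WRec = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
WIn = rng.normal(0.0, 1.0, size=N)             # random projection of one input unit

def run(x0):
    """Euler-integrate the rates for T ms starting from initial state x0."""
    x, rates = x0.copy(), np.empty((T, N))
    for t in range(T):
        u = 1.0 if t < 50 else 0.0             # brief input pulse at t = 0 (50 ms)
        r = np.tanh(x)
        x += (dt / tau) * (-x + WRec @ r + WIn * u)
        rates[t] = r
    return rates

# Different initial conditions diverge over seconds (cf. Fig. 1B, pre-training):
rA = run(rng.normal(0.0, 0.5, N))
rB = run(rng.normal(0.0, 0.5, N))
print("trajectory distance at t = 2 s:", np.linalg.norm(rA[2000] - rB[2000]))

# Readout: fit output weights so the output unit "pulses" at t = 2 s.
# (Offline least squares on one noise-free trajectory; a simple stand-in
# for the readout training used in the paper.)
target = np.exp(-0.5 * ((np.arange(T) - 2000) / 30.0) ** 2)  # Gaussian pulse
WOut, *_ = np.linalg.lstsq(rA, target, rcond=None)
print("peak of trained output (ms):", (rA @ WOut).argmax())

Running the sketch illustrates the pre-training problem described in the text: the two trajectories, identical in everything but their initial state, are far apart by t = 2 s, so the fitted readout can only be relied on for the particular trajectory it was fit to.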

