Robust timing and motor patterns by taming chaos in recurrent neural networks.
Bottom Line:
We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories.
View Article:
PubMed Central - PubMed
Affiliation: Department of Neurobiology, University of California, Los Angeles, California, USA.
ABSTRACT
The brain's ability to tell time and produce complex spatiotemporal motor patterns is critical for anticipating the next ring of a telephone or playing a musical instrument. One class of models proposes that these abilities emerge from dynamically changing patterns of neural activity generated in recurrent neural networks. However, the relevant dynamic regimes of recurrent networks are highly sensitive to noise; that is, chaotic. We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories. These stable patterns function as 'dynamic attractors' and provide a feature that is characteristic of biological systems: the ability to 'return' to the pattern being generated in the face of perturbations.
Mentions: The network studied consists of randomly connected nonlinear firing rate units [26,28–29]. In these networks the connectivity is represented by a recurrent weight matrix WRec drawn from a normal distribution with a mean of zero and a standard deviation scaled by a "gain" parameter g. For large networks, values of g > 1 generate increasingly complex and chaotic patterns of self-sustained activity [26]. In all simulations presented here the networks are in this "high-gain" chaotic regime (g ≥ 1.5) [26,30]. Figure 1A provides an example of an RNN in which all the recurrent units connect to a single output unit. There were 800 units, each with a sigmoidal activation function and a time constant of 10 ms (see Online Methods). By adjusting the synaptic weights onto the output unit, the network can be trained to produce some desired computation, such as a timed response or a complex motor output [10–12,28] (see below). The network is spontaneously active (i.e., it has self-sustaining activity), and an external input at t = 0 ms (50 ms duration) temporarily kicks the network into a delimited volume of state space, which can be defined as the starting point of a neural trajectory. Across trials, even in the absence of continuous noise (omitted here for illustrative purposes), different initial conditions result in a divergence of the trajectories, as illustrated in Figure 1B (Pre-training) by the "firing rates" of 3 sample units. This divergence renders the network useless from a computational perspective because the patterns cannot be reproduced across trials. One approach to overcoming this problem has been to use tuned feedback to control the dynamics of the network [28–29]. An alternate approach would be to alter the weights of the RNN proper in order to decrease its sensitivity to noise; this approach, however, has been limited by the challenges inherent in changing the weights of recurrent networks.

Specifically, since all the weights are "being used" throughout the trajectory, plasticity tends to dramatically alter the network dynamics, produce bifurcations, and fail to converge [31].