A spike-timing pattern based neural network model for the study of memory dynamics.

Liu JK, She ZS - PLoS ONE (2009)

Bottom Line: We show that the distance measure can capture the timing difference of memory states. In addition, we examine the influence of network topology on learning ability, and show that local connections can increase the network's ability to embed more memory states. Together these results suggest that the proposed system based on spike-timing patterns gives a productive model for the study of detailed learning and memory dynamics.


Affiliation: Department of Mathematics, University of California Los Angeles, Los Angeles, California, United States of America. liujk@ucla.edu

ABSTRACT
It is well accepted that the brain's computation relies on the spatiotemporal activity of neural networks. In particular, there is growing evidence of the importance of continuously and precisely timed spiking activity. It is therefore important to characterize memory states in terms of spike-timing patterns that give both reliable memory of firing activities and precise memory of firing timings. In recent years, the relationship between memory states and spike-timing patterns has been studied empirically with large-scale recordings of neuron populations. Here, using a recurrent neural network model with dynamics at two time scales, we construct a dynamical memory network model which embeds both fast neural and synaptic variation and slow learning dynamics. A state vector is proposed to describe memory states in terms of the spike-timing patterns of the neural population, and a distance measure on state vectors is defined to study several important phenomena of memory dynamics: partial memory recall, learning efficiency, and learning with correlated stimuli. We show that the distance measure can capture the timing difference of memory states. In addition, we examine the influence of network topology on learning ability, and show that local connections can increase the network's ability to embed more memory states. Together these results suggest that the proposed system based on spike-timing patterns gives a productive model for the study of detailed learning and memory dynamics.
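The state vector and its distance measure are only named in this abstract, not defined here. As a minimal sketch, assuming the state vector records each neuron's first spike time within a readout window and the distance combines timing differences with a penalty for firing/silent mismatches, one could write the following (the function names and the silent_penalty parameter are hypothetical, not the paper's definitions):

```python
import numpy as np

def state_vector(spikes, n_neurons, window):
    """Hypothetical state vector: each neuron's first spike time within
    a readout window; NaN marks neurons that stay silent."""
    v = np.full(n_neurons, np.nan)
    for neuron, t in spikes:
        if t <= window and (np.isnan(v[neuron]) or t < v[neuron]):
            v[neuron] = t
    return v

def timing_distance(v1, v2, silent_penalty=1.0):
    """Illustrative distance between two spike-timing states: mean
    absolute timing difference over neurons firing in both states,
    plus a penalty for firing/silent mismatches."""
    both = ~np.isnan(v1) & ~np.isnan(v2)
    mismatch = np.isnan(v1) != np.isnan(v2)
    d_timing = np.abs(v1[both] - v2[both]).mean() if both.any() else 0.0
    return d_timing + silent_penalty * mismatch.mean()

# Two recalls of similar memories over 5 neurons (spike times in ms):
a = state_vector([(0, 2.0), (1, 5.5), (3, 7.0)], 5, window=20.0)
b = state_vector([(0, 2.5), (1, 5.0), (4, 9.0)], 5, window=20.0)
print(timing_distance(a, b))  # 0.5 ms timing term + 0.4 mismatch penalty
```

Under this reading, two recalls of the same memory with jittered spike times give a small distance, while recall of a different memory changes which neurons fire and incurs the mismatch penalty.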


pone-0006247-g003: Learning time decays with the fading of network topology. Learning time decreases as the network becomes more random with increasing topology parameters (blue: UC with diam; red: NC with σ; green: SC with s). Note that increasing diam and s makes the UC and SC networks similar, and that the optimal learning dynamics is reached at an intermediate connection radius. Error bars (S.E.M.) are calculated from 3 simulations with different random number seeds.

Mentions: Figure 2 shows four typical networks with different connection topologies. In all panels the number of synaptic connections was the same, but the degree of localization and globalization varied with the topology. All four network types have been suggested to exist in the cortex [19]. To understand how the variation of topology determined the learning time, we systematically changed the network parameters within their ranges and compared the resulting learning times. As shown in Fig. 3, with the same stimulus A as in Fig. 1, the learning time decreased as the topology parameters increased; this is a consequence of the fact that the propagation of activity becomes significantly slower as connections become more local [9]. The effect can be understood from the presynaptically dependent learning rule (Eq. 9). Whenever a presynaptic cell was a firing input, its action potential propagated only to the downstream postsynaptic cells in its neighborhood. Activity therefore could not spread to the whole network until those neighbors fired, so the most localized network required the largest learning time to reach the target. It should be noted that the learning time reached its asymptotic value at an intermediate parameter value, implying that a network with an intermediate connection diameter attains the optimal learning dynamics.
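The propagation argument can be made concrete with a toy simulation. The sketch below is not the paper's model: it omits the membrane dynamics and the learning rule of Eq. 9, and uses a Watts-Strogatz-style rewiring probability as a hypothetical stand-in for the diam/σ/s topology parameters. It only illustrates the mechanism invoked above: the more local the connections, the more steps activity needs to cover the network.

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_network(n, k, p_rewire):
    """Each neuron projects to its k nearest neighbours on a ring;
    with probability p_rewire an edge is redirected to a random target
    (p_rewire=0: purely local, p_rewire=1: fully random)."""
    targets = [[] for _ in range(n)]
    for i in range(n):
        for d in range(1, k // 2 + 1):
            for j in ((i + d) % n, (i - d) % n):
                if rng.random() < p_rewire:
                    j = int(rng.integers(n))
                targets[i].append(int(j))
    return targets

def spread_steps(targets, seeds, frac=0.9, max_steps=1000):
    """Steps until activity started at `seeds` has reached a fraction
    `frac` of the neurons, assuming a spike always drives its targets
    to fire on the next step (a crude stand-in for the full dynamics)."""
    n = len(targets)
    active = set(seeds)
    for step in range(1, max_steps + 1):
        new = {j for i in active for j in targets[i]} - active
        if not new:
            break
        active |= new
        if len(active) >= frac * n:
            return step
    return None  # activity never reached the target coverage

# More local connections (small p_rewire) -> slower coverage,
# mirroring the longer learning times of localized networks.
for p in (0.0, 0.1, 0.5, 1.0):
    net = ring_network(200, k=6, p_rewire=p)
    print(f"p_rewire={p}: {spread_steps(net, seeds=range(5))} steps")
```

With p_rewire = 0 the activity must creep around the ring a few neurons per step, whereas even modest rewiring creates shortcuts and coverage grows nearly exponentially, consistent with the shorter learning times of the less localized networks in Fig. 3.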

