Memory dynamics in attractor networks.

Li G, Ramanathan K, Ning N, Shi L, Wen C - Comput Intell Neurosci (2015)

Bottom Line: To retrieve a memory pattern, an initial stimulus input is presented to the network, and its states converge to one of the stable equilibrium points. Consequently, the existence of spurious points, that is, local maxima, saddle points, or other local minima that are undesired memory patterns, can be avoided. The simulation results show the effectiveness of the proposed method.

View Article: PubMed Central - PubMed

Affiliation: Centre for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing 100084, China.

ABSTRACT
Attractor networks, which can be represented by neurons and their synaptic connections, are widely believed to underlie biological memory systems and have been used extensively in recent years to model the storage and retrieval of memories. In this paper, we propose a new energy function, which is nonnegative and attains zero only at the desired memory patterns. An attractor network is designed based on the proposed energy function. It is shown that the desired memory patterns are stored as the stable equilibrium points of the attractor network. To retrieve a memory pattern, an initial stimulus input is presented to the network, and its states converge to one of the stable equilibrium points. Consequently, the existence of spurious points, that is, local maxima, saddle points, or other local minima that are undesired memory patterns, can be avoided. The simulation results show the effectiveness of the proposed method.

No MeSH data available.


fig1: Energy function for one-dimensional case.

Mentions: This corresponds to the one-dimensional case. The energy function is constructed from (2) as F(x) = (x − x1)²(x − x2)², with gradient and second derivative given by

(19) ∇F(x) = 2(x − x1)²(x − x2) + 2(x − x2)²(x − x1),
∇²F(x) = 2(x − x1)² + 2(x − x2)² + 4(x − x2)(x − x1) + 4(x − x1)(x − x2),

respectively. The dynamic system can then be designed as in (5)–(8). Solving ∇F(x*) = 0 gives x* = x1, x* = x2, or x* = (x1 + x2)/2, as analyzed in Section 3. Since

(20) ∇²F(x) evaluated at x* = (x1 + x2)/2 equals −(x2 − x1)² < 0,

x* = (x1 + x2)/2 is a local maximum of F(x), so x1 and x2 are the only two local minima of F(x). The energy function is shown in Figure 1, where "∗" denotes a local minimum and "Δ" denotes a local maximum or saddle point. As Figure 1 shows, when the initial stimulus satisfies x < (x1 + x2)/2, the states converge to x1; when x > (x1 + x2)/2, the states converge to x2. If the initial stimulus is exactly x = (x1 + x2)/2, the states can converge to either x1 or x2, depending on the random noise v1 in (7).
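The retrieval dynamics above can be sketched numerically: a minimal gradient-descent simulation on F(x) = (x − x1)²(x − x2)², using the gradient from (19) plus a small noise term standing in for v1 in (7). The pattern values x1 = −1 and x2 = 1, the step size, and the noise scale are illustrative assumptions, not values from the paper's simulations.

```python
import random

# Hypothetical pattern values chosen for illustration (not from the paper).
x1, x2 = -1.0, 1.0

def grad_F(x):
    """Gradient of F(x) = (x - x1)^2 (x - x2)^2, matching Eq. (19)."""
    return 2 * (x - x1) ** 2 * (x - x2) + 2 * (x - x2) ** 2 * (x - x1)

def retrieve(x0, eta=0.01, noise=1e-6, steps=10_000):
    """Gradient descent with a small random perturbation (cf. v1 in Eq. (7))."""
    x = x0
    for _ in range(steps):
        x -= eta * grad_F(x) + noise * random.gauss(0.0, 1.0)
    return x

# An initial stimulus below the midpoint (x1 + x2)/2 = 0 settles near x1;
# one above the midpoint settles near x2.
print(retrieve(-0.2))
print(retrieve(0.7))
```

Starting exactly at the midpoint, the noise term breaks the tie, so repeated runs land on either minimum, mirroring the behavior described for x = (x1 + x2)/2.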

