Memory dynamics in attractor networks.

Li G, Ramanathan K, Ning N, Shi L, Wen C - Comput Intell Neurosci (2015)

Bottom Line: To retrieve a memory pattern, an initial stimulus input is presented to the network, and its states converge to one of the stable equilibrium points. Consequently, spurious points, that is, local maxima, saddle points, or other local minima that correspond to undesired memory patterns, can be avoided. The simulation results show the effectiveness of the proposed method.

Affiliation: Centre for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing 100084, China.

ABSTRACT
Attractor networks, which can be represented by neurons and their synaptic connections, are widely believed to underlie biological memory systems and have been used extensively in recent years to model the storage and retrieval of memory. In this paper, we propose a new energy function, which is nonnegative and attains zero only at the desired memory patterns. An attractor network is designed based on the proposed energy function. It is shown that the desired memory patterns are stored as the stable equilibrium points of the attractor network. To retrieve a memory pattern, an initial stimulus input is presented to the network, and its states converge to one of the stable equilibrium points. Consequently, spurious points, that is, local maxima, saddle points, or other local minima that correspond to undesired memory patterns, can be avoided. The simulation results show the effectiveness of the proposed method.
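
To make the retrieval process concrete, a minimal sketch follows: a nonnegative energy that vanishes only at the stored patterns, with retrieval performed by gradient descent from an initial stimulus. The product-of-squared-distances energy, the pattern values in memories, the step size, and the function names are illustrative assumptions, not the paper's construction in (5)-(8).

    import numpy as np

    # Illustrative sketch only: the product-of-squared-distances energy, the
    # stored patterns, and the step size below are assumptions made for
    # demonstration, not the paper's exact construction in (5)-(8).
    memories = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, -1.0]])  # hypothetical patterns x1, x2, x3

    def energy(x):
        # E(x) = prod_i ||x - x_i||^2 is nonnegative and vanishes only at the stored patterns
        return np.prod([np.sum((x - m) ** 2) for m in memories])

    def grad_energy(x, eps=1e-6):
        # central-difference gradient; adequate for a small demonstration
        g = np.zeros_like(x)
        for k in range(x.size):
            d = np.zeros_like(x)
            d[k] = eps
            g[k] = (energy(x + d) - energy(x - d)) / (2 * eps)
        return g

    def retrieve(stimulus, lr=0.01, steps=5000):
        # present an initial stimulus and follow the gradient flow until the
        # state settles at one of the stable equilibrium points (a stored memory)
        x = np.asarray(stimulus, dtype=float)
        for _ in range(steps):
            x = x - lr * grad_energy(x)
        return x

    print(retrieve([0.8, 1.2]))  # settles near the closest stored pattern, here approximately [1, 1]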



fig3: The contours of the energy function for Example 3.

Mentions: We know that (∇F(x∗))ᵀ∇F(x∗) = 0 gives x∗ = x1, x∗ = x2, x∗ = x3, or G(x∗) = 0. But G(x∗) = 0 implies that tr ∇²F(x∗) = 0 (equation (27)). Thus, x∗ will be a saddle point of F(x), but not of the attractor network in (5)–(8), if ∇F(x∗) = 0 while x∗ differs from x1,…, xn. By Theorem 9, the attractor network in (5)–(8) does not have any saddle points; a saddle point of F(x) is usually located between two local minimum points. As seen from Figure 3, two saddle points of F(x) lie between the three local minimum points in the plane: one is located at about (−1.05, 0.22) and the other at about (0.58, 0.31).
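
The stationary-point analysis above can be illustrated numerically: solve ∇F(x∗) = 0 from seeds placed between the minima and classify each solution by the eigenvalues of the Hessian ∇²F(x∗). The sketch below uses the same illustrative stand-in energy as before rather than the paper's F, so the trace identity in (27) need not hold for it, and the classification relies on eigenvalue signs instead.

    import numpy as np
    from scipy.optimize import fsolve

    # Sketch of the stationary-point classification discussed above, using an
    # illustrative stand-in for F(x) (not the paper's exact energy): find points
    # where the gradient vanishes, then inspect the Hessian's eigenvalues.
    memories = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, -1.0]])  # hypothetical minima x1, x2, x3

    def F(x):
        return np.prod([np.sum((x - m) ** 2) for m in memories])

    def grad(x, eps=1e-5):
        steps = np.eye(2) * eps
        return np.array([(F(x + d) - F(x - d)) / (2 * eps) for d in steps])

    def hessian(x, eps=1e-4):
        steps = np.eye(2) * eps
        return np.array([(grad(x + d) - grad(x - d)) / (2 * eps) for d in steps])

    # seeds placed roughly between pairs of minima, where saddle points of F tend to sit
    for seed in [(0.0, 1.0), (0.5, 0.0), (-0.5, 0.0)]:
        x_star = fsolve(grad, seed)
        eig = np.linalg.eigvalsh(hessian(x_star))
        if np.all(eig > 0):
            kind = "local minimum"
        elif np.all(eig < 0):
            kind = "local maximum"
        else:
            kind = "saddle point"
        print(f"stationary point at {np.round(x_star, 3)}: {kind}, trace of Hessian = {eig.sum():.3f}")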

