Analysis of synaptic scaling in combination with Hebbian plasticity in several simple networks.

Tetzlaff C, Kolodziejski C, Timme M, Wörgötter F - Front Comput Neurosci (2012)

Bottom Line: Here we analyze the development of synaptic connections and the resulting activity patterns in different feed-forward and recurrent neural networks, with plasticity and scaling. These findings provide a better understanding of the dynamics of generic network structures where plasticity is combined with scaling. This also makes it possible to use this rule for constructing an artificial network with certain desired storage properties.

View Article: PubMed Central - PubMed

Affiliation: III. Institute of Physics - Biophysics, Georg-August-Universität Göttingen, Göttingen, Germany.

ABSTRACT
Conventional synaptic plasticity in combination with synaptic scaling is a biologically plausible plasticity rule that guides the development of synapses toward stability. Here we analyze the development of synaptic connections and the resulting activity patterns in different feed-forward and recurrent neural networks, with plasticity and scaling. We show under which constraints an external input given to a feed-forward network forms an input trace similar to a cell assembly (Hebb, 1949) by enhancing synaptic weights to larger stable values as compared to the rest of the network. For instance, a weak input creates a less strong representation in the network than a strong input, which produces a trace along large parts of the network. These processes are strongly influenced by the underlying connectivity. For example, when embedding recurrent structures (excitatory rings, etc.) into a feed-forward network, the input trace is extended into more distant layers, while inhibition shortens it. These findings provide a better understanding of the dynamics of generic network structures where plasticity is combined with scaling. This also makes it possible to use this rule for constructing an artificial network with certain desired storage properties.
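The combination described in the abstract — a Hebbian (correlation-driven) term plus a scaling term that pushes postsynaptic activity toward a target rate — can be sketched as a single weight update. This is a generic illustrative form, not the paper's exact SPaSS rule; the parameter names mu, gamma, and v_T are assumptions for illustration.

```python
def hebb_with_scaling(w, u, v, mu=0.01, gamma=0.001, v_T=0.01):
    """One illustrative update step for a single synapse.

    w    : current synaptic weight
    u, v : pre- and postsynaptic activity
    mu   : Hebbian learning rate (assumed name)
    gamma: scaling rate (assumed name)
    v_T  : target postsynaptic activity (cf. vT in the figure caption)

    The Hebbian term mu * u * v grows the weight when pre- and
    postsynaptic activity coincide; the scaling term gamma * (v_T - v) * w
    multiplicatively shrinks the weight when activity exceeds the target
    and grows it when activity falls below, stabilizing the synapse.
    """
    dw = mu * u * v + gamma * (v_T - v) * w
    return w + dw
```

With this form, a synapse whose postsynaptic neuron is overactive (v > v_T) is scaled down even without presynaptic drive, while an underactive one is scaled up — the stabilizing behavior the abstract attributes to scaling.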

No MeSH data available.



Figure 5: Signals travel along layers in a pure feed-forward network, depending on the plasticity mechanism used. (A,B) Spatial smearing of a signal is analyzed by presenting one spatially restricted input to a small group of neurons in the feed-forward network. Constant weights ωc lead to smearing (compare the activity profile of a given layer, in red, with the footprint of the original signal, shown by the dashed lines). The SPaSS rule produces much less smearing, and the signal propagates along the layers while mostly remaining inside the originally stimulated region until it vanishes. (C,D) This focusing effect of the SPaSS rule increases the resolution for distinguishing two spatially distant signals in higher (e.g., blue bars of layer 1) and deeper layers (e.g., layer 3). Parameters: ωc = 0.2, κ = 2, vT = 10−2, ℑ = 0.1, ℑbackground = 10−4, N = 20, M = 10; the first six layers are shown.

Mentions: We therefore test the capability of a feed-forward network (Figure 2C) for signal representation and transport. The synapses of the network are modified by the SPaSS rule; constant weights serve as the control (mimicking, for instance, weight hard-bounds). In the first of our two scenarios we present a spatially restricted signal (transport signal) to the first layer and then measure its dispersion along the layers (Figures 5A,B). In the second scenario we inject two adjacent signals with a small spatial gap between them (resolution signal; Figures 5C,D). Both scenarios let us analyze the network's capability to differentiate its inputs after a certain number of layers. Unlike the previously analyzed network, this feed-forward network is constructed as follows: each neuron projects to the neuron at the same position and its two neighbors in the next layer, and in turn receives synapses from the neuron at the same position and its two neighbors in the previous layer.
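The layered connectivity described above (each neuron feeding the same-position neuron and its two nearest neighbors in the next layer) can be sketched as a list of inter-layer weight matrices. This is a minimal sketch, assuming dense matrix storage and no wrap-around at the layer edges; only N, M, and ωc are taken from the figure caption, everything else is an illustrative assumption.

```python
import numpy as np

def feedforward_weights(n_layers, n_per_layer, w_c=0.2):
    """Build inter-layer weight matrices for the layered network
    sketched in the text.

    Returns a list of (n_per_layer x n_per_layer) matrices, one per
    pair of consecutive layers. Entry W[j, i] is the weight from
    neuron i in layer l to neuron j in layer l + 1; it is nonzero
    only for j in {i - 1, i, i + 1} (same position and the two
    nearest neighbors), with no wrap-around at the edges.
    """
    masks = []
    for _ in range(n_layers - 1):
        W = np.zeros((n_per_layer, n_per_layer))
        for i in range(n_per_layer):
            for j in (i - 1, i, i + 1):
                if 0 <= j < n_per_layer:
                    W[j, i] = w_c  # constant-weight control case
        masks.append(W)
    return masks

# Parameters from the figure caption: M = 10 layers, N = 20 neurons
# per layer, w_c = 0.2; under the SPaSS rule these entries would be
# plastic rather than fixed at w_c.
weights = feedforward_weights(n_layers=10, n_per_layer=20, w_c=0.2)
```

Propagating activity one layer is then `v_next = weights[l] @ v`, which makes the smearing in Figure 5 visible: each step spreads a localized signal by one neuron on either side, so constant weights disperse it layer by layer.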

