A Unified Framework for Reservoir Computing and Extreme Learning Machines based on a Single Time-delayed Neuron.

Ortín S, Soriano MC, Pesquera L, Brunner D, San-Martín D, Fischer I, Mirasso CR, Gutiérrez JM - Sci Rep (2015)

Bottom Line: The reservoir is built within the delay-line, employing a number of "virtual" neurons. One key advantage of this approach is that it can be implemented efficiently in hardware. We show that the reservoir computing implementation, in this case optoelectronic, is also capable of realizing extreme learning machines, demonstrating the unified framework for both schemes in software as well as in hardware.


Affiliation: Instituto de Física de Cantabria, CSIC-Universidad de Cantabria, E-39005 Santander, Spain.

ABSTRACT
In this paper we present a unified framework for extreme learning machines and reservoir computing (echo state networks), which can be physically implemented using a single nonlinear neuron subject to delayed feedback. The reservoir is built within the delay-line, employing a number of "virtual" neurons. These virtual neurons receive random projections from the input layer containing the information to be processed. One key advantage of this approach is that it can be implemented efficiently in hardware. We show that the reservoir computing implementation, in this case optoelectronic, is also capable of realizing extreme learning machines, demonstrating the unified framework for both schemes in software as well as in hardware.
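To make the time-multiplexing idea concrete, the following is a minimal software sketch of a delay-based reservoir with virtual nodes. It is not the paper's optoelectronic setup: the tanh nonlinearity, the discrete-time update, and all parameter values below are illustrative assumptions.

```python
# Minimal sketch of a delay-based reservoir with time-multiplexed virtual nodes.
# Assumptions (not from the paper): tanh nonlinearity, discrete-time update,
# and the hypothetical parameter values below.
import numpy as np

rng = np.random.default_rng(0)

N_virtual = 50          # number of virtual nodes along the delay line
d = 3                   # input dimension
gamma = 0.5             # input scaling
eta = 0.4               # feedback strength

# Random input mask: projects each d-dimensional input onto the virtual nodes.
W_in = rng.uniform(-1.0, 1.0, size=(N_virtual, d))

def reservoir_states(inputs):
    """Drive a single nonlinear node with delayed feedback and collect,
    for each input sample, the states of all N_virtual virtual nodes."""
    delay_line = np.zeros(N_virtual)              # states from one delay period ago
    states = np.zeros((len(inputs), N_virtual))
    for t, u in enumerate(inputs):
        masked = W_in @ u                         # time-multiplexed random projection
        for k in range(N_virtual):
            # Each virtual node is updated from its value one delay ago
            # plus its masked input, passed through the nonlinearity.
            delay_line[k] = np.tanh(eta * delay_line[k] + gamma * masked[k])
        states[t] = delay_line
    return states

# Example: 200 samples of a 3-dimensional random input signal.
X = rng.standard_normal((200, d))
S = reservoir_states(X)
print(S.shape)  # (200, 50)
```

In this simplified sketch, setting eta = 0 removes the delayed feedback, so each sample only sees its own random projection; this memoryless limit corresponds to the ELM-like mode of the same setup.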

No MeSH data available.


f1: Schematic illustration of different types of random-projection machines: (a) original ESNs, with sparse inter-neuron connectivity, (b) ESNs with simple chain connectivity, and (c) ELMs, with no connectivity. Panels (d,e) are equivalent to (b,c), respectively, but consider a single neuron with delay as the nonlinear processor. The virtual neurons (virtual nodes), circles with dashed lines in (d,e), are addressed via time-multiplexing and form a reservoir. The dashed arrow in (e) indicates that the virtual neurons are time-multiplexed versions of the single neuron with delay but are not coupled to each other through the feedback loop. Weights trained during the learning process are indicated by black arrows, whereas predefined (or random) weights are depicted with gray or red arrows.
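As a purely illustrative aside (not from the paper), the three connectivity patterns in panels (a)-(c) can be written down as reservoir weight matrices; the matrix size, sparsity level, and values below are arbitrary choices.

```python
# Illustrative reservoir connectivity matrices for panels (a)-(c).
import numpy as np

rng = np.random.default_rng(1)
N = 6  # number of (virtual) neurons, kept small for display

# (a) original ESN: sparse random inter-neuron connectivity
sparsity = 0.2
W_sparse = rng.uniform(-1, 1, (N, N)) * (rng.random((N, N)) < sparsity)

# (b) ESN with simple chain connectivity: neuron k only receives from neuron k-1
W_chain = np.diag(np.full(N - 1, 0.5), k=-1)

# (c) ELM: no inter-neuron connectivity at all
W_elm = np.zeros((N, N))
```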

Mentions: This general form is schematically represented in Fig. 1(a). Although it is not explicitly stated in the figure, the d-dimensional input x is augmented with an additional constant neuron accounting for the bias term. Learning from data is efficiently achieved through the random-projection "trick", since the only weights to be trained in this approach are those corresponding to the reservoir-output connections, Wout (shown in black in the figure).
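A minimal sketch of this training step, assuming the reservoir (virtual-node) states have been collected into a matrix S and the targets into y; ridge regression is one common way to fit Wout, and the variable names and regularization value here are illustrative:

```python
# Sketch of the readout training described above: only the reservoir-to-output
# weights W_out are learned, here via ridge (Tikhonov-regularized) regression.
import numpy as np

def train_readout(S, y, ridge=1e-6):
    """Solve for W_out in y ≈ [S, 1] @ W_out."""
    S_aug = np.hstack([S, np.ones((S.shape[0], 1))])   # constant column = bias term
    A = S_aug.T @ S_aug + ridge * np.eye(S_aug.shape[1])
    return np.linalg.solve(A, S_aug.T @ y)

def predict(S, W_out):
    S_aug = np.hstack([S, np.ones((S.shape[0], 1))])
    return S_aug @ W_out
```

Because only this linear readout is trained, learning reduces to a single least-squares problem, regardless of whether the states come from an ESN, an ELM, or the delay-based implementation.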

