A Unified Framework for Reservoir Computing and Extreme Learning Machines based on a Single Time-delayed Neuron.

Ortín S, Soriano MC, Pesquera L, Brunner D, San-Martín D, Fischer I, Mirasso CR, Gutiérrez JM - Sci Rep (2015)

Bottom Line: The reservoir is built within the delay-line, employing a number of "virtual" neurons. One key advantage of this approach is that it can be implemented efficiently in hardware. We show that the reservoir computing implementation, in this case optoelectronic, is also capable of realizing extreme learning machines, demonstrating the unified framework for both schemes in software as well as in hardware.


Affiliation: Instituto de Física de Cantabria, CSIC-Universidad de Cantabria, E-39005 Santander, Spain.

ABSTRACT
In this paper we present a unified framework for extreme learning machines and reservoir computing (echo state networks), which can be physically implemented using a single nonlinear neuron subject to delayed feedback. The reservoir is built within the delay-line, employing a number of "virtual" neurons. These virtual neurons receive random projections from the input layer containing the information to be processed. One key advantage of this approach is that it can be implemented efficiently in hardware. We show that the reservoir computing implementation, in this case optoelectronic, is also capable of realizing extreme learning machines, demonstrating the unified framework for both schemes in software as well as in hardware.
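The architecture described in the abstract lends itself to a compact software emulation. The following NumPy sketch is illustrative only: the sin² nonlinearity, the way the parameters κ and ϕ enter the update, and the function names delay_reservoir and train_readout are assumptions modelled on common optoelectronic delay-based reservoirs, not the paper's exact model (5). Setting β = 0 yields the feedforward (ELM-like) case, while β = 1 feeds the previous delay cycle back into the virtual neurons (ESN-like case).

```python
import numpy as np

def delay_reservoir(u, D=1500, gamma=1.0, beta=0.0, kappa=0.9, phi=0.07 * np.pi, seed=0):
    """Single-neuron delay-line reservoir with D virtual nodes.

    beta = 0 gives the feedforward (ELM-like) case; beta = 1 feeds the
    previous delay cycle back in (ESN-like case).  The sin^2 nonlinearity
    and the placement of kappa and phi are assumptions, not the paper's
    exact Eq. (5).
    """
    rng = np.random.default_rng(seed)
    u = np.asarray(u, dtype=float)
    if u.ndim == 1:
        u = u[:, None]                          # single input series -> one column
    T, d = u.shape
    W_in = rng.uniform(-1.0, 1.0, size=(D, d))  # random input mask / projection
    X = np.zeros((T, D))                        # virtual-node states, one row per step
    x_prev = np.zeros(D)
    for n in range(T):
        drive = gamma * (W_in @ u[n])           # scaled, randomly projected input
        X[n] = np.sin(drive + kappa * beta * x_prev + phi) ** 2
        x_prev = X[n]
    return X

def train_readout(X, y, ridge=1e-8):
    """Linear readout via ridge regression; only the output weights are trained."""
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)
```

In this picture the random input mask plays the role of the random projections onto the virtual neurons, and only the linear readout weights are trained, as in both ELMs and echo state networks.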



Figure 4: Normalized error (in logarithmic scale) of the Mackey-Glass time-series prediction for: (top) ELM and ESN machines with D = 1500 neurons and different input scaling values, from 0.1 to 5; (bottom) ELM and ESN models with input scalings 1 and 0.3, respectively. Results for noisy output are indicated by dashed lines.

Mentions: To check the sensitivity of the ELM results to the number of inputs, we consider three configurations, d = 3, 6 and 12, corresponding to insufficient, sufficient and excessive input information, respectively. Since model (5) depends on two tunable parameters, γ and β, which control the scaling of the input and feedback components, respectively, we start by analyzing their influence on the results. Note that in the standard implementation of ELMs, β = 0 and γ = 1. To this end, we fixed κ = 0.9, ϕ = 0.07π, D = 1500, and β = 0 (ELM) or β = 1 (ESN), and computed the validation errors for models with different input scaling values, with γ ranging from 0.1 to 5. Figure 4(a) shows the normalized validation errors, in logarithmic scale, for the three ELM configurations (with 3, 6 and 12 delayed inputs) and an ESN (a single input) as a function of the input scaling. Results obtained by adding system noise and quantization noise (7-bit resolution) to the reservoir values are indicated by dashed lines. The figure shows that the ESN results are much more sensitive to the input scaling than the ELM results, with optimum performance at smaller input scaling values. This can be explained by the amount of memory required by the ESN to solve the Mackey-Glass prediction task, which is a function of the input scaling, with larger memories corresponding to smaller input scalings [9]. However, the optimal parameter sets for the Mackey-Glass task and the memory capacity (MC) are not the same, because the Mackey-Glass prediction task also requires nonlinear computation capacity. It is worth noting that, since noise degrades the MC, in the presence of a large amount of noise the MC for the optimal phase can be close to the amount of memory required by the ESN to solve the Mackey-Glass task. In this situation, the optimal phases for the Mackey-Glass and memory capacity tasks will be close to each other.
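To make the input-scaling sweep above concrete, the sketch below generates a Mackey-Glass series with the standard benchmark parameters (τ = 17, a = 0.2, b = 0.1, exponent 10) and evaluates the ELM-style configuration (β = 0) from the earlier sketch for several values of γ, reusing the hypothetical delay_reservoir and train_readout functions. The one-step prediction horizon, the train/validation split, the choice d = 6, and the ridge regularization are illustrative assumptions not specified in this excerpt, and the system/quantization noise of the dashed curves is not modelled.

```python
import numpy as np

def mackey_glass(T, tau=17, a=0.2, b=0.1, n=10, dt=1.0, x0=1.2, discard=500):
    """Euler-integrated Mackey-Glass series with standard benchmark parameters."""
    hist = int(tau / dt)
    x = np.full(T + discard + hist, x0)
    for t in range(hist, len(x) - 1):
        x_tau = x[t - hist]
        x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau ** n) - b * x[t])
    return x[discard + hist:]

def nmse(y_true, y_pred):
    """Normalized mean squared error (MSE divided by the target variance)."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

# One-step-ahead prediction from d delayed inputs (ELM case, beta = 0).
series = mackey_glass(4000)
d = 6                                                    # "sufficient" input information
U = np.stack([series[i:i + 3000] for i in range(d)], axis=1)
y = series[d:d + 3000]                                   # target: next value of the series

for gamma in [0.1, 0.3, 1.0, 2.0, 5.0]:
    X = delay_reservoir(U, D=1500, gamma=gamma, beta=0.0)
    W_out = train_readout(X[:2000], y[:2000])            # first 2000 samples for training
    err = nmse(y[2000:], X[2000:] @ W_out)               # remaining 1000 for validation
    print(f"gamma = {gamma:4.1f}   validation NMSE = {err:.3e}")
```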

