A novel single neuron perceptron with universal approximation and XOR computation properties.

Lotfi E, Akbarzadeh-T MR - Comput Intell Neurosci (2014)

Bottom Line: The resulting architecture of SNP can be trained by supervised excitatory and inhibitory online learning rules. The method is tested on 6 UCI (University of California, Irvine) pattern recognition and classification datasets. Hence, we believe the proposed approach is generally applicable to various problems such as pattern recognition and classification.


Affiliation: Department of Computer Engineering, Torbat-e-Jam Branch, Islamic Azad University, Torbat-e-Jam, Iran.

ABSTRACT
We propose a biologically motivated, brain-inspired single neuron perceptron (SNP) with universal approximation and XOR computation properties. This computational model extends the input pattern and is based on excitatory and inhibitory learning rules inspired by neural connections in the human brain's nervous system. The resulting architecture of SNP can be trained by supervised excitatory and inhibitory online learning rules. The main features of the proposed single-layer perceptron are its universal approximation property and low computational complexity. The method is tested on 6 UCI (University of California, Irvine) pattern recognition and classification datasets. Various comparisons with a multilayer perceptron (MLP) trained with the gradient descent backpropagation (GDBP) learning algorithm indicate the superiority of the approach in terms of higher accuracy, lower time and space complexity, and faster training. Hence, we believe the proposed approach is generally applicable to various problems such as pattern recognition and classification.

fig1: Proposed SNP.

Mentions: Figure 1 shows the proposed SNP. In the figure, the model is presented as an (n + 1)-input, single-output architecture. The variable p is the input pattern and the variable T is the related target applied in the learning process (3). Let us extend the input pattern as follows:

\[ p_{n+1} = \max_{j=1,\ldots,n} (p_j). \tag{1} \]
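As a quick, illustrative sketch (not code from the paper), the Python/NumPy snippet below applies the input extension of Eq. (1) and checks that the extended XOR patterns become linearly separable for a single thresholded neuron. The function name extend_pattern and the weights are hand-picked for illustration; they are not produced by the paper's excitatory/inhibitory learning rules.

    import numpy as np

    def extend_pattern(p):
        """Append p_{n+1} = max_j(p_j) to an n-dimensional input pattern, as in Eq. (1)."""
        p = np.asarray(p, dtype=float)
        return np.append(p, p.max())

    # The four XOR input patterns, extended, and their targets.
    X = [extend_pattern(p) for p in ([0, 0], [0, 1], [1, 0], [1, 1])]
    T = [0, 1, 1, 0]

    # In the extended 3-D space the two classes are linearly separable; these
    # hand-picked weights let one hard-threshold neuron compute XOR exactly.
    w, b = np.array([-1.0, -1.0, 2.0]), -0.5
    for x, t in zip(X, T):
        y = int(w @ x + b > 0)  # hard-threshold activation
        print(x, "target:", t, "output:", y)

Running the snippet prints matching target/output pairs for all four patterns, illustrating how the max-based input extension can give a single neuron the XOR computation property claimed in the abstract.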

