Asymmetric variate generation via a parameterless dual neural learning algorithm.

Fiori S - Comput Intell Neurosci (2008)

Bottom Line: In a previous work (S. Fiori, 2006), we proposed a random number generator based on a tunable non-linear neural system, whose learning rule is designed on the basis of a cardinal equation from statistics and whose implementation is based on look-up tables (LUTs). The new method proposed here proves easier to implement and relaxes some previous limitations.

View Article: PubMed Central - PubMed

Affiliation: Dipartimento di Elettronica, Intelligenza Artificiale e Telecomunicazioni (DEIT), Università Politecnica delle Marche, Via Brecce Bianche, Ancona I-60131, Italy. fiori@deit.univpm.it

ABSTRACT
In a previous work (S. Fiori, 2006), we proposed a random number generator based on a tunable non-linear neural system, whose learning rule is designed on the basis of a cardinal equation from statistics and whose implementation is based on look-up tables (LUTs). The aim of the present manuscript is to improve the above-mentioned random number generation method by changing the learning principle, while retaining the efficient LUT-based implementation. The new method proposed here proves easier to implement and relaxes some previous limitations.

No MeSH data available.



Figure 2: Behavior of the “cumsum” operator for look-up tables.

Mentions: In order to describe the numerical learning algorithm, the following operators are defined for a generic look-up table (h, y) ∈ ℝ^(N+1) × ℝ^(N+1):

\[
\mathrm{cumsum}(\mathbf{h})_0 = 0, \qquad
\mathrm{cumsum}(\mathbf{h})_k \stackrel{\mathrm{def}}{=} \sum_{i=0}^{k-1} h_i\,\Delta y, \qquad
\mathrm{affscale}\{\mathbf{h}; a, b\}_k \stackrel{\mathrm{def}}{=} a + \frac{(h_k - \min\{\mathbf{h}\})(b - a)}{\max\{\mathbf{h}\} - \min\{\mathbf{h}\}},
\tag{6}
\]

where the subscript k denotes the kth entry of the vectors cumsum(h) and affscale{h; a, b}. The behavior of the “cumsum” operator is illustrated in Figure 2, which also provides a visual representation of look-up tables. In practice, the considered numerical version of the learning rule (4) reads

\[
\begin{aligned}
(\mathrm{A0})\quad & \mathbf{g}^{0} := \mathbf{0},\\
(\mathrm{A1})\quad & \mathbf{g}'^{\,n+1} := \mathbf{p}_y \,/\, p_x(\mathbf{g}^{n}), \qquad n \geq 0,\\
(\mathrm{A2})\quad & \mathbf{g}^{n+1} := \mathrm{cumsum}\{\mathbf{g}'^{\,n+1}\},\\
(\mathrm{A3})\quad & \mathbf{g}^{n+1} := \mathrm{affscale}\{\mathbf{g}^{n+1}; -R_x, R_x\},
\end{aligned}
\tag{7}
\]

where the symbol := denotes vector-valued assignment, the ratio in (A1) is taken entrywise, and p_y denotes the vector of N + 1 entries containing the values of p_y(·) corresponding to the values in y; its entries may be denoted by p_(y,k), with k ∈ {0, 1, …, N}.
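The iteration (A0)–(A3) acts on plain vectors, so it can be sketched directly in NumPy. The snippet below is a minimal illustration under stated assumptions, not the paper's reference implementation: the names cumsum_lut, affscale, learn_lut, and n_iter are chosen here for clarity, the ratio in (A1) is assumed to be entrywise, a fixed number of sweeps stands in for whatever stopping rule the full method uses, and the uniform source / truncated-exponential target at the end are arbitrary choices made only to exercise the code.

```python
import numpy as np

def cumsum_lut(h, dy):
    """Discrete 'cumsum' operator of Eq. (6): out[0] = 0, out[k] = sum_{i<k} h[i] * dy."""
    out = np.zeros_like(h)
    out[1:] = np.cumsum(h[:-1]) * dy
    return out

def affscale(h, a, b):
    """Affine rescaling of Eq. (6): maps min(h) to a and max(h) to b."""
    return a + (h - h.min()) * (b - a) / (h.max() - h.min())

def learn_lut(py, p_x, y, R_x, n_iter=50):
    """LUT learning iteration (A0)-(A3) of Eq. (7) -- illustrative sketch.

    py     : array of N+1 target-density values p_y(y_k) on the grid y
    p_x    : callable returning the source density evaluated at the LUT values
    y      : equally spaced grid of N+1 points
    R_x    : half-width of the source support; the LUT is rescaled to [-R_x, R_x]
    n_iter : number of sweeps (assumed here in place of a convergence test)
    """
    dy = y[1] - y[0]
    g = np.zeros_like(y)                  # (A0) g^0 := 0
    for _ in range(n_iter):
        g_prime = py / p_x(g)             # (A1) entrywise ratio p_y / p_x(g^n)
        g = cumsum_lut(g_prime, dy)       # (A2) discrete integration over the grid
        g = affscale(g, -R_x, R_x)        # (A3) rescale the LUT into [-R_x, R_x]
    return g

# Illustrative usage: uniform source on [-R_x, R_x], asymmetric (truncated
# exponential) target density on [0, 5]; both densities are arbitrary choices.
R_x = 1.0
y = np.linspace(0.0, 5.0, 201)
py = np.exp(-y)

def p_x_uniform(g):
    return np.full_like(g, 1.0 / (2.0 * R_x))

g_table = learn_lut(py, p_x_uniform, y, R_x)
```

Note that with a uniform source, p_x(g) does not depend on g, so the table settles after a single sweep; the repeated sweeps only matter when the source density is non-uniform.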

