Distributed representations accelerate evolution of adaptive behaviours.

Stone JV - PLoS Comput. Biol. (2007)

Bottom Line: Using linear neural network models, it is proved that if skills are stored as distributed representations, then within-lifetime learning of part of a skill can induce automatic learning of the remaining parts of that skill. More importantly, it is shown that this "free-lunch" learning (FLL) is responsible for accelerated evolution of skills, when compared with networks which either 1) cannot benefit from FLL or 2) cannot learn. Specifically, it is shown that FLL accelerates the appearance of adaptive behaviour, both in its innate form and as FLL-induced behaviour, and that FLL can accelerate the rate at which learned behaviours become innate.


Affiliation: Psychology Department, Sheffield University, Sheffield, United Kingdom. j.v.stone@shef.ac.uk

ABSTRACT
Animals with rudimentary innate abilities require substantial learning to transform those abilities into useful skills, where a skill can be considered as a set of sensory-motor associations. Using linear neural network models, it is proved that if skills are stored as distributed representations, then within-lifetime learning of part of a skill can induce automatic learning of the remaining parts of that skill. More importantly, it is shown that this "free-lunch" learning (FLL) is responsible for accelerated evolution of skills, when compared with networks which either 1) cannot benefit from FLL or 2) cannot learn. Specifically, it is shown that FLL accelerates the appearance of adaptive behaviour, both in its innate form and as FLL-induced behaviour, and that FLL can accelerate the rate at which learned behaviours become innate.
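The core claim can be illustrated with a minimal linear-network sketch (a simplified stand-in, not the paper's exact simulation; all names such as `W_star` and `W0` are illustrative). A skill is modelled as a linear map `W_star`; the innate network `W0` approximates it imperfectly. Learning subset A2 exactly, via the minimum-change weight update, also reduces error on the untrained subset A1 whenever the A1 input vectors overlap the subspace spanned by the A2 inputs (free-lunch learning). When the A1 inputs are made orthogonal to the A2 inputs, the update provably leaves A1 performance unchanged, mirroring the paper's NoFLL control:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 20, 10, 10  # input dim, output dim, associations per subset

W_star = rng.standard_normal((m, d))              # the "true" skill mapping
W0 = W_star + 0.5 * rng.standard_normal((m, d))   # imperfect innate weights

X2 = rng.standard_normal((d, k))                  # inputs of learned subset A2
Y2 = W_star @ X2                                  # correct outputs for A2

# FLL condition: A1 inputs drawn freely, so they overlap span(X2).
X1_fll = rng.standard_normal((d, k))
# NoFLL condition: A1 inputs projected orthogonal to every A2 input.
Q, _ = np.linalg.qr(X2)                           # orthonormal basis of span(X2)
X1_nofll = X1_fll - Q @ (Q.T @ X1_fll)

def err(W, X):
    """Performance error: distance between network output and correct output."""
    return np.linalg.norm(W_star @ X - W @ X)

# Minimum-change weight update that learns A2 exactly (distributed storage:
# the correction is spread across all weights via the pseudoinverse).
W_post = W0 + (Y2 - W0 @ X2) @ np.linalg.pinv(X2)

print("FLL  : pre", err(W0, X1_fll),   "post", err(W_post, X1_fll))
print("NoFLL: pre", err(W0, X1_nofll), "post", err(W_post, X1_nofll))
```

Because the weight correction lies entirely in the row space of `X2`, it cannot affect responses to inputs orthogonal to `X2`, which is why the NoFLL errors match exactly while the FLL error typically drops.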


Prevalence and Amount of Free-Lunch Learning. The lines in each graph correspond to the conditions: FLL = solid line, NoFLL = dashed line. Performance error on A1 was tested after learning A2 in conditions FLL and NoFLL. Each plotted line is the mean of ten computer simulation runs (see Figure 4 for details).

(A) Prevalence of FLL: the proportion of networks which showed improved performance on A1 after learning A2. In condition FLL, the prevalence of FLL was non-zero in the first generation, as expected (see text), and increased across subsequent generations. In condition NoFLL, the prevalence remained at zero, as expected.

(B) Amount of FLL: in condition FLL, the amount of FLL increased dramatically over the first 30 generations. The absence of FLL in condition NoFLL is as expected (see text). The amount of FLL, defined for this graph only, is the difference in fitness error on A1 before and after learning A2, expressed as a proportion of the error on A1 before learning A2; equivalently, [Epre(A1) − Epost(A1)]/Epre(A1).
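The two quantities plotted in the figure follow directly from the pre- and post-learning errors; a minimal sketch (function names are illustrative, not from the paper):

```python
def fll_amount(e_pre, e_post):
    """Amount of FLL: proportional reduction in error on A1 after
    learning A2, i.e. [Epre(A1) - Epost(A1)] / Epre(A1)."""
    return (e_pre - e_post) / e_pre

def fll_prevalence(error_pairs):
    """Prevalence of FLL: fraction of networks whose A1 error
    improved after learning A2. error_pairs is a list of
    (e_pre, e_post) tuples, one per network."""
    improved = sum(1 for e_pre, e_post in error_pairs if e_post < e_pre)
    return improved / len(error_pairs)

# A network whose A1 error halves shows an FLL amount of 0.5;
# a population where one of two networks improves has prevalence 0.5.
print(fll_amount(2.0, 1.0), fll_prevalence([(2.0, 1.0), (1.0, 1.5)]))
```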

pcbi-0030147-g006: Prevalence and Amount of Free-Lunch Learning (full caption above).

Mentions:
Condition FLL (solid line): mean performance error Epost(A1) on the ten associations in A1 after learning the ten associations in subset A2. Learning A2 had a beneficial effect on performance on A1 over 100 generations, corresponding to an increase in the amount and prevalence of FLL (see Figure 6).
Condition NoFLL (dashed line): mean performance error Epost(A1) was evaluated as in condition FLL, except that the input vectors in A1 were orthogonal to those in A2, so that learning A2 could not have any effect on performance on A1 (see text).
Condition NoLearn (dotted line): mean performance error Epre on A1 was evaluated "at birth" (i.e., no within-lifetime learning was allowed).

