A Unifying Probabilistic View of Associative Learning.

Gershman SJ - PLoS Comput. Biol. (2015)



Affiliation: Department of Psychology and Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America.

ABSTRACT
Two important ideas about associative learning have emerged in recent decades: (1) Animals are Bayesian learners, tracking their uncertainty about associations; and (2) animals acquire long-term reward predictions through reinforcement learning. Both of these ideas are normative, in the sense that they are derived from rational design principles. They are also descriptive, capturing a wide range of empirical phenomena that troubled earlier theories. This article describes a unifying framework encompassing Bayesian and reinforcement learning theories of associative learning. Each perspective captures a different aspect of associative learning, and their synthesis offers insight into phenomena that neither perspective can explain on its own.



pcbi.1004567.g002: Kalman filter simulation of latent inhibition. (A) Reward expectation following pre-exposure (Pre) and no pre-exposure (No-Pre) conditions. (B) The Kalman gain as a function of pre-exposure trial.

Mentions: One implication of the Kalman filter is that repeated CS presentations will attenuate posterior uncertainty and therefore reduce the Kalman gain. As illustrated in Fig 2, this reduction in gain produces latent inhibition, capturing the intuition that CS pre-exposure reduces “attention” (associability or learning rate). The Kalman filter can also explain why interposing an interval between pre-exposure and conditioning attenuates latent inhibition [40]: The posterior variance grows over the interval (due to random diffusion of the weights), increasing the Kalman gain. Thus, the Kalman filter can model some changes in learning that occur in the absence of prediction error, unlike the Rescorla-Wagner model.
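
As a concrete illustration of the Kalman filter account described above, the following minimal Python sketch (not the paper's own code; the prior variance, observation noise, diffusion variance, and trial counts are illustrative assumptions) tracks a posterior mean and covariance over CS-US weights. Pre-exposure trials shrink the posterior variance without generating prediction errors, so the Kalman gain falls and subsequent conditioning proceeds more slowly, reproducing the qualitative pattern in Fig 2.

import numpy as np

def kalman_conditioning(stimuli, rewards, sigma_w2=1.0, sigma_r2=1.0, tau2=0.01):
    # Kalman filter over associative weights (illustrative parameter values).
    # stimuli: (T, D) array of CS feature vectors; rewards: (T,) array of US values.
    # Returns per-trial reward predictions, Kalman gains, and posterior variances.
    T, D = stimuli.shape
    w = np.zeros(D)                          # posterior mean of the weights
    S = sigma_w2 * np.eye(D)                 # posterior covariance of the weights
    preds, gains, variances = [], [], []
    for t in range(T):
        x, r = stimuli[t], rewards[t]
        S = S + tau2 * np.eye(D)             # diffusion: uncertainty grows between trials
        r_hat = x @ w                        # reward prediction
        k = S @ x / (x @ S @ x + sigma_r2)   # Kalman gain (per-cue learning rate)
        w = w + k * (r - r_hat)              # prediction-error update, scaled by the gain
        S = S - np.outer(k, x) @ S           # uncertainty shrinks after observing the trial
        preds.append(r_hat)
        gains.append(k.copy())
        variances.append(np.diag(S).copy())
    return np.array(preds), np.array(gains), np.array(variances)

# Latent inhibition: pre-exposure (CS alone, no reward) followed by CS-US pairings.
n_pre, n_cond = 20, 5
cs = np.ones((1, 1))
pre = kalman_conditioning(np.repeat(cs, n_pre + n_cond, 0),
                          np.r_[np.zeros(n_pre), np.ones(n_cond)])
no_pre = kalman_conditioning(np.repeat(cs, n_cond, 0), np.ones(n_cond))
print("Reward expectation at end of conditioning, Pre:    %.3f" % pre[0][-1])
print("Reward expectation at end of conditioning, No-Pre: %.3f" % no_pre[0][-1])
print("Kalman gain across pre-exposure trials:", np.round(pre[1][:n_pre, 0], 3))

Under these assumed parameters, the Pre condition ends conditioning with a lower reward expectation than the No-Pre condition, and the printed gains decline monotonically across pre-exposure trials, mirroring panels A and B of Fig 2.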

