Perceptual learning shapes multisensory causal inference via two distinct mechanisms.

McGovern DP, Roudaia E, Newell FN, Roach NW - Sci Rep (2016)

Bottom Line: Recent research suggests that the width of this 'temporal binding window' can be reduced through perceptual learning; however, little is known about the mechanisms underlying these experience-dependent effects. We show that training leads to two distinct effects on multisensory integration in the form of (i) a specific narrowing of the temporal binding window that does not transfer to spatial binding and (ii) a general reduction in the magnitude of crossmodal interactions across all spatiotemporal disparities. These effects arise naturally from a Bayesian model of causal inference in which learning improves the precision of audiovisual timing estimation, whilst concomitantly decreasing the prior expectation that stimuli emanate from a common source.

Affiliation: Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, College Green, Dublin 2, Ireland.

ABSTRACT
To accurately represent the environment, our brains must integrate sensory signals from a common source while segregating those from independent sources. A reasonable strategy for performing this task is to restrict integration to cues that coincide in space and time. However, because multisensory signals are subject to differential transmission and processing delays, the brain must retain a degree of tolerance for temporal discrepancies. Recent research suggests that the width of this 'temporal binding window' can be reduced through perceptual learning; however, little is known about the mechanisms underlying these experience-dependent effects. Here, in separate experiments, we measure the temporal and spatial binding windows of human participants before and after training on an audiovisual temporal discrimination task. We show that training leads to two distinct effects on multisensory integration in the form of (i) a specific narrowing of the temporal binding window that does not transfer to spatial binding and (ii) a general reduction in the magnitude of crossmodal interactions across all spatiotemporal disparities. These effects arise naturally from a Bayesian model of causal inference in which learning improves the precision of audiovisual timing estimation, whilst concomitantly decreasing the prior expectation that stimuli emanate from a common source.


Figure 5: Accounting for the dual effects of training on audiovisual integration using a Bayesian causal inference model. (A) Improvement in the precision of SOA estimates explains narrowing of the temporal binding window, but predicts a concomitant increase in the magnitude of the ventriloquist effect with near-synchronous stimuli. Model simulations are shown for three levels of σΔt on audiovisual simultaneity discrimination (left panel), the temporal tuning of the ventriloquist effect (middle panel) and the spatial tuning of the ventriloquist effect (right panel). (B) Changes in prior expectations explain the reduction in the amplitude of the ventriloquist effect. Model simulations are shown for the same three tasks, with different functions representing different levels of prior expectation about whether audiovisual stimuli relate to a common cause (pcommon). (C) Coupling reductions in both σΔt and pcommon captures training-induced changes in performance across all three tasks. Unless otherwise indicated, simulations were carried out using the following parameter set: σΔt = 80 ms, σv = 1 deg., σa = 2 deg., pcommon = 0.5.
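
The caption above describes a spatiotemporal causal inference model parameterised by σΔt, σv, σa and pcommon. As a rough illustration of how such a model produces the simulated patterns, the Python sketch below computes the predicted ventriloquist shift from noisy spatial and temporal measurements. It follows the general causal-inference recipe (compare the likelihood of the measured discrepancies under a common versus independent causes, weight by pcommon, then model-average the auditory estimate) rather than the paper's exact Methods; the independent-cause spread parameters (sigma_x_ind, sigma_t_ind) and the norm_pdf helper are illustrative assumptions.

    import numpy as np

    def norm_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def ventriloquist_shift(x_v, x_a, dt, sigma_v=1.0, sigma_a=2.0, sigma_dt=80.0,
                            p_common=0.5, sigma_x_ind=20.0, sigma_t_ind=250.0):
        """Predicted pull of the perceived auditory location toward the visual
        stimulus under a simplified spatiotemporal causal-inference observer.
        x_v, x_a: measured visual/auditory azimuths (deg); dt: measured SOA (ms).
        sigma_x_ind and sigma_t_ind describe how spread out independent causes
        are assumed to be; they are illustrative values, not taken from the paper."""
        # Common cause: spatial and temporal discrepancies reflect noise only.
        like_c1 = (norm_pdf(x_v - x_a, 0.0, np.hypot(sigma_v, sigma_a)) *
                   norm_pdf(dt, 0.0, sigma_dt))
        # Independent causes: discrepancies can also reflect genuinely different
        # source positions and times, so the likelihoods are broader.
        like_c2 = (norm_pdf(x_v - x_a, 0.0, np.hypot(np.hypot(sigma_v, sigma_a), sigma_x_ind)) *
                   norm_pdf(dt, 0.0, np.hypot(sigma_dt, sigma_t_ind)))
        p_c1 = p_common * like_c1 / (p_common * like_c1 + (1.0 - p_common) * like_c2)
        # Reliability-weighted fusion of the two spatial measurements.
        w_v = (1.0 / sigma_v**2) / (1.0 / sigma_v**2 + 1.0 / sigma_a**2)
        fused = w_v * x_v + (1.0 - w_v) * x_a
        # Model averaging: mix fused and segregated auditory estimates by p(C=1).
        return p_c1 * fused + (1.0 - p_c1) * x_a - x_a   # shift toward vision (deg)

    # Example: 5 deg spatial disparity, synchronous presentation.
    print(ventriloquist_shift(5.0, 0.0, dt=0.0),                 # defaults
          ventriloquist_shift(5.0, 0.0, dt=0.0, sigma_dt=40.0),  # tighter timing
          ventriloquist_shift(5.0, 0.0, dt=0.0, p_common=0.25))  # weaker prior

With these default values, tightening σΔt from 80 ms to 40 ms increases the predicted shift for a synchronous pair, whereas halving pcommon reduces it, mirroring the opposing patterns illustrated in panels A and B.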

Mentions: To investigate the mechanisms through which perceptual learning alters multisensory integration, we simulated the effect of manipulating key parameters in a spatiotemporal causal inference model (see Methods for details). In each case, we carried out trial-by-trial simulations of task performance, replacing participant decisions with those of the model. Given that participants trained on a task requiring the discrimination of simultaneous and asynchronous audiovisual stimuli, the most obvious mechanism for driving changes in performance is an increase in the precision of their SOA estimates (i.e. a reduction in variability). The left panel in Fig. 5A shows the effect of manipulating σΔt in the model on performance in the 2-IFC audiovisual simultaneity task. As expected, reducing σΔt produces a systematic improvement in response accuracy and a reduction in SOA discrimination thresholds consistent with those observed during training. It is worth noting that the overall effect of this improvement is a steepening of the psychometric function relating accuracy to SOA. This change is therefore sufficient to explain previous reports of a narrowing of the temporal binding window [e.g. 9], when quantified using this method.
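
For concreteness, here is a minimal trial-by-trial simulation of the 2-IFC simultaneity judgement of the kind described above, assuming a simple decision rule in which the model observer reports the interval with the smaller absolute estimated asynchrony as the synchronous one; the rule, the SOA range and the σΔt values are illustrative choices rather than the settings in the paper's Methods. Decreasing sigma_dt makes accuracy rise more steeply with SOA, i.e. the psychometric function steepens, which is the effect the text links to a narrowed temporal binding window.

    import numpy as np

    def simulate_2ifc_accuracy(soas_ms, sigma_dt, n_trials=5000, rng=None):
        """Proportion correct for a model observer in the 2-IFC simultaneity task.
        One interval is physically synchronous (SOA = 0 ms), the other carries the
        test SOA; each interval yields a noisy SOA estimate with s.d. sigma_dt, and
        the observer labels the interval with the smaller |estimate| as synchronous.
        The decision rule is an illustrative assumption, not the paper's Methods."""
        rng = np.random.default_rng(0) if rng is None else rng
        accuracy = []
        for soa in soas_ms:
            est_sync = rng.normal(0.0, sigma_dt, n_trials)    # synchronous interval
            est_async = rng.normal(soa, sigma_dt, n_trials)   # asynchronous interval
            accuracy.append(np.mean(np.abs(est_sync) < np.abs(est_async)))
        return np.array(accuracy)

    soas = np.arange(0, 401, 50)                  # test SOAs in ms
    for s in (120.0, 80.0, 40.0):                 # decreasing sigma_dt mimics learning
        print(f"sigma_dt={s:>5} ms:", np.round(simulate_2ifc_accuracy(soas, s), 2))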

