Forced fusion in multisensory heading estimation.

de Winkel KN, Katliar M, Bülthoff HH - PLoS ONE (2015)

Bottom Line: In the unisensory conditions, visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, and that sensitivity to detect discrepancies differs between people.


Affiliation: Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstraße 38, 72076 Tübingen, Germany.

ABSTRACT
It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and for stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2 s visual-only and inertial-only motion stimuli, and with combinations thereof. Discrepancies between visual and inertial heading ranging from 0° to 90° were introduced for the combined stimuli. In the unisensory conditions, visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants, the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, and that sensitivity to detect discrepancies differs between people.
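The two candidate observer models named in the abstract can be illustrated with a short numerical sketch. The snippet below is not the paper's implementation: it uses linear Gaussian approximations rather than the circular (von Mises) formulation appropriate for heading angles, and the function names, the prior probability of a common cause, and the uniform disparity range are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def forced_fusion(x_vis, x_ine, sigma_vis, sigma_ine):
    """Reliability-weighted average: the cues are always integrated,
    regardless of the discrepancy between them."""
    w_vis = sigma_ine**2 / (sigma_vis**2 + sigma_ine**2)
    return w_vis * x_vis + (1.0 - w_vis) * x_ine

def causal_inference(x_vis, x_ine, sigma_vis, sigma_ine,
                     p_common=0.5, disparity_range=180.0):
    """Model-averaged estimate: integrate only to the degree that a
    single common cause is probable given the observed discrepancy."""
    delta = x_vis - x_ine
    # Discrepancy likelihood under one cause (difference of two noisy
    # measurements of the same heading) ...
    like_c1 = norm.pdf(delta, 0.0, np.hypot(sigma_vis, sigma_ine))
    # ... and under two independent causes (uniform over the range of
    # disparities; an assumption of this sketch).
    like_c2 = 1.0 / disparity_range
    post_c1 = (p_common * like_c1) / (p_common * like_c1
                                      + (1.0 - p_common) * like_c2)
    fused = forced_fusion(x_vis, x_ine, sigma_vis, sigma_ine)
    # Fall back on the inertial cue when the causes appear independent.
    return post_c1 * fused + (1.0 - post_c1) * x_ine

# A 5 deg conflict yields a near-fused estimate under both models; at a
# 90 deg conflict only the causal-inference observer discounts vision.
for conflict in (5.0, 90.0):
    print(conflict,
          forced_fusion(conflict, 0.0, 8.0, 8.0),
          causal_inference(conflict, 0.0, 8.0, 8.0))
```

With these illustrative noise levels, the causal-inference observer all but ignores the visual cue at a 90° conflict, whereas the forced-fusion observer still reports the weighted average; five of the nine participants here behaved like the latter.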




Fig 6 (pone.0127104.g006). Individual data points and the fitted models for the inertial-only condition. The panels show the data for each of the participants separately. The abscissa represents the stimulus heading angle; the ordinate represents the difference between the reported heading and the stimulus heading. The orange line is the model mean response μ minus the stimulus heading for the range of stimuli; the shaded area represents the 95% CI. Each dot is a single data point.

Mentions: An overview of the unisensory data is presented in Figs 5 and 6 for the visual and inertial conditions, respectively. The panels of each figure show the data and the fitted unisensory models for each participant. The obtained R² statistics ranged between 0.042 and 0.780, indicating small to large improvements over a model assuming zero bias and a constant concentration κ (Supporting Information S1 Table).
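The unisensory fit described above (a heading-dependent response bias with a von Mises response distribution, compared against a zero-bias, constant-κ null model via an R²-type statistic) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact model: the sinusoidal bias term a·sin(2θ) is one simple way to express a bias towards or away from the fore-aft axis, and the Cox-Snell pseudo-R² is just one of several R²-type statistics that could yield values like those reported.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def neg_log_lik(params, stim, resp):
    """Von Mises response model: the mean response is the stimulus
    heading plus a sinusoidal bias a*sin(2*theta), with a constant
    concentration kappa. Angles in radians; kappa on a log scale."""
    a, log_kappa = params
    mu = stim + a * np.sin(2.0 * stim)
    return -np.sum(vonmises.logpdf(resp, np.exp(log_kappa), loc=mu))

def fit_unisensory(stim_deg, resp_deg):
    """Fit the bias model and a zero-bias null to one participant's
    stimulus/response headings (in degrees)."""
    stim, resp = np.radians(stim_deg), np.radians(resp_deg)
    full = minimize(neg_log_lik, x0=[0.0, 0.0], args=(stim, resp),
                    method="Nelder-Mead")
    # Null model: zero bias, constant kappa.
    null = minimize(lambda p: neg_log_lik([0.0, p[0]], stim, resp),
                    x0=[0.0], method="Nelder-Mead")
    # Cox-Snell pseudo-R^2 comparing the two fits (an assumption; the
    # paper's R^2 statistic may be defined differently).
    r2 = 1.0 - np.exp((2.0 / len(resp)) * (full.fun - null.fun))
    return full.x[0], np.exp(full.x[1]), r2
```

`fit_unisensory` returns the bias amplitude, the fitted concentration, and the pseudo-R², which is 0 when the bias term adds nothing over the null model and approaches 1 as it explains much more of the response variability.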

