Forced fusion in multisensory heading estimation.

de Winkel KN, Katliar M, Bülthoff HH - PLoS ONE (2015)

Bottom Line: In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people.


Affiliation: Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany.

ABSTRACT
It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2 s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging between 0° and 90° were introduced for the combined stimuli. In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people.
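The integration the abstract describes corresponds to standard reliability-weighted (inverse-variance) cue fusion. A minimal sketch of that computation, assuming Gaussian cue noise and treating heading as a linear variable (a reasonable approximation only for moderate angles); the function name and the noise values are illustrative, not fitted values from the study:

```python
import math

# Illustrative sketch of reliability-weighted (inverse-variance) cue
# fusion; sigma values are hypothetical, not from the paper.
def fuse_headings(vis_deg, vis_sigma, inert_deg, inert_sigma):
    """Fuse visual and inertial heading estimates; return (mean, sd)."""
    w_vis = 1.0 / vis_sigma ** 2        # reliability of the visual cue
    w_inert = 1.0 / inert_sigma ** 2    # reliability of the inertial cue
    fused = (w_vis * vis_deg + w_inert * inert_deg) / (w_vis + w_inert)
    fused_sigma = math.sqrt(1.0 / (w_vis + w_inert))
    return fused, fused_sigma

# A 90-degree discrepancy: visual cue says 0 deg, inertial says 90 deg.
est, sd = fuse_headings(0.0, 5.0, 90.0, 10.0)
# The fused estimate lies between the cues, pulled toward the more
# reliable visual cue, and is more precise than either cue alone.
```

Under forced fusion this weighted average is applied regardless of the discrepancy, which is what the data of five of the nine participants suggest; a causal-inference model would instead discount integration as the discrepancy grows.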

No MeSH data available.



pone.0127104.g007: Individual data points for the multisensory condition. The panels show the data for each of the participants separately. The abscissa represents the visual heading angle of multisensory stimuli; the ordinate represents the inertial heading angle. The color of each dot corresponds to the response heading angle. Note that the responses lie on the circle, hence dots colored dark green and bright yellow have a minor angular difference.

Mentions: For the five participants for whom the BIC scores favored the FF model, the strength of the evidence for this model over the best-fitting CI model ranged from a ΔBIC of 1.58 to 6.54. For the four participants for whom the BIC scores favored the MA model, the ΔBIC ranged from 0.83 to 5.00. A ΔBIC of 0–2 is considered weak evidence; 2–6, positive; 6–10, strong; and > 10, decisive [50]. The evidence to discern between the FF and MA models was weak for participants two, five, seven, and nine, and positive for participant four. A visual representation of the data from the multisensory conditions is provided in Fig 7.
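The comparison above rests on the standard BIC definition, BIC = k ln(n) − 2 ln(L̂), with lower scores preferred. A small sketch of how a ΔBIC value falls into the evidence bands cited; the log-likelihoods, parameter counts, and trial count are hypothetical, chosen only for illustration and not taken from the paper:

```python
import math

# BIC = k * ln(n) - 2 * ln(L_hat); lower BIC indicates a better model.
# All numbers below are hypothetical, for illustrating the Delta-BIC bands.
def bic(log_likelihood, k_params, n_obs):
    return k_params * math.log(n_obs) - 2.0 * log_likelihood

n = 180                          # hypothetical number of trials
bic_ff = bic(-250.0, 4, n)       # simpler forced-fusion model
bic_ci = bic(-248.5, 6, n)       # causal-inference model, 2 extra params

delta = bic_ci - bic_ff          # positive: evidence favors FF
# Bands used in the text: 0-2 weak, 2-6 positive, 6-10 strong, >10 decisive.
# Here the CI model's slightly better fit does not offset its extra
# parameters, and delta lands in the 'strong' band.
```

The penalty term k ln(n) is what lets a simpler model such as FF win even when a more flexible CI model fits the data marginally better.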


Forced fusion in multisensory heading estimation.

de Winkel KN, Katliar M, Bülthoff HH - PLoS ONE (2015)

Individual data points for the multisensory condition.The panels show the data for each of the participants separately. The abscissa represents the visual heading angle of multisensory stimuli; the ordinate represents the inertial heading angle. The color of each dot corresponds to the response heading angle. Note that the responses lie on the circle, hence dots colored dark green and bright yellow have a minor angular difference.
© Copyright Policy
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC4418840&req=5

pone.0127104.g007: Individual data points for the multisensory condition.The panels show the data for each of the participants separately. The abscissa represents the visual heading angle of multisensory stimuli; the ordinate represents the inertial heading angle. The color of each dot corresponds to the response heading angle. Note that the responses lie on the circle, hence dots colored dark green and bright yellow have a minor angular difference.
Mentions: For the five participants for whom the BIC scores were in favor of the FF model, the strength of the evidence for preference of this model over the best fitting CI model ranged between a △BIC of 1.58 and 6.54. For the four participants for whom the BIC scores favored the MA model, the △BIC ranged between 0.83 and 5.00. A △BIC ranging between 0–2 is considered weak evidence; 2–6 positive; 6–10 strong; and > 10 is considered decisive evidence [50]. The evidence to discern between the FF and MA models was weak for participants two, five, seven and nine, and positive for participant four. A visual representation of the data of the multisensory conditions is provided in Fig 7.

Bottom Line: In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis.For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference.An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people.

View Article: PubMed Central - PubMed

Affiliation: Department of Human Perception, Cognition, and Action, Max Planck Institute for Biological Cybernetics, Spemanstrasse 38, 72076 Tübingen, Germany.

ABSTRACT
It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging between 0-90° were introduced for the combined stimuli. In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people.

No MeSH data available.


Related in: MedlinePlus