Depth discrimination of constant angular size stimuli in action space: role of accommodation and convergence cues.

Naceri A, Moscatelli A, Chellali R - Front Hum Neurosci (2015)

Bottom Line: We replicated the task in virtual and real environments and found that performance differed significantly between the two. In virtual reality (VR), responses were significantly less precise, although still above chance level in 16 of the 20 observers. The values of the Weber fractions estimated in our study were compared to those reported in previous studies in peripersonal and action space.


Affiliation: Department of Cognitive Neuroscience, Cognitive Interaction Technology Center of Excellence (CITEC), Bielefeld University, Bielefeld, Germany.

ABSTRACT
In our daily life experience, the angular size of an object correlates with its distance from the observer, provided that the physical size of the object remains constant. In this work, we investigated depth perception in action space (i.e., beyond arm's reach) while keeping the angular size of the target object constant. This was achieved by increasing the physical size of the target object as its distance from the observer increased. To the best of our knowledge, this is the first time that such a protocol has been tested in action space, for distances to the observer ranging from 1.4 to 2.4 m. We replicated the task in virtual and real environments and found that performance differed significantly between the two environments. In the real environment, all participants perceived the depth of the target object precisely, whereas in virtual reality (VR) the responses were significantly less precise, although still above chance level in 16 of the 20 observers. The difference in the discriminability of the stimuli was likely due to different contributions of the convergence and accommodation cues in the two environments. The values of the Weber fractions estimated in our study were compared to those reported in previous studies in peripersonal and action space.
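
As an aside on the geometry of the manipulation (the relation below is standard visual-angle geometry and is offered only as an illustration; it is not stated explicitly in the abstract): a target of physical size $s$ at distance $d$ subtends a visual angle

$$\theta = 2\arctan\!\left(\frac{s}{2d}\right), \qquad \text{so that} \qquad s(d) = 2d\tan\!\left(\frac{\theta}{2}\right).$$

Keeping $\theta$ constant therefore requires the physical size to grow linearly with distance: over the 1.4–2.4 m range used here, the farthest stimulus would need to be about $2.4/1.4 \approx 1.7$ times larger than the nearest.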



Figure 1: Experimental setups for both the real environment and the virtual reality (VR) settings. (A) VR setup. (B) Real environment setup. The robot arm was covered with black material during the experiment and was not visible to the participant.

Mentions: The left and right images, at a resolution of 1280 × 1024, were generated with the OpenGL® library on an NVIDIA™ Quadro™ FX 3800 graphics card in a Dell® PC (Intel® Quad Core (Q7600), 2.4 GHz, 2 GB RAM) running Linux, and were displayed simultaneously at 50 Hz. The observers, wearing lightweight passive polarized glasses, were seated 2.2 m in front of the projection screen (see Figure 1A), so that visual accommodation occurred at approximately the screen distance. A dark gray background was used to minimize undesirable ghosting of objects in the projected scene. Observers were asked to keep their head fixed on the chair headrest (Figure 1A) and to face the screen, so that both eyes remained aligned in the coronal and axial planes.
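
For readers who want to see the rendering approach concretely, the following is a minimal sketch of generating a left/right stereo pair in OpenGL. The paper only states that the images were produced with OpenGL and projected simultaneously; the windowing toolkit (freeglut), the quad-buffered stereo mode, the eye separation of 6.5 cm, and the placeholder scene below are assumptions made for illustration, not details taken from the study.

// Minimal illustrative sketch of stereo-pair rendering with OpenGL.
// Toolkit (freeglut), quad-buffered stereo, eye separation, and scene
// contents are assumptions; the paper specifies only OpenGL, 1280 x 1024
// images, a dark gray background, and a 2.2 m viewing distance.
#include <GL/glut.h>

static const float kEyeSeparation  = 0.065f;  // assumed interocular distance (m)
static const float kScreenDistance = 2.2f;    // viewing distance reported in the paper (m)

// Placeholder stimulus; the real experiment used a differently shaped target.
static void drawScene() {
    glutSolidSphere(0.05, 32, 32);
}

// Render one eye's view into the requested back buffer with a horizontal eye offset.
static void renderEye(GLenum buffer, float eyeOffset) {
    glDrawBuffer(buffer);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(40.0, 1280.0 / 1024.0, 0.1, 10.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeOffset, 0.0, 0.0,                  // eye position
              eyeOffset, 0.0, -kScreenDistance,     // looking straight ahead
              0.0, 1.0, 0.0);                       // up vector
    glTranslatef(0.0f, 0.0f, -kScreenDistance);     // place the target at screen depth
    drawScene();
}

static void display() {
    renderEye(GL_BACK_LEFT,  -kEyeSeparation / 2.0f);
    renderEye(GL_BACK_RIGHT, +kEyeSeparation / 2.0f);
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    // GLUT_STEREO requests a quad-buffered visual; it requires supporting hardware/drivers.
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);
    glutInitWindowSize(1280, 1024);
    glutCreateWindow("Stereo rendering sketch");
    glEnable(GL_DEPTH_TEST);
    // Dark gray background, as in the experiment, to reduce ghosting with polarized projection.
    glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

The same left/right pair could equally be drawn into two separate viewports or windows feeding two polarized projectors; the quad-buffered route shown here is simply the most compact way to express the two-eye rendering loop.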

