Modelling human visual navigation using multi-view scene reconstruction.

Pickup LC, Fitzgibbon AW, Glennerster A - Biol Cybern (2013)

Bottom Line: Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.

View Article: PubMed Central - PubMed

Affiliation: School of Psychology and Clinical Language Sciences, University of Reading, Reading, RG6 6AL, UK. l.c.pickup@reading.ac.uk

ABSTRACT
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer's prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
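The pipeline the abstract describes — reconstruct landmark locations from noisy views, then turn those estimates into a likelihood map over where the observer might stand — can be illustrated with a toy computation. The sketch below is not the authors' implementation: the noise model (independent von Mises noise on each landmark bearing), the concentration parameter kappa, the landmark layout and the grid resolution are all assumptions chosen for illustration.

```python
import numpy as np

# Toy illustration of the reconstruction-to-likelihood-map idea from the
# abstract. Noise model, kappa, landmark layout and grid are assumptions,
# not the authors' implementation.

def bearing(obs, landmark):
    """World-frame angle (radians) from an observer position to a landmark."""
    d = landmark - obs
    return np.arctan2(d[1], d[0])

def likelihood_map(landmarks, goal, grid, kappa=50.0):
    """Log-likelihood (up to a constant) that each candidate grid position
    reproduces the landmark bearings seen from the goal position."""
    target = np.array([bearing(goal, lm) for lm in landmarks])
    logL = np.empty(len(grid))
    for i, x in enumerate(grid):
        angles = np.array([bearing(x, lm) for lm in landmarks])
        # Sum of von Mises log-densities, one per landmark
        logL[i] = kappa * np.cos(angles - target).sum()
    return logL

# Three poles to the left of a 4 m x 4 m box, loosely mimicking Fig. 12
landmarks = [np.array([-1.0, 1.0]), np.array([-1.0, 2.0]), np.array([-1.0, 3.0])]
goal = np.array([2.0, 2.0])
xs, ys = np.meshgrid(np.linspace(0.0, 4.0, 81), np.linspace(0.0, 4.0, 81))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

logL = likelihood_map(landmarks, goal, grid)
print(grid[np.argmax(logL)])   # the peak of the map lies at the goal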


Fig. 12: Three example conditions from the navigation experiment, illustrated in Fig. 11 using corresponding colours. Each end-point is linked by a black line to its corresponding goal point (black circle); the goals are not exactly coincident because each participant was able to select a goal from anywhere within the viewing box of interval one. The three coloured dots to the left of each box show the locations of the poles, and each box is 4 m × 4 m. (a) The red condition is well explained by a radial distribution and so favours the shape model; (b) the blue condition shows high lateral uncertainty and so favours the basic 3D model, with its crescent-like distributions; (c) participants performed consistently and well in this condition, and end-points lay close to the means of both models without much spread, so both models performed well.

Mentions: The main results of Experiment 1 are shown at the end of the paper in Figs. 11 and 12, where they can be interpreted in relation to the modelling described in subsequent sections. However, Fig. 2 illustrates a portion of the data and shows the main characteristics that need to be modelled. The black dot shows the goal point to which participants had to return in the homing interval, and the crosses show their end-points. It is clear that the spatial distribution of end-points is affected by the layout of the poles. Figure 2c, d shows extreme examples: in (c), the spread of points lies mainly along the line joining the goal point and the central pole, while in (d) the pattern is reversed.
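The comparison this passage describes — checking observed end-point distributions directly against each model's likelihood map — can be scored in the same toy setting. Continuing the sketch above (so `likelihood_map`, `landmarks`, `goal` and `grid` are assumed to be in scope), one simple measure of fit is the mean log-probability that the normalised map assigns to the end-points; the end-point coordinates below are invented purely for illustration.

```python
import numpy as np

# Score observed homing end-points against a gridded likelihood map by the
# mean log-probability of the nearest grid cells. A toy continuation of the
# sketch above; the end-point coordinates are illustrative only.

def score_endpoints(logL, grid, endpoints):
    """Mean log-probability of end-points under a gridded likelihood map."""
    p = np.exp(logL - logL.max())
    p /= p.sum()                                        # density over grid cells
    total = 0.0
    for e in endpoints:
        idx = np.argmin(((grid - e) ** 2).sum(axis=1))  # nearest grid cell
        total += np.log(p[idx] + 1e-12)
    return total / len(endpoints)

logL = likelihood_map(landmarks, goal, grid)
endpoints = [np.array([2.1, 1.8]), np.array([1.9, 2.3]), np.array([2.3, 2.1])]
print(score_endpoints(logL, grid, endpoints))
```

Given two competing maps for a condition (here, the basic 3D model and the shape model), the better-fitting model is the one that assigns the higher mean log-probability to that condition's end-points.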

