Journal of Experimental Psychology: Human Perception and Performance 2003, Vol. 29, No. 2, 363-378

Controlling Steering and Judging Heading: Retinal Flow, Visual Direction, and Extraretinal Information

Richard Wilkie and John Wann
University of Reading

The contribution of retinal flow (RF), extraretinal (ER), and egocentric visual direction (VD) information to locomotor control was explored. First, the recovery of heading from RF was examined when ER information was manipulated; the results confirmed that ER signals affect heading judgments. The task was then translated to steering curved paths, and the availability and veracity of VD were manipulated with either degraded or systematically biased RF. Large steering errors resulted from selective manipulation of RF and VD, providing strong evidence for the combination of RF, ER, and VD. The relative weighting applied to RF and VD was estimated. A point-attractor model is proposed that combines redundant sources of information for robust locomotor control with flexible trajectory planning through active gaze.

We routinely locomote through the world and reach our goals successfully despite course changes and variable eye or body positions. This requires a highly efficient system for processing visual information to pick a path, maintain a course, and arrive at our desired destination. Solutions to these problems have centered on the use of optic flow and the detection of locomotor heading. Gibson (1958) proposed that to aim locomotion at a target object, the observer should “keep the center of flow of the optic array as close as possible to the form which the object projects” (p. 187). This is now recognized as a very limited solution for two reasons: First, Gibson was addressing the case of a linear locomotor path, which produces a pattern of optic flow expanding radially with a clear focus of expansion (FoE; see Figure 1A). In the case of radial flow the FoE is coincident with the locomotor heading and the locomotor path is indicated by vertical vectors in the optic flow field. In the case of a curved path there is no FoE for optic flow, but it may be possible to discern “locomotor flow lines” (Lee & Lishman, 1977; see Figure 1C). A simple generalization of Gibson, therefore, would be to steer by keeping

the target object centered within the locomotor flow lines, irrespective of whether they are radial (linear path) or curved. A critical problem is that both Gibson (1958) and Lee and Lishman (1977) were referring to optic flow, which is the pattern of flow that would be projected on an image plane that is fixed with respect to the locomotor axis (e.g., a car windshield). The pattern of flow on the retinal image is affected by gaze motion (Regan & Beverley, 1982). If observers stabilize gaze with respect to their vehicle, they should indeed be able to perceive locomotor flow lines. However, if they adopt the more natural behavior of fixating features within the environment, such as a target object or road features, then gaze rotation transforms the retinal flow (RF) pattern and displaces the FoE from the direction of heading (Regan & Beverley, 1982; Warren & Hannon, 1990; see also Figure 1D).

A considerable body of research has been directed at how the observer may recover heading from RF and whether extraretinal (ER) information specifying gaze motion is required (Banks, Ehrlich, Backus, & Crowell, 1996; Crowell, Banks, Shenoy, & Andersen, 1998; Lappe, Bremmer, & van den Berg, 1999; Li & Warren, 2000; Royden, Crowell, & Banks, 1994; van den Berg, 1993; Warren & Hannon, 1990). Considerably less research has been directed toward exploring the use of RF and ER information in locomotor steering (Fajen & Warren, 2000; Rushton, Harris, Lloyd, & Wann, 1998; Rushton & Salvucci, 2001; Wann & Land, 2000, 2001; Warren, Kay, Zosh, Duchon, & Sahuc, 2001). In this article we address the use of flow and ER information in both judgments of heading and active steering.

Heading and ER Information: Gaze Sweep vs. Fixation

If an observer is moving on a linear path with gaze stable, then the RF field is radial (Figure 1A). If the observer executes a gaze sweep during forward locomotion, then a rotation is introduced into the RF field. The combination of a radial expansion and a rotation around the observer displaces the flow lines, and the resulting pattern (Figure 1B) may look very similar to what would arise from locomotion on a curved path with gaze fixed (Figure 1C). The two RF patterns have very similar statistical properties, and it is plausible that even nominal knowledge that "my eyes and head are rotating to the right" may assist in dissociating the two fields. There is evidence that recovery of heading during a gaze sweep (Figure 1B) may require ER information (Banks et al., 1996; Crowell et al., 1998), but there is also strong evidence that heading can be recovered without this information in simulations of fixation on a stable object in the environment (Li & Warren, 2000; van den Berg, 1993, 1996). This may in part be due to the reduced ambiguity in the retinal pattern: If the observer fixates a stable feature in the environment, such as a point on the ground, then the rate of gaze rotation varies over time as a function of the distance from that point.

This maintains the feature stable and centered on the retina (a necessary condition for fixation). Hence, there is a singularity in the RF, centered on the fovea, and the global flow across the left and right peripheral fields is in opposite directions (Figure 1D). Although there are some complex curved paths that would produce this retinal pattern over a short time interval, the simplest, most parsimonious interpretation of the retinal pattern is a fixation plus rotation, and we would argue that Figure 1D is not ambiguous.1 Our concern was with the case of ground fixation. In locomoting through the world we often fixate a location that we wish to steer toward or around. Gaze behavior generally alternates between being stabilized to the locomotor axis and being fixated on features in the environment. In contrast, we argue that we seldom sweep gaze across the ground plane, except for a transient (saccadic) motion to achieve fixation. The evidence regarding the use of ER information during ground fixation is mixed, although the results from gaze-sweep and ground-fixation experiments have often been conflated. Two studies set the current agenda. Warren and Hannon (1988) presented their observers with two types of stimuli that simulated ground fixation during locomotion. In the first, real rotation (RR), condition, observers made eye movements to track a ground point as it moved to the side of their path before making a heading judgment. This added rotation to the scene, but it also supplied ER information. In the second, simulated rotation (SR), condition, no eye movements were made, but the rendered scene was rotated as if the eye had moved to focus on the ground point.

1. The pattern shown in Figure 1D could be produced by a downward curving path component, but the path would have to be of changing curvature to maintain this flow pattern. This would create a new singularity to the right of the visual field, so to center the singularity, gaze would also have to be offset from the tangent to be coincident with this (unspecified) location. This highly constrained geometric solution does not render Figure 1D "ambiguous," which would require that the two explanations be reasonably probable. All humans (and animals) continually experience the equivalent of Figure 1D as a result of fixation during locomotion; few ever experience the alternative solution.

Figure 1. Retinal flow patterns arising from different locomotor trajectories and gaze responses. Panels illustrate the flow arising over 0.2 s while travelling at 15 mph (or 0.1 s at 30 mph). A: Radial flow arising from moving along a linear trajectory toward the vertical marker with stable gaze. B: Moving on a linear trajectory while sweeping gaze to the left. The fixation point (circle) stays centered whereas locomotor heading (vertical bar) sweeps to the right. The gaze motion introduces curvature into the flow field. Although some ground points exhibit transient vertical motion, there is no sustained singularity at the point of gaze. C: Travelling on a curved trajectory that would carry the observer through the vertical posts, with gaze locked in a forward position. Heading and gaze eventually sweep around to be coincident with the posts. D: Moving on a linear trajectory while fixating a ground feature (circle) to the left of the trajectory. Heading sweeps to the right and there is a singularity and stationary ground feature at the point of gaze.
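To make the geometry behind Figure 1 concrete, the following minimal sketch (added here for illustration; it is not the authors' code, and the speed, eye height, grid spacing, and fixated point are assumed values) uses the standard instantaneous motion-field equations to compute retinal flow for ground-plane points during forward travel. With no gaze rotation, the slowest flow over the sampled ground points lies near the heading direction (the FoE); adding the rotation required to pursue an eccentric ground point moves the flow singularity to the point of gaze, as in Figure 1D.

import numpy as np

V = 8.0            # forward speed (m/s), illustrative
EYE_HEIGHT = 1.2   # metres above the ground plane (assumed)

def flow(x, y, depth, t, w):
    """Standard motion-field equations for image point (x, y) at depth `depth`,
    with camera translation t = (tx, ty, tz) and rotation w = (wx, wy, wz)."""
    tx, ty, tz = t
    wx, wy, wz = w
    u = (x * tz - tx) / depth + (x * y * wx - (1 + x * x) * wy + y * wz)
    v = (y * tz - ty) / depth + ((1 + y * y) * wx - x * y * wy - x * wz)
    return u, v

# Ground points ahead of the observer in eye-centred coordinates
# (x right, y up, z forward); the ground plane lies at y = -EYE_HEIGHT.
xs, zs = np.arange(-9.0, 10.0, 3.0), np.arange(5.0, 41.0, 5.0)
Xg, Zg = np.meshgrid(xs, zs)
Yg = -EYE_HEIGHT * np.ones_like(Xg)
ix, iy = Xg / Zg, Yg / Zg            # pinhole projection, focal length 1

def slowest_image_point(w):
    """Image location of the sampled ground point with the least retinal motion."""
    u, v = flow(ix, iy, Zg, (0.0, 0.0, V), w)
    k = np.unravel_index(np.argmin(np.hypot(u, v)), u.shape)
    return round(float(ix[k]), 3), round(float(iy[k]), 3)

# Stable gaze: the slowest flow sits near the heading direction (image centre).
print("stable gaze:", slowest_image_point((0.0, 0.0, 0.0)))

# Pursuit of a ground point 15 m ahead and 3 m to the left: solve for the
# yaw/pitch rates that null that point's image motion, then recompute the flow.
xf, yf, zf = -3.0 / 15.0, -EYE_HEIGHT / 15.0, 15.0
A = np.array([[xf * yf, -(1 + xf * xf)],
              [1 + yf * yf, -xf * yf]])
b = -np.array([xf * V / zf, yf * V / zf])
wx, wy = np.linalg.solve(A, b)
print("ground fixation:", slowest_image_point((wx, wy, 0.0)))

The first call reports a point near the image centre (the heading), whereas the second reports the image location of the fixated ground point, which is displaced from heading even though the path is linear.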

The two conditions therefore produced the same retinal information, but crucially, the SR condition lacked any ER eye movement information. Warren and Hannon reported that observers could discriminate headings equally well in both conditions, and they therefore concluded that ER information is not necessary for the judgment of heading. The eye movement velocities used were slow, however, and subsequent studies that used faster rotation rates found substantial heading errors when the rotation was simulated rather than produced by a real eye movement, leading to the conclusion that ER information is required (Royden et al., 1994; Banks et al., 1996). One issue with these displays is that conditions with less than 100% RR have an increasing proportion of the rotation (SR) simulated in the screen display, and the pattern of errors across the conditions could be predicted from a hypothesis of disrupted pursuit. Banks et al. did not record eye movements, so they could not discount this hypothesis. Ehrlich et al. (1998) included the recording of gaze movements for 2 participants to show that accurate pursuit was possible, but the heading errors during these trials were not reported and gaze tracking was not used during the main experiments. A second issue with RR displays is that the fixation target sweeps toward the display edge, which provides a visual reference for the direction of motion, given that heading always lies within the sector that subtends the largest angle.2
Royden et al. (1994) tested the role of this visual direction (VD) cue with 2 participants using a moveable clipping frame and found an advantage of VD for one participant (TRC) but not the other (MSB). The cue was present in Banks et al.'s (1996) Experiment 3 but was removed in Experiment 4. Because the salience of the edge cue is increased when using a textured ground plane, we used a procedure that controlled for this VD cue. A final issue with the experiments of Banks et al. (1996) is that the results arose from testing three non-naïve participants (the experimenters) with a small range of headings (−4°, 0°, +4°), and no statistical appraisal was made of the error pattern. We wished to reconfirm the role of ER signals with a larger group of naïve participants and a wider range of headings prior to translating the paradigm to a steering task.

Steering a Path

Although considerable research has been conducted into the perception of linear heading, its role in locomotor control is debatable. Wann and Land (2000) argued that most steering tasks can be completed by using RF or ER information, but without having to recover heading per se (for follow-up debate, see Fajen & Warren, 2000; Harris, 2001; Rushton & Salvucci, 2001; Wann & Land, 2001; Wann, Swapp, & Rushton, 2000). The need to judge heading while looking at a peripheral object (the Warren–Banks task) can be negated by intermittently switching gaze back to the assumed path. For linear paths, the respective roles of egocentric VD (the angle the target subtends with respect to one's body or vehicle) and RF have been explored by asking participants to walk linear paths while wearing prisms, which displace VD (Rushton et al., 1998; Warren et al., 2001; Wood, Harvey, Young, Beedie, & Wilson, 2000). The current conclusion seems to be that both sources are used depending on their relative strength.

2. In the RR condition, for a fixed speed (V), target distance (Z), and trial duration (T), the heading angle (α) is related to the final angle between the fixation point and screen border (γ) by tan(α) = sin(γ) / (Z/VT − cos(γ)).

In the prism paradigm, the Warren et al. article schematically depicts the FoE as displaced from the target by the prism, but this would not be the case. As soon as participants fixated the target, the FoE would be centered on the target and the flow would be weakly curved, as in Figure 1D. The simplest solution to the task is to notice that the flow is curvilinear and to adjust walking direction to make it radial. The prism paradigm, therefore, addresses the use of RF, but it makes no assessment of whether heading is recovered from flow. For curved paths, heading, defined as the current direction of travel, is the tangent to the path and is constantly changing. If heading is recovered during curved trajectories, there is still the need for a mechanism to ensure that its rate of change is sufficient to meet the steering requirement (e.g., Fajen, 2001). There is some evidence that observers can make heading judgments while on curved paths, but it is not clear how these judgments relate to active steering control (Stone & Perrone, 1997; Warren, Mestre, Blackwell, & Morris, 1991). The alternatives that bypass the perception of heading are simple heuristics that arise from the use of active gaze (looking to where you wish to steer) and the resulting pattern of RF (Kim & Turvey, 1999; Wann & Land, 2000; Wann & Swapp, 2000) or VD information (Land & Lee, 1994; Rushton & Salvucci, 2001; Wann & Land, 2000). Effective steering of a curved path could be achieved by using information from gaze angle and gaze velocity alone, or by using the RF alone, without recovering heading (Kim & Turvey, 1999; Wann & Land, 2000; Wann & Swapp, 2000). Wann, Rushton, and Lee (1995) demonstrated that participants can steer accurately using RF alone if they are allowed to fixate their target. Li and Warren (2002) required participants to look at a point other than their steering target. They found that flow from the textured ground plane was not sufficient and that accurate steering required additional parallax information provided by vertical posts. This unusual gaze requirement negates the simple heuristics outlined in Wann and Land (2000) and forces the participant to recover something equivalent to instantaneous heading, which appears to be problematic using simple ground flow. In this article we address the relative roles of RF, ER information, and VD cues in steering curved paths toward a target, when participants are able to fixate their steering target.

Manipulating the Information Available for Heading and Steering Judgments

For the heading and steering tasks we used a different, but equivalent, method of reproducing the conditions of Banks et al. (1996), using viewport motion (VPM) to manipulate the ER information. First, SR was added to the optic array using headings of up to ±12°. The rendered scene was then presented in a moveable viewport such that the whole display could be laterally translated, to cause gaze pursuit commensurate with the flow rotation present in the SR display or in a manner that conflicted with the display. The displays were presented in a dark environment, with no other spatial landmarks, so the ER component could be manipulated without changing the retinal pattern (assuming accurate pursuit). This method provided a means to reproduce ER signals of 100% and 50%, but also to introduce ER signals opposite to the RF rotation as a stringent test of the subtraction model (Royden et al., 1994). These conditions are shown in Figure 2, where manipulations that result in eye movements are labeled RR, and conditions that do not cause ER signals are labeled SR. The suffix of 1 or .5 indicates the 100% and 50% conditions, respectively. If the condition is created through the use of the viewport, then the label is suffixed with a V. As a consequence, a set of conditions (SR1, RR.5V, RR1V, NR.5V) kept the fixation target centered in the display so that the angle subtended across the ground plane to the left and right of the target remained equal and constant throughout the trial. This allowed us to assess the benefit of a VD cue across displays with equivalent RF patterns. We also used real-time gaze tracking to reduce the possibility of bias arising from gaze errors. In the follow-up experiments these methods were translated to a steering task. In the case of steering, VD information provides a strong target-centering cue. The VPM method allows control over the ER information while excluding the VD cue.

Li and Warren (2000) demonstrated that although heading errors may be high for an SR condition with a dot display, the errors were significantly reduced by additional parallax information. They suggested that the poor results with dot-flow fields may have been due to the absence of sufficient parallax to resolve the rotation components. We used ground textures with differing levels of geometric structure to explore whether the findings of Banks et al. (1996) could be accounted for by the Li and Warren explanation.

Figure 2. Top panels: Conventional real rotation display (RR1). If the observer tracks the target circle, rotational flow is introduced into the retinal image, and extraretinal (ER) gaze information is also available. Lower panels: If equivalent gaze rotation is simulated on the display and the target circle is fixated, the same retinal image results. Moving the complete viewport manipulates gaze, to provide 0%, 100%, or −50% of the ER information that would normally be associated with the retinal flow pattern. SR = simulated rotation.

Experiments and Predictions

All our experiments used realistically textured ground planes. White-on-black dot flow fields provide optimal contrast and under some conditions may even provide phosphor streaks indicative of flow. With simulations of natural ground surfaces there is an issue as to whether, given the spatial and temporal resolution of the display, a specific texture supports the perception of flow.
We established three textures for use in our experiments:3 an unstructured grass texture and a highly structured brick texture (with high-contrast linear elements and consistent parallel lines), both of which appeared to provide strong flow information, and a faded texture that was designed to provide some impression of forward motion but made it difficult to detect flow vectors. The first experiment involved a passive heading judgment task in which the participants travelled on a linear path but fixated an eccentric target. If the Banks et al. (1996) model holds, then heading judgments should be significantly more accurate when there is an ER correlate, even if this is introduced using VPM. We also expected inappropriate ER information (VPM in the wrong direction) to have a particularly detrimental effect. The experiment used the established brick and grass textures and two different locomotor speeds, 2.4 m/s (brisk walking speed) and 8 m/s (cycling or slow driving speed). The second experiment investigated accuracy when actively steering toward a target at 8 m/s. If an ER model holds, then, as proposed for Experiment 1, VPM should help decode the rotated optic flow present in the SR condition to allow greater accuracy when steering. The quality of optic flow was manipulated using the established brick and grass textures. In principle, however, accurate steering could be performed without optic flow, using ER information alone or by using a visual reference for gaze motion. This was explored by using the faded texture. If ER information specifying gaze motion is the primary control variable, then a faded texture should not increase the steering errors.

3. In a heading judgment task with free gaze, the brick texture gave the best performance, with relatively small errors (~3°). Grass, with lower contrast but visible flow, gave marginally worse errors than brick (~4°), but this difference was not significant, t(5) = 1.37, p > .1. The faded grass texture resulted in very poor heading judgments (~13°), indicating that although there was some flow information arising from this texture, it was not sufficient to support accurate perception of heading from a radial flow pattern.

The impact of RF was explored further in Experiment 3. We rotated the ground texture during active locomotion, which introduced a bias into the optic flow/RF without affecting ER signals. This provided a supplementary investigation of whether active steering was influenced by RF, ER gaze information, VD information, or a combination of all three. In Experiment 4 we added a more salient VD cue, similar to the emblem on the hood of a car, and introduced a subtle manipulation of this cue during trials. This tested the relative weighting that participants ascribed to RF or a strong VD cue and extends the debate of Rushton et al. (1998) and Warren et al. (2001) to the task of curvilinear steering.

General Method

All participants were unconnected with the study but had some general experience of viewing motion displays and making heading judgments. All had normal or corrected-to-normal vision. The platform used was a PC with dual Celeron processors (Intel Corporation, Santa Clara, CA) running Windows 2000 and DirectX libraries (Microsoft Corporation, Redmond, WA). Images were presented in a light-excluded viewing booth with a large back-projection screen (200 × 145 cm) providing a potential field of view (FoV) of 90° × 72°. A moveable viewport was created within this screen area. In Experiment 1 the viewport dimensions were 131 × 98 cm (66.5° × 52.2°), allowing 11.75° (23.5° total) of lateral VPM. The total lateral image-VPM across the screen varied from 5.24 cm (0.625°/s) for the smallest angle of heading to 23 cm (2.7°/s) for the largest. The dimensions of the viewport were reduced to 100 × 75 cm (45° × 36.9°) for the steering experiments (Experiments 2, 3, and 4) because the participants controlled the magnitude of lateral VPM and greater latitude may have been needed. Images were rendered at a frame rate of 50 Hz and presented at a resolution of 1248 × 984 using an Electrohome 7500 graphics projector with fast phosphor tubes. Observers used both eyes to view the nonstereo image (bi-ocular viewing) at a distance of 1 m. Care was taken to ensure that invariant frames of reference were absent. The size of the screen placed the screen frame in the periphery (±45°), but the screen edges were also covered in matt black tape, the room interior was matt black, and all incidental light was excluded from the projection booth.
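As a quick check of the display geometry (an illustration added here, not part of the original method), the field of view follows from the screen size and the 1-m viewing distance; the sketch below reproduces the full-screen and Experiment 1 viewport values quoted above.

import math

def field_of_view_deg(width_cm, height_cm, distance_cm=100.0):
    """Horizontal and vertical field of view for a flat screen viewed
    from its centre at the given distance (assumed geometry)."""
    fov = lambda size: 2.0 * math.degrees(math.atan((size / 2.0) / distance_cm))
    return fov(width_cm), fov(height_cm)

print(field_of_view_deg(200, 145))   # full screen        -> approx (90.0, 71.9)
print(field_of_view_deg(131, 98))    # Experiment 1 viewport -> approx (66.4, 52.2)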

Between each trial a bright screen was flashed for 2 s to prevent dark adaptation by the observer. The scenes presented comprised a ground plane tiled with a color image selected from a library of natural photographic textures (Figure 3) that provided an established amount of flow information.4 Placed within the scene was a target (a single post for heading judgments or a pair of posts for steering), which was eccentric to the observer's initial heading and upon which the observer was instructed to fixate. The target moved in one of two ways: either as a result of the observer's translation within the scene, which resulted in changes in both RF and gaze motion information, or through artificial movement of the viewport in which the scene was projected, which had a direct effect only on gaze motion. Head and body position were not stabilized, but an Applied Science Laboratories (ASL; Bedford, MA) 5000 gaze monitoring system was used to check where the observer was directing gaze. The ASL system uses remote optics and pan-tilt tracking to follow the observer's head and eye and overlays the point of gaze on the rendered scene, which could then be recorded on PAL S-VHS tape, enabling the screening of trials for loss of fixation. In addition, we recorded the gaze coordinates for each rendered frame of the scene (50 Hz), which could be replayed at a higher resolution or frame by frame to check appropriate fixation.

Figure 3. Examples of ground textures used in Experiments 1–3.

Experiment 1

The aim of this experiment was to reevaluate the findings of Banks et al. (1996) on the role of ER information in heading judgments. An additional factor was the use of VPM to create a wider range of conditions. In the experiment of Banks et al., the ends of the continuum were 100% RR and 100% SR. We recreated these under the labels of RR1 and SR1, respectively:

RR1: In this condition there was no SR component; hence, observer movement along an eccentric heading caused the target to move off to one side of the screen.
When the observer tracked the target, the gaze of the observer rotated and thereby introduced a rotational component into the RF field. The task presented the observers with a curvilinear RF field, but they also had ER information available and therefore should make few errors in their final heading judgments.

SR1: Here the scene camera was rotated prior to rendering each frame so that the target remained centered. The same flow pattern was presented to the eye as for RR1, but there was no actual gaze rotation and no ER information for the observer to use.

RR1V: The addition of 100% VPM to an SR1 display meant that the whole image was displaced at each frame by an amount equivalent to the rotation that would have occurred in the pure RR1 condition, but without changing the flow components within the display. Hence, if observers tracked the target, their gaze would rotate at the correct rate for that specific heading condition (providing ER information). The difference from the RR1 condition is that the VPM method keeps the fixation target centered with respect to the display edges.

SR1V: The SR condition also had a direct counterpart. We took the RR1 condition, where the target moved across the screen away from the direction of heading, but for each frame we moved the viewport back in the opposite direction by an equivalent amount (RR − 100% VPM). This meant that the target remained stable and in front of the observer's bodyline, but the ground plane was displaced relative to the point of gaze, introducing a rotation into the flow field directly equivalent to SR1 (0% VPM). This also moved the display edges relative to the fixation point and reintroduced a VD cue to the SR condition.

RR.5 and RR.5V: Two additional intermediate conditions were matched. RR.5 recreated a condition of Banks et al. (1996) in which the scene camera was rotated by only half of the fixation angle, so that an executed gaze rotation was required for the remaining 50% of the target motion. RR.5V produced the equivalent gaze requirement by adding 50% VPM to an SR1 display. The RR.5V condition kept the fixation target centered in the display.
NR.5V: We introduced one final condition of SR - 50% VPM, in which the viewport was displaced in the wrong direction, thereby providing ER signals that conflicted with the direction of rotation in the flow field. We expected this to make judgments very difficult.
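A schematic way to see how these conditions relate (our reconstruction for illustration, not the authors' implementation) is to describe each one by two numbers: the fraction of the required gaze rotation that is simulated in the rendered scene, and the fraction supplied by lateral viewport motion. Assuming accurate fixation or pursuit, the entries of Table 1 then follow, as the sketch below shows; the per-frame helper at the end indicates how a display frame might be parameterized.

import math

# (sim, vpm): fraction of the required gaze rotation rendered into the scene
# (simulated rotation, SR) and fraction supplied by lateral viewport motion
# (VPM; negative = opposite direction). Encoding is ours, for illustration.
CONDITIONS = {
    "RR1":   (0.0,  0.0),    # RR + 0% VPM
    "RR1V":  (1.0,  1.0),    # SR + 100% VPM
    "SR1":   (1.0,  0.0),    # SR + 0% VPM
    "SR1V":  (0.0, -1.0),    # RR - 100% VPM
    "RR.5":  (0.5,  0.0),    # RR 50% + 0% VPM
    "RR.5V": (1.0,  0.5),    # SR + 50% VPM
    "NR.5V": (1.0, -0.5),    # SR - 50% VPM
}

def table1_row(sim, vpm):
    """Derive the Table 1 percentages, assuming accurate target fixation/pursuit."""
    rotation_in_display = 100 * sim               # rotation rendered into the scene
    retinal_flow = 100                            # the rotation always reaches the retina
    extraretinal = 100 * ((1 - sim) + vpm)        # pursuit of target drift plus viewport
    border_cue = 100 * (1 - sim)                  # target drift relative to viewport edges
    return tuple(int(round(v)) for v in
                 (rotation_in_display, retinal_flow, extraretinal, border_cue))

for label, (sim, vpm) in CONDITIONS.items():
    print(label, table1_row(sim, vpm))

def frame_parameters(label, required_gaze_deg, view_distance_cm=100.0):
    """Per-frame scene-camera rotation (deg) and lateral viewport offset (cm)
    for the cumulative gaze rotation that full pursuit would require so far."""
    sim, vpm = CONDITIONS[label]
    camera_rotation = sim * required_gaze_deg
    viewport_offset = view_distance_cm * math.tan(math.radians(vpm * required_gaze_deg))
    return camera_rotation, viewport_offset

print(frame_parameters("RR.5V", 4.0))   # 4 deg of rotation so far -> (4.0, ~3.5 cm)

Running the loop reproduces, for example, 100, 100, -50, 0 for NR.5V, matching the corresponding row of Table 1.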

The VPM method provided a test of varying amounts of ER information in the presence (RR1, RR.5, SR1V) or absence (RR1V, RR.5V, NR.5V, SR1) of the VD cue provided by the screen boundary.5 The properties of each condition are summarized in Table 1.

Method

The display simulated observer translation across a textured ground plane, with a fixation point off the path. Two speeds of locomotion were chosen to simulate brisk walking (2.4 m/s, equivalent to Banks et al., 1996) and steady cycling or slow driving (8 m/s, used in the subsequent steering studies). At the beginning of each trial the fixation target was always centered in the display but was placed at either 12 m (for 2.4 m/s) or 24 m (for 8 m/s) to allow a range of rotation speeds and equivalent stopping distances. The eccentricity of locomotor heading was chosen to yield average gaze rotation speeds of 2, 4, 6, or 8°/s over a 2-s duration of travel (Table 2). To avoid participants becoming accustomed to the headings, an additional random jitter of ±0.5° was added to the heading on each trial. The locomotion conditions (Table 2) were reproduced for each display condition (Table 1) and were repeated using both a brick and a grass ground texture, giving a total of 56 (4 × 7 × 2) conditions with 8 trials of each. To maintain consistent performance, the display conditions (Table 1), the speed of the locomotion conditions (2.4 or 8 m/s), and the texture conditions (brick or grass) were varied in independent blocks, but the order of presentation was counterbalanced across participants. The two heading angles for the locomotion conditions were randomly interleaved. Eight participants, naive to the purpose of the experiment, completed the 448 trials.
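The gaze rotation demanded by each locomotion condition follows directly from the trial geometry. The sketch below (an added illustration) treats gaze rotation as the change in the target's bearing from the moving observer; for the slowest condition (2.4 m/s, 12-m target, ±6° heading) it returns values close to the first row of Table 2 (final distance of roughly 7.2 m, mean rotation of roughly 2.0°/s, range of roughly 1.2 to 3.3°/s).

import math

def trial_geometry(speed, duration, target_distance, heading_deg):
    """Final target distance and gaze rotation rates for a linear path at
    heading_deg to a target that starts straight ahead at target_distance."""
    a = math.radians(heading_deg)
    travel = speed * duration
    # Final distance to the target (law of cosines).
    final_distance = math.sqrt(target_distance**2 + travel**2
                               - 2 * target_distance * travel * math.cos(a))
    # Total change in the target's bearing from the observer over the trial.
    sweep = math.atan2(travel * math.sin(a), target_distance - travel * math.cos(a))
    mean_rate = math.degrees(sweep) / duration
    # Instantaneous bearing-change rate = speed * sin(angle to target) / distance.
    initial_rate = math.degrees(speed * math.sin(a) / target_distance)
    final_rate = math.degrees(speed * math.sin(a + sweep) / final_distance)
    return final_distance, mean_rate, initial_rate, final_rate

print(trial_geometry(2.4, 2.0, 12.0, 6.0))
# -> roughly (7.24 m, 2.0 deg/s, 1.2 deg/s, 3.3 deg/s)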

Table 1
Display Conditions for Experiment 1

Type of rotn         Label   VPM condition      % Rotn within display   % Rotn in RF*   % ER rotn*   % Screen border ref (VD)
Real (Banks)         RR1     RR + 0% VPM                  0                  100            100              100
Real (VPM)           RR1V    SR + 100% VPM              100                  100            100                0
Simulated (Banks)    SR1     SR + 0% VPM                100                  100              0                0
Simulated (VPM)      SR1V    RR - 100% VPM                0                  100              0              100
50% Real (Banks)     RR.5    RR 50% + 0% VPM             50                  100             50               50
50% Real (VPM)       RR.5V   SR + 50% VPM               100                  100             50                0
50% Negative         NR.5V   SR - 50% VPM               100                  100            -50                0

* Assuming accurate target fixation and pursuit.
Note. Rotn = rotation; VPM = viewport motion; RF = retinal flow; ER = extraretinal; ref = reference; VD = visual direction. See the text for an explanation of the experimental conditions in the Label column.

Table 2
Locomotion Conditions for Experiment 1

Speed (m/s) : travel time (s)   Initial target distance (m)   Heading (deg)   Final target distance (m)   Mean gaze rotation (deg/s)   Min : max gaze rotation (deg/s)
2.4 : 2                                    12                      ±6                   7.2                        2.0                        1.2 : 3.3
2.4 : 2                                    12                      ±12                  7.3                        3.9                        2.4 : 6.3
8.0 : 2                                    24                      ±6                   8.1                        6.0                        2.0 : 16.9
8.0 : 2                                    24                      ±8                   8.2                        7.9                        2.7 : 21.4

Note. Min = minimum; max = maximum.

Observers were instructed to fixate and track the target for the duration of each trial. At the end of the movement phase, observers indicated where they perceived themselves to be heading using a joystick that controlled the lateral position of a vertical line within the visual scene. Sixteen practice trials of the control condition (RR1) were provided, during which participants received error feedback and became used to the display, controls, and eye tracker. The output of the ASL gaze tracker, overlaid on the rendered scene, was monitored during the experiment; any trials on which gaze did not track the target were repeated, and participants were reminded of the fixation task. To further ensure that the participants were tracking effectively, the gaze data were also recorded in synchrony with the locomotor data. The eye records were low-pass filtered at 6 Hz using Fourier transforms, and the prescribed and observed gaze behaviors were compared to compute a pursuit gain for each trial. For conditions that required target pursuit, trials were excluded from further analysis if the pursuit gain was less than 0.66 or greater than 1.2 or if there was evidence of regressive saccades. The ASL 5000 has an effective accuracy of ±0.5° for head-free tracking. With targets that have a sudden onset, large excursion, and exponential speed, this produces some quantization of the smooth pursuit records. The 0.66 and 1.2 criteria were arrived at after visually inspecting all 3,584 trial records; examples at the fringes of this window are displayed in Figure 4.
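The gaze screening described above can be sketched as follows (an added illustration; the article does not give the exact formulas, so the gain definition, the slope-based estimator, and the handling of the 50-Hz sample rate are assumptions). The recorded gaze trace is low-pass filtered at 6 Hz by zeroing FFT components above that frequency, pursuit gain is taken as the least-squares slope of observed against prescribed gaze velocity, and the 0.66-1.2 acceptance window is then applied.

import numpy as np

SAMPLE_RATE_HZ = 50.0     # gaze recorded once per rendered frame
CUTOFF_HZ = 6.0
GAIN_WINDOW = (0.66, 1.2)

def lowpass_fft(signal, sample_rate=SAMPLE_RATE_HZ, cutoff=CUTOFF_HZ):
    """Zero all Fourier components above the cutoff frequency."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs > cutoff] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def pursuit_gain(observed_deg, prescribed_deg):
    """Slope of observed on prescribed gaze velocity (one plausible definition)."""
    obs = np.diff(lowpass_fft(observed_deg)) * SAMPLE_RATE_HZ
    pre = np.diff(prescribed_deg) * SAMPLE_RATE_HZ
    return float(np.dot(obs, pre) / np.dot(pre, pre))

def keep_trial(observed_deg, prescribed_deg, window=GAIN_WINDOW):
    gain = pursuit_gain(observed_deg, prescribed_deg)
    return window[0] <= gain <= window[1], gain

# Example with synthetic data: noisy pursuit of a 2 deg/s ramp over 2 s.
t = np.arange(0, 2, 1 / SAMPLE_RATE_HZ)
prescribed = 2.0 * t
observed = 0.9 * prescribed + np.random.normal(0, 0.1, t.size)
print(keep_trial(observed, prescribed))   # -> (True, approx 0.9)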

Results and Discussion

Gaze fixation. For the conditions that required static fixation (SR1, SR1V), trials were excluded from further analysis if the standard deviation of gaze position over the 2-s trial duration exceeded 1°.

This resulted in the exclusion of only 1.9% of the trials for SR1 and 2.1% of the trials for SR1V. For the remaining 98% of trials, the mean variation was 0.4° for SR1 and SR1V with the brick texture, and 0.7° for SR1 and 0.6° for SR1V with the grass texture. The pursuit gain criteria of 0.66-1.2 resulted in the following mean rejection rates: RR1, 4%; RR1V, 5.5%; RR.5, 8.6%; RR.5V, 9.4%; and NR.5V, 10.75%. For the remaining trials the mean gain for pursuit conditions was between 0.84 and 0.90, with no significant change across brick or grass textures (Figure 5). There was a significant variation across rotation speeds, however, with lower gains observed for higher rotation speeds: brick, F(3, 21) = 49.2, p < .01; grass, F(3, 21) = 24.4, p < .01. There was also a small but significant difference in the pursuit gains for the VPM conditions (RR1V, RR.5V: 0.89) as compared with their counterparts (RR1, RR.5: 0.85), F(1, 7) = 12.78, p < .01.

Heading errors. The mean difference between the heading indicated by participants and the actual heading was calculated in two ways. Constant errors were computed such that negative errors indicated an underestimation of the eccentricity of heading, irrespective of the direction of travel. Variable error was also computed as the within-subject standard deviation of heading judgments. To address the issue of the ER contribution, the first comparison was across the conditions in which there were differing proportions of gaze motion present but the VD cue from the screen border was absent (RR1V, RR.5V, SR1, NR.5V, giving 100%, 50%, 0%, and −50% ER).
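A compact way to express the two error measures (an added sketch; the function and variable names are ours) is to sign-correct each judgment by the direction of travel before averaging, so that a negative constant error always means the eccentricity of heading was underestimated.

import numpy as np

def heading_errors(judged_deg, actual_deg):
    """Constant error (sign-corrected mean) and variable error in degrees.
    Variable error is the SD of the sign-corrected errors, which equals the SD
    of the judgments within a single heading condition."""
    judged = np.asarray(judged_deg, dtype=float)
    actual = np.asarray(actual_deg, dtype=float)
    # Multiply by the sign of the actual heading so that underestimating the
    # eccentricity of heading is negative regardless of the direction of travel.
    signed = (judged - actual) * np.sign(actual)
    return float(signed.mean()), float(signed.std(ddof=1))

# Example: headings of +/-6 deg, each judged closer to straight ahead.
print(heading_errors([5.0, -5.0, 5.5, -4.5], [6.0, -6.0, 6.0, -6.0]))
# -> constant error -1.0 deg (underestimation), variable error ~0.41 deg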

Figure 4. Example trials showing predicted and recorded gaze angle (in degrees) over time (in seconds); the panels shown include pursuit gains of 0.94 and 0.67.

The upper panel of Figure 6 displays the signed errors across conditions for both the brick and grass textures. It is clear that the reduction in ER information causes a systematic underestimation of the heading eccentricity, F(3, 21) = 20.67, p < .01, with the errors in the condition with gaze motion in the wrong direction (NR.5V) tending toward the wrong side of the fixation point. An interaction between condition and texture confirms that ground plane detail has little effect when the ER signals are strong, but when ER is removed or conflicting, the errors were greater for the sparser grass texture than for brick, F(3, 21) = 6.69, p < .01. There was also a significant effect of gaze rotation rate (Figure 6B), with errors increasing as the speed of pursuit increased, F(3, 21) = 11.61, p < .01. A notable feature of Figure 6B is that the errors that occurred in the 4°/s condition were equivalent to those at 8°/s and considerably greater than those at 6°/s. The 4°/s and 6°/s conditions were created with different locomotor speeds (2.4 m/s and 8.0 m/s), which suggests that it is not the speed of rotation per se that creates a difficulty for the observer; rather, it is the rate of rotation relative to the translation component that delineates heading. The analysis of variable error did not highlight any significant variation across conditions, texture, or rotation speed. A second level of analysis was to compare the conditions in which ER and VD cues were combined or selectively absent. Figure 7 presents the signed heading errors for the following conditions: RR1 (100% ER, 100% VD), RR1V (100% ER, 0% VD), SR1V (0% ER, 100% VD), and SR1 (0% ER, 0% VD). When we coded the presence of ER and VD as factors in the analysis of variance, there was a significant effect of ER, F(1, 7) = 29.59, p