  • Friday, June 13, 2008
    Visuospatial updating after rotations and translations in three-dimensional space

    Visuospatial updating allows us to maintain a stable representation of visual space despite the fact that we are constantly moving. Without such a mechanism for spatial constancy, the world around us would seem to be constantly in motion and we would be incapable of interacting with it. In order for spatial updating to be successful it must combine retinal information about the location of an object with non-retinal information about our own movements. These movements can be both active (e.g., eye movements while watching a tennis match) and passive (e.g., whole-body motion while riding in a train).

    Over the last several years, we have undertaken a number of passive, spatial updating experiments to help us answer the following questions: (1) Are efference copy signals of the outgoing voluntary motor command necessary for spatial updating? (2) Are gravito-inertial signals taken into account during spatial updating? (3) Can the brain take into account the complexities of the non-commutativity of rotations (i.e., a × b ≠ b × a) for spatial updating? (4) Can we update equally well for rotations and translations in three-dimensional space? To answer these questions, we used two motion platforms capable of passively moving human subjects in all three degrees of freedom for rotations (yaw, pitch and roll) and translations (fore-aft, lateral and vertical). In all these studies, subjects were first briefly shown a target at a randomly chosen spatial location and were then passively rotated or translated to a new body orientation (during this intervening motion, the subjects fixated a central target so that the vestibulo-ocular reflex was cancelled). At the end of the induced motion, the subjects waited for the central target to be extinguished, which was their cue to make a saccadic eye movement to the remembered location of the flashed target.
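    The non-commutativity in question (3) can be made concrete with rotation matrices. The sketch below is not part of the original study; it is a minimal numerical illustration using numpy, with axis conventions (z = yaw, x = roll) and a 90° amplitude chosen purely for the example. A remembered target ends up at a different spatial location depending on the order in which the two whole-body rotations are applied, which is why updating must track rotation order.

```python
import numpy as np

def rot_x(deg):
    """Rotation matrix about the x axis (roll, in this example's convention)."""
    t = np.radians(deg)
    return np.array([[1, 0,          0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def rot_z(deg):
    """Rotation matrix about the z axis (yaw, in this example's convention)."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

# Compose two 90-degree rotations in both orders.
# (Matrices apply right-to-left: the rightmost rotation happens first.)
yaw_then_roll = rot_x(90) @ rot_z(90)
roll_then_yaw = rot_z(90) @ rot_x(90)

# The same remembered target location ends up in different places.
target = np.array([1.0, 0.0, 0.0])
print(yaw_then_roll @ target)   # [0, 0, 1]
print(roll_then_yaw @ target)   # [0, 1, 0]
print(np.allclose(yaw_then_roll, roll_then_yaw))   # False
```

    Because the two orders send the same target to different final locations, correct updating requires combining the rotations multiplicatively (as matrices or quaternions) rather than simply summing rotation angles.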

    We found that: (1) Efference copy signals of the self-initiated motor command are not necessary for accurate target localization after intervening whole-body rotations; other signals, such as those from the vestibular system and/or proprioceptive cues, can also be used to maintain spatial constancy. (2) Gravity signals are vital for spatial updating about axes that normally move the body relative to gravity (e.g., roll), but are less important for updating about axes that do not change the gravity vector (e.g., yaw). (3) The brain does take into account the non-commutative properties of rotations, generating different saccade trajectories depending on the order of the whole-body rotations. (4) Spatial updating was accurate for fore, aft, rightward, leftward and upward translations, with poorer performance for downward translations.

    Taken together, these findings show that the brain has developed a robust system to handle the intricacies of visuospatial updating. It can make use of both rotational and translational sensory signals to gauge the amplitude and direction of intervening movements, and it can also use gravitational signals when necessary. It can handle the extra computations associated with updating after rotations about the roll axis and after non-commutative rotations. Future research on this remarkable mechanism will focus on the pathways for vestibular/proprioceptive cues from the brainstem to the cortical regions that exhibit spatial updating at the neuronal level, and on how movement information from various sources is integrated to provide a unified measure of the intervening motion.

    Eliana Klier
    Washington University