  • Thursday, June 19, 2008
    Coding and perceiving the positions of objects

    Perceiving the positions of objects is one of the visual system's primary functions. This is complicated by the fact that the eye is almost constantly moving and that objects in the world frequently move around us. This, along with the sluggish nature of visual processing, should pose difficulties when perceiving and interacting with moving objects and scenes. Our experience contradicts this, suggesting that the visual system has mechanisms that compensate for the inherent delays and sluggish processing of visual information. Recent work in our lab has revealed that the visual system codes the locations of objects depending on the motion that is present in scenes, and that both passive (bottom-up) and attentive (top-down) mechanisms contribute to this compensation process. Additional fMRI and TMS studies in our lab have begun to reveal the neural mechanisms of position coding. fMRI studies have revealed motion-dependent position coding in early visual areas including V1, and a recent TMS study demonstrated that MT+ plays a necessary role as well. The emerging psychophysical and physiological evidence is beginning to clarify our view of how the visual system constructs a representation of object position: one that is apparently seamless and effortless but that nonetheless requires a great deal of computational and neural resources.

    David Whitney
    Center for Mind and Brain, and Department of Psychology, UC Davis