  • Friday, November 13, 2009
    Multisensory self-motion perception in real and virtual environments

    When moving through space, dynamic visual information (i.e., optic flow) and body-based cues (i.e., proprioceptive and vestibular information) jointly specify self-motion. Little is currently known about the relative contributions of each of these cues when several are simultaneously available. In a series of experiments, we investigated participants' abilities to perceive self-motion under a variety of sensory/motor conditions (including purely imagined self-motion). Visual information was combined with body-based cues provided either by walking in a fully-tracked free-walking space, by walking on a large linear treadmill, or by being passively transported in a robotic wheelchair or a 6-degree-of-freedom motion simulator. The importance of body-based cues will be discussed.

    In the last part of my talk, I will introduce a new state-of-the-art research facility that we are developing at the Toronto Rehabilitation Institute, called the Challenging Environments Assessment Laboratory (CEAL). The facility consists of a large (6 m x 6 m), 6-degree-of-freedom motion platform that can be outfitted with interchangeable payloads (portable, self-contained laboratories). Three unique payloads are currently being constructed, including one with a real ice floor that can produce snow and winds of up to 15 km/h, and another consisting of a 180-degree field-of-view curved projection screen that can be combined with a treadmill interface. CEAL will provide unique opportunities for researchers interested in areas ranging from perception-action coupling and multisensory integration (visual, auditory, proprioceptive, and vestibular) to locomotion and visual and auditory perception, and collaborations are highly encouraged.

    Jenny Campos
    University of Toronto