Overview

Visual Orientation and Spatial Memory: Mechanisms and Countermeasures

Principal Investigator:
Charles M. Oman, Ph.D.

Organization:
Massachusetts Institute of Technology
Harvard-MIT Division of Health Sciences and Technology

Orientation, navigation and spatial memory problems can affect an astronaut’s performance during missions. Dr. Charles M. Oman and colleagues are studying three-dimensional spatial memory and navigation with the aim of developing pre-flight and in-flight visual orientation training countermeasures to help astronauts quickly learn the three-dimensional layout of the International Space Station. Their research includes studies of humans and animals and employs virtual reality techniques, tumbling rooms and parabolic flight experiments.

NASA Taskbook Entry


Technical Summary

How should spacecraft designers configure interior architectural features, work areas and the relative orientations of adjacent or docked spacecraft modules to minimize spatial disorientation problems in zero gravity? Current NASA standards offer little guidance. Can virtual reality (VR) training techniques, which astronauts currently use to plan their spacewalks, also be used to reduce the incidence of visual reorientation and inversion illusions while working inside the spacecraft? Is a fully immersive VR system needed for such training, or could simpler, portable training systems be used? Can individual performance on operationally relevant 3D orientation, navigation and teleoperation tasks be predicted from simple tests of mental rotation and perspective taking skills? What is the best way to assess and control the direction of the perceptual vertical in an environment where there is no gravitational down? Can head-movement-contingent instability of the perceived visual world (oscillopsia), experienced by most returning astronauts, be quantified using visual feedback techniques?

Answers to these questions can be directly applied to the design of the NASA Crew Exploration Vehicle (CEV), Lunar Surface Access Module and eventually the Mars Transit Habitat interiors, to the physical arrangement of ground simulators and to the development of VR-based techniques for pre-flight orientation and navigation training for astronauts.

Specific Aims

  1. To quantify how environmental geometric frame and object polarity cues determine human visual orientation in order to support engineering and design of spacecraft work areas.
  2. To develop reliable means for quantifying head-movement-contingent oscillopsia.
  3. To determine whether pre-flight VR techniques can improve astronaut 3D spatial memory and navigation abilities by reducing direction vertigo and teaching International Space Station (ISS) configuration and emergency egress routes.
  4. To improve astronaut teleoperation performance by taking into account the mental object rotation and perspective taking abilities of individuals while training and during operations.

By June 2007, we had completed all four specific aims and all six experimental series originally proposed. York University studied:
  1. Visual frame and polarity effects in tilted rooms and in an immersive visual virtual environment (IVY), examining the effects of room aspect ratio and observer field of view;
  2. The perceptual upright, measured using a new OCHART method ("p" vs. "d" letter recognition) and analyzed with a linear vector summation model; and
  3. Oscillopsia during Coriolis stimulation, quantified using a new visual feedback technique.
The p/d method provides a way of assessing the perceptual vertical without requiring the subject to make a judgment of tilt with respect to the gravitational vertical, a constraint that has confounded many previous investigations of perceived vertical in ground and zero gravity experiments (e.g., Witkin, Mittelstaedt, Howard, Oman). Experiments in IVY, manipulating the floor/ceiling aspect ratio of simple frame interiors, demonstrated that the surface perceived to be "the floor" depends on the aspect ratio in a predictable way that could be mathematically modeled. Several additional experiments were also performed. One showed that the strength of the levitation visual reorientation illusion depends on scene content (what is viewed) rather than on geometric field of view (how much of the scene is visible). Another showed that the weighting of visual and non-visual cues for orientation was affected by Parkinsonism.
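
The linear vector summation analysis referred to above can be written compactly. The following is only a schematic form: the symbols and the weight normalization are our illustrative notation, not necessarily the exact parameterization used in the York analysis.

```latex
% Perceptual upright (PU) modeled as a weighted vector sum of orientation cues
% (schematic; the weights w_v, w_g, w_b are illustrative and would be fit per subject)
\[
  \hat{u}_{\mathrm{PU}}
    = \frac{w_v \hat{v} + w_g \hat{g} + w_b \hat{b}}
           {\left\| w_v \hat{v} + w_g \hat{g} + w_b \hat{b} \right\|}
\]
% where \hat{v}, \hat{g}, \hat{b} are unit vectors along the visual, gravitational,
% and body-axis "up" directions, and the weights sum to one.
```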

Massachusetts Institute of Technology (MIT) completed a series of four "relearning, reoriented spacecraft modules" experiments, designed to simulate the training experience of astronauts who learn the interiors of individual spacecraft modules in a locally upright configuration in ground simulators, but who must make spatial judgments when the modules are assembled in a different flight configuration. We showed that subjects remembered each module in a visually upright, canonical orientation, and therefore had to make mental rotations in order to interrelate the two modules. This year MIT tested different flight configurations and found that performance was best when the modules' visual verticals were co-aligned, intermediate when modules were rotated through 180 degrees, and worst when they were rotated through 90 degrees. Our results account for the visualization difficulties and disorientation previously reported by Apollo, Mir and ISS astronauts when transiting certain areas of their spacecraft. The result could readily be translated into a design standard for space stations and docked vehicle operations.
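
To illustrate how this configuration result might feed into a simple design check (a hypothetical sketch, not an existing NASA standard; the angle tolerance is an assumed value), one could compute the angle between the interior visual "up" directions of adjacent or docked modules and flag near-90-degree relative orientations, which our subjects found hardest:

```python
import numpy as np

def vertical_misalignment_deg(up_a, up_b):
    """Angle (degrees) between the interior visual 'up' vectors of two modules."""
    a = np.asarray(up_a, dtype=float)
    b = np.asarray(up_b, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def rate_configuration(angle_deg, tol=15.0):
    """Qualitative rating mirroring the experimental ordering:
    co-aligned (best) > 180-degree inversion (intermediate) > ~90-degree (worst)."""
    if angle_deg <= tol:
        return "co-aligned verticals: best for spatial judgments"
    if angle_deg >= 180.0 - tol:
        return "inverted (180 deg): intermediate"
    return "rotated (~90 deg): worst, expect visualization difficulty"

# Example: a node whose interior vertical is +z docked to a lab whose vertical is +x
angle = vertical_misalignment_deg([0, 0, 1], [1, 0, 0])
print(angle, rate_configuration(angle))  # 90.0 -> worst case
```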

MIT also completed two ISS emergency egress training studies of 3D, six-degree-of-freedom navigation performance, quantifying the effect of training in a locally versus globally upright configuration, with and without smoke obscuration. Most subjects learned quickly, but performance correlated with individual 3D mental rotation and perspective taking skills. This study, led by Dr. Aoki, won the 2007 Young Investigator Award from the Aerospace Medical Association's Space Medicine Branch. This year we also compared the performance of subjects trained with a non-immersive laptop display to that of a similarly sized group tested last year with an immersive display. Although immersive displays better simulate the vestibular and haptic cues required to orient spatially, our subjects performed almost as well using the laptop. Finally, as planned, MIT completed development of a space telerobotic training simulator and showed that individual mental rotation and perspective taking abilities influence performance during training.

Results of the York University and MIT studies have been presented at several international meetings, and full manuscripts have been published or are in submission. Dr. Oman also published a review article on visual orientation in microgravity that summarizes our research in a broader context.


Earth Applications

Results support the development of sensorimotor countermeasures for spatial orientation, navigation and spatial memory difficulties among astronauts, and inform the design of future vehicles, including the CEV, the Lunar Surface Access Module and later the Mars Transit Habitat. Results have been published in scientific journals (e.g., ASEM, J. Vestib. Res., Habitation) and will be incorporated into the NASA Human Integration Design Handbook. Our results also pertain to human health on Earth, for example the origins and assessment of oscillopsia, and the disorientation, spatial memory and navigation problems experienced by vestibular, Alzheimer's and multiple sclerosis patients. They also bear on the interior design of buildings: strong visual cues for orientation in the vertical plane can reduce falls on stairs among the elderly, while cues in the gravitational horizontal plane relate to the "wrong door" phenomenon in buildings, geographic disorientation in cities and among sport orienteers, and the design of visual cueing systems for civil and military flight simulators. To the extent that disorientation is reduced, motion sickness will also be alleviated.

This project's funding ended in 2004.