Virtual Environments have, to some degree, offered a solution to these demands. It has recently become possible to conduct psychophysical experiments that stimulate more than one sensory modality at a time. In this thesis, Virtual Reality (VR) technology was used to design multi-sensory experiments that investigate aspects of the complex multi-modal interactions underlying human behavior.
Contents: The first part of this PhD thesis describes a Virtual Reality laboratory built to allow the experimenter to stimulate four senses simultaneously: vision, acoustics, touch, and the vestibular sense of the inner ear. Special-purpose equipment is controlled by individual computers to guarantee optimal performance of the modality-specific simulations. These computers are connected in a network and operate as a distributed system using asynchronous data communication. The second part of the thesis presents two experiments that investigate the ability of humans to perform spatial updating. These experiments contribute new scientific results to the field and, in addition, serve as a proof of concept for the VR lab. More specifically, the experiments address two main questions: A) Which information do humans use to orient themselves in the environment and to maintain an internal representation of their current location in space? B) Do the different senses encode their percepts in a single spatial representation shared across modalities, or are the representations modality-specific?
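The asynchronous, distributed design described above can be illustrated with a minimal sketch. The idea is that each modality computer publishes its state as fire-and-forget datagrams while the others poll without blocking, so a slow simulation never stalls the rest. All names, ports, and the message format below are illustrative assumptions, not the lab's actual protocol.

```python
import json
import socket
import time

# Hypothetical sketch: one modality computer (e.g. the visual renderer)
# broadcasts its current state over UDP; a receiver polls in non-blocking
# mode, so it never waits on the network.

def make_receiver(port):
    """Open a non-blocking UDP socket bound to localhost for polling."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.setblocking(False)  # poll, never wait
    return sock

def send_state(port, state):
    """Fire-and-forget: send a JSON-encoded state datagram and return."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps(state).encode(), ("127.0.0.1", port))
    sock.close()

def poll_state(sock):
    """Return the latest state if one has arrived, else None."""
    try:
        data, _ = sock.recvfrom(4096)
        return json.loads(data)
    except BlockingIOError:
        return None  # nothing new yet; keep using the old state

if __name__ == "__main__":
    rx = make_receiver(50007)
    send_state(50007, {"heading_deg": 42.0, "velocity": 1.5})
    time.sleep(0.05)  # allow the local datagram to arrive
    print(poll_state(rx))
```

The non-blocking receive is the key point: if the motion platform's update is late, the visual renderer simply reuses its previous state instead of dropping frames while it waits.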
Results and Conclusions: The experimental results allow the following conclusions: A) Even without vision or acoustics, humans can verbally judge the distance traveled, the peak velocity, and to some degree even the maximum acceleration using relative scales. They can therefore maintain good spatial orientation based on proprioceptive and vestibular signals alone; B) Learning a sequence of orientation changes through multiple modalities (vision, proprioception, and vestibular input) enables humans to reconstruct their heading changes from memory. In situations with conflicting cues, the maximum percept from either modality had a major influence on the reconstruction, and most of the naive subjects did not notice any conflict between the modalities. Taken together, this suggests that a single spatial reference frame is used for spatial memory. One possible model for cue integration is a dynamically weighted sum over all modalities, yielding a coherent percept and memory of spatial location and orientation.
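The weighted-sum model mentioned above can be sketched in a few lines: each modality contributes an estimate (here, of a heading change), and the weights, which could vary dynamically with cue reliability, are normalized to sum to one. The numbers and the weighting scheme below are illustrative assumptions, not fitted data from the experiments.

```python
# Hypothetical sketch of a dynamically weighted cue-combination model:
# the integrated percept is a weighted sum of per-modality estimates,
# with weights proportional to each cue's (assumed) reliability.

def integrate_heading(estimates, reliabilities):
    """Combine per-modality heading estimates (degrees) by a weighted sum.

    estimates:     dict modality -> perceived heading change
    reliabilities: dict modality -> non-negative reliability; the weights
                   are the reliabilities normalized to sum to 1.
    """
    total = sum(reliabilities.values())
    return sum(estimates[m] * reliabilities[m] / total for m in estimates)

if __name__ == "__main__":
    # A cue-conflict trial: vision reports a larger turn than body senses.
    estimates = {"vision": 90.0, "vestibular": 70.0, "proprioception": 80.0}
    reliabilities = {"vision": 4.0, "vestibular": 1.0, "proprioception": 1.0}
    # The integrated percept is pulled toward the more reliable cue.
    print(integrate_heading(estimates, reliabilities))
```

Under such a model, a conflict can go unnoticed because no single modality's estimate survives separately; only the combined value enters spatial memory, consistent with the observation that most naive subjects did not detect the mismatch.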