Retinotopic visual cortex mapping using a visual-to-auditory sensory-substitution device

Results based on use of The vOICe were presented in an oral presentation at ICON 2008, the 10th International Conference on Cognitive Neuroscience, September 1-5, 2008, Bodrum, Turkey.

Authors

Lotfi Merabet 1,2, Dorothe Poggel 2, William Stern 1, Ela Bhatt 1, Christopher Hemond 1, Sara Maguire 1, Peter Meijer 3, and Alvaro Pascual-Leone 1.

1 Harvard Medical School, Boston, USA.
2 Boston University, Boston, USA.
3 High Tech Campus, Eindhoven, The Netherlands.

Retinotopic visual cortex mapping using a visual-to-auditory sensory-substitution device

Abstract (doi: 10.3389/conf.neuro.09.2009.01.273; published in Frontiers in Human Neuroscience)

The occipital (visual) cortex is activated when subjects, both blind and sighted, use a visual-to-auditory sensory-substitution device (SSD) called "The vOICe". In this study, we investigated whether this crossmodal recruitment of visual cortex respects its known retinotopic organization. Using fMRI, we compared responses to spatially specific stimuli encoded as sound with responses to the same stimuli presented visually. The vOICe (www.seeingwithsound.com) transforms and encodes different aspects of a visual scene into auditory representations ("soundscapes") such that vertical position is represented by frequency, horizontal position by stereo panning, and brightness by loudness. We report here behavioral and fMRI results in 4 sighted subjects trained to use this SSD and in one late-blind expert user. Rotating wedges (clockwise and counterclockwise) and rings (expanding and contracting) were transformed into auditory soundscapes using the vOICe algorithm. During a baseline scan, sighted subjects were blindfolded and listened to the retinotopic auditory stimuli. Subjects then underwent intensive daily training with the vOICe SSD. After one week, subjects were scanned again while listening to the identical auditory stimuli. The blindfold was then removed and subjects were scanned once more with the corresponding visually presented retinotopic stimuli. Note that at no time during training did the subjects actually see the visual stimuli used to generate the soundscapes, and that the pattern of brain activation to the visual stimuli was obtained only after the soundscape study was completed. During the scans, subjects were instructed to imagine focusing on the center of their visual space and to report (by key press) the presence of a "fixation" square appearing at random. At the end of each run, subjects were asked to identify the direction of motion of the auditory-encoded stimuli. In all subjects, baseline scans (prior to training) showed robust activation of auditory cortex but no activation of visual areas in response to the auditory-encoded stimuli. Following one week of training, activation was also evident in visual cortical areas in 3 of the 4 sighted subjects. Robust activation in response to the visual retinotopic stimuli was present in all 4 subjects. In the blind user, robust activation was also evident in occipital cortex, with a pattern consistent with retinotopic organization. Further analysis is being carried out to quantify the correspondence between the auditory- and visually evoked activation patterns in response to retinotopic stimuli. Our results show that intensive training with an SSD can lead to activation of visual areas in response to auditory stimuli that encode spatial information normally used to map retinotopic visual areas. Furthermore, these patterns of activation may correspond to patterns obtained with classic visual retinotopic stimuli, suggesting that crossmodal activation of visual cortex may also reveal a retinotopic organization.
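
As an illustration of the image-to-soundscape mapping described in the abstract (a column-by-column left-to-right scan, with vertical position mapped to frequency, brightness to loudness, and horizontal position to stereo panning), below is a minimal Python sketch. It is not the actual vOICe implementation: the frequency range, scan duration, panning law, and function names are assumptions chosen only for illustration.

    import numpy as np

    def image_to_soundscape(image, duration=1.0, sample_rate=44100,
                            f_min=500.0, f_max=5000.0):
        """Sketch of a vOICe-style visual-to-auditory encoding.

        image: 2D array (rows x cols) of brightness values in [0, 1],
               with row 0 at the top of the image.
        Returns a stereo signal (n_samples x 2) that scans the image
        left to right over `duration` seconds.
        """
        n_rows, n_cols = image.shape
        n_samples = int(duration * sample_rate)
        t = np.arange(n_samples) / sample_rate

        # Vertical position -> frequency (top rows get the highest pitch).
        freqs = np.logspace(np.log10(f_max), np.log10(f_min), n_rows)

        # Each image column occupies an equal slice of the scan time.
        col_index = np.minimum((t / duration * n_cols).astype(int), n_cols - 1)

        # Sum one sinusoid per row, weighted by the brightness of the pixel
        # in the currently scanned column (brightness -> loudness).
        mono = np.zeros(n_samples)
        for r in range(n_rows):
            amplitude = image[r, col_index]      # varies as the scan advances
            mono += amplitude * np.sin(2 * np.pi * freqs[r] * t)
        mono /= max(np.max(np.abs(mono)), 1e-9)  # normalise to [-1, 1]

        # Horizontal position -> stereo panning (left column -> left ear),
        # using a simple equal-power panning law.
        pan = col_index / max(n_cols - 1, 1)     # 0 = far left, 1 = far right
        left = mono * np.cos(pan * np.pi / 2)
        right = mono * np.sin(pan * np.pi / 2)
        return np.stack([left, right], axis=1)

    # Example: a bright diagonal line from bottom-left to top-right, which
    # should be heard as a tone rising in pitch as the scan moves rightward.
    img = np.eye(64)[::-1]
    stereo = image_to_soundscape(img)

The resulting stereo array can be written to a WAV file (for example with scipy.io.wavfile.write at the chosen sample rate) to listen to how arbitrary images, including wedge- or ring-shaped retinotopic stimuli of the kind used in the study, would sound under such a mapping.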


See also the ICON X presentation abstract of an oral presentation by Amir Amedi et al. titled "Audio-visual integration for objects, location and low-level dynamic stimuli: novel insights from studying sensory substitution and topographical mapping".

Note: The vOICe technology is being explored and developed under the Open Innovation paradigm together with R&D partners around the world.

Copyright © 1996 - 2025 Peter B.L. Meijer