A presentation based on use of The vOICe was given at a conference held July 7-11, 2008, in Montreal, Canada. Oral presentation: July 10, 2008, 15:30-15:50.
Robert J. Zatorre (speaker) and Jung-Kyong Kim.
Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Canada.
In two behavioral and one neuroimaging experiment, we investigated visual-to-auditory substitution learning in sighted subjects using an image-to-sound conversion algorithm developed for the blind by Meijer (1992), in which the visual vertical and horizontal axes are converted into frequency and time representations, respectively. The first study sought to clarify the degree of generalization in learning of vision-to-sound conversion rules. We found that, even before any training, sighted people were able to use explicit knowledge of the conversion rules to some degree in identifying visual shapes based on their sound transformations. The ability to recognize complex visual stimuli improved with training over a three-week period, and generalized to novel stimuli not previously experienced, indicating that the conversion rules could be learned at an abstract level. A second behavioral study tested blindfolded sighted individuals using a similar learning paradigm, but with embossed tactile shapes rather than visual input. Subjects were trained to recognize the tactile information in relation to the corresponding auditory transformation, and showed significant learning as well as generalization to new tactile stimuli, as with vision. At the end of the three-week training period, subjects were given a visual task using new items. High accuracy on this visual task was observed even though the feedback subjects received throughout training was solely tactile, suggesting crossmodal transferability of the substitution learning from the tactile modality to vision. To test the neural underpinnings of crossmodal learning, we used functional MRI to scan subjects before and after learning using visual stimuli. Results showed recruitment of the intraparietal sulcus region both before and after training, whereas this was not true for a control condition in which scrambled visual images and scrambled sounds were presented. We suggest that parietal regions are engaged in spatial processing of visual images in cross-modal conversion of sounds to the corresponding visual images, but that this mechanism is unaffected by learning. We also observed that after training the auditory cortex responded differentially to the sounds of abstract figures versus those of real-life objects, but equally to the two types of sounds before training. These findings imply the existence of supramodal representations that can be tapped into by the vision-to-sound conversion system.
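For readers unfamiliar with the conversion rules mentioned in the abstract, the Python sketch below illustrates the core mapping: an image is scanned column by column from left to right (horizontal axis becomes time), each pixel's row sets the frequency of a sinusoid (higher positions give higher pitch), and its brightness sets that sinusoid's amplitude. This is only a minimal illustration; the frequency range, scan duration, and exponential frequency spacing are assumptions chosen for the example, not the exact parameters of The vOICe.

```python
import numpy as np

def image_to_sound(image, duration=1.0, f_lo=500.0, f_hi=5000.0, sr=44100):
    """Minimal sketch of a vOICe-style image-to-sound mapping.

    image: 2D array of shape (rows, cols), brightness values in [0, 1],
    with row 0 at the top. Columns are scanned left to right over
    `duration` seconds; each row maps to one sinusoid, higher rows to
    higher frequencies, brightness to amplitude.
    """
    rows, cols = image.shape
    # Exponentially spaced frequencies; top row (index 0) gets f_hi.
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(rows)[::-1] / (rows - 1))
    samples_per_col = int(sr * duration / cols)
    t = np.arange(samples_per_col) / sr
    phase = np.zeros(rows)  # keep phase continuous across columns
    chunks = []
    for c in range(cols):
        # Sum one sinusoid per row, weighted by that pixel's brightness.
        chunk = (image[:, c, None] *
                 np.sin(2 * np.pi * freqs[:, None] * t + phase[:, None])
                 ).sum(axis=0)
        phase = (phase + 2 * np.pi * freqs * samples_per_col / sr) % (2 * np.pi)
        chunks.append(chunk)
    out = np.concatenate(chunks)
    return out / max(np.abs(out).max(), 1e-9)  # normalize to [-1, 1]
```

Under this mapping, for example, a bright diagonal line running from bottom-left to top-right produces a rising frequency sweep, which is the kind of shape-to-sound regularity the trained subjects learned to exploit.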
Related reading: J.-K. Kim and R. J. Zatorre, "Generalized learning of visual-to-auditory substitution in sighted individuals," Brain Research, Vol. 1242, pp. 263-275, 2008 (DOI). Available online (PDF file).
Note: The vOICe technology is being explored and developed under the Open Innovation paradigm together with