Petra Vetter
Assistant Professor
petra.vetter@unifr.ch
+41 26 300 7646
https://orcid.org/0000-0001-6516-4637
Assistant Professor,
Department of Psychology
RM 02 bu. S-1.141
Rue P.A. de Faucigny 2
1700 Fribourg
Research and Publications
Publications
19 publications
Auditory guidance of eye movements toward threat-related images in the absence of visual awareness
Frontiers in Human Neuroscience (2024) | Article
Research Projects
Mapping Space in the Blind Brain
Status: Ongoing | Start: 01.02.2024 | End: 31.01.2029 | Funding: SNF

How does the brain create a representation of the space around us? In the sighted, adjacent spatial locations of the outer world are mapped onto adjacent regions in the brain. However, how are different spatial locations represented when visual input has been absent since birth? This project addresses the key conceptual challenge of whether topographic spatial location maps are a universal organisation principle in the human brain, or whether they depend on vision. Recent evidence, including from my own lab, suggests that the “visual” cortex in people blind from birth is organised similarly to that of the sighted and is used for sound representation (Vetter et al., 2020, Current Biology). The specific challenge is therefore whether “visual” cortices are actually used for external space representation in the blind. The first goal of this project is to elicit representations of different spatial locations via audition and touch and to identify how and where in the brain spatial locations are mapped in the absence of vision. The second goal is to investigate which brain regions are causally involved in the spatial coding of auditory and tactile information. We will use beyond-state-of-the-art ultra-high-field functional MRI at 7 Tesla and advanced analysis methods to identify spatial location maps probed with novel auditory and tactile spatial localisation tasks in both congenitally blind and blindfolded sighted individuals. We will also use transcranial magnetic stimulation, alone and in combination with fMRI, to identify the causal role of brain areas spatially coding auditory and tactile information. This project is ground-breaking as it will establish whether topographic spatial location mapping is a vision-independent organisation principle of the human brain. Understanding how space representation works in blindness can significantly advance the development of visual prostheses and aids for blind and visually impaired individuals.
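As a purely illustrative aside on the kind of analysis such a mapping question invites, the following minimal sketch shows how decoding auditory spatial location from fMRI voxel patterns could look in Python with scikit-learn. The data are simulated, and the voxel count, trial numbers, and signal strength are assumptions for demonstration only; this is not the project's actual pipeline.

```python
# Minimal, hypothetical sketch: cross-validated decoding of auditory spatial
# location (left / centre / right) from simulated fMRI voxel patterns.
# All numbers below (voxels, trials, signal strength) are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

locations = ["left", "centre", "right"]
n_trials_per_loc = 40   # assumed number of trials per location
n_voxels = 200          # assumed number of voxels in a region of interest

# Simulate voxel patterns: each location gets a weak location-specific mean pattern plus noise.
X, y = [], []
for loc in locations:
    mean_pattern = rng.normal(0, 0.3, n_voxels)
    X.append(mean_pattern + rng.normal(0, 1.0, (n_trials_per_loc, n_voxels)))
    y.extend([loc] * n_trials_per_loc)
X = np.vstack(X)
y = np.array(y)

# Standard MVPA-style decoder: z-score each voxel, fit a linear SVM, cross-validate.
decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(decoder, X, y, cv=cv)

print(f"mean decoding accuracy: {scores.mean():.2f} (chance = {1 / len(locations):.2f})")
```

Above-chance accuracy in such a decoder would indicate that the voxel patterns carry information about spatial location; where in the brain this succeeds is what distinguishes a vision-dependent from a vision-independent map.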
How audition enhances visual perception and guides eye-movements
Status: Ongoing | Start: 01.09.2020 | End: 31.08.2025 | Funding: SNF

Background and rationale: To create the visual world that we perceive, the brain uses information not only from the eyes but also from other senses. While there is evidence for top-down information from non-visual brain areas influencing visual processing, it is still unclear what kind of information is communicated top-down, and what function this influence has for actual visual perception and action. In the case of auditory influences on vision, previous multisensory research focussed on the integration of simple auditory and visual signals (beeps and flashes) across time and space. However, beeps and flashes do not carry any ecologically valid information content, and thus the influence of the actual semantic content of sounds on visual perception and action remains unexplored. In everyday life, it is crucial that the content of sound information, e.g. the sound of an approaching car behind us, is matched correctly to the visual environment such that we can see the approaching car quickly and react accordingly.

Overall objectives and specific aims: The overall objective of this project is to investigate how the information content of natural sounds influences and enhances visual perception and guides actions. Two specific Research Aims will be addressed. Aim 1: Can content-specific sound information guide eye movements, and thus actions and perception? Aim 2: Can content-specific sound information resolve ambiguities in vision and thus enhance visual perception?

Methods: Aim 1: We will determine whether eye movements are guided to visual natural scenes that are suppressed from conscious awareness when participants hear a semantically matched natural sound. Visual and auditory stimuli will be matched such that they are semantically more closely or more distantly related. Aim 2: We will render visual stimuli ambiguous by displaying different stimuli to each eye and determine whether semantically matching sounds can disambiguate the visual stimuli and make the matching image visible. In addition, we will use functional MRI and brain decoding techniques to determine whether sound content is represented in early visual cortex at the time the visual stimulus is disambiguated.

Expected results and their impact for the field: We expect that sound content guides eye movements to semantically matching visual scenes even in the absence of awareness of these scenes. We expect the guidance of eye movements to be modulated by the semantic relatedness between sound and image. Furthermore, we expect that sound content resolves ambiguities in visually ambiguous situations, and that we will find evidence that the brain uses this content-specific sound information to disambiguate visual perception. Ultimately, once we have demonstrated how audition can enhance visual perception and guide actions, these insights can be used to develop rehabilitation devices for visually impaired people, to improve sensory substitution devices for the blind, and to enhance multimedia environments in which audition and vision have to be combined.
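To make the eye-movement measure in Aim 1 concrete, here is a small, hypothetical sketch of how gaze data might be summarised: the proportion of gaze samples falling on the location of the suppressed image is compared between semantically matching and mismatching sound conditions. The sample size and dwell values are simulated assumptions, not data from the study.

```python
# Hypothetical sketch: compare gaze dwell proportion on a suppressed image between
# semantically matching and mismatching sound conditions (simulated data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_participants = 24  # assumed sample size

# Simulated per-participant dwell proportions on the suppressed image's location.
dwell_match = np.clip(rng.normal(0.58, 0.08, n_participants), 0, 1)     # matching sound
dwell_mismatch = np.clip(rng.normal(0.50, 0.08, n_participants), 0, 1)  # mismatching sound

# Paired t-test: does a semantically matching sound pull gaze toward the unseen image?
t_stat, p_val = stats.ttest_rel(dwell_match, dwell_mismatch)
print(f"mean dwell (match) = {dwell_match.mean():.2f}, (mismatch) = {dwell_mismatch.mean():.2f}")
print(f"paired t({n_participants - 1}) = {t_stat:.2f}, p = {p_val:.3f}")
```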