Posters

21) Connecting gaze and memory via phase alignment of electrophysiological signals in the human medial temporal lobe

Julia Katharina Schaefer1, Benjamin J. Griffiths2, Thomas Schreiner1, Aditya Chowdhury1, Christian Vollmar1, Elisabeth Kaufmann1, Jan Remi1, Tobias Staudigl1

1Ludwig-Maximilians-Universität München, Germany, 2University of Birmingham, United Kingdom

Motion plays a crucial role in shaping brain processes, for example in how the brain anticipates the consequences of its own actions. By monitoring self-generated motion signals, the brain can predict and prepare for incoming stimuli. Because humans predominantly combine eye and head movements to direct their gaze to relevant stimuli in the environment, coordination of brain activity and gaze behavior seems highly adaptive.

Previous studies suggest that eye movements and brain activity align, and that this alignment is linked to cognitive function: in the hippocampus of non-human primates, saccade-related phase alignment of low-frequency activity has been shown to predict performance in a memory task. It remains unclear, however, whether such motion-induced alignment predicts memory performance in humans, and what the specific contributions of head and eye movements are.

This study investigates these questions by analyzing intracranial electrophysiology recorded in epilepsy patients who directed their gaze towards screens positioned in a semi-circle around them, while memorizing and later recalling images displayed on these screens. Initial findings indicate that successfully remembered images are associated with increased low-frequency phase alignment in the medial temporal lobe following head movements. Ongoing analyses focus on dissecting the contributions of saccades and head movements to phase alignment across brain regions to understand the dynamic nature of gaze-brain interactions. Our preliminary results support the idea of motion-triggered alignment of brain activity that aids memory formation. By exploring interactions between gaze and brain oscillations, we shed light on the critical connection between motion and higher-order brain processes.
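
For readers unfamiliar with the measure, a minimal Python sketch of how movement-locked phase alignment is commonly quantified (inter-trial phase coherence) follows; the data layout, frequency band, and filter settings are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def movement_locked_itc(epochs, fs, band=(2.0, 8.0)):
    """Inter-trial phase coherence (ITC) of low-frequency activity.

    epochs : array (n_events, n_samples) -- hypothetical layout of
             field-potential segments aligned to movement onset.
    Returns ITC over time; 1 = perfect phase alignment across events.
    """
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=-1)
    phase = np.angle(hilbert(filtered, axis=-1))
    # Length of the mean resultant vector across events at each sample
    return np.abs(np.mean(np.exp(1j * phase), axis=0))
```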

22) The impact of respiration on associative memory retrieval

Esteban Bullón Tarrasó1, Marit Petzka2, Bernhard Staresina3, Thomas Schreiner1

1Ludwig-Maximilians-Universität München, Germany, 2Universität Hamburg, Germany, 3University of Oxford, United Kingdom

Respiration has been shown to modulate both brain oscillations and memory retrieval processes in humans. However, the extent to which respiration directly influences retrieval-related neural oscillations and memory reactivation remains unclear. In this study, we reanalyzed an existing dataset comprising scalp electroencephalography (EEG) and respiration recordings throughout an experiment in which participants (N = 18) engaged in an episodic learning task across two experimental sessions. During each session, participants associated verbs with images of objects or scenes (counterbalanced). We found that the phase of respiration significantly influences EEG amplitudes in the alpha/beta range as well as behavioral retrieval success. In turn, retrieval-related alpha/beta power decreases and the accompanying memory reactivation were tightly locked to the exhalation troughs occurring after trial onset. While these results highlight the putative role of respiration in modulating both behavioral and neural aspects of memory retrieval, upcoming analyses will assess whether respiration directly impacts memory reactivation, and hence the neural substrate underlying conscious remembering.
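
As an illustration of how respiratory phase can be related to band-limited power, here is a minimal sketch (hypothetical data layout and bin count; not the authors' analysis code):

```python
import numpy as np
from scipy.signal import hilbert

def power_by_respiration_phase(resp, power, n_bins=12):
    """Bin a band-power time course by instantaneous respiration phase.

    resp  : respiration trace (assumed low-pass filtered), same
            sampling as `power`, the alpha/beta power time course.
    Returns phase-bin centers and mean power per bin; a non-uniform
    profile is one signature of respiration-locked modulation.
    """
    phase = np.angle(hilbert(resp - np.mean(resp)))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_power = np.array([np.mean(power[idx == k]) for k in range(n_bins)])
    return (edges[:-1] + edges[1:]) / 2, mean_power
```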

23) (Perceived) Social Support of Mothers and Fathers During the COVID-19 Pandemic

Jasmin Preiß1, Cristina Florea1, Monika Angerer1, Manuel Schabus1

1University of Salzburg, Austria

Social support is crucial for mental well-being in the postpartum period. Pandemic-related measures may have influenced how social support was received and perceived, making it important to identify its key predictors.

Between May 18, 2021 and July 1, 2021, an online survey was conducted to gain insight into the experience of becoming a parent during the COVID-19 pandemic in Germany and Austria. This report includes only families in which both parental figures completed the survey and could be matched as a couple (n = 100 couples). Factors assessed included perceived stress, coping mechanisms, receiving help, partner support, and subjective social support.

For mothers, significant predictors of social support included coping mechanisms (β = .29, p < .001), partner support (β = .36, p < .001), and help from family/friends (β = .29, p = .001), with R²adj. = .31 (p < .001). For fathers, significant predictors were coping mechanisms (β = .19, p = .036) and partner support (β = .45, p < .001), with R²adj. = .24 (p < .001). Overall, 65% of mothers and 58% of fathers reported receiving assistance from grandparents and/or extended family and friends. Among couples, 66% agreed on their “help status,” while 31% showed mild disagreement and 3% showed stronger disagreement.
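
For illustration, a hedged sketch of the kind of multiple regression behind such coefficients, run on simulated data with hypothetical variable names (predictors are standard normal, so the coefficients are comparable to the reported standardized βs):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100  # one observation per mother, mirroring the sample size
df = pd.DataFrame({
    "coping": rng.normal(size=n),
    "partner_support": rng.normal(size=n),
    "help_family_friends": rng.normal(size=n),
})
# Simulated outcome loosely echoing the reported effect pattern
df["social_support"] = (
    0.3 * df["coping"] + 0.35 * df["partner_support"]
    + 0.3 * df["help_family_friends"] + rng.normal(scale=0.8, size=n)
)

X = sm.add_constant(df[["coping", "partner_support", "help_family_friends"]])
fit = sm.OLS(df["social_support"], X).fit()
print(fit.params.round(2))
print("adj. R^2:", round(fit.rsquared_adj, 2))
```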

The results indicate that coping mechanisms and partner support significantly predict social support scores for both mothers and fathers, whereas help from extended family and friends is a significant predictor only for mothers. These findings highlight variability in family support during the pandemic and potential communication gaps within partnerships.

24) The Role of Alpha Power Lateralization in Auditory Processing: Insights from an EEG Neurofeedback Study

Felix Stockar1, Nataliya Fartdinova1, Tomas Ros2, Basil Preisig2

1University of Zurich, Switzerland, 2University of Geneva, Switzerland

Auditory spatial attention, the ability to focus selectively on specific sounds while ignoring others, is crucial for various cognitive tasks and daily activities. Neural oscillations in the alpha frequency band (8-12 Hz) have been implicated in attentional modulation, especially in visual perception. However, their functional role in auditory spatial attention remains unclear.

This study explored the relationship between alpha activity and auditory spatial attention using EEG neurofeedback (NF). Participants were trained to increase alpha power over left relative to right parieto-occipital sensors and vice versa. The training was implemented as a computer game in which participants were rewarded for modulating the lateralization of alpha power in the desired direction. During neurofeedback, auditory probes were presented from different spatial directions (-90°, -45°, 45°, and 90°) to assess whether changes in alpha lateralization had immediate effects on auditory processing. Furthermore, the impact of neurofeedback on auditory attention and resting-state alpha lateralization was assessed before and after NF training.
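
A minimal sketch of how an alpha lateralization index of this kind is often computed for feedback (Python; segment shapes, window length, and sign convention are assumptions, not the study's implementation):

```python
import numpy as np
from scipy.signal import welch

def alpha_power(seg, fs, band=(8.0, 12.0)):
    """Mean 8-12 Hz power of a short EEG segment (channels x samples)."""
    f, pxx = welch(seg, fs=fs, nperseg=min(seg.shape[-1], int(fs)))
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[..., mask].mean()

def lateralization_index(left_seg, right_seg, fs):
    """(L - R) / (L + R) alpha power over left vs. right
    parieto-occipital sensor groups; in an online NF loop, an index
    like this could drive the game's reward signal."""
    pl, pr = alpha_power(left_seg, fs), alpha_power(right_seg, fs)
    return (pl - pr) / (pl + pr)
```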

First results indicate that NF training effectively modulates alpha power lateralization towards the trained hemisphere. The data show a pattern whereby the average evoked potential (i.e., the absolute difference between the P1 and N1 components) in response to auditory probes presented contralaterally to the trained hemisphere is smaller than the evoked potentials elicited by ipsilateral probes. However, this NF training x probe direction interaction was not statistically significant.

The study provides initial evidence that neurofeedback can modulate alpha power lateralization. Its potential influence on auditory sensory processing remains to be determined.

25) Influence of the sensorimotor system on auditory cortical activity in tinnitus patients

Anne Schmitt1, Stefan Rampp1, Oliver Schnell1, Nadia Müller-Voggel1

1University Hospital Erlangen, Germany

Tinnitus is the perception of a phantom sound. Research has shown that the sensorimotor system can influence tinnitus perception (Shore et al., 2016); how the sensorimotor system interacts with auditory activity at the cortical level, however, remains largely unresolved. Here, we investigate how auditory perception is modulated by the sensorimotor system in the brain.

Twenty-three tinnitus patients performed relaxing versus tensing jaw exercises and subsequently listened to their tinnitus or to four different tinnitus-like sounds while brain activity was measured with magnetoencephalography (MEG). Differences in oscillatory activity were identified with sensor-level cluster-based permutation tests and a beamformer approach. Connectivity between regions was estimated using Partial Directed Coherence, and linear mixed-effects models (LMEs) were used to determine how brain activity and ratings interact.
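
To illustrate the cluster-based permutation approach named here, a minimal MNE-Python sketch on simulated data follows (array shapes and parameters are assumptions; a real sensor-level analysis would also supply a channel adjacency matrix rather than the default lattice adjacency):

```python
import numpy as np
from mne.stats import permutation_cluster_1samp_test

rng = np.random.default_rng(0)
# Simulated power estimates, shape (n_subjects, n_sensors, n_freqs);
# real input would be the sensor-level time-frequency data.
relaxed = rng.normal(size=(23, 102, 20))
tensed = relaxed + rng.normal(scale=1.0, size=(23, 102, 20)) + 0.4

# Paired contrast via a one-sample test on the within-subject difference
t_obs, clusters, cluster_pv, h0 = permutation_cluster_1samp_test(
    tensed - relaxed, n_permutations=1000, tail=0, seed=42)
print("clusters with p < .05:", int(np.sum(cluster_pv < 0.05)))
```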

Participants perceived their tinnitus, but not the tinnitus-like sounds, as less loud and more pleasant (p < .05) after relaxing versus tensing exercises. The tinnitus reduction was accompanied by a significant increase in alpha-band connectivity directed from the somatosensory to the auditory cortex and a significant gamma power reduction in the auditory cortex. Interestingly, only the right auditory gamma power decrease after relaxation remained evident when patients heard tinnitus-like sounds instead of their tinnitus.

We suggest that the increase in directed alpha-band connectivity from somatosensory to auditory cortex most likely reflects the transmission of inhibition from somatosensory to auditory cortex during relaxation, in parallel with the reduction of presumably tinnitus-related gamma power. The LMEs will give further insights into why the sensorimotor system interacts with auditory cortical activity differently during the perception of external tones.

26) Combining Insomnia Therapy with Sleep Tracking Using Wearables: Effects of a CBT-I-based App on Sleep – An RCT Study

Alexandra Hinterberger1, Esther-Sevil Eigl1, Robyn Schwemlein1, Pavlos Topalidis1, Manuel Schabus1

1University of Salzburg, Austria

Given the treatment gap in insomnia care, validated digital solutions are urgently needed. Here, we evaluate an innovative smartphone app combining i) CBT-I-based sleep training with ii) subjective as well as iii) objective sleep monitoring via a heart rate (HR) sensor, and iv) feedback based on objective sleep.

In this RCT study, fifty-seven self-reported poor sleepers (20-76 years; M=45.67±16.38; 39 female) were randomly assigned to an experimental group (EG, n=28) or a waitlist control group (CG, n=29). During a 6-week intervention phase, the EG used the CBT-I-based app program including sleep monitoring as well as feedback on their sleep, while the CG used sleep monitoring only. Sleep was measured i) subjectively with questionnaires (Insomnia Severity Index, ISI; Pittsburgh Sleep Quality Index, PSQI), ii) objectively with ambulatory polysomnography (PSG), and iii) continuously via HR sensor and sleep diaries.

Analyses revealed interactions for the ISI (p = .003, η²part = .11) and PSQI (p = .050, η²part = .05), indicating training-specific improvements for the EG but not the CG. While PSG-derived outcomes appear to be less training-specific, a trend-level reduction in wake after sleep onset (WASO) was found in the EG (p = .061, d = 0.55). Regarding changes in subjective-objective sleep discrepancies (SOSD), results indicate a reduction during the intervention for total sleep time in both groups, while improvements in sleep efficiency, sleep onset latency, and WASO were found in the EG only (ps ≤ .022, d ≥ 0.46).
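
For illustration, a sketch of the group x time mixed ANOVA that such interaction statistics typically come from, using pingouin on simulated long-format data (column names, scores, and effect sizes are invented for the example):

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
rows = []
for i in range(57):  # 28 EG + 29 CG participants
    group = "EG" if i < 28 else "CG"
    pre = rng.normal(17, 4)             # pre-intervention ISI score
    drop = 5 if group == "EG" else 1    # training-specific improvement
    rows += [
        {"subject": i, "group": group, "time": "pre", "isi": pre},
        {"subject": i, "group": group, "time": "post",
         "isi": pre - drop + rng.normal(0, 2)},
    ]
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="isi", within="time",
                     between="group", subject="subject")
print(aov[["Source", "p-unc", "np2"]])  # np2 = partial eta squared
```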

The findings indicate beneficial effects of an innovative smartphone app on sleep and SOSD. More scientific evaluation of such digital programs is needed in order to ultimately help provide effective, low-threshold treatment options.

27) Resilience and vulnerability of neural speech tracking after cochlear implantation in children with congenital or acquired deafness

Alessandra Federici1, Marta Fantoni1, Francesco Pavani2, Alice Martinelli3, Giacomo Handjaras1, Evgenia Bednaya1, Martina Berto1, Emiliano Ricciardi1, Elena Nava4, Eva Orzan5, Benedetta Bianchi6, Davide Bottari1

1IMT School for Advanced Studies Lucca, Italy, 2University of Trento, Italy, 3Fondazione Stella Maris, Istituti di Ricovero e Cura a Carattere Scientifico, Tirrenia, Italy, 4University of Milano-Bicocca, Italy, 5IRCCS Materno Infantile Burlo Garofolo, Istituti di Ricovero e Cura a Carattere Scientifico, Trieste, Italy, 6Meyer Children's Hospital, Florence, Italy

Infants are born with biological biases that favor language acquisition. One is the auditory system's ability to track the envelope of continuous speech, a pivotal feature for spoken language comprehension in adulthood. However, to what extent neural speech tracking relies on postnatal auditory experience remains unknown. We studied children with or without access to functional hearing in the first year of life after they received cochlear implants (CIs). We measured neural speech tracking in CI users with congenital deafness (CD) or with deafness acquired during development (AD; minimum auditory experience 12 months), as well as in hearing controls (HC; listening to original or vocoded speech). Remarkably, neural speech tracking in children with CIs was unaffected by the absence of perinatal auditory experience. Regardless of deafness onset, CI users and HC exhibited a similar speech tracking magnitude at short timescales (∼50–130 ms); however, the tracking was delayed, and its timing depended on the age of hearing restoration in CI users. Conversely, at longer timescales (∼150–250 ms), speech tracking was substantially dampened in CI users, accounting for their comprehension deficits. These differences were not accounted for by the degraded acoustic stimulation, as revealed by the speech tracking in HC listening to vocoded speech. These findings highlight (i) the resilience of sensory components of speech tracking to the lack of hearing in the first year of life, (ii) the crucial role of the timing of hearing restoration in mitigating the impact of atypical auditory experience, and (iii) the vulnerability of higher hierarchical levels of speech processing in CI users.
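
As a simplified illustration of envelope tracking, the sketch below cross-correlates a speech envelope with a neural signal at positive lags; real analyses of this kind typically use regularized temporal response function (TRF) models, and all parameters here are assumptions:

```python
import numpy as np

def envelope_tracking(envelope, eeg, fs, max_lag_ms=300):
    """Correlate the speech envelope with one neural channel at
    positive lags (brain following speech); the lag of the peak
    correlation indexes the tracking timescale.
    """
    env = (envelope - envelope.mean()) / envelope.std()
    sig = (eeg - eeg.mean()) / eeg.std()
    max_lag = int(fs * max_lag_ms / 1000)
    lags = np.arange(max_lag + 1)
    r = np.array([np.corrcoef(env[:env.size - lag], sig[lag:])[0, 1]
                  for lag in lags])
    return lags / fs * 1000.0, r  # lags in ms, correlation per lag
```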

28) Long-term acoustic contexts determine if non-speech contexts induce rate normalization effects in speech perception

Andrey Zyryanov1, Johanna Oeferle2, Assaf Breska3, Yulia Oganian1

1Universitätsklinikum Tübingen, Germany, 2University of Tübingen, Germany, 3Max Planck Institute for Biological Cybernetics, Max Planck Society, Tübingen, Germany

Auditory word perception depends on contextual speech rate. In German, a fixed vowel duration is perceived as long following fast speech but as short following slower speech, a phenomenon known as rate normalization (RN). For example, perception of /ban/ changes from ‘Bahn’ (‘train’, long /a/) to ‘Bann’ (‘spell’, short /a/) in slower contexts. However, whether non-speech context also drives RN is controversial. Here, we investigated the conditions under which non-speech contexts induce RN. In Experiment 1, we hypothesized that greater spectro-temporal similarity to speech strengthens RN. We compared a speech context to low-pass-filtered speech and to isochronous sequences of either complex (/u/ vowels) or pure (440 Hz) tones. All contexts were presented at fast (5.7 Hz) and slow (2.8 Hz) rates in counterbalanced order. RN effects were comparable for speech, complex-tone, and pure-tone contexts, whereas low-pass-filtered speech did not induce RN. Surprisingly, when complex tones were presented before the other contexts in Experiment 2, their effect vanished. A re-analysis of Experiment 1 by condition order suggested that this may be due to experimental context: complex tones induced RN only when presented after low-pass-filtered speech. In contrast, low-pass-filtered speech induced RN before exposure to complex tones but not after, suggesting that long-term context may affect RN. Experiment 3 tested this hypothesis. It showed that complex tones did not induce RN regardless of condition order, and replicated that low-pass-filtered speech induced RN before but not after exposure to complex tones. Overall, our findings suggest that RN effects of non-speech stimuli depend on long-term acoustic contexts.
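
To make the RN measure concrete, a minimal sketch of fitting psychometric functions and reading off the category-boundary shift between fast and slow contexts (the response proportions below are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def p_long(duration, mu, sigma):
    """Cumulative-Gaussian probability of a 'long vowel' response."""
    return norm.cdf(duration, loc=mu, scale=sigma)

# Hypothetical response proportions per vowel duration (ms)
durations = np.array([80, 100, 120, 140, 160, 180])
after_fast = np.array([0.10, 0.25, 0.55, 0.80, 0.92, 0.97])
after_slow = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.90])

(mu_fast, _), _ = curve_fit(p_long, durations, after_fast, p0=[120, 20])
(mu_slow, _), _ = curve_fit(p_long, durations, after_slow, p0=[120, 20])
# RN shows up as a rightward boundary shift after slow contexts:
print(f"category boundary shift: {mu_slow - mu_fast:.1f} ms")
```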

29) The Neural Impact of Hearing Loss on Spatial Attentional Filtering

Nataliya Fartdinova1, Mohsen Alavash2, Tzvetan Popov1, Malte Wöstmann2, Jonas Obleser2, Basil C. Preisig1

1University of Zurich, Switzerland, 2University of Lübeck, Germany

Target selection and distractor suppression are critical subprocesses of auditory spatial attention. It is known that individuals with hearing loss (HL) often struggle to focus on target talkers in crowded multi-talker environments. However, it remains unclear whether these cognitive mechanisms - target selection, distractor suppression, or both - are affected by hearing impairment and the aging brain. For this Stage 1-approved registered report, we recruited three groups varying in age and hearing loss to investigate whether any of these neural subprocesses of attention are diminished in listeners with HL. While participants performed an auditory attention task, we used electroencephalography (EEG) to measure alpha activity (8-12 Hz), which serves as a neural indicator of spatial selective attention. At the current state of data acquisition (the study employs an optional stopping approach), our results indicate that alpha oscillations implement distractor suppression independently of target selection. We observed alpha power decreasing contralaterally and increasing ipsilaterally to the target, and the opposite pattern for distractors. Current findings do not indicate any group differences, suggesting that older adults with and without HL exhibit neural filtering as efficient as that of normal-hearing (NH) listeners.
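
A minimal sketch of the contra- versus ipsilateral alpha modulation index that analyses like this typically compute per trial type (values and sign convention are illustrative assumptions):

```python
import numpy as np

def hemifield_modulation(alpha_contra, alpha_ipsi):
    """Normalized ipsi-minus-contra alpha power, relative to the
    stimulus of interest. Under the reported pattern, targets yield
    positive values (alpha higher ipsilaterally) and distractors
    negative values (alpha higher contralaterally)."""
    return (alpha_ipsi - alpha_contra) / (alpha_ipsi + alpha_contra)

# Illustrative per-subject alpha power values (arbitrary units)
print(hemifield_modulation(np.array([0.8]), np.array([1.2])))  # target
print(hemifield_modulation(np.array([1.1]), np.array([0.9])))  # distractor
```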

30) Neural synchronization with audiovisual speech relies on a sensitive period

Marta Fantoni1, Alessandra Federici1, Francesco Pavani2, Ivan Camponogara3, Giacomo Handjaras1, Emiliano Ricciardi1, Elena Nava4, Eva Orzan5, Benedetta Bianchi6, Stefan Debener7, Davide Bottari1

1IMT School for Advanced Studies Lucca, Italy, 2University of Trento, Italy, 3Zayed University, Abu Dhabi, United Arab Emirates, 4University of Milano-Bicocca, Italy, 5IRCCS Materno Infantile Burlo Garofolo, Istituti di Ricovero e Cura a Carattere Scientifico, Trieste, Italy, 6Meyer Children's Hospital, Florence, Italy, 7Carl von Ossietzky University of Oldenburg, Germany

Appropriate stimulation during specific time windows is crucial for shaping infants' development (Werker & Hensch, 2015). At the behavioral level, auditory deprivation in the first years of life can alter the ability to integrate audiovisual speech cues once hearing is restored with cochlear implants (CIs; Schorr et al., 2005). Yet, it is unclear whether the neural circuits responsible for audiovisual speech integration have a developmental sensitive period in which auditory information is required. Using EEG, we measured the neural tracking of auditory-only and audiovisual speech in children with congenital (CD) or acquired (AD) deafness and in hearing controls (HC). Because the CD and AD groups differed in the lack or presence of functional hearing in the first year of life, this allowed us to assess the role of audiovisual experience within this phase of brain development. For both the HC and AD groups, when the speaker's face was visible, speech tracking was anticipated at short timescales (∼30–150 ms). This facilitatory effect was absent in the CD group. Additionally, the HC and AD groups exhibited higher dissimilarity between auditory-only and audiovisual neural responses compared to the CD group. The results suggest that early acoustic deprivation hampers the fast integration of audiovisual speech signals. Importantly, despite the differences in neural tracking, the AD and CD groups showed comparable speech comprehension enhancements for audiovisual compared to auditory speech, highlighting that neural adaptations to different deafness onsets can lead to similar behavioral outcomes. Hence, early audiovisual experience is fundamental for developing the neural circuits subserving the low-level integration of audiovisual speech signals.
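
As a toy illustration of comparing auditory-only and audiovisual responses, one simple dissimilarity measure is 1 minus the Pearson correlation between the two response time courses (a simplification; the study's actual dissimilarity analysis is not specified here):

```python
import numpy as np

def response_dissimilarity(resp_a, resp_av):
    """1 - Pearson r between auditory-only and audiovisual neural
    response time courses; higher values indicate stronger reshaping
    of the response when the speaker's face is visible."""
    return 1.0 - np.corrcoef(resp_a, resp_av)[0, 1]
```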