Eleonora Marcantoni1, Danying Wang2, Robin Ince1, Lauri Parkkonen3, Satu Palva4, Simon Hanslmayr1
1University of Glasgow, United Kingdom, 2University College London, University of London, United Kingdom, 3Aalto University, Espoo, Finland, 4University of Helsinki, Finland
Hippocampal theta oscillations are considered critical for binding multisensory information into episodic memories. Recent studies suggest that entraining theta oscillations through 4-Hz audio–visual Rhythmic Sensory Stimulation (RSS) can significantly enhance memory performance in humans. This “one-size-fits-all” approach, however, neglects the differences in brain activity among individuals, which could account for the variability in results. To address this limitation, we developed a new pipeline designed to estimate the individual hippocampal theta frequency during a memory task and dynamically align the stimulation parameters to it.
The pipeline involves extracting the hippocampal signals during an MEG measurement using an LCMV beamformer. Theta activity is then separated from the broadband signal by applying a Generalized Eigenvalue Decomposition (GED). Finally, the Cyclic Homogeneous Oscillation detection method (CHO) is applied to detect the presence of an oscillation and identify its centre frequency. This frequency is then used to adjust the flickering frequency of the sensory stimuli. As a first step, we validated the feasibility of the pipeline on rodent LFP data, aiming to replicate the well-established correlation between running speed and hippocampal theta frequency. The results indicate that the pipeline reproduced previous findings. The pipeline was then tested offline on an MEG dataset involving 4-Hz RSS during an associative memory task. Here, our objective was to assess whether the pipeline could accurately identify the entrainment effect induced by the stimulation. Our results indicate that hippocampal oscillations during the stimulation were significantly closer to 4 Hz than in the pre-stimulus window.
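For illustration, the GED step and a simplified peak-frequency estimate (a stand-in for the full CHO method) might look as follows in Python; the synthetic data, filter settings, and peak picking are our assumptions, not the authors' code.

```python
# Minimal sketch of GED-based theta isolation plus a peak-frequency estimate.
# Synthetic data and the simple Welch peak pick stand in for the authors'
# LCMV + GED + CHO pipeline; they are illustrative assumptions only.
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt, welch

rng = np.random.default_rng(0)
fs, n_ch, n_samp = 250, 32, 250 * 60           # 60 s of 32-channel "source" data
data = rng.standard_normal((n_ch, n_samp))     # broadband noise
theta = np.sin(2 * np.pi * 5.2 * np.arange(n_samp) / fs)   # hidden 5.2-Hz rhythm
data += 0.5 * np.outer(rng.standard_normal(n_ch), theta)   # project into channels

# 1) Narrowband-filter the data around theta (4-8 Hz).
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta_data = filtfilt(b, a, data, axis=1)

# 2) GED: eigenvectors maximizing theta-band vs. broadband covariance.
S = np.cov(theta_data)                         # signal covariance (theta band)
R = np.cov(data)                               # reference covariance (broadband)
evals, evecs = eigh(S, R)                      # generalized eigendecomposition
w = evecs[:, -1]                               # component with the largest ratio
component = w @ data                           # theta-dominated time course

# 3) Peak frequency of the component (simplified stand-in for CHO).
f, pxx = welch(component, fs=fs, nperseg=4 * fs)
band = (f >= 3) & (f <= 9)
peak_freq = f[band][np.argmax(pxx[band])]
print(f"Estimated individual theta frequency: {peak_freq:.2f} Hz")
```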
Andreas Wutz1
1University of Salzburg, Austria
Is our conscious perception of seeing a flash, hearing a sound or feeling a touch associated with one common core brain-activity pattern or a specific brain-body interactive state? Here, I present novel MEG, cardiac and respiratory data that investigate such supramodal neural correlates of conscious perception and their relationships to ongoing dynamics in the body. On each trial, different visual, auditory or tactile stimuli were shown at individual perceptual thresholds, such that about half of the stimuli were consciously detected, while the other half were missed. Four different stimuli per modality were used in order to subsequently leverage representational similarity analysis (RSA) to differentiate modality-specific, sensory processes from supramodal conscious processes, which are similar across the senses. As expected, the neural data showed stronger evoked MEG activity for detected stimuli in the respective sensory cortices. Conversely, on missed trials there was greater alpha-frequency band power for all three modalities. Moreover, the RSA distinguished brain-activity patterns related to modality-specific processes shortly after stimulus onset (<0.5 s) from later supramodal conscious processes (>0.5 s). Ongoing analyses investigate the relationship of modality-specific and supramodal brain-activity patterns with the participants' concurrent cardiac and respiratory activity. Our work aims at a multi-stage model of conscious experiences, involving alpha oscillations, modality-specific processing upon stimulus onset and later supramodal conscious perception. This temporal processing cascade in the brain may be further modulated by ongoing state changes in the body, serving the optimal integration of conscious experiences with the perceiver's bodily state.
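For illustration, the logic of the RSA step, comparing a time-resolved neural RDM against a modality model and a supramodal (detected vs. missed) model, can be sketched as follows; the condition structure and data are synthetic assumptions, not the study's recordings.

```python
# Minimal RSA sketch: a time-resolved neural RDM is correlated with a
# modality model and a supramodal awareness (detected vs. missed) model.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_sens, n_time = 64, 100
# 24 conditions: 3 modalities x 4 stimuli x detected/missed
modality = np.repeat([0, 1, 2], 8)
detected = np.tile([0, 1], 12)

# Model RDMs: dissimilar if different modality / different awareness state.
model_modality = pdist(modality[:, None], metric="hamming")
model_aware = pdist(detected[:, None], metric="hamming")

patterns = rng.standard_normal((24, n_sens, n_time))   # condition x sensor x time
r_mod, r_awr = np.zeros(n_time), np.zeros(n_time)
for t in range(n_time):
    neural_rdm = pdist(patterns[:, :, t], metric="correlation")
    r_mod[t], _ = spearmanr(neural_rdm, model_modality)
    r_awr[t], _ = spearmanr(neural_rdm, model_aware)
# Comparing r_mod and r_awr over time separates early sensory structure
# from later supramodal (awareness-related) structure.
```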
Tobias Hausinger1, Björn Probst1, Stefan Hawelka1, Belinda Pletzer1
1University of Salzburg, Austria
Sex- and menstrual-cycle-related differences in holistic and detail-oriented processing strategies are well documented across cognitive domains such as pattern recognition, navigation, and object-location memory. This study is the first to employ a part-whole face recognition task while controlling for sex-hormone status to investigate a potential role of strategy differences in the formation of face representations. We assessed 140 participants (49 luteal-phase females, 18 non-luteal females, 73 males) and found significant sex differences in the part-whole effect between males and luteal-phase females. In particular, this sex difference was driven by luteal-phase females exhibiting higher face-part recognition accuracy than males. As this advantage was observed exclusively for female stimulus faces, we discuss a potential relation to the own-gender bias in face recognition. In addition, exploratory analyses suggest that testosterone levels may partly mediate the observed sex differences. Eye-tracking during the face recognition phase further revealed more frequent fixations on the central interocular face region in males, indicating a stronger reliance on holistic processing strategies.
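For illustration, such an exploratory mediation analysis is often run as a product-of-coefficients bootstrap; a minimal sketch on simulated data follows, with variable names and effect sizes that are hypothetical, not the study's.

```python
# Hedged sketch of a bootstrap mediation test (sex -> testosterone ->
# part-recognition accuracy). Simulated data; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 140
sex = rng.integers(0, 2, n).astype(float)            # 0 = male, 1 = luteal female
testosterone = 2.0 - 1.2 * sex + rng.standard_normal(n)              # path a
part_acc = 0.6 - 0.05 * testosterone + rng.standard_normal(n) * 0.1  # path b

def coefs(X, y):
    """OLS coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

boot = np.empty(5000)
for k in range(5000):
    i = rng.integers(0, n, n)                        # resample participants
    a = coefs(sex[i], testosterone[i])[1]            # sex -> testosterone
    # testosterone -> accuracy, adjusting for sex (classic mediation b path)
    b = coefs(np.column_stack([sex[i], testosterone[i]]), part_acc[i])[2]
    boot[k] = a * b                                  # indirect effect
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```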
Ingmar de Vries1, Eva Berlot1, Christoph Huber-Huber2, Floris de Lange1, Moritz Wurm2
1Radboud University Nijmegen, Netherlands, 2University of Trento, Italy
In dynamic environments (e.g., traffic or sports), our brain is faced with a continuous stream of changing sensory input. Adaptive behavior in such environments requires our brain to predict unfolding external dynamics. While theories propose such dynamic prediction, empirical evidence is limited to static snapshots and indirect consequences of predictions. We apply a dynamic extension of representational similarity analysis (dRSA) that captures neural representations of unfolding events across hierarchical levels of processing (from perceptual to conceptual) by investigating the match between a temporally variable stimulus model at a given time-point and the neural representation across time. Using this novel approach, we find empirical evidence for neural predictions in MEG data across hierarchical timescales, with high-level conceptual stimulus features predicted earlier in time and low-level perceptual features predicted closer in time to the actual sensory input. Second, we demonstrate that reducing stimulus familiarity by either inversion (up-down) or temporal piecewise scrambling of simple action videos impairs neural predictions in a hierarchical-level-specific manner: inversion selectively impairs predictions at the highest hierarchical level, while piecewise scrambling impairs predictions at all levels. Last, we show preliminary data from naturalistic movie watching suggesting that familiarity with the movie results in earlier perceptual predictions. To conclude, using dRSA we demonstrate how neural representations across hierarchical levels, from perceptual to conceptual, are predictive in nature, how predictions at different hierarchical levels can be manipulated independently, and how this new approach can be used to study the effect of stimulus familiarity on perceptual predictions.
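For illustration, the core dRSA computation can be sketched as follows: a model RDM at each stimulus time-point is compared with neural RDMs at every neural time-point, yielding a stimulus-time by neural-time matrix whose peak lags index prediction. This is a minimal sketch on synthetic data; the array names, sizes, and correlation-distance RDMs are our assumptions, not the authors' implementation.

```python
# Sketch of a dynamic RSA (dRSA) time-by-time correlation matrix.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_vid, n_feat, n_sens, n_time = 14, 10, 64, 120
stim_model = rng.standard_normal((n_vid, n_feat, n_time))   # e.g., motion energy
neural = rng.standard_normal((n_vid, n_sens, n_time))       # source-level MEG

# Precompute neural RDMs (one per neural time-point).
neural_rdms = [pdist(neural[:, :, t], metric="correlation") for t in range(n_time)]

drsa = np.zeros((n_time, n_time))              # stimulus time x neural time
for ts in range(n_time):
    model_rdm = pdist(stim_model[:, :, ts], metric="correlation")
    for tn in range(n_time):
        drsa[ts, tn], _ = spearmanr(model_rdm, neural_rdms[tn])

# For each stimulus time, the neural time of the peak correlation gives the
# lag: negative lags (neural leads the stimulus) are evidence for prediction.
lags = drsa.argmax(axis=1) - np.arange(n_time)
```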
Xiongbo Wu1, Tobias Staudigl1
1Ludwig-Maximilians-Universität in Munich, Germany
Humans rely heavily on vision to explore the environment. By making saccades, humans move their fovea to regions of interest, where fixation affords high acuity. However, even during fixation, the eyes never remain completely still. Among such fixational eye movements, microsaccades have been suggested to play an important role in visual attention and perception. Here, we ask whether and how microsaccades modulate the neural correlates of visual perception and influence performance in humans. We simultaneously recorded scalp EEG and eye tracking while participants performed a near-threshold visual detection task, detecting the orientation of a brief, masked visual target stimulus. The time interval between target and mask was adapted to each participant's individual detection threshold following a 2-up-1-down staircase procedure. Focusing on the interval prior to target onset, we observed that microsaccades were adjusted to upcoming task demands. Strikingly, the occurrence of microsaccades close to target onset significantly reduced detection accuracy. We also found phase alignment of low-frequency brain activity when locking the data to microsaccade onset, similar to previous findings for large saccades. As a next step, we will examine whether the pre-stimulus phase predicts detection performance, an observation called phase bifurcation, and explore how phase bifurcation varies with the occurrence of microsaccades relative to target onset. Together, we show that microsaccades modulate human electrophysiology and affect visual detection performance, indicating that eye movements and neural activity jointly predict behavior.
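For illustration, microsaccades are commonly detected with a velocity-threshold algorithm in the style of Engbert & Kliegl (2003); the sketch below on simulated gaze data shows the idea, though it is not necessarily the authors' exact implementation.

```python
# Velocity-threshold microsaccade detection (Engbert & Kliegl style) on
# simulated gaze data. Parameters are common defaults, not the study's.
import numpy as np

rng = np.random.default_rng(4)
fs = 1000                                          # eye-tracker sampling rate (Hz)
gaze = np.cumsum(rng.standard_normal((2 * fs, 2)) * 0.001, axis=0)  # x, y in deg
gaze[800:815] += np.linspace(0, 0.4, 15)[:, None]  # ramp of a tiny saccade
gaze[815:] += 0.4                                  # sustained displacement after it

def ek_velocity(pos, fs):
    """Smooth 5-point velocity estimate (Engbert & Kliegl, 2003)."""
    v = np.zeros_like(pos)
    v[2:-2] = (pos[4:] + pos[3:-1] - pos[1:-3] - pos[:-4]) * fs / 6.0
    return v

vel = ek_velocity(gaze, fs)
# Robust, median-based velocity threshold per axis (lambda * SD).
sd = np.sqrt(np.median(vel**2, axis=0) - np.median(vel, axis=0) ** 2)
lam = 6.0
above = (vel[:, 0] / (lam * sd[0])) ** 2 + (vel[:, 1] / (lam * sd[1])) ** 2 > 1

# Keep supra-threshold runs of at least 6 ms as candidate microsaccades.
min_len, onsets, i = int(0.006 * fs), [], 0
while i < len(above):
    if above[i]:
        j = i
        while j < len(above) and above[j]:
            j += 1
        if j - i >= min_len:
            onsets.append(i)
        i = j
    else:
        i += 1
print(f"Detected {len(onsets)} candidate microsaccade(s) at samples {onsets}")
```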
Jannik Heimann1, Pauline Petereit1, Anat Perry2, Ulrike Krämer1
1University of Lübeck, Germany, 2Hebrew University of Jerusalem, Israel
In recent years, social interactions have increasingly shifted to computer-mediated, online settings, with unclear implications for cognitive and affective processes. In our study, we asked how the social presence of another person influences pain empathy. We manipulated temporal presence as one dimension of social presence, which reflects the synchronicity and opportunity for interactivity within a social situation. We assumed that temporal presence affects behavioral and neural responses to others’ pain.
To investigate this, we conducted an empathy-for-pain experiment comparing a reciprocal interaction via video camera with a unidirectional condition in which a pre-recorded video was presented. Thirty-five participants alternately served as targets and observers of painful electric stimulation while their behavioral ratings, heartbeat, skin conductance response (SCR), and electroencephalogram were recorded.
We found that observers perceived the immediacy and closeness of the unidirectional condition as reduced compared to the interactive condition. Nevertheless, no differences in empathic accuracy or unpleasantness ratings were found. Mu suppression, a neural index of empathy, did not differ between conditions either. However, frontal low-theta activity (3-5 Hz) was reduced in the unidirectional video condition, presumably reflecting reduced processing of the aversive, salient stimuli. Observers' SCR increased with shock intensity but did not differ between presence conditions.
In sum, our data show that temporal presence did not modulate behavioral or electrodermal correlates of pain empathy and had only subtle effects on empathy-related frontal theta activity. Future studies will have to clarify whether this also applies to more complex, naturalistic social interactions.
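For illustration, frontal low-theta (3-5 Hz) power of the kind reported above is commonly quantified via band-pass filtering and the Hilbert envelope; a minimal sketch on simulated data (not the authors' pipeline) follows.

```python
# Band-limited (3-5 Hz) theta power via band-pass filter + Hilbert envelope.
# Simulated single-channel epochs; channel choice and windows are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(5)
fs, n_trials, n_samp = 500, 40, 500 * 2        # 2-s epochs at 500 Hz
eeg = rng.standard_normal((n_trials, n_samp))  # e.g., channel Fz, trials x time

b, a = butter(4, [3 / (fs / 2), 5 / (fs / 2)], btype="band")
theta = filtfilt(b, a, eeg, axis=1)
power = np.abs(hilbert(theta, axis=1)) ** 2    # instantaneous theta power

# Average over a post-stimulus window; condition means per participant can
# then be compared, e.g., with a paired t-test.
window = slice(int(0.2 * fs), int(0.8 * fs))
trial_theta = power[:, window].mean(axis=1)
```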
Rodrigo Donoso-San Martín1,2,3
1Pontifical Catholic University of Chile, Santiago, Chile, 2Hospital Clínico de la Universidad de Chile, Santiago, Chile, 3University of Tübingen, Germany
Acquired auditory processing disorders, including age-dependent hearing loss, speech-discrimination deficits, tinnitus, and hyperacusis, require a personalized diagnosis that assigns the individual cause within the auditory hierarchy to peripheral, subcortical, or distinct cortical or cortico-fugal neuronal dysfunctions. A well-functioning feedforward and feedback parvalbumin-interneuron (PV-IN) network is an essential precondition for temporal intracortical network function in audition, which, more than any other sense, relies on a high speed of information flow (Zajac & Nettelbeck, 2018). We hypothesize disease-specific deficits in temporal intracortical network function in auditory circuits; diagnosing such deficits should therefore have a special significance. We used time-sensitive OPM-MEG measurements and aimed to study different auditory stimulus paradigms to detect fast auditory processing in different groups of tinnitus patients with and without hyperacusis or presbycusis. We expect this method to become an efficient diagnostic strategy for disentangling peripheral and central contributions to the distinct auditory impairments, thereby improving individualized, targeted interventional therapies. Here, we will present preliminary results demonstrating the usability and function of OPM-MEG for hearing research.
Acknowledgment and funding: This work was supported by the Deutsche Forschungsgemeinschaft DFG KN 316/13-1, DFG RU 713/6-1, ERA-NET NEURON JTC 2020: BMBF 01EW2102 CoSySpeech and FWO G0H6420N
Martina Fanghella1, Guido Barchiesi1, Marta Bortoletto2, Agnese Zazio2, Alexandra Battaglia-Mayer3, Corrado Sinigaglia1
1University of Milan, Italy, 2Centro San Giovanni di Dio Fatebenefratelli, Istituti di Ricovero e Cura a Carattere Scientifico, Brescia, Italy, 3Sapienza University of Rome, Italy
Anyone who has ever walked, cooked, or crafted with a friend knows that acting jointly is not just acting side-by-side. Unlike acting side-by-side, where agents pursue individual goals, acting jointly requires that a collective goal guide their actions. Yet previous studies have largely ignored this difference, thereby failing to isolate what is distinctive of acting jointly.
Our study used a dual-EEG approach to investigate brain markers of action planning and execution specific to joint action. We recruited twenty dyads of participants and had them play a joystick video game. The game involved grabbing and transporting an object, either jointly (Joint-Action condition, JA) or in parallel but individually (Parallel-Action condition, PA). We designed the tasks to ensure equal coordination demands across conditions. Our behavioral measurements included success rate, reaction times (RT), movement velocity, and movement direction, while our EEG measurements focused on two event-related potentials (ERPs): the late Contingent Negative Variation (CNV) and the Motor Potential (MP), following Kourtis et al. (2014, 2019).
While the success rates of the two tasks did not differ significantly, the mean variability of RT, velocity, and direction was significantly lower in JA than in PA. Strikingly, the mean CNV and MP amplitudes were also significantly lower in JA than in PA.
Overall, our results suggest that, in joint action, acting toward a collective goal facilitates interpersonal coordination compared to acting side-by-side. Indeed, joint action appears more predictable (as suggested by reduced behavioral variability) and less demanding (as highlighted by reduced CNV and MP amplitudes) than parallel action.
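For illustration, late-CNV (and analogously MP) amplitudes are typically quantified as the mean baseline-corrected amplitude in a fixed pre-event window; the sketch below uses simulated epochs and illustrative window choices, not the authors' parameters.

```python
# Mean baseline-corrected amplitude in a late pre-event window, a typical
# late-CNV measure. Epoch layout, channel, and windows are assumptions.
import numpy as np

rng = np.random.default_rng(6)
fs = 500
t = np.arange(-2.0 * fs, 0.5 * fs) / fs        # epoch: -2.0 to +0.5 s
epochs = rng.standard_normal((60, t.size))     # trials x time, e.g., channel Cz

baseline = (t >= -2.0) & (t < -1.8)            # pre-warning baseline window
late_cnv = (t >= -0.5) & (t < 0.0)             # late CNV window

corrected = epochs - epochs[:, baseline].mean(axis=1, keepdims=True)
cnv_amp = corrected[:, late_cnv].mean(axis=1)  # one amplitude per trial
print(f"Mean late-CNV amplitude: {cnv_amp.mean():.3f} (arbitrary units)")
```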
Fabian Schwimmbeck1, Johannes Niediek2, Thomas Schreiner3, Eugen Trinka4, Florian Mormann5
1University of Salzburg, Austria, 2Technical University of Berlin, Germany, 3Ludwig-Maximilians-Universität in Munich, Germany, 4Paracelsus Medical University, Salzburg, Austria, 5University Hospital Bonn, Germany
Memory consolidation is assumed to rely on fine-tuned communication within and between hippocampal and cortical neuronal circuits during offline brain states. Sharp-wave ripples (SWRs) have been proposed as the pivotal signature for consolidation, triggering hippocampal replay and information transfer to cortical sites. Critically, although SWRs are precisely coupled to the cardinal NREM sleep-related brain rhythms (i.e., cortical slow oscillations and thalamocortical spindles), evidence for a link between SWRs and directed information transfer is scarce. Here, we leveraged the rare opportunity of nocturnal single-unit and LFP recordings in neurosurgical patients to uncover the SWR-triggered information flow by tracking the impact of SWRs on single-neuron activity along the hippocampal output network, including entorhinal and parahippocampal cortex.
Preliminary results indicate a consistent pattern of temporally precise increases in neuronal firing rate (FR) synchronized with hippocampal SWRs. Importantly, FR increases were not confined to local hippocampal neurons but also became apparent in single neurons in distant, downstream regions, suggesting an interregional impact of SWRs. The temporal delay along the hippocampal-neocortical pathway suggests a causal directionality, with the hippocampus as the driving hub. Interestingly, we found that concept neurons involved in a pre-sleep memory task were selectively activated at downstream targets. Finally, we show that these processes were finely tuned to cortical slow-oscillation up-states, which shaped time windows of high excitability among MTL neurons during which SWRs emerged. Together, our findings support the idea of a causal relationship between SWRs and hippocampal-neocortical information transmission during sleep, underpinning their essential mechanistic function in systems consolidation.
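For illustration, SWRs are commonly detected by ripple-band filtering plus an amplitude-and-duration criterion; the sketch below uses simulated LFP with illustrative band limits and thresholds, not the authors' exact detector.

```python
# Standard sharp-wave-ripple detection: ripple-band filter, smoothed
# envelope, and an amplitude + duration criterion. Simulated LFP.
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(7)
fs, n_samp = 1000, 1000 * 30                   # 30 s of hippocampal LFP
lfp = rng.standard_normal(n_samp)
burst = np.sin(2 * np.pi * 90 * np.arange(80) / fs) * np.hanning(80)
lfp[5000:5080] += 3 * burst                    # inject one ripple-like event

b, a = butter(4, [80 / (fs / 2), 120 / (fs / 2)], btype="band")
env = np.abs(hilbert(filtfilt(b, a, lfp)))     # ripple-band envelope
env = uniform_filter1d(env, size=int(0.01 * fs))   # 10-ms smoothing

thresh = env.mean() + 3 * env.std()
above = env > thresh
# Keep supra-threshold segments lasting >= 20 ms as candidate SWRs.
min_len = int(0.02 * fs)
events, i = [], 0
while i < n_samp:
    if above[i]:
        j = i
        while j < n_samp and above[j]:
            j += 1
        if j - i >= min_len:
            events.append((i / fs, j / fs))    # onset, offset in seconds
        i = j
    else:
        i += 1
print(f"{len(events)} candidate SWR(s): {events}")
```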
Gaia Lapomarda1, Alessio Fracasso2, Carmen Morawetz1, Alessandro Grecucci3, David Melcher4
1University of Innsbruck, Austria, 2University of Glasgow, United Kingdom, 3University of Trento, Italy, 4New York University Abu Dhabi, New York University, United Arab Emirates
Time perception is crucial in everyday life, and emotions can modulate it. Interoception influences emotional experiences, and the insula plays a key role in this process. However, the neural representation of the relationship between time, emotions, and the body remains unclear. We investigated the effect of anxiety on time perception, considering individual variations in interoception and trait anxiety. We hypothesized that better interoception would predict more intense anxiety, disrupting time perception, and that this would be mirrored in a modulatory effect of the amygdala on the integrative function of the insula. Thirty participants performed an auditory temporal reproduction task while undergoing fMRI. In half of the blocks, they were at risk of hearing random screams (threat blocks), whereas in the other half, they were assured that no screams would be presented. Interoceptive accuracy and trait anxiety were assessed outside the scanner. Our paradigm successfully induced affective changes, with higher perceived anxiety (state anxiety) in the threat blocks (SE = 1.51, t = 6.22, p < .001). Higher interoceptive accuracy (SE = 1.45, t = 2.66, p = .008) and higher trait anxiety (SE = 4.17, t = 2.36, p = .02) were also related to increased state anxiety. In turn, increased state anxiety predicted lower accuracy in temporal reproduction (SE = .006, t = -2.11, p = .03). Higher interoceptive accuracy also predicted lower accuracy in the reproduction of longer durations (9 s: SE = .005, t = -5.61, p < .001; 14 s: SE = .005, t = -4.98, p < .001). To determine the interaction effect of emotions and temporal experience at the neural level, we examined the functional interplay between the amygdala and the insula. These results suggest a disruptive effect of anxiety on temporal perception, modulated by individual variations in interoception. Exploring the neural underpinnings of this process can inform how brain-body interactions modulate affective and cognitive processes.
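For illustration, the reported SE/t/p values are consistent with linear mixed models; a hedged sketch of such an analysis with statsmodels follows, using hypothetical column names and simulated data rather than the study's dataset.

```python
# Sketch of a linear mixed model (random intercept per participant) of the
# kind suggested by the reported statistics. Simulated, hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n_sub, n_trial = 30, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_sub), n_trial),
    "threat": np.tile([0, 1], n_sub * n_trial // 2),
    "state_anxiety": rng.normal(40, 10, n_sub * n_trial),
})
# Reproduction accuracy decreasing with state anxiety (illustrative effect).
df["accuracy"] = (0.9 - 0.006 * (df["state_anxiety"] - 40)
                  + rng.normal(0, 0.05, len(df)))

model = smf.mixedlm("accuracy ~ state_anxiety + threat", df,
                    groups=df["subject"])
result = model.fit()
print(result.summary())        # fixed-effect estimates, SEs, and p-values
```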