We can confirm the following speakers:
- Evelina Fedorenko (MIT)
- Peter Hagoort (Donders / MPI)
- David Poeppel (NYU / MPI)
- Virginie van Wassenhove (Neurospin)
- Melissa Võ (University of Frankfurt)
- Sven Bestmann (UCL)
- Lars Muckli (University of Glasgow)
- Jody Culham (Western University)
- Christoph Korn (University of Hamburg)
- Sophie Scott (UCL)
Evelina Fedorenko (MIT)
The language system in the human mind and brain
Abstract: Human language surpasses all other animal communication systems in its complexity and generative power. I use behavioral, neuroimaging, and computational approaches to illuminate the functional architecture of language, with the goal of deciphering the representations and computations that enable us to understand and produce language. I will discuss three discoveries about the language system. First, I will show that the language network is selective for language processing over a wide range of non-linguistic processes. Next, I will challenge current proposals of the neural architecture of language, which argue that syntax (the rules for how words combine into phrases and sentences) is cognitively and neurally dissociable from the lexicon (word meanings). I will show that syntactic processing is not localized to a particular region within the language network, and that every brain region that responds to syntactic processing is at least as sensitive to word meanings, including when probed with a high-spatial/high-temporal-resolution method (ECoG). Finally, I will provide evidence that stimuli that are not syntactically well-formed but allow for meaning composition (operationalized within an information-theoretic framework) elicit as strong a response as intact sentences, suggesting that semantic composition may be the core driver of the response in the language-selective brain regions.
Virginie van Wassenhove (Neurospin)
Making sense of time in the human mind
Abstract: The neural mechanisms supporting temporal cognition remain debated. In this talk, I will reframe temporalities from the perspective of the brain itself (as generator-observer of events) as opposed to that of the external observer. I will illustrate the importance of oscillatory activity in the low-level temporal logistics of information processing, for instance in yielding temporal order and behavioral precision. I will also discuss recent findings showing that conscious timing may not map linearly onto neural timing, i.e., that temporalities are represented abstractly and intelligibly, and exemplify this with recent work focused on the generative nature of the psychological time arrow (mental time travel) and the ability to introspect about one's self-generated timing productions (temporal metacognition).
David Poeppel (NYU / MPI)
Speech rhythms and their audiomotor foundations
Peter Hagoort (Donders / MPI)
Far beyond the back of the brain
Abstract: Far beyond the back of the brain is where language happens. The infrastructure of the human brain allows us to acquire a language without formal instruction in the first years of life. I will discuss the features that make our brain language-ready. In addition to these neuro-architectural features, I will discuss the functional aspects of language processing. A central and influential idea among researchers of language is that our language faculty is organized according to Fregean compositionality, which implies that the meaning of an utterance is a function of the meaning of its parts and of the syntactic rules by which these parts are combined. fMRI results and results from recordings of event-related brain potentials will be presented that are inconsistent with this classical model of language interpretation. Our data support a model in which knowledge about the context and the world, knowledge about concomitant information from other modalities, and knowledge about the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. The Memory, Unification and Control (MUC) model of language accounts for these data. Resting-state connectivity data and data from a large MEG study (N = 204 participants) will be discussed, specifying the contributions of temporal and inferior frontal cortex. I will also discuss fMRI results that indicate the insufficiency of the Mirror Neuron Hypothesis to explain language understanding. Instead, understanding the message that the speaker wants to convey requires the contribution of the Theory of Mind network. I will sketch a picture of language processing from an embrained perspective. Overall, I will argue that a multiple-network perspective is needed to account for the neurobiological underpinnings of language to their full extent. Finally, I will illustrate why it is hard to give a good presentation.
Melissa Võ (University of Frankfurt)
Reading Scenes: How Scene Grammar Guides Attention in Real-World Environments
Abstract: The sources that guide attention are manifold and interact in complex ways. Internal goals, task rules, and salient external stimuli have been shown to be some of the strongholds of attentional control. But what guides attention in complex, real-world environments?
Following Wertheimer’s Gestalt ideas, I will argue that a scene is more than the sum of its objects. That is, attention during scene viewing is mainly controlled by generic knowledge regarding the meaningful composition of objects that make up a scene. Unlike arbitrary target objects placed in random arrays of distractors, objects in naturalistic scenes are placed in a very rule-governed manner. Thus, scene priors, i.e. expectations regarding which objects (scene semantics) are supposed to be where (scene syntax) within a scene, strongly guide attention. Violating such semantic and syntactic scene priors results in differential ERP responses similar to the ones observed in sentence processing, which might suggest some commonality in the mechanisms for processing meaning and structure across a wide variety of cognitive tasks.
In this talk, I will highlight some recent projects from my lab in which we have tried to shed more light on the influence of scene grammar on visual search, object perception and memory, its developmental trajectories, as well as its role in the ad-hoc creation of scenes in virtual reality scenarios.
Sven Bestmann (UCL)
The laminar and transient nature of sensorimotor beta activity
Abstract: Motor cortical beta activity (13-30 Hz) is a hallmark signature of healthy and pathological movement, but its behavioural relevance remains unclear. One reason for this is that slow, sustained changes in beta amplitude pre- and post-movement may not sufficiently summarize trial-wise dynamics in beta activity.
I will discuss recent approaches developed in the lab using high-SNR magnetoencephalography (MEG) for laminar-specific analyses of beta signals, including new approaches for obtaining better anatomical priors for MEG source reconstruction. I will present recent data on the laminar profile of average beta changes using some of these approaches, which support proposals about frequency-specific channels for feedback and feedforward processing. However, high-power beta changes are transient in nature, dominated by punctate high-power beta events (bursts). Biophysical models and improved source reconstruction hint at a more complex laminar profile of transient beta events, with possible implications for theories of the laminar organization of these signals and their role in feedback/feedforward processing. These results call for a reappraisal of the functional role of sensorimotor beta activity in human cortex.
Lars Muckli (University of Glasgow)
Visual Predictions in different layers of visual cortex
Abstract: Normal brain function involves the interaction of internal processes with incoming sensory stimuli. We have created a series of brain-imaging experiments that sample internal models and feedback mechanisms in early visual cortex. Primary visual cortex (V1) is the entry stage for cortical processing of visual information. We can show that there are two information counter-streams, concerned with (1) retinotopic visual input and (2) top-down predictions from internal models generated by the brain. Our results speak to the conceptual framework of predictive coding. Internal models amplify or attenuate incoming information. The brain is a prediction machine. Healthy brain function strikes a balance between the precision of predictions and prediction updates based on prediction error. Our results incorporate state-of-the-art, layer-specific ultra-high-field fMRI and other imaging techniques. We argue that fMRI, with its capability of measuring dendritic energy consumption, is sensitive to activity in different parts of layer-spanning neurons, which enriches our computational understanding of counter-stream brain mechanisms.
Jody Culham (Western University)
“The treachery of images”: How the realness of objects affects brain activation and behavior
Abstract: Psychologists and neuroimagers commonly study perceptual and cognitive processes using images because of the convenience and ease of experimental control they provide. However, real objects differ from pictures in many ways, including the potential for interaction and richer information about distance (and thus physical size). Across a series of neuroimaging and behavioral experiments, we have shown different neural responses to real objects than to pictures, in terms of the level and pattern of brain activation as well as visual preferences indicated by eye tracking. Now that these results have shown quantitative and qualitative differences in the processing of real objects and images, the next step is to determine which aspects of real and virtual objects drive these differences.
Christoph Korn (University of Hamburg)
Modelling the trade-offs between optimal and heuristic solutions for multistep decisions and social learning
Abstract: Humans face many complex decision-making and learning situations in which the computation of optimal solutions challenges – or even surpasses – cognitive capacities. Therefore, humans often resort to heuristic solutions. Formal models that adequately capture the neuro-cognitive mechanisms of the trade-offs between optimal and heuristic solutions are lacking. Here, I focus on two pertinent scenarios:
First, I will present a series of partly published studies that show how humans combine optimal and heuristic solutions to maximize rewards in multistep decision scenarios. Results obtained from behavioral modelling and functional neuroimaging suggest a role of the medial prefrontal cortex in the computation of the employed policies and of the uncertainty associated with relying on these policies.
Second, I will describe unpublished experiments that outline how humans get to know other persons by updating estimates of these persons’ character traits. The best-fitting models combine principles derived from reinforcement learning algorithms with participants’ world knowledge about the distributions and interrelations of different character traits. Two functional neuroimaging datasets show that these interrelations between character traits are represented in the medial prefrontal cortex.
Taken together, the projects presented aim to provide neuro-computational accounts of the trade-offs in complex decision-making and learning processes.
Sophie Scott (UCL)
Sounds, speech and actions: towards a new model of human auditory processing
Abstract: We have known for over twenty years that the auditory system in humans and non-human primates is organised into anatomical and functional streams of processing: a rostral stream associated with recognition processes, and a caudal stream linked variously to sensorimotor and/or spatial processing of sound. However, there have been two clear limitations to this approach. The first is that no unifying, domain-general framework emerged around these studies; all the theoretical models were focussed on more domain-specific approaches (e.g. speech, language, music, prosody, etc.). The second is a lack of organisational or computational principles that might distinguish the kinds of processing occurring within these different streams and that might underlie these functional differences. Using new data from non-human and human studies of audition, I will present a new domain-general approach to this issue, using the different temporal response characteristics of the rostral and caudal streams as an example of their different computational properties.