Human listeners and signers reliably recover discrete, structured interpretations from a continuous and highly ambiguous acoustic and visual signal. Explaining how neural systems transform this unfolding physical input into compositional linguistic and relational structures remains a central challenge for cognitive (neuro)science. Magnetoencephalography (MEG) provides a powerful window onto the temporal whole-brain dynamics that accompany this process. Much recent work has sought to identify correlates of linguistic structure directly in neural readouts, for example by linking oscillatory activity to particular linguistic units or hierarchical levels. While such approaches have revealed important aspects of the temporal organization of language processing, they can invite an overly literal interpretation in which linguistic structures—such as syntactic trees—are treated as candidate neural encodings. In this talk, I argue for a different perspective. Linguistic structures are formal descriptions of the relational computations that language users perform; the central question for neuroscience is how those computations are implemented in neural population dynamics unfolding in time. Importantly, this perspective strengthens—rather than weakens—the role of formal theories from linguistics and psycholinguistics in cognitive science and neuroscience. For example, theories of syntax and semantics specify the relational distinctions that must be made during processing and therefore provide essential constraints on mechanistic accounts of brain function. Across studies combining naturalistic spoken language comprehension paradigms, computational modeling, and analyses of cross-frequency neural dynamics, I describe how linguistic structure and statistical expectations jointly constrain evolving neural states across interacting timescales. I argue that progress in understanding brain computation will depend less on predictive model alignment and more on interpretable models of neural dynamics that can reveal how structured cognition emerges from biological systems.
Andrea E. Martin is a Lise Meitner Research Group Leader at the Max Planck Institute for Psycholinguistics and a Principal Investigator at the Donders Centre for Cognitive Neuroimaging (DCCN) at Radboud University. She leads the Language and Computation in Neural Systems group, which investigates how linguistic structure and meaning are represented and processed in biological and artificial neural systems.
Her research integrates psycholinguistics, cognitive neuroscience, and computational modeling to develop mechanistic accounts of (spoken) language comprehension and production. A central aim of her work is to understand how formal properties of language—such as constituency and compositionality—can emerge from, and be implemented in, temporally dynamic neural systems. This approach bridges symbolic and neural perspectives and uses naturalistic language, neuroimaging, and formal modeling to advance theory-building in cognitive science.
Martin received a BA in Cognitive Science from Hampshire College (2004), and an MA (2006) and PhD (2010) in Experimental Psychology from New York University. She held positions at the Basque Center on Cognition, Brain and Language, the University of Edinburgh, and the Max Planck Institute for Psycholinguistics prior to establishing her independent research group. Her work has been funded by the ESRC, the Leverhulme Trust, the Netherlands Organisation for Scientific Research (VIDI and Aspasia), the Max-Planck-Gesellschaft (MPRG and the Lise Meitner Excellence Programme), and the European Research Council (ERC Consolidator Grant DYNALANG).