9.30 Christopher Summerfield – University of Oxford
Curriculum learning for memory and inference problems
Humans learn faster when information is temporally structured. For example, students rarely study French and Spanish in the same class, because this would be confusing. Neural networks, by contrast, tend to learn best from diverse data that is well mixed over training trials. Why is this? We currently lack computational theories of why training curricula might work in humans and other animals, especially for supervised learning problems. Here, we study curriculum learning using tasks that involve memorisation, the learning of relational graphs (or cognitive maps), and more complex video games. We show that curricula that interleave structure and content (e.g. grammatical rules and vocabulary items) facilitate rapid and generalisable learning in humans, but not in neural networks, including transformer-based models. We offer a sketch of a general theory of how to accelerate human learning.
10.10 Katrin Amunts – Research Center Jülich
Anatomical landscapes of language
Our understanding of the neuroanatomical basis of language is very much influenced by the seminal work of Paul Broca and Carl Wernicke. However, concepts of language as a cognitive function have changed significantly since then, and, from an anatomical perspective, Broca’s and Wernicke’s regions are ill-defined. Recent cytoarchitectonic analyses and 3D maps show a much more detailed parcellation of both regions and of neighboring areas, with specific contributions to language-related tasks. Beyond brain mapping, these analyses characterize the areas’ microstructure, including cell densities and laminar patterns, and provide a tool to compare it with receptor architecture, connectivity, genetic data, or functional activations in a multi-modal approach. A prominent feature of language-related regions is their considerable intersubject variability at both the micro and the macro level and, for some features, their inter-hemispheric asymmetry. The Julich-Brain Atlas, part of the EBRAINS research platform, provides maps and analysis tools that allow the international research community to distinguish the different components of the language network, to contribute to brain medicine for language deficits, e.g. in tumor patients, and, more generally, to gain a deeper understanding of the relationship between brain structure and function.
11.20 David Poeppel – New York University, Strüngmann Institute Frankfurt
Rhythms and Algorithms: From Vibrations in the Ear to Abstractions in the Brain
The brain has rhythms – and so do music and speech. Recent research reveals that the temporal structure of speech and music and the temporal organization of various brain structures align in systematic ways. The role that brain rhythms play in perception and cognition is vigorously debated and continues to be elucidated through neurophysiological and computational studies of various types. I describe some intuitively simple but surprising results that illuminate the temporal structure of perceptual experience. From recognizing speech and planning speech to building abstract mental structures, how the brain constructs and represents time reveals unexpected puzzles in the context of speech perception and language comprehension.
12.00 Simon Fisher – Max Planck Institute Nijmegen
Genomic investigations of speech, language, and reading
The use of spoken and written language is a fundamental human capacity, involving the intertwining of nature and nurture. Decades of research have established that interindividual differences in reading- and language-related skills are influenced by variation at the DNA level. However, the relevant genetic architecture is complex, heterogeneous, multifactorial, and challenging to study in a robust manner. I will give an overview of the promise and pitfalls in this area of science, discussing how advances in molecular technologies and analytical approaches, coupled to new developments in trait characterization, have begun to transform the field. Data from such studies are integrated with emerging findings from neuroimaging genomics and investigations of gene expression patterns in postmortem brain tissue. I will illustrate with examples of work from the Big Questions of the Language in Interaction consortium.
Monday afternoon session
Chair: Roshan Cools – Donders Institute
Language in Interaction BQ5 coordinator – The inferential cognitive geometry of language and action planning: Common computation
14.00 Mona Garvert – Julius-Maximilians-Universität Würzburg
Hippocampal and prefrontal representations enabling flexible cognition
In our ever-changing world, the ability to adapt to novel situations is essential. This adaptability often involves leveraging our past experiences to navigate new challenges. For example, when choosing from a menu in an unfamiliar restaurant, we instinctively draw upon past dining experiences to guide our decision. Such generalization is a cornerstone of adaptive behavior, allowing us to make informed decisions without relearning strategies for every new scenario. In this talk, I explore how the human brain enables such behavior. I will demonstrate that the brain constructs hippocampal cognitive maps, traditionally known for encoding spatial relationships, to also represent other types of relational knowledge, providing a flexible foundation for generalization and novel inference. In high-dimensional decision-making scenarios, the orbitofrontal cortex selectively activates relevant cognitive maps tailored to the specific task, showcasing the brain’s dynamic information processing capability. Additionally, our research reveals that with time and consolidation, the brain forms more abstract relational maps, which transcend specific stimuli and reflect the broader relational structure of experiences. In summary, our findings illustrate the remarkable adaptability of neural representations in the human brain. They demonstrate how these representations are not just static archives of past experiences but are dynamic tools actively reshaped to aid decision-making and behavior in ever-changing environments.
14.40 Andre Bastos – Vanderbilt University
Multi-laminar, multi-area recordings in the non-human primate brain suggest Predictive Coding is implemented via Predictive Routing
To understand the neural basis of cognition, we must understand how top-down control of bottom-up sensory inputs is achieved. We have marshaled evidence for a canonical cortical control circuit that involves rhythmic interactions between different cortical layers. By performing multi-area, multi-laminar recordings, we have found that local field potential (LFP) power in the gamma band (40-100 Hz) is strongest in superficial layers (layers 2/3), while LFP power in the alpha/beta band (8-30 Hz) is strongest in deep layers (layers 5/6). We call this pattern the spectrolaminar motif. We have found evidence that this motif is preserved across different non-human primate species as well as humans, but is less well preserved between primates and mice. The gamma band is strongly linked to bottom-up sensory processing and to neuronal spiking carrying stimulus information, while the alpha/beta band is linked to top-down processing. Deep-layer alpha/beta Granger-causes alpha/beta in superficial layers and is negatively coupled to gamma. These oscillations give rise to separate channels for neuronal communication: feedforward for the gamma band and feedback for the alpha/beta band. Attention, working memory, and predictive processing all involve modulation of gamma and alpha/beta synchronization within and across areas. These rhythmic interactions break down during anesthesia-induced unconsciousness. Based on these observations, we hypothesize that the interplay between alpha/beta and gamma synchronization is a canonical mechanism enabling cognition and consciousness.
16.20 Uri Hasson – Princeton University
Deep language models as a cognitive model for natural language processing in the human brain
Naturalistic experimental paradigms in cognitive neuroscience arose from a pressure to test, in real-world contexts, the validity of models we derive from highly controlled laboratory experiments. In many cases, however, such efforts led to the realization that models (i.e., explanatory principles) developed under particular experimental manipulations fail to capture many aspects of reality (variance) in the real world. Recent advances in artificial neural networks provide an alternative computational framework for modeling cognition in natural contexts. In this talk, I will ask whether the human brain’s underlying computations are similar or different from the underlying computations in deep neural networks, focusing on the underlying neural process that supports natural language processing in adults and language development in children. I will provide evidence for some shared computational principles between deep language models and the neural code for natural language processing in the human brain. This indicates that, to some extent, the brain relies on overparameterized optimization methods to comprehend and produce language. At the same time, I will present evidence that the brain differs from deep language models as speakers try to convey new ideas and thoughts. Finally, I will discuss our ongoing attempt to use deep acoustic-to-speech-to-language models to model language acquisition in children.
17.00 Ioanna Zioga – Donders Institute
Alpha and beta oscillations shape language comprehension and production
Alpha oscillations are thought to facilitate processing through the inhibition of task-irrelevant networks, while beta oscillations are linked to the putative reactivation of content representations. Can the proposed roles of these brain rhythms be generalized from lower-level processes to higher-level, linguistic ones? To address this question, we performed two magnetoencephalography (MEG) studies. In the first study, Dutch native speakers listened to stories in Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states at each word, as proxy measures for high-level cognitive processes: the number of (1) newly opened dependencies, (2) dependencies that remained open, and (3) resolved dependencies. Forward models showed that dependency features predict alpha/beta power modulations in language-related regions beyond low-level linguistic features, but only in the native language. In the second study, we constructed a novel paradigm in which participants produced an exemplar or a feature of a target word embedded in spoken sentences (e.g., for the word tuna, an exemplar from the same category, seafood, would be shrimp, and a feature would be pink). A visual cue indicated the task: exemplar or feature. Participants were slower and gave more variable and semantically more distant answers for features than for exemplars, due to the lower association strength of features. Alpha power correlated with reaction times, consistent with a facilitatory role of alpha in regulating inhibition in regions linked to lexical retrieval. We observed a different spatiotemporal pattern of beta activity between tasks in right temporo-parietal regions, potentially reflecting the reactivation of distinct task representations. Critically, semantic distance correlated negatively with alpha/beta power in left-hemispheric regions associated with cognitive control, though with a differentiated spatiotemporal pattern between tasks.
Overall, our work provides evidence for the generalizability of the alpha and beta functional roles from perceptual to higher-level processes, and offers a comprehensive approach in both naturalistic story comprehension and word production.
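To make the dependency-state measures above concrete: given any set of head-dependent arcs (e.g., from an off-the-shelf dependency parser), the three per-word counts can be computed as in the minimal sketch below. This is an illustrative example, not the study's actual pipeline; the tuple ordering (opened, still open, resolved) is our own convention.

```python
def dependency_states(arcs, n_words):
    """For each word position, count (1) newly opened, (2) still-open,
    and (3) resolved dependencies. `arcs` is a list of (head, dependent)
    word-index pairs; direction is ignored, only the span matters."""
    spans = [(min(a, b), max(a, b)) for a, b in arcs]
    states = []
    for w in range(n_words):
        opened   = sum(1 for lo, hi in spans if lo == w)   # arc starts here
        resolved = sum(1 for lo, hi in spans if hi == w)   # arc closes here
        still    = sum(1 for lo, hi in spans if lo < w < hi)  # arc spans w
        states.append((opened, still, resolved))
    return states
```

For a toy parse of "the cat sleeps" with arcs the←cat and cat←sleeps, the function returns (1, 0, 0), (1, 0, 1), (0, 0, 1) for the three words.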
Tuesday morning session
Chair: James McQueen – Donders Institute
Language in Interaction BQ4 coordinator – Variability in language processing and in language learning.
9.00 Stanislas Dehaene – NeuroSpin center, Saclay, France; and Collège de France, Paris
Multiple languages in the same brain: Dissociating natural language and the language of mathematics
The capacity to compose recursive structures is a widespread cognitive ability that may well be unique to humans. Natural language, mathematics, and music all rely on recursive composition. I will present recent behavioral and brain-imaging evidence suggesting that mathematics and natural language map onto drastically different and dissociable brain networks. Several minimal paradigms allow us to analyze the psychological representation of simple mathematical objects, in the domains of number, geometry, and graphs, and to begin to map their neural representations.
9.40 Barbara Kaup – University Tübingen
Negation processing
What changes if we insert a negation marker into the famous sentence “The Dutch trains are yellow/white/sour ….” investigated by Hagoort et al. (2004)? Nothing! In this talk I will give an overview of experimental work on the processing, representation, and use of negation, addressing the most important questions in current negation research, namely: (1) Does negation processing routinely involve suppression? (2) Does negation processing involve two processing steps? (3) Is negation integration delayed during comprehension, and what factors facilitate negation processing? (4) What is the relationship between linguistic and non-linguistic negation? (5) When is negation typically used?
10.50 Marc Brysbaert – Ghent University
Do we still need human ratings?
Traditionally, human ratings formed the basis for understanding various facets of word processing (influences of familiarity, concreteness, age of acquisition, semantic similarity to other words). The emergence of semantic vectors and large language models has revolutionized the field. These advances allow accurate automatic estimates of established metrics by using minimal human-rated examples or by using ratings from other languages. This promotes research in underrepresented languages. However, introducing new variables, such as the perceived usability of words, requires further research. While frequency undoubtedly affects ease of processing, the speed of word acquisition suggests that other factors are at play. This paper presents the first study to examine the influence of the subjective expected utility of a word on processing time.
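As an illustration of the approach the abstract describes, estimating human ratings automatically from semantic vectors using only a handful of rated examples, here is a minimal ridge-regression sketch. It is a hypothetical example, not Brysbaert's actual method; in practice one would use pretrained embeddings (e.g., fastText) and cross-validated regularization.

```python
import numpy as np

def fit_rating_predictor(embeddings, ratings, lam=1.0):
    """Ridge regression mapping word embeddings to human ratings
    (e.g., concreteness), using the closed-form solution
    w = (X^T X + lam*I)^-1 X^T y. Returns a predictor for new vectors."""
    X = np.asarray(embeddings, dtype=float)
    y = np.asarray(ratings, dtype=float)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return lambda v: float(np.asarray(v, dtype=float) @ w)
```

Once fitted on a small seed set of rated words, the returned function extrapolates ratings to every word that has an embedding, which is what makes large-scale automatic norms feasible for underrepresented languages.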
11.30 Vera Demberg – Saarland University
Individual differences in pragmatic processing
Researchers in experimental pragmatics frequently report large variability between participants, with some drawing pragmatic inferences while others are less likely to do so. In my talk, I will report on several recent experiments within my lab which have tested different types of pragmatic reasoning alongside batteries of individual difference tests. Furthermore, I will present our efforts in cognitive modelling, which have allowed us to make more specific hypotheses about how certain types of individual differences such as memory updating and learning from negative feedback might drive the individual differences observed in specific types of pragmatic reasoning.
12.10 Antje S. Meyer – Max Planck Institute Nijmegen
Developing and validating a new battery for language skills in young adults
The core mission of Big Question 4-A of the Language in Interaction consortium was to develop a comprehensive online test battery for language skills in young adults. We report how we have been developing such an instrument, first for Dutch and then for German, English, and Spanish. We describe various theoretical and methodological challenges encountered during this work. We then report the results of a large-scale validation study of the Dutch battery, which confirmed that, by and large, the tests in the battery measured the constructs they were designed to measure. Furthermore, comparisons of data acquired in the lab and online confirmed that the battery is well-suited for online use. We then turn to studies using the data from the validation study to assess specific research questions. These questions concern in particular the impact of domain-general processing speed and domain-specific linguistic knowledge in different speaking and listening tasks. In the final part of the talk we discuss how the batteries can be used in future work and point out some of their limitations. We hope that the presentation will encourage our listeners to use the customizable online versions of our batteries in their own work and/or embark on the development of similar instruments, ideally in a broad range of languages.
Tuesday afternoon session
Chair: Aslı Özyürek – Max Planck Institute Nijmegen
Language in Interaction Sc. Board member BQ3
Creating a shared cognitive space: How is language grounded in and shaped by communicative settings of interacting people?
14.00 Morten Christiansen – Cornell University, Aarhus University
The Conversational Nature of Language
On average, English speakers utter around 16,000 words per day, most of it in interactions with other people. Yet, the language sciences have predominantly approached language as if we use it for monologue, studying the processing of isolated words, sentences, paragraphs, or book excerpts. Even the psycholinguistic approach to dialogue has tended to ignore the dynamic, interactive nature of conversations, treating them as if they are all the same, akin to serialized monologues. In this talk, I will argue that we should view language as being fundamentally collaborative and improvisational, like a game of charades. I present findings from recent quantitative analyses of dialogues in different contexts, illustrating how interlocutors work together and flexibly deploy different conversational devices to establish and maintain a common understanding. Specifically, we measured the tradeoff between spoken backchannels (e.g., mhm) and repairs (e.g., what did they say?) as well as both linguistic and kinetic alignment. The results show that not all conversations are the same but, rather, are sensitive to variation in conversational contexts, including the native language of the interlocutors. Thus, conversations are much more dynamic and varied than typically assumed, requiring the language sciences to adopt a more nuanced way to study language use in terms of how interlocutors collaborate to keep conversations running smoothly.
14.40 Melissa Duff – Vanderbilt University Medical Center
Hippocampal contributions to semantic representation over time
The hippocampus plays a critical role in the acquisition of new semantic memory. The traditional view has been that this role is time limited and that semantic representations become independent of the hippocampus over time via neocortical consolidation. Evidence for this view came, in part, from patients with hippocampal amnesia who did not have aphasia or semantic dementia and who performed within normal limits on neuropsychological tests of vocabulary knowledge and naming for information acquired before the onset of amnesia. I will present data across a series of patient studies demonstrating impoverished remote semantic memory, impaired naming, disruptions in metaphorical knowledge, and deficits in knowledge about the relations among words in the lexicon. These findings challenge the historical view that semantic memory becomes independent of the hippocampus over time and that remote semantic memory is intact in amnesia. Further, these findings point to a role for the hippocampus in representing and navigating semantic space in a manner similar to its role in representing and navigating physical space. Finally, hippocampal dysfunction appears to be a risk factor for disorders of semantic memory, both in its acquisition and in the long-term upkeep of the lexicon. An important implication is that there are likely more far-reaching disruptions in semantic memory representation and use than previously documented in populations where hippocampal pathology is common.
16.20 Rachael Jack – University of Glasgow
Deciphering dynamic facial expression communication across cultures
Understanding how humans use facial expressions to communicate information has remained a central question for well over a century. However, addressing this question is empirically challenging because human facial expressions are highly complex multi-component dynamic signals. New technological and methodological advances now make this problem tractable, enabling novel insights to surface. Here, I will showcase one such approach that can precisely model dynamic facial expressions within and across cultures and characterize the information they represent. Specifically, we combine a modern computer-graphics-based 3D generative model of human facial movements with the classic data-driven method of reverse correlation and state-of-the-art information-theoretic analysis tools. Using this approach, we have disentangled which facial movements are culturally common from those that are culture-specific, impacting central theories on the universality of facial expressions. We have also shown that facial expressions of emotion represent information in an evolving, broad-to-specific hierarchical structure over time, including multiplexing complex information. In more recent work involving multimodal signals, we also demonstrate that specific facial movements impact the interpretation of speech, including via the inference of meta-cognitive information and iconic representations of quantity. Together, by deriving these causal models, our work provides the critical missing explanatory element in deciphering dynamic facial expression communication, challenging longstanding beliefs and developing new theoretical frameworks. Finally, we show that our generative models of dynamic facial expressions can be directly transferred to artificial agents, substantially improving their social signalling capabilities.
17.00 Marieke Woensdregt – Radboud University Nijmegen
Explaining flexible inference-making between and within language users: computational constraints
Human language use is remarkably flexible. When inferring the meaning of an utterance, we flexibly use our background knowledge, reason about what’s going on in our conversation partner’s mind, or ask for clarification when needed. I will present theoretical work on the challenges that come with developing a computational explanation of these abilities, given various constraints (such as human minds being resource-bounded). The talk will address inference-making in language use on two different levels: (i) turn-by-turn inferences between language users (to resolve referential ambiguity), and (ii) phrase-level inferences within a language user (to interpret novel expressions).
On the level of conversational turns, humans use various strategies to resolve referential ambiguity. I will present a theoretical analysis that compares two such strategies: (i) pragmatic reasoning, where communicators reason about each other, and (ii) other-initiated repair, where communicators signal and resolve trouble interactively. Using agent-based simulations and computational complexity analyses, we compare the efficiency of these strategies in terms of communicative success, computation cost and interaction cost. We show that interactive agents who use a simple repair mechanism can get away with simpler reasoning by distributing cognitive labour over multiple turns in the conversation.
On the level of individual phrases, humans are able to interpret novel linguistic expressions (like “mask-shaming”) even when this requires integrating relevant background knowledge (e.g., the COVID-19 pandemic). I will present a (meta-)theoretical analysis of the challenges for developing a computational explanation of such flexible linguistic inference. We lay out not just the core properties of the phenomenon, but also (i) explanatory desiderata that help make sure a theory explains this phenomenon, and (ii) cognitive constraints that ensure a theory can be plausibly realised by human cognition and the brain. By doing so, we lay bare the ‘force field’ that theories of flexible linguistic inference will have to navigate.
Wednesday session
Chair: Raquel Fernández – University of Amsterdam
Language in Interaction Sc. Board member BQ1 – The nature of the mental lexicon: How to bridge neurobiology and psycholinguistic theory by computational modelling?
9.00 Tal Linzen – New York University
Language model predictions do not explain human syntactic processing
Prediction of upcoming events has emerged as a unifying framework for understanding human cognition. Concurrently, deep learning models trained to predict upcoming words have proved to be an effective foundation for artificial intelligence. The combination of these two developments presents a prospect for a unified framework for human sentence comprehension. We present an evaluation of this hypothesis using reading times from 2000 participants, who read a diverse set of syntactically complex English sentences. We find that standard LSTM and transformer language models drastically underpredicted human processing difficulty and left much of the item-wise variability unaccounted for. The discrepancy was reduced, but not eliminated, when we considered a model that assigns a higher weight to syntax than is necessary for the word-prediction objective. We conclude that humans’ next-word predictions differ from those of standard language models, and that prediction error (surprisal) at the word level is unlikely to be a sufficient account of human sentence reading patterns.
9.40 Alona Fyshe – University of Alberta
Dancing, angry, thoughtful triangles: Exploring semantic encoding using a social attribution task
In the 1940s, Heider and Simmel (1944) found that most people viewing a particular short movie of moving geometric shapes attributed human characteristics to the shapes and their movements. Decades later, using the same videos, Klin (2000) found that autistic individuals identified fewer social elements in the story, made attributions that were irrelevant to the social plot, and did not afford the shapes Theory of Mind skills to the same degree as non-autistic individuals.
In our study, we use computational models of language to evaluate brain imaging data gathered while autistic and non-autistic adolescents watched a Heider-and-Simmel-like movie. We used encoding models to investigate the types of information being processed while participants viewed a 7-minute animation of geometric shapes. We found that variance in autistic participants’ BOLD responses was most strongly correlated with the presence of the three main characters (geometric shapes), while for non-autistic participants the strongest correlation was with social interaction/emotion. Using a language model, we analyzed encoding-model fit for representations that did or did not have access to the social interpretation of the animation. We found that encoding models without access to social language (e.g., “small triangle”) outperformed those with access to social language (e.g., “boy”) in the autistic group alone, with no difference in the non-autistic group. Altogether, we offer a characterization of the different semantic information encoded in the brain during movie viewing, suggesting that representations of social information differ between autistic and non-autistic participants.
10.50 Roger Levy – Massachusetts Institute of Technology
Theory of expectation-based human language processing in the era of large language models
The question of how humans produce, comprehend, and learn language poses deep scientific challenges in accounting for the capabilities of the human mind. This talk has two goals. First, I review a body of state-of-the-art theory of human language processing as rational, expectation-based probabilistic inference and goal-directed action, covering three closely related proposals I have worked on for many years: surprisal theory, noisy-channel language processing, and Uniform Information Density. Second, I describe how this body of theory can be connected with the spectacularly successful large language model (LLM) technology that has emerged in the past few years. I show how we can use the theory of human language processing to gain insights into the strengths and weaknesses of LLM linguistic capabilities, and how we can use LLMs as tools within expectation-based theory to deepen our understanding of how humans process language. Looking forward, I will argue that the technological advances offered by LLMs, together with ongoing improvements in data availability, analysis techniques, and open science practices, position our field ideally for increasingly rapid advances in our foundational understanding of human language processing, ultimately shedding greater light on the fundamental workings of the human mind.
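Surprisal theory quantifies the processing difficulty of a word as its negative log-probability in context. As a toy illustration of that computation (the talk concerns large language models; the add-alpha smoothed bigram model below is only a stand-in to keep the example self-contained and runnable):

```python
import math
from collections import Counter, defaultdict

def bigram_surprisals(corpus, sentence, alpha=1.0):
    """Per-word surprisal, -log2 P(w | prev), under an add-alpha smoothed
    bigram model estimated from `corpus` (a list of token lists)."""
    vocab = {w for sent in corpus for w in sent} | {"<s>"}
    counts = defaultdict(Counter)
    for sent in corpus:
        for prev, w in zip(["<s>"] + sent, sent):
            counts[prev][w] += 1
    surprisals = []
    for prev, w in zip(["<s>"] + sentence, sentence):
        num = counts[prev][w] + alpha
        den = sum(counts[prev].values()) + alpha * len(vocab)
        surprisals.append(-math.log2(num / den))
    return surprisals
```

Under surprisal theory, these per-word values are the predicted reading-time costs; an LLM-based account simply replaces the bigram estimate of P(w | context) with the model's next-word distribution.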
11.30 Willem Zuidema – University of Amsterdam
Under the hood: what LLMs learn about our language, and what they teach us about us
Large Language Models (LLMs) and Neural Speech Models (NSMs) have made big advances in the last few years in their abilities to mimic and process human language and speech. Their internal representations, however, are notoriously difficult to interpret, limiting their usefulness for cognitive science and neuroscience. A new generation of post-hoc interpretability techniques, based on causal interventions, now provides an increasingly detailed look under the hood. These techniques allow us, in some cases, to reveal the nature of the learned representations, assess how general the learned rules are, and formulate new hypotheses about how humans might process aspects of language and speech. I will discuss examples on syntactic priming and phonotactics, and speculate on the future impact of AI models on the cognitive science of language.
12.00 Joshua Hartshorne – Boston College
AI for language science, a case study: How big data and machine learning are reshaping our understanding of bilingualism, language diversity, and more
Scientists and laypeople alike are currently paying much attention to the theoretical implications of recent successes in artificial intelligence for the language sciences, and to whether in fact there are any. Much less attention has been paid to the fact that scientific theories are being reshaped by the kind of science those advances enable. Data of unprecedented clarity and scope are overturning long-held assumptions, and projects that would have been unimaginably expensive a decade ago are now being assigned as undergraduate term papers.
I illustrate this fact through a series of recent and in-prep findings about the nature of bilingualism and how second-language learning changes with age. Using big data and machine learning, I show that a) the pace of syntax-learning declines precipitously in late adolescence, b) L1-L2 transfer explains the widely-observed but puzzling phenomenon that older children learn languages substantially faster than younger children, and c) neuroanatomical differences between monolingual and bilingual brains are driven by proficiency not by age of acquisition.
I conclude by discussing opportunities and low-hanging fruit for AI-assisted language science, with special attention to increasing the cross-linguistic reach of psycholinguistics and language acquisition studies.
Thursday morning session
Chair: Ivan Toni – Donders Institute
Language in Interaction BQ3 coordinator – Creating a shared cognitive space: How is language grounded in and shaped by communicative settings of interacting people?
9.00 Kate Watkins – University of Oxford
Imaging and stimulating the brain in people who stutter
Developmental stuttering is characterised by repetition of sounds, syllables or words, pauses and prolongations that interrupt speech flow. It affects about 5% of children and 1% of adults. More men than women stutter. By imaging the vocal tract in real time, we found that, even during perceptibly fluent speech, movement variability in people who stutter is higher than in typically fluent speakers. Our brain imaging studies in people who stutter revealed differences in brain activity during speech and underlying differences in white matter microstructure. Fluency can be enhanced in people who stutter by altering sensory feedback during speech production or by altering production e.g. by whispering, changing pitch or accent, or by external cueing, speaking in unison or singing. We coupled these temporary fluency enhancers with non-invasive brain stimulation to enhance speech fluency in people who stutter in a randomised controlled trial. Five days of real stimulation paired with fluency training increased fluency relative to fluency training paired with sham stimulation. This effect persisted for up to six weeks after the end of the training. Functional MRI showed increases in activity in the striatum and speech motor cortex from pre- to post-training in the group where fluency increased. A recent study of a large sample of adults who stutter used quantitative multi-parametric mapping to reveal higher levels of iron in the striatum and speech motor cortex relative to controls. Higher iron levels in brain tissue in people who stutter could be an indication of elevated dopamine levels or lysosomal dysfunction, both of which are implicated in stuttering. The affected regions overlap with those where activity increased in our trial and are spatially coincident with patterns of expression of known genetic variants related to developmental stuttering that are involved in lysosomal function.
9.40 Ghislaine Dehaene-Lambertz – CNRS, INSERM, CEA, University Paris-Saclay, NeuroSpin center, Gif-sur-Yvette, France
New insights into language acquisition: How do preverbal human infants structure a variable speech signal?
Although different human languages use different sounds, words and syntax, most children acquire their native language without difficulty, following the same developmental path. Thanks to the development of brain imaging, we can now study the early functional organization of the brain and examine what cerebral and computational resources human infants have at their disposal to learn their native language. I will present experiments showing that the infant brain is expert in the analysis and representation of the phonetic categories of speech, to which powerful statistical analyses can be applied to retrieve their distribution in a speech stream, but also that it is capable of symbolic calculations, enabling the compression of sensory input into explicit variables that can be stored in working memory for later manipulation. Brain imaging thus provides a refined description of infants' skills during linguistic tasks, which, when compared with the results obtained in animals and human adults, should help clarify the emergence of language in the human species.
10.50 Sophie Valk – MPI Leipzig
Exploring Microstructural and Functional Asymmetry in the Human Cortex: Implications for Language and Mental Health
Understanding hemispheric asymmetry of the human brain, both structurally and functionally, is crucial for elucidating its role in language processing, mental health, and neurodevelopmental conditions such as autism. In this series of studies, we utilized a multiscale approach, including post-mortem cellular architecture data and in-vivo microstructural imaging, extending to behavioral genetic approaches, to investigate cortical asymmetry patterns across layers and their relationship with gender, age, and heritability. Additionally, we examined how functional asymmetries manifest in autism spectrum disorder (ASD), focusing on language networks and developmental trajectories. Our findings reveal a layer-specific microstructural asymmetry with implications for brain function and mental health, including altered intrinsic functional organization in ASD. Moreover, we identified heritable and phylogenetically conserved asymmetries in cortical functional organization, shedding light on the genetic basis underlying higher cognitive functions possibly unique to humans. These insights contribute to our understanding of brain asymmetry and its significance in both typical and atypical neurodevelopmental contexts.
11.30 Stephanie Forkel – Donders Institute
Neurovariability and language – insights from the healthy brain and clinical populations
Building on our prior work exploring the individual differences in white matter, our most recent research takes a closer look at the variability present within the language network, leveraging the distinctive Language in Interaction dataset. This step marks a significant advancement in understanding the brain’s mechanisms responsible for language processing and how these vary across individuals.
Utilising cutting-edge neuroimaging techniques, including high angular resolution diffusion imaging and sophisticated tractography, our team has mapped out the white matter pathways critical for language. These pathways link key areas involved in language processing across the cortex, revealing a detailed network of fibres crucial for comprehending and producing language. Our findings highlight a wide range of variability in the ways these cortical areas connect, offering new insights into the anatomical diversity of language networks.
By integrating these anatomical discoveries with functional data, we are on the verge of developing the first functional white matter atlas highlighting the networks supporting language. This atlas, enriched by insights from the Language in Interaction dataset, sets the stage for groundbreaking research into how cortical and subcortical structures collectively contribute to language. More than just a result of our investigations, this atlas serves as a pivotal tool for furthering our understanding of language as a dynamic function.
Our studies emphasise the importance of acknowledging individual differences in the brain’s language architecture. The findings stress the need for tailored approaches in clinical applications, enhancing the potential for their application in neurology and neurosurgery.
Thursday afternoon: Special session in honor of Peter Hagoort
Chair: Pienie Zwitserlood – University of Münster
![](https://hils2024.nl/wp-content/uploads/2024/04/pienie-zwitserlood-h205.jpg)
13.00 Opening by Chair Pienie Zwitserlood
Honourable, honourful PeHa: It is the morphology, stupid!
13.10 Ping Li – The Hong Kong Polytechnic University
Language Sciences in the Era of Digital Technology and Generative AI: Some Examples
With the rapid developments in digital technology and generative AI, researchers are assessing the impacts that these developments bring to various domains of scientific study. For language scientists, such impacts can present a number of challenges to the traditional methods of doing language research. In this talk, I describe how we make use of the positive dimensions of digital technology and generative AI in our research. Specifically, I will discuss examples in which our experiments enhance second language learning through the use of immersive virtual technologies, and the benefits that such learning may bring to the linguistic brain. Further, I will discuss the model-brain alignment approach that leverages progress in large language models (LLMs), especially in the context of naturalistic language comprehension in both native and non-native languages. This approach also allows us to uncover the role of individual differences underlying language learning and representation. Overall, our work suggests that digital technology and generative AI, while presenting serious challenges to our sciences and to society, can positively influence the future of language science research.
13.35 Jeffrey Binder – Medical College of Wisconsin
White trains and colorless ideas: On the distinction between pragmatic and semantic knowledge
A qualitative distinction is often made between semantic knowledge, comprising “core” lexical meaning, and pragmatic knowledge, comprising extra-linguistic knowledge of the world. An elegant 2004 ERP and fMRI study by Hagoort and colleagues (Science, 304: 438-441) has sometimes been interpreted as providing neural evidence for this distinction. On the other hand, there is substantial evidence that the brain regions representing word meaning, sentence and discourse meaning, and non-linguistic conceptual knowledge are largely overlapping. A theory is presented that reconciles these views by proposing that all linguistic meaning derives from the same neural process of constructing experience-based situation models (ESMs), which also support a variety of less linguistic phenomena like theory-of-mind, prospection, and episodic memory retrieval. ESMs range in complexity from simple object models to multi-constituent spatial-temporal-causal event representations. On this view, the semantic-pragmatic distinction is best understood as a quantitative rather than a qualitative difference, reflecting construction of more or less complex ESMs. The apparently qualitative differences observed by Hagoort et al., which the authors recognized were likely to have arisen after initial word recognition, can be explained as due to post-recognition verification/decision processes that are engaged differently depending on the extent of meaning violation.
14.00 Ron Mangun – University of California, Davis
Thirty Years of Cognitive Neuroscience: Wiggles, Blobs, Networks and Models of the Mind
For the past three-plus decades, cognitive neuroscience has evolved from an inspiration to a major disciplinary area of scholarship. National funding agencies and private foundations all now have cognitive neuroscience programs. During this time, many different methods have risen to become major tools to investigate the human mind and brain, and we are now blessed by an array of methods to probe human cognition. Hagoort has employed a wide array in his career, and indeed, with his colleagues at the MPI and the Donders Centre, has helped to develop and enhance many of these tools. In this talk, I will present a highly selected view of his work, ranging from “wiggles” (EEG and ERPs), “blobs” (fMRI), and language “network” architecture to arrive at models of human cognition. My goal, however, is not to attempt to do justice to his work (others will do a better job of that), but rather to use it to frame some findings from our own research that over the same thirty-year period used wiggles, blobs, networks, and models to understand human selective attention, thus paralleling Hagoort’s illustrious career, and gaining inspiration from it along the way.
14.55 Manuel Carreiras – BCBL, Basque Center on Cognition, Brain and Language
Neural processing in healthy Spanish-Basque bilinguals and in bilingual patients with low grade gliomas
Research into the neural effects of bilingualism, essential for understanding how the brain handles both native (L1) and secondary languages (L2), remains inconclusive. Conflicting evidence suggests either similarities or differences in neural processing, which has implications for bilingual patients with brain tumors. Maintaining bilingual language abilities post-surgery necessitates considering neuroplastic changes before diagnosis. In our study, we utilized both univariate and multivariate fMRI techniques to examine healthy Spanish-Basque bilinguals and bilingual patients with gliomas affecting their language-dominant hemisphere while they produced and understood sentences in either their L1 or L2.
Results from healthy participants revealed a common neural system for both languages, alongside areas with distinct language-specific activation and lateralization patterns. While the L1 showed left-lateralized activation, the production of L2 involved a bilateral basal ganglia-thalamo-cortical circuit. Bilingual experience thus leaves a neuroanatomical imprint. In addition, analyses of brain tumor patients reveal flexibility within the production network affecting both languages.
These findings emphasize a flexible system capable of functional reorganization for both L1 and L2 production in health and disease. They have significant implications for the evaluation and treatment of bilingual patients with brain tumors, underscoring the importance of personalized interventions based on individual linguistic profiles. Identifying specific brain regions crucial for bilingual language processing opens avenues for targeted therapeutic approaches, ultimately improving patient outcomes.
15.20 Edward de Haan – Donders Institute
Is consciousness singular? A neuropsychological approach.
In common sense experience, based on introspection, consciousness is singular. There is only one "me" and that is the one that is conscious. Philosophers such as Descartes have often stipulated that "singularity" is a defining aspect of "consciousness", and the two main theories of consciousness, Integrated Information Theory and Global Workspace Theory, also assume that it is indivisible. In this review, I will re-examine the theoretical implications of neuropsychological impairments in conscious awareness, such as covert recognition, neglect, split-brain and anosognosia, and propose a new way to conceptualise the singularity of consciousness. I will argue that the subjective feeling of singularity can co-exist with several disunified conscious experiences. That is, perceptual, language, memory and attentional processes may proceed unintegrated and in parallel. Conscious awareness is achieved in each of these mental systems separately at the highest level of processing. The level of awareness may differ depending on the priority position of the mental system or the specific content. The sense of unity only arises when organisms need to respond coherently, constrained by a single body and the affordances of the environment. The sense of singularity, the experience of a "Me-ness", thus emerges in the interaction between the world and the (motor) planning of a singular person. The first testable hypothesis that follows is that one could lose the sense of "Me-ness" while remaining consciously aware. At first sight, such a condition has not been described in the neuropsychological literature, but further afield there are clear examples of this experiential state of "ego dissolution", for instance in psychiatric patients and under the influence of psychedelic drugs such as psilocybin and mescaline, a suggestion that goes back to the Greek philosopher Plotinus. This somewhat controversial proposal opens new avenues for studies of conscious awareness.
15.45 Marta Kutas – University of California San Diego
Sure, why not?
I’ve known Herr Prof. Dr. Peter Hagoort from the time he was a graduate student working on his dissertation at the MPI for Psycholinguistics in Nijmegen to the present day, when we are honoring his retirement as Professor of Cognitive Neuroscience at Radboud University. I should add that I’ve been fortunate to have PEHA as a dear friend throughout this 35-year period as well. We met at UC San Diego, where I spent my scientific career: first as a postdoctoral researcher with Professors Galambos and Hillyard in the Department of Neurosciences, then 10 years on “soft” money as Professor of Neuroscience, In Residence, at UCSD-SOM, and finally as Professor of Cognitive Science (1989-2022). Having spanned PEHA’s scientific career and life as I have, I am taking this opportunity to honor him by sharing some of our history, including emails, photos, and poetry.
16.40 Ulman Lindenberger – Max Planck Institute for Human Development, Berlin
A Few Notes on Causality and Mechanisms in Lifespan Cognitive Neuroscience
Science seeks to infer causal relations from data and identify mechanisms that explain these relations. The conceptual and methodological challenges in trying to attain these goals differ in kind and extent from discipline to discipline. I will highlight some of the specific challenges faced by lifespan cognitive neuroscience, which works on three major integrative tasks: (a) understand the mechanisms that link short-term variations to long-term change; (b) integrate theorizing and research practice across functional domains; (c) link behavioral and neuronal levels of analysis. I will discuss each of these tasks, and will compare them occasionally to analogous considerations about causality and mechanisms in the study of language.
17.05 Angela Friederici – Max Planck Institute for Human Cognitive and Brain Sciences
The Language Pendulum: Four decades of cognitive neuroscience
In a recent article, Peter Hagoort describes the research on the cognitive neuroscience of language conducted during the past decades as a pendulum. The weight has swung from the position of a language of thought representing the human mind to embodiment accounts in which language is a byproduct of perception and action. In these decades, scientists across the world have delivered experimental work located at and between these two positions, leading to challenging debates. When considering experimental work on the neural basis of language and beyond, it appears that aspects of embodiment come into play whenever the communicative function of language is at issue, rather than the language system itself. Here, an evolutionary view on the neurobiology of language could provide an additional perspective. Recent cross-species comparisons between the language-ready human brain and the brains of non-human primates seem to offer a new opportunity to answer questions about the origin and representation of language in the human brain.
17.30 Pienie Zwitserlood – University of Münster