Creating a shared cognitive space: How is language grounded in and shaped by communicative settings of interacting people?
Language is a key socio-cognitive human function that is predominantly used in interaction. Yet linguistics and cognitive neuroscience have largely focused on how individuals encode and decode signals according to their structural dependencies. Understanding the communicative use of language requires shifting the focus of investigation to the mechanisms interlocutors use to share a conceptual space.
This big question examines how two dimensions, namely the temporal structure of communicative interactions and the functional dynamics of real-life communicative interactions, shape multiple communicative resources (speech, gestures, gaze) and linguistic structures (from phonology to pragmatics).
There is deep collaboration among all BQ3 subprojects. The qualitative results of the simulation studies will be related to the empirical findings from the other subprojects, and vice versa: the empirical observations from the other subprojects will inspire the qualitative hypotheses to be tested. The cognitive agent-based simulation studies go beyond the empirical paradigm in the BQ3 project, because they allow us to test for qualitative differences in interactive behaviour by manipulating the cognitive capacities of the agents (something that is difficult to do with human test subjects), while simultaneously yielding explicit theories of the underlying computational mechanisms.
Prof. dr. Mirjam Ernestus
Prof. dr. Asli Özyürek
Prof. dr. Iris van Rooij
Dr. Jan-Mathijs Schoffelen
Dr. Sara Bögels
Dr. Marieke Woensdregt
Research Highlights (2022)
The multimodal nature of communicative efficiency in social interaction
Team members: Marlou Rasenberg, Wim Pouw, Asli Özyürek, Mark Dingemanse
This project proposes a synthesis of work on joint action and language use, drawing on primary data, novel methods, and theoretical insights from the CABB project to study communicative efficiency in social interaction. Combining kinematic measures with annotations derived from speech, it investigates how people divide the joint work of arriving at mutual understanding.
We have investigated whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair, a conversational microcosm in which people coordinate their utterances to solve problems of perceiving or understanding. We find that efforts in the spoken and gestural modalities are deployed in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. They extend our understanding of these co-efficiency principles by revealing that they pertain to multimodal utterance design.
The findings indicate that speech and gesture efforts rise and fall together across repair types and sequential positions. This corroborates the view that speech and gesture are integral parts of a single multimodal communicative system, providing a novel, quantitative, interdisciplinary perspective on studies of language use and non-linguistic joint action. This project is a direct result of BQ3's multidisciplinary approach, integrating different aspects of human communication in a comprehensive multimodal dataset relevant to a wide range of disciplines.