Schedules are shown in Central European Summer Time (CEST). CEST can be converted to other time zones with a time zone converter, for instance, World Time Buddy.

All the videos are available online, on the LLF channel.

Thursday, May 20 (all times CEST)

11:00 Opening

11:10-12:10 Judit Gervain (U Paris Descartes, CNRS) How prosody helps infants and children to break into communication

12:20-13:20 Talya Sadeh (Ben Gurion U) How Do Pre- and Post-Encoding Processes Affect Episodic Memory?

14:10-15:10 Robin Cooper (U Gothenburg) Modelling Memory with Types: semantics and neural representation

15:20-16:20 Peter Hagoort (Radboud U, MPI for Psycholinguistics) The neuropragmatics of dialogue and discourse

16:40-17:40 Jonathan Ginzburg (U Paris) Dialogue Context in Memory

18:00—onwards free discussion

Friday, May 21

11:00-12:00 Alistair Knott (U Otago) A neural model of sensorimotor experience, and of the representation, storage and communication of events

12:10-13:10 Christine Bastin (GIGA-CRC in vivo imaging, U Liège) Episodic memory and the importance of attribution processes to assess the retrieved memory contents

14:00-15:00 Massimo Poesio (Queen Mary U), Sharid Loáiciga (U Potsdam) Universal Anaphora and Dialogue Phenomena
(In collaboration with Sopan Khosla, Ramesh Manuvinakurike, Vincent Ng, Carolyn Rose, Michael Strube, Juntao Yu, Simon Dobnik and David Schlangen)

15:10-16:10 Evelina Fedorenko (MIT) Language within the mosaic of social cognition

16:30-17:30 Andy Lücking (U Paris, U Frankfurt) Multimodality and Memory: Outlining Interface Topics in Multimodal Natural Language Processing

17:45—onwards free discussion


The neuropragmatics of dialogue and discourse
Peter Hagoort (Radboud U, MPI for Psycholinguistics)

In real-life communication, language is usually used for more than the exchange of propositional content. Speakers and listeners want to get things done by their exchange of linguistic utterances. For this to be achieved, brain networks beyond those for recognizing and speaking words and establishing syntactic and thematic relations between them (who did what to whom) need to be recruited. The same holds for the alignment of speakers and listeners in conversational settings. In my presentation I will discuss some of our fMRI studies showing the brain activations seen when language is processed beyond the information given.
Bašnáková, J., van Berkum, J., Weber, K., & Hagoort, P. A job interview in the MRI scanner: How does indirectness affect addressees and overhearers?
Heidlmayr, K., Weber, K., Takashima, A., & Hagoort, P. (2020). No title, no theme: The joined neural space between speakers and listeners during production and comprehension of multi-sentence discourse. Epub 2020 Jun 4.

Dialogue Context in Memory
Jonathan Ginzburg (U. de Paris)

Recent years have seen the emergence of theories that can be used to analyze a variety of phenomena characteristic of conversational interaction, including non-sentential utterances, manual gestures, collaborative utterances and laughter. In all these cases the utterance gets much of its content from the context (eliminating the antecedent leaves the utterance highly vague). Much of the rapid progress attained by theories of semantics and pragmatics in recent decades has involved a dynamic strategy where meaning emerges from gradual accumulation of information and referents in the context. In this talk I will point to conversational phenomena that highlight two significant shortcomings of existing theories of meaning in conversation: the lack of an explicit interface with long-term memory and the absence of forgetting. I will sketch a synthesis between one approach to describing dialogical interaction and meaning, the framework KoS, and existing theories of memory, in which these shortcomings can be addressed.
Ginzburg, Jonathan, Chiara Mazzocconi, and Ye Tian (2020). Laughter as language. Glossa: a journal of general linguistics 5, no. 1.
Ginzburg, J and A. Lücking (2020). On Laughter and Forgetting and Reconversing: A neurologically-inspired model of conversational context. In: Proceedings of SemDial 2020.

A neural model of sensorimotor experience, and of the representation, storage and communication of events
Alistair Knott (U Otago)

Many cognitive scientists have advanced ‘embodied’ models of human language, in which language is connected in some way to the sensorimotor (SM) mechanisms that engage with the world. I’ll introduce a particular version of this idea, that has relevance for models of how language interfaces with long-term memory and with the emotional system.
The foundation for my model is Dana Ballard’s (1997) proposal that the SM processes through which an agent engages with the world are structured as deictic routines: well-defined sequences of relatively discrete atomic attentional, sensory or motor actions (called deictic operations). I propose that agents experience sentence-sized ‘events’ in the world through deictic routines, whether they are observing them or participating in them. I further propose that agents represent events in working memory (WM) as prepared deictic routines: that is, as ‘executable’ representations, that can be performed, or simulated. In the model I propose, these executable event representations provide the interface between language and long-term memory (LTM). When an event has been experienced, its complete WM representation can be registered in LTM, and the WM representation can be cleared, ready for the next event. (It will be registered more strongly if it has strong emotional connotations.) A complete WM event representation can also be communicated, by simulating it in a special ‘language mode' where SM signals can trigger output phonology.
This model supports an interesting account of how memory operations surface in language. The key idea here is that operations accessing memory, or putting the agent into other cognitive modes, should also be regarded as ‘deictic operations’ - ones that happen at the very start of a deictic routine. On this model, when experiencing an event, the first thing the agent must do is to decide whether to retrieve an event from memory (or some other cognitive modality like imagination), or to engage with the sensorimotor here-and-now. These different options are each implemented by a deictic operation. (Thus there are separate deictic operations establishing ‘LTM retrieval mode’, ‘experience mode’ and so on.) Crucially, these mode-setting deictic operations are also stored in the WM medium encoding events, which interfaces with language.
This model of the SM system and its interfaces to memory offers some interesting ideas for linguists. At the level of syntax, stored mode-setting operations provide interesting possible denotations for tense morphology in sentences, and for several closed-class/modal verbs (including verbs expressing emotional experience, like ‘feel’). At the level of discourse, the model provides a neural implementation of several ideas from update semantics. I’ll focus on monologue in my talk, but I will outline possible extensions to dialogue.
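The core architectural claims above (events experienced as deictic routines, working memory holding a prepared routine whose first element is a mode-setting operation, and long-term memory registering complete WM representations, weighted by emotional salience) can be illustrated with a toy sketch. This is not Knott's implementation; all class and operation names here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeicticOp:
    """One atomic attentional, sensory or motor action."""
    name: str

@dataclass
class EventRepresentation:
    """A WM event: a prepared ('executable') deictic routine.
    The mode-setting operation comes first, then the routine proper."""
    mode: DeicticOp        # e.g. 'experience' vs. 'ltm_retrieval'
    routine: list          # list[DeicticOp]

    def replay(self):
        """Simulate the stored routine: yield its operations in order,
        beginning with the mode-setting operation."""
        yield self.mode
        yield from self.routine

# Long-term memory as a simple store of complete WM representations.
ltm = []

def register_in_ltm(event, emotional_weight=1.0):
    # Events with strong emotional connotations are registered more strongly.
    ltm.append((emotional_weight, event))

# A sentence-sized event ('the agent grabs something'), experienced
# in the sensorimotor here-and-now rather than retrieved from memory.
grab = EventRepresentation(
    mode=DeicticOp("experience"),
    routine=[DeicticOp("attend_agent"),    # who
             DeicticOp("attend_patient"),  # whom
             DeicticOp("motor_grab")],     # the action itself
)
register_in_ltm(grab, emotional_weight=0.3)
```

The point of the sketch is that the same stored sequence can be executed, simulated, registered in LTM, or (in the model's 'language mode') read out for communication, because it is a single executable representation rather than a static record.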
A. Knott and M. Takac (2021). Roles for Event Representations in Sensorimotor Experience, Memory Formation, and Language Processing. Topics in Cognitive Science 13(1):187-205
M. Takac and A. Knott (2016). Mechanisms for storing and accessing event representations in episodic memory, and their expression in language: a neural network model. Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci), pp. 532-537
A Knott (2014). How Infants Learn Word Meanings and Propositional Attitudes: A Neural Network Model. In T-W Hung (ed) Communicative Action, Springer, pp. 107-124
Edited volume: Special Issue of TOPICS: Event-Predictive Cognition: From Sensorimotor via Conceptual to Language-Based Structures and Processes

Episodic memory and the importance of attribution processes to assess the retrieved memory contents
Christine Bastin (GIGA-CRC in vivo imaging, U Liège)

The Integrative Memory model describes the core mechanisms leading to recollection (i.e., recalling qualitative details about a past event) and familiarity (i.e., identifying an event as previously encountered) as specific computational operations applying to specific types of representation. Critically, the model distinguishes them from the subjective experiences of remembering and knowing, which only emerge following additional attribution operations. I will present evidence supporting several principles of the Integrative Memory model, notably data indicating that one can report a highly vivid experience of memory despite recalling only a few episodic details. I will also present some recent data moving towards inter-personal mechanisms underlying the sharing of memories.
Bastin, C., Besson, G., Simon, J., Delhaye, E., Geurten, M., Willems, S., & Salmon, E. (2019). An integrative memory model of recollection and familiarity to understand memory deficits. Behavioral and Brain Sciences, 42, E281
Adrien Folville, Arnaud D’Argembeau & Christine Bastin (2020). Deciphering the relationship between objective and subjective aspects of recollection in healthy aging. Memory, 28:3, 362-373

How prosody helps infants and children to break into communication
Judit Gervain (U. Paris-Descartes, CNRS)

The talk will present four sets of studies with young infants and children to show how prosody helps them learn about different aspects of language, from learning basic word order through understanding focus to decoding emotional valence. The sets of studies are loosely connected, but common to them is how prosody, an overarching feature of language, already encountered prenatally in the womb and manifesting in newborns' communicative cries, helps infants break into language and guides them through different developmental steps from grammar to communication.
Judit Gervain: The role of prenatal experience in language development. Current Opinion in Behavioral Sciences 2018, 21:62-6

How Do Pre- and Post-Encoding Processes Affect Episodic Memory?
Talya Sadeh (Ben Gurion University)

What post-encoding processes cause forgetting? For decades there has been controversy as to whether forgetting is caused by decay over time or by interference from irrelevant information, and a coherent account of forgetting was lacking. My colleagues and I have proposed the Representation Theory of Forgetting, according to which forgetting can occur either due to decay or due to interference, depending on the nature of the memory representation and the brain structure supporting it. The hippocampus, a structure playing a crucial role in recollection, has a unique neurobiological property, termed pattern separation, which enables it to represent similar memories in orthogonal patterns. In contrast, familiarity-based memories, supported by extrahippocampal structures, are not represented in orthogonal patterns. Therefore, hippocampal memories will be relatively resistant to interference from one another, but susceptible to decay over time; the reverse holds for extrahippocampal memories. In my talk, I will present behavioural evidence in support of our theory.
In addition, I will present a related research program, in which I ask whether memory is affected not only by post-encoding processes (like decay and interference), but also by processes occurring prior to encoding. I hypothesize that the scaffold of a memory engram is spontaneously laid even before the experience occurs. In support of this hypothesis, we have shown, using multivoxel pattern analysis of fMRI data, that the mnemonic fate of information depends on whether spontaneous neural representations prior to perceiving the information are reinstated during encoding.
Talya Sadeh, Janice Chen, Yonatan Goshen-Gottstein, Morris Moscovitch: Overlap between hippocampal pre-encoding and encoding patterns supports episodic memory. Hippocampus. 2019;29:836–847.
Talya Sadeh, Jason D. Ozubko, Gordon Winocur, and Morris Moscovitch: How we forget may depend on how we remember. Trends in Cognitive Sciences, January 2014, Vol. 18, No. 1

Modelling Memory with Types: semantics and neural representation
Robin Cooper (U. Gothenburg)

I will argue that record types in TTR (a type theory with records) can be used to model mental states such as memory or belief. For example, a type modelling a belief or memory state is a type of the way the world would be if our beliefs or memories were true. A sentence like:

Sam thinks that Kim left

is true just in case the type which is the content of "Kim left" matches the type modelling Sam's belief/memory state.
I will discuss some of the details of developing a semantics where propositions are matched against memories in this way where both propositions and memories are modelled as types in TTR. The claim that types can be used to model memory would be empty if it turns out that the types are in principle impossible to represent on a finite network of neurons. In the second part of this talk I will discuss how types might be represented in terms of neural events on a network.
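The matching operation described above can be caricatured in a few lines of code. This is a toy sketch, not the TTR formalism (which involves dependent types and much more): record types are rendered as plain dictionaries from field labels to type names, all of which are invented for illustration, and "matching" is reduced to a crude check that the belief-state type supplies every field the content type demands.

```python
def matches(state_type, content_type):
    """Toy stand-in for TTR matching: True iff state_type is at least
    as specific as content_type, i.e. every labelled field of
    content_type is present in state_type with the same type."""
    return all(state_type.get(label) == ty for label, ty in content_type.items())

# Content of "Kim left": a record type requiring an individual (Kim)
# and a situation of type leave(x).
kim_left = {"x": "Ind:kim", "e": "leave(x)"}

# A type modelling Sam's belief state: a type of the way the world
# would be if Sam's beliefs were true (Sam may believe more besides).
sams_beliefs = {"x": "Ind:kim", "e": "leave(x)", "y": "Ind:sandy"}

# "Sam thinks that Kim left" is true just in case the content type
# matches Sam's belief-state type.
print(matches(sams_beliefs, kim_left))
```

The design point the sketch preserves is directional: the belief state may carry arbitrarily more structure than the proposition, and truth of the attitude report only requires that the proposition's demands are met.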
Cooper, Robin and Jonathan Ginzburg (2015). Type Theory with Records for Natural Language Semantics, in Handbook of Contemporary Semantic Theory (second edition), ed. by Shalom Lappin and Chris Fox, Wiley-Blackwell, pp. 375--407
Cooper, Robin (2019). Representing Types as Neural Events, Journal of Logic, Language and Information, (28), 131–155, DOI 10.1007/s10849-019-09285-4


Universal Anaphora and Dialogue Phenomena
Massimo Poesio (Queen Mary U), Sharid Loáiciga (U Potsdam)

The objective of the Universal Anaphora initiative is to facilitate progress in the empirical study of anaphora by covering not just identity anaphora, but all aspects of anaphoric interpretation, from identity-of-sense anaphora to bridging to discourse deixis, in all languages, and covering not just written language, but spoken dialogue as well. In fact, the first shared task associated with the initiative, the CODI/CRAC 2021 Shared Task, will be focused on dialogue. However, many of the characteristics of current anaphoric annotation schemes were developed for written text, and there are very few anaphorically annotated dialogue corpora besides our own annotations of the Pear Stories and Trains corpora as part of ARRAU, and the situated dialogue corpora Tell-me-more and Cups (Dobnik, Kelleher & Howes, 2020). In this talk we will describe work currently underway as part of the organisation of the CODI/CRAC shared task and work on the extension of the ARRAU guidelines for situated dialogue. We will discuss some of the limitations of current annotation schemes encountered while annotating new dialogue data, including the AMI, Light, Persuasion and Switchboard corpora, and situated dialogue, including the Tell-me-more and Cups corpora.

Language within the mosaic of social cognition
Evelina Fedorenko (MIT)

In spite of high genetic overlap and broadly similar neural organization between humans and non-human primates, humans surpass all other species in their abilities to solve novel problems, in the sophistication of their social and emotional reasoning mechanisms, and in the richness and flexibility of their communication system. How exactly these cognitive capacities evolved in humans remains debated. I will discuss three brain networks that support high-level cognition and the relationship among them: (i) the domain-general Multiple Demand (MD) network that has been linked to general reasoning abilities, novel problem solving, and fluid intelligence, (ii) the domain-specific network that supports social cognition, and (iii) the domain-specific network that supports language processing. I will argue that although the three networks are highly neurally dissociable, a stronger relationship holds between the language and the social-cognition networks than between each of these networks and the domain-general MD network. In particular, the language and the social-cognition networks (a) show broadly similar topography within the temporal and frontal cortex, manifesting as parallel interdigitated networks, (b) exhibit reliable synchronization in their activity in naturalistic cognition paradigms, (c) pattern together in some developmental and acquired disorders, and (d) may be interchangeable in the course of development in the face of extensive early brain damage. I will therefore argue that our sophisticated linguistic mechanisms were parasitic on the social mechanisms rather than on mechanisms that support general fluid reasoning and abstract hierarchical thought.
Fedorenko, E. and Varley, R. (2016). Language and thought are not the same thing: evidence from neuroimaging and neurological patients. Ann. N.Y. Acad. Sci., 1369: 132-153.
Paunov, A. M., Blank, I. A., & Fedorenko, E. (2019). Functionally distinct language and Theory of Mind networks are synchronized at rest and during language comprehension. J Neurophysiol, 121(4): 1244-1265.
Evelina Fedorenko, Idan A. Blank (2020). Broca’s Area Is Not a Natural Kind. Trends in Cognitive Sciences, Volume 24, Issue 4, pp. 270-284.

Multimodality and Memory: Outlining Interface Topics in Multimodal Natural Language Processing
Andy Lücking (U Paris, U Frankfurt)

Multimodal dialogue, the use of speech and non-speech signals, is the basic form of interaction. As such, it is couched in the basic interaction mechanisms of grounding and repair. This apparently straightforward view already has a couple of repercussions. Firstly, non-speech gestures need representations that are subject to the parallelism constraints on clarification requests known from verbal expressions. Secondly, co-activity between speaker and addressee on some channel is the rule for virtually the whole time course of interaction and leads to multimodal overlap as the norm, thereby questioning the orthodox notion of sequential turns. Thirdly, if turns are difficult to maintain, a new account of interaction structure is needed: we propose to think of it in terms of polyphonic interaction, inspired by (classical) music. The multimodal speech and non-speech examples given throughout the talk all seem to be explainable only when considering at least the interaction of content and dialogue semantics, working memory constraints, and attentional mechanisms, and hence are examples of interface topics for cognitive science.