NO.256 Grounding for real world human-AI interactions: models, methods, and modalities
March 1–4, 2027 (Check-in: February 28, 2027)
Organizers
- Kristiina Jokinen
- AIRC / AIST, Japan
- Justine Cassell
- INRIA, France
- David Traum
- ICT / USC, USA
Overview
Description of the meeting
“Grounding” refers to the process by which conversational participants relate their communications to concepts and to observable objects and phenomena in such a way as to ensure understanding by their interlocutors. Grounding is crucial in interactions where participants must develop mutual understanding and shared knowledge concerning the conversational situation. It enables partners to identify each other’s intentions and knowledge, to make future references to shared information, and to create social relations through mutual knowledge. To have productive and coherent dialogues with people, it is important for AI systems to engage in grounding by similar means: establishing common ground and ensuring that what is said is understood by both parties.
We can distinguish different types of grounding depending on whether the grounding focuses on the information exchanged in conversation (Conversational Grounding), on selecting correct referents in the physical environment (Visual Grounding), or on finding support and evidence for the information delivered (Knowledge Grounding). Beyond simple information exchange using words, grounding also involves visual cues, inferential reasoning, and dynamic feedback mechanisms.
Despite extensive prior research, challenges persist across contexts and conversational scenarios. The number of interlocutors, their roles and relationships, their emotional states, goals, and cognitive load all affect how grounding takes place and how it reveals itself across the participants’ turns as their dialogue progresses.
In the era of LLMs and interactive AI agents, the notion of grounding has become one of the most important topics, as Generative AI has radically changed dialogue modeling research by making fluent chatting a common mode of interaction. This has also raised concerns due to the LLMs’ tendency to fabricate facts, generate false information, and fail to use situational knowledge for contextually appropriate responses. The focus of research has thus shifted towards challenges concerning models and methodologies for the grounding process, system architectures, and knowledge representation, as well as sustainability issues, which are important when building models and developing applications for real-world targets.
Although grounding has been recognized as one of the important design issues in LLM-based interaction models, the construction and use of shared knowledge have not yet been widely studied in the new AI paradigm. LLMs usually take the dialogue context into account through prompt design, but struggle to reason about, for example, numerical, temporal, and real-world references in conversations. Agentive architectures, with hybrid approaches to knowledge representation, knowledge graphs, and reasoning, offer promising and exciting perspectives for tackling complex questions of how humans interact in dialogues and co-create shared understanding to reach their goals and complete tasks, while issues of episodic and semantic memory, forgetting, and memory retrieval relate grounding to cognitive models.
With an interdisciplinary group of experts on LLMs, grounding, and interaction (spanning language, cognition, computer science, AI, and engineering), the workshop aims to investigate how shared knowledge is constructed in the course of a conversation, how this information is stored in dialogue memory, and how it is used in future interactions. The workshop also aims to discuss whether models of human cognition can shed light on how AI agents could model and store knowledge, and what ethical implications and privacy concerns arise for practical AI systems that strive to deliver reliable and useful applications while avoiding irrelevant or false information.
This workshop will allow academic and industrial researchers as well as language and communication technology professionals to come together to explore multiple aspects of grounding and share best practices in design, modeling, and application within dialogue systems and related AI applications.
Topics will include – but are not limited to:
1. Fundamental Issues in Conversational Grounding
2. Cognitive Models for Grounding
3. Knowledge Grounding and use of LLMs
4. Situated and Multimodal Grounding
5. Multiparty Grounding
6. Grounding for Specific Application areas
The workshop will include plenary keynote talks and discussions, as well as breakout groups devoted to subtopics, including detailed analysis sessions and grounding-related activities.
Workshop findings will be disseminated in a final report and, ideally, in other joint publications from the participants.
The workshop addresses timely issues. Its goal is to broaden our understanding of grounding processes and related conversational phenomena, and to make further progress in the design and development of natural and symbiotic AI-human interactions. The workshop is expected to survey issues, methods, and results from the various research fields, and the discussions are expected to offer novel solutions for integrating the different approaches into a coherent interdisciplinary roadmap that can guide future research. For instance, exploring the use of LLMs in combination with other techniques can create transparent grounding processes that incorporate multiple grounding types, which in turn will improve the performance of interactive agents and of LLMs more generally.
The workshop will be a rare opportunity for interdisciplinary collaboration, in which individual participants can explore and expand their own horizons while contributing to a shared understanding, with outcomes that would be difficult to achieve from any single standpoint alone.