Modeling the Quality of Dialogical Explanations

Collecting corpora of human explanation dialogues and modeling different aspects of explanation quality

The Transregional Collaborative Research Centre 318 “Constructing Explainability” (TRR 318), a large interdisciplinary research initiative funded by the German Research Foundation (DFG), explores how people and AI systems can co-construct explanations. TRR 318 brings together linguistics, computer science, psychology, sociology, media studies, and philosophy to study the mechanisms of explanation in human-machine interaction and to design AI systems that support transparent, user-centered explanatory processes. By investigating both the theoretical foundations and practical methods of explanation, the centre aims to empower users to better understand and critically engage with complex algorithmic systems.

Within this context, I co-authored “Mama Always Had a Way of Explaining Things So I Could Understand”: A Dialogue Corpus for Learning to Construct Explanations, in which we introduced a novel corpus of dialogical explanations that supports NLP research on how humans explain concepts in dialogue, a key resource for building models that can learn human-like explanatory interaction. I also contributed to Modeling the Quality of Dialogical Explanations, a study that analyzes the interaction flows of everyday explanation dialogues and proposes computational models that relate these interaction patterns to explanation quality, advancing our understanding of what makes dialogical explanations effective.