As misinformation, fake news, and conspiracy theories circulate ever more freely, trust in media, science, and government is eroding. This challenge will only intensify as we increasingly communicate with, and with the aid of, large language models.
Our three-year research project seeks to devise principles for responsible LLM communication: philosophically and empirically informed rules that specify what LLMs should and should not say. To do so, we aim to understand what people expect in conversations with AI, how they respond when these expectations aren't met, and whether these expectations and reactions differ across languages and cultures. Building on these findings, we will propose guidelines for designing AI systems that communicate responsibly and transparently, and test them with industry partners who provide language-based AI applications.
The project is funded with over €1 million through the ERC CHANSE/HERA program.
The project will be conducted by Prof. Markus Kneer (project leader, University of Graz), PD Dr. Markus Christen (PI, University of Zurich), Prof. Mihaela Constantinescu (PI, University of Bucharest), and Prof. Izabela Skoczen (PI, Jagiellonian University). Polaris News, led by award-winning journalist Hannes Grassegger, is a key collaborator.