Norms of Assertion in Human-AI Communication (NIHAI)

As misinformation, fake news, and conspiracy theories circulate ever more freely, trust in media, science, and government is eroding. This challenge will only intensify as we increasingly communicate with, and with the aid of, large language models.

Our three-year research project seeks to devise principles for responsible LLM communication: philosophically and empirically informed rules that specify what LLMs should and should not say. To do so, we aim to understand what people expect in conversations with AI, how they respond when these expectations aren’t met, and whether their expectations and reactions differ across languages and cultures. Building on these findings, we will propose guidelines for designing AI systems that communicate responsibly and transparently, and test these guidelines with industry partners who provide language-based AI applications.