AI Assertions: Perspectives from Philosophy, Cognitive Science, Linguistics & HCI (University of Graz, Austria, June 22-23, 2024)
TOPIC
We are organizing a conference on questions related to AI assertion (or AI “testimony”). The topic will be addressed from the perspectives of philosophy, cognitive science, linguistics, and human-computer interaction. Key questions include, but are not limited to, the following:
- Can LLMs (such as ChatGPT) make assertions?
- Are our normative expectations towards AI interlocutors similar to those that govern linguistic human-human interaction?
- If they differ, what are the norms of AI assertion?
- Can artificial agents make “dark moves” in linguistic communication (e.g. lying, bullshitting, deceiving), or do those require a full-fledged human agent?
- Who is responsible for shortcomings in linguistic AI-human communication?
- How do our normative expectations towards AI assertors interact with our dispositions to trust and rely on them?
- Does the concept of trust make sense in linguistic human-AI interaction?