{"id":34,"date":"2024-12-03T22:05:34","date_gmt":"2024-12-03T22:05:34","guid":{"rendered":"https:\/\/talkingtobots.net\/?page_id=34"},"modified":"2026-03-19T08:20:41","modified_gmt":"2026-03-19T08:20:41","slug":"research","status":"publish","type":"page","link":"https:\/\/talkingtobots.net\/?page_id=34","title":{"rendered":"Research"},"content":{"rendered":"\n<p><\/p>\n\n\n\n<p class=\"has-black-color has-text-color has-link-color has-large-font-size wp-elements-a9140c02f236a44432a182dec21ac02d\"><strong>Papers<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/philarchive.org\/go.pl?id=KNETHP&amp;proxyId=&amp;u=https%3A%2F%2Fphilpapers.org%2Farchive%2FKNETHP.pdf\"><strong>The Hard Problem of AI Alignment: Value Forks in Moral Judgment<\/strong><\/a>, Markus Kneer &amp; Juri Viehoff, <em>Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency<\/em> (2025)<\/p>\n\n\n\n<p><strong>Abstract<\/strong>: Complex moral trade-offs are a basic feature of human life: confronted with scarce medical resources, for example, doctors must frequently choose who amongst equally deserving candidates receives treatment. Choosing what to do in moral trade-offs, however, is no longer a \u2018humans-only\u2019 task; it often falls to AI agents. In this article, we report findings from a series of experiments (N=1029) intended to establish whether agent-type (Human vs. AI) matters for what should be done in moral trade-offs. We find that, relative to a human decision-maker, participants more often judge that AI agents should opt for fairness at the expense of maximizing utility. In our discussion, we explain how the reported differences (we call them agent-type \u2018value forks\u2019) matter for the study of moral value alignment, and we offer hypotheses about what could explain these value forks. 
We close by reflecting on the limits of our results and indicating avenues for further research.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><strong><a href=\"https:\/\/www.researchgate.net\/publication\/397299637_Trust_and_Responsibility_in_Human-AI_Interaction\">Trust and Responsibility in Human-AI Interaction<\/a><\/strong>, Markus Kneer, Michele Loi &amp; Markus Christen, preprint. <\/p>\n\n\n\n<p>Two topics at the center of the ethics of AI and HRI are trust in AI agents and the adjudication of moral responsibility in situations where AI causes harm. In this paper we aim to advance the state of the art on these topics in several regards: First, we propose and evaluate a new empirical paradigm for measuring appropriate or calibrated trust in AI, that is, attitudes which are neither too trusting nor too cautious. The best way to measure calibrated trust, we argue, is by contrasting the trust vested in AI agents with the trust vested in human experts whose relevant capacities in the domain are the same. A second shortcoming of extant work concerns generalizability: trust in, and reliance on, AI are standardly explored with respect to a single context or domain. To investigate context-sensitivity, we ran experiments (total N=1276) across five key areas of AI application. Finally, we explored perceived moral responsibility for harm caused in human-AI interaction, with a particular focus on recent philosophical debates on the topic. Our findings suggest that approximately half of the participants vest equal trust in AI and human agents when their capacities are the same. However, there is considerable variation in trust calibration across domains, suggesting that context-sensitivity needs more attention. Human agents are attributed more moral responsibility than AI agents, whereas their supervisors are blamed less than those of AI agents. 
This suggests that, at least according to folk morality, there are no perceived &#8220;responsibility gaps&#8221; (Matthias 2004; Sparrow 2007) and that &#8220;retribution gaps&#8221; are a genuine possibility (Danaher 2016).<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><strong><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s11245-026-10371-z\">The Sorrows of Young Chatbot Users: Harm and Responsibility in Human-AI Relationships<\/a><\/strong>, Cristina Voinea, Christopher Register, Sebastian Porsdam Mann, Julian Savulescu &amp; Brian D. Earp, <em>Topoi <\/em>(2026)<\/p>\n\n\n\n<p>This paper argues that interactions with chatbots are a form of engaging with fictional characters; so, by comparing chatbots with novels and video games as mediums of fictional engagement, we can gain a clearer understanding of who, if anyone, is responsible when users\u2019 interactions with chatbots lead to self-harm or harm to others. We explore the differences between novels, video games, and chatbots across four dimensions: the degree of creators\u2019 control over the content and user experience, the nature of the fictional world, the type of engagement each medium fosters, and the structure of the engagement experience. We take a minimal account of what it takes to be morally responsible and consider how responsibility can be assigned when engagement with fictional worlds results in harm caused to or by users. We argue that because AI companies have some control over chatbots after public release, and because they can monitor user engagement, they are morally responsible when chatbot use leads to harm, even if they can\u2019t perfectly control chatbots\u2019 outputs. 
In the final section, we point to what AI companies can do to mitigate chatbots\u2019 negative influence on users.<\/p>\n\n\n\n<p><a href=\"https:\/\/www.researchgate.net\/publication\/401860668_The_Concept_of_Fake_News_The_Roles_of_Falsity_Deception_and_Politics\"><strong>The Concept of Fake News: The Roles of Falsity, Deception, and Politics<\/strong><\/a>, Mikl\u00f3s K\u00fcrthy and Markus Kneer, preprint. <\/p>\n\n\n\n<p>Philosophers disagree about what makes news &#8220;fake&#8221;: some believe it is falsity, whereas others emphasize intentional deception about the content or the source. We report two preregistered studies (total N=1200) testing whether falsity and source deception predict fake-news classification. Participants evaluated scenarios in which a claim was either true or false, and the article appeared on either the official New York Times website or that of a near-identical impersonator. In both studies, falsity and source deception independently increased fake-news classifications, indicating that many participants did not treat &#8220;fake news&#8221; as a synonym for &#8220;false news&#8221;. Whereas with a predictive claim (Study 1) the factors amplified each other, with a past-tense claim (Study 2) falsity was near-determinative and the interaction was not reliable. Across both studies, conservatism predicted increased fake-news classifications, but the association disappeared once perceived source reliability was included, suggesting that politics operates via biased priors rather than differences in meaning.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p class=\"has-large-font-size\"><strong>Talks<\/strong><\/p>\n\n\n\n<p><strong>Markus Kneer<\/strong> delivered the keynote address, <em>Value Forks and AI Alignment<\/em>, at the annual conference of the <strong>Society for Philosophy of AI (PhAI)<\/strong>, held in <strong>Amsterdam<\/strong> on <strong>October 23, 2025<\/strong>. 
The program is available at <a href=\"https:\/\/www.pt-ai.org\/2025\/programme\/\">https:\/\/www.pt-ai.org\/2025\/programme\/<\/a><\/p>\n\n\n\n<p><strong>Cristina Voinea<\/strong> presented the poster, <em>Can We Talk? The Ethics of Human\u2013AI Communication<\/em>, at <strong>LLMs @ Oxford<\/strong>, Department of Computer Science, University of Oxford, on September 14, 2025.<\/p>\n\n\n\n<p><strong>Mihaela Constantinescu<\/strong> delivered the keynote, <em>How should we live well with LLMs?<\/em>, at the <strong>AI for Flourishing<\/strong> conference, University of Navarra, on June 30, 2025. Event details <a href=\"https:\/\/en.unav.edu\/news\/-\/contents\/15\/07\/2025\/la-universidad-acoge-un-encuentro-internacional-sobre-inteligencia-artificial-y-desarrollo-humano\/content\/lovPblW1fC70\/174416903\">here<\/a>.<\/p>\n\n\n\n<p><strong>Jakub Figura<\/strong> gave the talk, <em>A great research problem! Legal evaluation of epistemic risks caused by sycophancy of LLMs<\/em>, for the Institute of Law and Technology, Masaryk University (Brno), and the European Academy of ICT Law (Vienna), on November 29, 2025.<\/p>\n\n\n\n<p><strong>Izabela Skocze\u0144<\/strong> gave the talk, <em>The legal aspects of responsibility gaps<\/em>, at the workshop \u201cArtificial agency and responsibility: the rise of LLM-powered avatars\u201d, Faculty of Philosophy, University of Bucharest, 22 &amp; 23 May 2025, <a href=\"https:\/\/avataresponsibility.ccea.ro\/2025-edition\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/avataresponsibility.ccea.ro\/2025-edition\/<\/a><\/p>\n\n\n\n<p><strong>Izabela Skocze\u0144<\/strong> and <strong>Jakub Figura<\/strong> participated in the &#8220;Free Will, Robots, and Criminal Responsibility&#8221; workshop at Jagiellonian University, organized by Kamil Mamak. Their talk, &#8220;What is reasonable for AI?&#8221;, presented a recent empirical study of laypeople&#8217;s expectations for artificial agents. 
<\/p>\n\n\n\n<p class=\"has-large-font-size\"><strong>Events \/ Outreach<\/strong><\/p>\n\n\n\n<p><strong>Cristina Voinea<\/strong> gave the talk, <em>Automated Moral Reasoning<\/em>, for BiteSize Ethics 2025: Ethics in the Age of AI, a public outreach program organized by the Uehiro Oxford Institute, University of Oxford, on August 13, 2025.<\/p>\n\n\n\n<p><strong>Cristina Voinea<\/strong> presented her research on the ethics of human\u2013AI interaction to a delegation from the Department for Science, Innovation and Technology (DSIT), a ministerial department of the Government of the United Kingdom, during their visit to the University of Oxford on September 30, 2025.<\/p>\n\n\n\n<p><strong>Mihaela Constantinescu<\/strong> discussed the responsible use of LLMs in a fireside chat on the Inspire Stage of IMPACT Hub Bucharest, a business and innovation event, on September 17, 2025. More details <a href=\"https:\/\/www.putereafinanciara.ro\/exclusiv-dragos-stanca-despre-culisele-big-tech-sper-ca-ai-ul-bun-sa-lupte-cu-ai-ul-rau-am-putea-gresi-din-nou-cum-am-gresit-la-aparitia-retelor-socia-24110.html\">here<\/a>.<\/p>\n\n\n\n<p><strong>Markus Christen<\/strong> presented and took part in the discussion at the public event <em>Zwischen Freiheit und Verantwortung \u2013 KI-Regulierung in der Schweiz<\/em> (\u201cBetween Freedom and Responsibility: AI Regulation in Switzerland\u201d) on October 30, 2025. Event details <a href=\"https:\/\/www.paulusakademie.ch\/programm\/?eid=40731&amp;event-title=KI-REGULIERUNG+IN+DER+SCHWEIZ+%E2%80%93+30.10.2025\">here<\/a>. 
<\/p>\n\n\n\n<p><strong>Alexandra Zoril\u0103 <\/strong>participated, as an invited guest, in the round-table discussion \u201cKnowledge in the Age of Artificial Intelligence,\u201d an event organized within the Alifanti Library project and hosted at Reziden\u021ba 9, December 5, 2025.<\/p>\n\n\n\n<p><strong>The NIHAI team<\/strong> participated in the <em><a href=\"https:\/\/heranet.info\/knowledge-exchange-for-slow-hope\/\">Knowledge Exchange for Slow Hope<\/a><\/em> conference held in Nottingham on <strong>24\u201325 November 2025<\/strong>, an event that gathered all HERA\/CHANSE Crisis research teams for two days of exchange, reflection, and community-building. The meeting provided a rich opportunity to share developments across the projects and to strengthen the collaborative spirit that drives this wider research network.<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"Papers The Hard Problem of AI Alignment: Value Forks in Moral Judgment, Markus Kneer &amp; Juri Viehoff, 
Proceedings&hellip;\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":{"0":"post-34","1":"page","2":"type-page","3":"status-publish","5":"cs-entry","6":"cs-video-wrap"},"_links":{"self":[{"href":"https:\/\/talkingtobots.net\/index.php?rest_route=\/wp\/v2\/pages\/34","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/talkingtobots.net\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/talkingtobots.net\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/talkingtobots.net\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/talkingtobots.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=34"}],"version-history":[{"count":5,"href":"https:\/\/talkingtobots.net\/index.php?rest_route=\/wp\/v2\/pages\/34\/revisions"}],"predecessor-version":[{"id":509,"href":"https:\/\/talkingtobots.net\/index.php?rest_route=\/wp\/v2\/pages\/34\/revisions\/509"}],"wp:attachment":[{"href":"https:\/\/talkingtobots.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=34"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}