The Role of Explainability in Collaborative Human-AI Disinformation Detection

Vera Schmitt, Luis-Felipe Villa-Arenas, Nils Feldhus, Joachim Meyer, Robert P. Spang, Sebastian Möller

Abstract

Manual verification has become increasingly challenging due to the growing volume of information shared online and the rise of generative Artificial Intelligence (AI). Hence, AI systems are used to identify disinformation and deepfakes online. Previous research has shown that combining AI and human expertise yields superior performance. Moreover, according to the EU AI Act, human oversight is required when AI systems are used in domains where fundamental human rights, such as the right to free expression, might be affected. Thus, AI systems need to be transparent and offer sufficient explanations to be comprehensible. Much research has been done on integrating eXplainable AI (XAI) features to increase the transparency of AI systems; however, these features often lack human-centered evaluation. Additionally, the meaningfulness of explanations varies depending on users’ background knowledge and …
