Vera Schmitt, Balázs Patrik Csomor, Joachim Meyer, Luis-Felipe Villa-Areas, Charlott Jakob, Tim Polzehl, Sebastian Möller
The rapidly increasing amount of online information and the advent of Generative Artificial Intelligence (GenAI) make the manual verification of information impractical. Consequently, AI systems are deployed to detect disinformation and deepfakes. Prior studies have indicated that combining AI and human capabilities yields enhanced performance in detecting disinformation. Furthermore, the European Union (EU) AI Act mandates human supervision for AI applications in areas impacting essential human rights, like freedom of speech, necessitating that AI systems be transparent and provide adequate explanations to ensure comprehensibility. Extensive research has been conducted on incorporating explainability (XAI) features to augment AI transparency, yet these often lack a human-centric assessment. The effectiveness of such explanations also varies with the user's prior knowledge and personal attributes. Therefore, we developed a framework for validating XAI features for the collaborative human-AI fact-checking task. The framework allows the testing of XAI features along objective and subjective evaluation dimensions and follows human-centric design principles when displaying information about the AI system to the users. The framework was tested in a crowdsourcing experiment with 433 participants, including 406 crowdworkers and 27 journalists, on the collaborative disinformation detection task. The tested XAI features increase the AI system's perceived usefulness, understandability, and trust. With this publication, the XAI evaluation framework is made open source.