FakeXplain

Transparent and meaningful explanations in the context of disinformation detection

Project Overview

FakeXplain aims to develop transparent and meaningful explanations for AI-based disinformation detection systems, addressing the challenge of making these explanations understandable for non-technical users such as journalists and citizens. Given the growing influence of generative AI models in spreading misinformation, the project focuses on creating explanations that build trust and comply with upcoming EU AI regulations. The project will investigate various explanation methods, such as attribution techniques and chain-of-thought prompting, while developing criteria to evaluate these explanations based on factors like transparency, robustness, and user trust.

Partners:

  • Fraunhofer HHI
  • DFKI
  • Q&U Lab, Technische Universität Berlin
  • Tel Aviv University
Dr. Vera Schmitt
Founder and Head of the XplaiNLP Group