XplaiNLP Research Group
Latest
Explainability of Large Language Models
Mis- and Disinformation Detection
Interview with DW: "Fact check: AI influencers targeting German elections"
Live on rbb24 & ARD Mediathek – Fighting Disinformation Together!
COLING 2025 - Cross-Refine: Improving Natural Language Explanation Generation by Learning in Tandem
AI_Berlin Interview - “AI should not only help to disseminate information more quickly, but also to check its validity immediately.”
Announcing Our New Research Group Lead Vera Schmitt for Disinformation Narrative Detection and eXplainable AI (xAI)
EMNLP 2024 - CoXQL: A Dataset for Parsing Explanation Requests in Conversational XAI Systems
News-polygraph at Falling Walls Science Summit
CLEF 2024 - Towards a Computational Framework for Distinguishing Critical and Conspiratorial Texts by Elaborating on the Context and Argumentation with LLMs
“Deepfakes - Our New Reality?” Talk at re:publica 24