At the XplaiNLP research group, we are shaping the future of Intelligent Decision Support Systems (IDSS) by developing AI that is explainable, trustworthy, and human-centered. Our research spans the entire IDSS pipeline, integrating advances in natural language processing (NLP), large language models (LLM), explainability (XAI), evaluation, legal frameworks, and human-computer interaction (HCI) to ensure AI-driven decision-making aligns with ethical and societal values.
We focus on high-risk AI applications where human oversight is critical, including disinformation detection, social media analysis, medical data processing, and legal AI systems. Our interdisciplinary research tackles the following key challenges:
We develop and refine AI methodologies that improve decision-making under uncertainty, as detailed in the research areas below.
We apply these advances to critical, real-world decision-making scenarios, described in the application areas below.
Through interdisciplinary collaboration, hands-on research, and mentorship, XplaiNLP is at the forefront of shaping AI that is not only powerful but also transparent, fair, and accountable. Our goal is to set new standards for AI-driven decision support, ensuring that these technologies serve society responsibly and effectively.
Advancing Transparent and Trustworthy AI for Decision Support in High-Stakes Domains
We develop explanations for transparent AI models, including post-hoc explanations, causal reasoning, and chain-of-thought prompting. Human-centered XAI is prioritized so that explanations can be personalized to user needs at different levels of abstraction and detail. We also develop methods to verify model faithfulness, ensuring that explanations and predictions accurately reflect the model's actual internal decision-making process.
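As a minimal illustration of a post-hoc explanation technique (a sketch, not the group's actual tooling), the following computes occlusion-based token attributions for a text classifier. The `classify` callable and the `toy_classify` stand-in are hypothetical placeholders for any model that returns a probability.

```python
from typing import Callable, List, Tuple

def occlusion_attributions(
    text: str,
    classify: Callable[[str], float],  # hypothetical: returns P(label) for a text
) -> List[Tuple[str, float]]:
    """Post-hoc explanation by leave-one-token-out occlusion.

    Each token's attribution is the drop in the predicted probability
    when that token is removed from the input.
    """
    tokens = text.split()
    base_score = classify(text)
    attributions = []
    for i, token in enumerate(tokens):
        occluded = " ".join(tokens[:i] + tokens[i + 1:])
        attributions.append((token, base_score - classify(occluded)))
    return attributions

# Toy lexicon-based "classifier" standing in for a real model (illustration only).
def toy_classify(text: str) -> float:
    suspicious = {"miracle", "cure", "secret"}
    hits = sum(w.lower() in suspicious for w in text.split())
    return min(1.0, 0.2 + 0.3 * hits)

if __name__ == "__main__":
    for token, score in occlusion_attributions("secret miracle cure revealed", toy_classify):
        print(f"{token:>10s}  {score:+.2f}")
```

Checking whether such attributions agree with the model's internal computation is one concrete form of the faithfulness verification mentioned above.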
Beyond fine-tuning LLMs for specific use cases, we also study human interaction with LLMs to make their outputs useful in the above-mentioned applications. This includes designing and validating IDSS for fake news detection, implementing and validating human-meaningful explanations to improve transparency and trust in the system's decisions, and analyzing legal requirements arising from the AI Act, DSA, DA, and GDPR so that IDSS design and LLM implementation comply with the relevant legal obligations.
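One design pattern consistent with these goals is to bundle every system output with its explanation and provenance before it is shown to a reviewer. The sketch below is a hypothetical illustration of such a record, assuming invented field names; it is not the group's actual system design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class DecisionRecord:
    """One IDSS output shown to a human reviewer (hypothetical schema).

    Bundling the prediction with its explanation and provenance supports
    human oversight and documentation of how a decision was reached.
    """
    claim: str
    predicted_label: str          # e.g. "likely disinformation"
    confidence: float             # model confidence in [0, 1]
    explanation: str              # human-meaningful rationale shown in the UI
    evidence_sources: List[str] = field(default_factory=list)
    model_version: str = "unknown"
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    claim="5G networks are spreading the virus",
    predicted_label="likely disinformation",
    confidence=0.87,
    explanation="The claim closely matches a previously debunked narrative about 5G.",
    evidence_sources=["internal knowledge base entry"],
    model_version="demo-0.1",
)
print(record)
```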
We develop and apply LLMs for fake news and hate speech detection, build and use knowledge bases of known fakes and facts, employ retrieval-augmented generation (RAG) to support human fact-checking tasks, and analyze the factuality of generated content for summarization and knowledge enrichment.
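To illustrate how retrieval over a knowledge base of known fakes and facts can feed a fact-checking prompt, here is a minimal RAG-style sketch using TF-IDF retrieval with scikit-learn. The knowledge-base entries are invented examples, and the prompt is a placeholder for whatever instruction a downstream LLM would receive.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy knowledge base of previously checked claims (illustrative entries only).
knowledge_base = [
    ("fake", "The claim that vaccines contain microchips was debunked by fact-checkers."),
    ("fact", "The WHO declared COVID-19 a pandemic on 11 March 2020."),
    ("fake", "The claim that 5G towers spread viruses has no scientific basis."),
]

vectorizer = TfidfVectorizer()
kb_matrix = vectorizer.fit_transform(text for _, text in knowledge_base)

def retrieve(claim: str, k: int = 2):
    """Return the k knowledge-base entries most similar to the claim."""
    sims = cosine_similarity(vectorizer.transform([claim]), kb_matrix)[0]
    ranked = sims.argsort()[::-1][:k]
    return [(knowledge_base[i][0], knowledge_base[i][1], float(sims[i])) for i in ranked]

def build_prompt(claim: str) -> str:
    """Assemble a RAG-style prompt for a downstream LLM fact-checking step."""
    evidence = "\n".join(
        f"- [{label}] {text} (similarity={sim:.2f})" for label, text, sim in retrieve(claim)
    )
    return (
        "You are assisting a human fact-checker.\n"
        f"Claim: {claim}\n"
        f"Retrieved evidence:\n{evidence}\n"
        "Assess whether the claim matches a known fake or fact, citing the evidence."
    )

if __name__ == "__main__":
    print(build_prompt("5G networks are spreading the virus"))
```

In practice the retriever and prompt would be far richer, but the pattern of retrieving checked claims and surfacing them as evidence for a human fact-checker is the same.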
We develop and utilize LLMs for reliable anonymization of text-based medical records so that they can be published as open data, and apply LLM-based text anonymization to other sensitive use cases with similar publication requirements.
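As a simplified sketch of NER-based redaction (not the group's anonymization pipeline), the example below replaces detected entities with placeholder tags using spaCy's general-purpose English model; a production system for clinical text would need domain-specific models and far stricter guarantees.

```python
import spacy

# Assumes the general-purpose English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Entity types treated as identifying information in this sketch.
SENSITIVE_LABELS = {"PERSON", "GPE", "LOC", "ORG", "DATE"}

def anonymize(text: str) -> str:
    """Replace detected sensitive entities with placeholder tags."""
    doc = nlp(text)
    redacted = text
    # Replace from the end so earlier character offsets stay valid.
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        if ent.label_ in SENSITIVE_LABELS:
            redacted = redacted[:ent.start_char] + f"[{ent.label_}]" + redacted[ent.end_char:]
    return redacted

if __name__ == "__main__":
    record = "Patient John Miller was admitted to Charité Berlin on 3 May 2023."
    print(anonymize(record))
```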
Verification and Extraction of Disinformation Narratives with Individualized Explanations
Trustworthy Anonymization of Sensitive Patient Records for Remote Consultation (VERANDA)