XplaiNLP at ACL 2025: A Week of Learning, Sharing, and Connecting 🚀

In July 2025, eight members of the XplaiNLP research group had the opportunity to attend the Annual Meeting of the Association for Computational Linguistics (ACL 2025) in Vienna. What began with great anticipation turned into a week filled with inspiring discussions, cutting-edge research insights, and valuable connections.

We were inspired by the keynotes delivered by Verena Rieser, Isabelle Augenstein, Andreas Vlachos, Barbara Plank, Jean-Rémi King, and Luke Zettlemoyer, each of whom offered valuable insights into current and future directions in NLP, responsible AI, and how language is processed in the human brain.

Dr. Jing Yang, Dr. Veronika Solopova, Dr. Nils Feldhus, Max Upravitelev, Premtim Sahitaj, Ariana Sahitaj, Qianli Wang, and research group lead Dr. Vera Schmitt presented papers on a wide range of topics, including explainability methods for NLP models, approaches to disinformation narrative detection, and evaluation frameworks for model faithfulness. We particularly valued the vibrant discussions around interpretability, robustness, and human-centered evaluation, topics that lie at the core of our research agenda.

Links to the papers:

  1. Findings: “FitCF: A Framework for Automatic Feature Importance-guided Counterfactual Example Generation” https://aclanthology.org/2025.findings-acl.64/
  2. FEVER Workshop: “Exploring Semantic Filtering Heuristics for Efficient Claim Verification” https://aclanthology.org/2025.fever-1.17/
  3. NLP4PI Workshop: “Hybrid Annotation for Propaganda Detection: Integrating LLM Pre-Annotations with Human Intelligence” https://aclanthology.org/2025.nlp4pi-1.18/
  4. UNLP Workshop: “Improving Sentiment Analysis for Ukrainian Social Media Code-Switching Data” https://aclanthology.org/2025.unlp-1.18/
  5. SDP Workshop: “Comparing LLMs and BERT-based Classifiers for Resource-Sensitive Claim Verification in Social Media” https://aclanthology.org/2025.sdp-1.26/

Inspired by the discussions at ACL, our team is now working on a follow-up study on narrative shifts in multimodal disinformation and on expanding our evaluation frameworks for explainable AI. We're also deepening collaborations with the international partners we connected with in Vienna.