Shengyun Si received his B.Sc. in Engineering from Zhejiang University in 2021 and subsequently pursued an M.Sc. in Robotics, Cognition, and Intelligence at the Technical University of Munich. During his master’s studies, he worked as a research assistant in the NLP group at Yale University, collaborating with Yilun Zhao under the supervision of Prof. Arman Cohan on table-to-text generation with large language models. His master’s thesis, “Think Before Refusal: Triggering Safety Reflection in LLMs to Mitigate False Refusal Behaviour”, explored the safety alignment of large language models and was conducted in collaboration with Xinpeng Wang and Prof. Barbara Plank in the MaiNLP group at Ludwig Maximilian University of Munich.
He is currently a doctoral researcher in the XplaiNLP group at the Quality and Usability Lab (QU Lab) of the Technical University of Berlin, and a guest researcher affiliated with the Speech and Language Technology (SLT) group at DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz), where he works on the VeraXtract project. His research interests include multimodal language models, retrieval-augmented generation, disinformation detection, LLM-based multi-agent systems, and the alignment of large language models.