Date: November 26, 2025
The Machine Learning Security Laboratory (MLSec) is hosting its next seminar!
Join the online event on “Explanation-Aware Attacks and the Limits of using XAI in Computer Security” with Christian Wressnegger from the Karlsruhe Institute of Technology (KIT).

How to Register
Sign up via the official landing page.
Abstract
Learning-based systems effectively assist in various computer security tasks, such as detecting network intrusions, reverse engineering binaries, discovering vulnerabilities, and detecting malware. However, modern (deep) learning methods often lack understandable reasoning in their decision process, making crucial decisions less trustworthy.
Recent advances in “Explainable AI” (XAI) have turned the tables, enabling precise relevance attribution of input features for otherwise opaque models. This progress has raised expectations that these techniques can also benefit defenses against attacks on computer systems and even on machine learning models themselves. This talk discusses explanation-aware attacks against neural networks and explores the limits of XAI in computer security, demonstrating where it can and cannot (yet) be used reliably.
More About the MLSec Lab Series
The MLSec Laboratory is a research branch of the Pattern Recognition and Application Laboratory (PRALab) at the University of Cagliari (Italy). The topics investigated in our research are at the intersection of machine learning and computer security.