MLSec Seminar Series: Online Seminar on Learning Safety Constraints for Large Language Models

ELSA supports the MLSec event series
Date: October 22, 2025

The Machine Learning Security Laboratory (MLSec) is hosting its next seminar!
Join the online event on “Learning Safety Constraints for Large Language Models” with Xin Chen from ETH Zurich.

How to Register

Sign up via the official landing page.

Abstract

Large language models (LLMs) have emerged as powerful tools but pose significant safety risks through harmful outputs and vulnerability to adversarial attacks. We propose SaP, short for Safety Polytope, a geometric approach to LLM safety that learns and enforces multiple safety constraints directly in the model’s representation space.

We develop a framework that identifies safe and unsafe regions via the polytope’s facets, enabling both detection and correction of unsafe outputs through geometric steering. Unlike existing approaches that modify model weights, SaP operates post-hoc in the representation space, preserving model capabilities while enforcing safety constraints. Experiments across multiple LLMs demonstrate that our method can effectively detect unethical inputs and reduce adversarial attack success rates while maintaining performance on standard tasks, highlighting the importance of an explicit geometric model for safety. Analysis of the learned polytope facets reveals the emergence of specialization in detecting different semantic notions of safety, providing interpretable insights into how safety is captured in LLMs’ representation space.
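For readers new to the idea, a polytope in representation space is simply an intersection of half-spaces, so "safe" can be expressed as a hidden representation h satisfying a set of linear facet constraints W h <= b. The sketch below is not the authors' code; it is a minimal illustration, with hypothetical names and toy parameters, of how detection (checking facet violations) and geometric steering (projecting a violating representation back toward the safe set) could look under that assumption.

```python
import numpy as np

# Hypothetical sketch: a "safety polytope" as a set of linear facets (half-spaces)
# over a model's hidden representation h, i.e. the safe set {h : W @ h <= b}.
# In the talk's setting, facet parameters would be learned from activations;
# here W and b are random placeholders purely for illustration.

rng = np.random.default_rng(0)
d = 8          # hidden dimension (toy size)
n_facets = 4   # number of facets

W = rng.normal(size=(n_facets, d))   # facet normals (assumed learned)
b = np.ones(n_facets)                # facet offsets (assumed learned)

def facet_violations(h):
    """Per-facet violation amounts; a positive entry means that facet is broken."""
    return W @ h - b

def is_safe(h, tol=0.0):
    """Detection: flag h as unsafe if any facet constraint is violated."""
    return bool(np.all(facet_violations(h) <= tol))

def steer(h, n_iter=50):
    """Correction: repeatedly project h onto the most-violated half-space
    (a simple cyclic-projection heuristic, not the authors' exact procedure)."""
    h = h.copy()
    for _ in range(n_iter):
        v = facet_violations(h)
        worst = int(np.argmax(v))
        if v[worst] <= 0:
            break
        w = W[worst]
        # Move h onto the boundary of the violated half-space {h : w @ h <= b}.
        h -= (v[worst] / np.dot(w, w)) * w
    return h

h = rng.normal(size=d) * 3.0
print("safe before steering:", is_safe(h))
print("safe after steering: ", is_safe(steer(h)))
```

Because the constraints act only on representations at inference time, this kind of check-and-project step leaves the model weights untouched, which is the intuition behind the abstract's claim that SaP preserves model capabilities while enforcing safety.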