Date: December 2, 2025
ELSA members are involved in an exciting workshop at the ELLIS UnConference 2025: the LLM Safety and Security Workshop.
This workshop brings together leading researchers to investigate the safety and security vulnerabilities of large language models (LLMs). As the threat landscape evolves—driven by ever-larger model scales, ubiquitous deployment, and increasingly agentic behaviour—there is a pressing need for principled mitigation strategies grounded in empirical evidence. By providing a focused forum for rigorous discussion and collaboration, the workshop aims to sharpen our collective understanding of emerging risks and to catalyse robust, technically sound defences.
The workshop will last 1.5 hours and consist of a keynote presentation and a poster session, along with networking opportunities for participants.
Discussion Topics
Expected discussion themes include:
- Safety and security of LLMs and LLM-based agents
- Evaluation frameworks, metrics, and open benchmarks
- Explainability and interpretability methods
- Robustness to adversarial prompts and distribution shifts
- Fairness and bias mitigation
- Alignment and deceptive-alignment challenges
- Data-poisoning and supply-chain attacks
- Guardrails, red-teaming, and secure deployment practices
Call for Posters
We invite posters presenting work previously accepted at one of the following venues or at associated LLM Safety/Security workshops:
NeurIPS 2025, ICLR 2025, ICML 2025, IEEE S&P 2025, USENIX Security 2025, ACM CCS 2025, NDSS 2025, ACL 2025, NAACL 2025, ICCV 2025, R:SS 2025, ICRA 2025, EMNLP 2025, COLT 2025, CVPR 2025, AISTATS 2025, AAAI 2025, IROS 2025, UAI 2025, TMLR, and JMLR.
Submission instructions will be announced on the official event website.
Dates
- October 28, 2025: Submission Deadline
- October 31, 2025: Notification
- December 2, 2025: Workshop
Learn more
You can learn more about the ELSA co-organizers, the schedule, and poster submissions on the official website.

