On behalf of the University of Cagliari and the University of Genoa, ELSA is excited to share the news about SecML-Torch:
SecML-Torch (SecMLT) is an open-source Python library designed to facilitate research in the area of Adversarial Machine Learning (AML) and robustness evaluation. The library provides a simple yet powerful interface for generating various types of adversarial examples, as well as tools for evaluating the robustness of machine learning models against such attacks.
SecML-Torch has been developed at the sAIfer Lab (University of Cagliari and University of Genoa) by the research group on AI Security and Adversarial Machine Learning, led by Prof. Fabio Roli and Prof. Battista Biggio. The library is under active development, and version 1.3 was released in September 2025 thanks to the work of its main maintainers, Maura Pintor and Battista Biggio (University of Cagliari) and Luca Demetrio (University of Genoa).
The research on AI Security emerged after the discovery of vulnerabilities specific to AI systems. In domains such as Cybersecurity, securing AI against these issues is essential to provide reliable and resilient systems. Thus, research in AI security is crucial to design technologies that can withstand evolving adversarial tactics.
Evasion Attacks occur at test time, where adversaries craft malicious inputs designed to fool already-trained models. These attacks manipulate input data in subtle ways that are often imperceptible to humans but cause the model to make incorrect predictions. Examples include adversarial examples in image classification where slight pixel modifications can cause a model to misclassify a stop sign as a speed limit sign, or adversarial text that bypasses spam filters while maintaining semantic meaning.
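To make the idea concrete, the following is a minimal sketch of a classic single-step evasion attack, the Fast Gradient Sign Method (FGSM), written in plain PyTorch. This is a generic illustration of how adversarial examples are crafted, not SecML-Torch's own API; the model and data here are toy placeholders.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Fast Gradient Sign Method: a single-step evasion attack.

    Perturbs input x by at most epsilon (per pixel) in the direction
    that increases the loss, producing an adversarial example."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the gradient-sign direction, then clamp to the valid input range
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy demo: a linear "classifier" on random data stands in for a real model
torch.manual_seed(0)
model = nn.Linear(10, 2)
x = torch.rand(4, 10)
y = torch.randint(0, 2, (4,))
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
print((x_adv - x).abs().max())  # perturbation stays bounded by epsilon
```

Stronger attacks (e.g. PGD) iterate this step many times with a projection back into the epsilon-ball, but the core mechanism of following the loss gradient is the same.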
Poisoning Attacks target the training phase, where adversaries inject malicious data into the training set to compromise the model’s learning process. Poisoning can take various forms, from label flipping attacks that change the labels of training samples to backdoor attacks that embed hidden triggers causing specific behaviors under certain conditions.
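The simplest poisoning strategy mentioned above, label flipping, can be sketched in a few lines. This is an illustrative helper written for this article (not a SecML-Torch function): it corrupts a chosen fraction of training labels by reassigning them to a different class.

```python
import torch

def flip_labels(labels, flip_fraction, num_classes, seed=0):
    """Label-flipping poisoning: randomly reassign a fraction of
    training labels to a different (wrong) class."""
    g = torch.Generator().manual_seed(seed)
    poisoned = labels.clone()
    n = labels.numel()
    n_flip = int(flip_fraction * n)
    idx = torch.randperm(n, generator=g)[:n_flip]
    # Shift each selected label by a random non-zero offset modulo num_classes,
    # guaranteeing the new label differs from the original one
    offsets = torch.randint(1, num_classes, (n_flip,), generator=g)
    poisoned[idx] = (poisoned[idx] + offsets) % num_classes
    return poisoned

labels = torch.zeros(100, dtype=torch.long)
poisoned = flip_labels(labels, flip_fraction=0.2, num_classes=10)
print((poisoned != labels).sum())  # 20 of 100 labels changed
```

Training on the poisoned labels degrades the learned decision boundary; backdoor attacks are more targeted, additionally embedding a trigger pattern in the poisoned inputs.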
The evaluation process can be fundamentally difficult, especially for practitioners. First, configuring these algorithms requires specific knowledge and expertise that is not always available, even to ML experts. Second, many evaluations rely on gradient-based attacks that may fail to find adversarial examples not because the model is robust, but because the optimization process gets trapped in local minima, faces gradient masking, or suffers from misconfiguration. When gradient-based attacks fail, practitioners should investigate the transferability of attacks between different models and architectures, which can reveal vulnerabilities that standard attack techniques might miss. However, this analysis is often neglected.
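A transferability check of the kind described above can be sketched as follows: craft adversarial examples against a surrogate model whose gradients are well-behaved, then measure how often they also fool the target model. This is a generic PyTorch illustration under toy assumptions (linear models, random data), not SecML-Torch code.

```python
import torch
import torch.nn as nn

def transfer_rate(surrogate, target, x, y, epsilon=0.1):
    """Craft FGSM adversarial examples against a surrogate model and
    measure the fraction that also fool a separate target model."""
    x_adv = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(surrogate(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
    fooled = target(x_adv).argmax(dim=1) != y  # misclassified by the target
    return fooled.float().mean().item()

# Toy demo: two independently initialized "models" on random data
torch.manual_seed(0)
surrogate, target = nn.Linear(10, 2), nn.Linear(10, 2)
x, y = torch.rand(8, 10), torch.randint(0, 2, (8,))
rate = transfer_rate(surrogate, target, x, y)
print(rate)  # fraction of transferred attacks, in [0, 1]
```

A high transfer rate against a model whose own gradient-based evaluation reported robustness is a strong sign of gradient masking rather than genuine robustness.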
The sAIfer Lab (Joint Lab on Safety and Security of AI) provides tools and solutions to advance research and address the technological gap on these topics, including SecML-Torch, a PyTorch-powered Python library to assess the security of AI/ML technologies. SecMLT represents a significant evolution from our previous SecML library [6], designed to meet modern industry requirements for AI security research and evaluation.
Building upon the foundations established by SecML, SecMLT addresses three critical gaps in existing tools:
(a) comprehensive deep learning support through PyTorch integration,
(b) advanced debugging capabilities to ensure evaluation trustworthiness, and
(c) enhanced user interfaces with comprehensive documentation for broader accessibility.
SecML-Torch can be installed (with or without extra features) by following the instructions on the official documentation page. It has the following key features:
- Built for Deep Learning and efficient native implementation of attacks and wrappers for other libraries: SecMLT is fully compatible with PyTorch, the most widely adopted deep learning framework in both research and industry, ensuring seamless integration with existing workflows and models.
- Various types of adversarial attacks and advanced functionalities for robustness evaluation of ML models: the library supports an extensive range of adversarial attack algorithms, incorporating implementations from established libraries such as Foolbox and Adversarial Library, while providing unified interfaces for consistent evaluation across different attack types.
- Modular and customizable attack implementations to facilitate adaptive and trustworthy evaluations: SecMLT offers multiple levels of analysis through modular attack implementations, allowing researchers to extend existing attacks with different loss functions, optimizers, and constraint formulations to explore the full threat landscape.
- Debugging tools for attacks: the library features built-in debugging capabilities that log detailed events and metrics throughout attack execution, including integration with TensorBoard for real-time visualization. These tools help identify attack failures, optimization issues, and potential evaluation biases that could compromise robustness assessment.
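The core workflow these features support, evaluating a model's accuracy under attack, can be sketched in plain PyTorch as follows. This is a hand-written illustration of the evaluation pattern, with a toy model and an identity "attack" as the baseline; it does not use SecML-Torch's actual interfaces.

```python
import torch
import torch.nn as nn

def robust_accuracy(model, attack, loader):
    """Evaluate accuracy on attacked inputs: the core loop of a
    robustness evaluation, independent of the attack plugged in."""
    correct, total = 0, 0
    for x, y in loader:
        x_adv = attack(model, x, y)  # any attack with this signature works
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Toy setup: random data batches and a linear "classifier"
torch.manual_seed(0)
model = nn.Linear(10, 2)
loader = [(torch.rand(4, 10), torch.randint(0, 2, (4,))) for _ in range(3)]

# The identity "attack" leaves inputs untouched, so this measures clean
# accuracy; swapping in a real attack yields robust accuracy instead
clean = robust_accuracy(model, lambda m, x, y: x, loader)
print(clean)
```

Sweeping the attack's perturbation budget and plotting accuracy against it produces the security evaluation curves commonly used to compare model robustness.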
Recently, SecML-Torch has been included in the OWASP AI Testing Guide for adversarial evasion attacks. This represents a major milestone in the library's evolution: OWASP (Open Worldwide Application Security Project) is one of the most recognized community-driven organizations dedicated to improving software security through free and open-source tools, documentation, and education. It is best known for its OWASP Top 10, a regularly updated list of the most critical security risks to web applications, which helps organizations and developers prioritize and mitigate security threats.
The work on the SecML-Torch library empowers both researchers and practitioners to conduct more reliable evaluations of ML robustness. This is possible also thanks to EU Horizon Europe funding within the ELSA project, which shares with the sAIfer Lab the mission of addressing the challenges of security and robustness in AI and Machine Learning. This includes developing training-time defenses against both evasion and poisoning attacks, and taking a step towards ensuring compliance with the provisions of the EU AI Act and related recent regulations.
SecML-Torch welcomes contributions from the research community to expand the library’s capabilities or add new features. SecML-Torch is available on GitHub with comprehensive documentation and examples. The community is invited to explore the library, provide feedback, and contribute to advancing AI security research. Downloads, GitHub followers, stars to the repo, and contributions help support the project’s continued development and visibility within the research community.
Useful Links:
https://www.saiferlab.ai/research/ai-security
https://www.saiferlab.ai/theoretical-foundations/adversarial-machine-learning
https://github.com/pralab/secml-torch
https://secml-torch.readthedocs.io/en/latest/readme_link.html

