Software and Tools


As a Network of Excellence and a European Lighthouse, ELSA is committed to transparently sharing the network’s research results. Foundational AI and ML research and its results are key to increasing the safety of AI in Europe.

On this page, we share ELSA-affiliated software and tools. You can find out more about ELSA-affiliated research on our publications landing page.

Tools, repositories, plugins and more

Below, we provide a collection of software, data sets, papers, code and models for AI and ML auditing, both funded by ELSA and originating from the broader ELSA network.

AI Attacks
Auditing Code Generation for Vulnerabilities
  • CodeLMSec Benchmark
    • Code repository containing data for “CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models”. The paper presents a method for systematically studying the security of code language models and assessing their susceptibility to generating vulnerable code (a minimal probing sketch follows this list).
    • https://github.com/codelmsec/codelmsec
  • (SVEN) Large Language Models for Code: Security Hardening and Adversarial Testing
    • Code repository containing data for the paper “Large Language Models for Code: Security Hardening and Adversarial Testing”.
    • https://github.com/eth-sri/sven
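
To illustrate the kind of audit the CodeLMSec benchmark enables, the following minimal Python sketch probes a black-box code model with prompts and pattern-checks the completions for known vulnerability classes. The model_complete() stub, the prompts and the regex checks are hypothetical placeholders; a real setup would query an actual model and delegate vulnerability detection to a static analyzer such as CodeQL.

import re

# Hypothetical prompts that steer a code model toward security-sensitive code.
PROMPTS = [
    "def run_query(user_input):\n    # build the SQL statement\n",
    "def render_page(template, user_input):\n",
]

# Simplistic stand-ins for the CWE checks a real audit would delegate to a static analyzer.
VULN_PATTERNS = {
    "CWE-89 (SQL injection)": re.compile(r"execute\(.*\+.*\)"),
    "CWE-78 (OS command injection)": re.compile(r"os\.system\(|shell=True"),
}

def model_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a black-box code language model."""
    return prompt + "    cursor.execute('SELECT * FROM t WHERE id=' + user_input)\n"

def audit(prompts):
    findings = []
    for prompt in prompts:
        completion = model_complete(prompt)
        for cwe, pattern in VULN_PATTERNS.items():
            if pattern.search(completion):
                findings.append((cwe, prompt))
    return findings

for cwe, prompt in audit(PROMPTS):
    print(f"[!] completion matched a {cwe} pattern for prompt:\n{prompt}")
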
Auditing Explainability
  • b-cos explainability
    • Code repository for the paper “B-cos Networks: Alignment Is All We Need for Interpretability”, which presents a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training (a minimal sketch of the transform follows this list).
    • https://github.com/moboehle/B-cos
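
For intuition, here is a minimal NumPy sketch of the B-cos transform at the core of that work: the ordinary dot product is rescaled by |cos(x, w)|^(B-1), so a unit responds strongly only when its weight vector aligns with the input. The dimensions and the value of B below are illustrative choices, not settings from the paper's released models.

import numpy as np

def bcos_transform(x: np.ndarray, W: np.ndarray, B: float = 2.0) -> np.ndarray:
    """x: input vector of shape (d,); W: weight matrix of shape (k, d)."""
    W_hat = W / np.linalg.norm(W, axis=1, keepdims=True)  # unit-norm weight rows
    lin = W_hat @ x                                       # plain linear response
    cos = lin / (np.linalg.norm(x) + 1e-12)               # cosine between x and each weight
    # Misaligned units are suppressed; B = 1 recovers the ordinary linear layer.
    return np.abs(cos) ** (B - 1.0) * lin

rng = np.random.default_rng(0)
x = rng.normal(size=8)
W = rng.normal(size=(4, 8))
print(bcos_transform(x, W))
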
Auditing Machine Learning
  • AdvMLPhish
    • AdvMLPhish is an open-source tool for evaluating the robustness of machine-learning phishing webpage detectors. It includes a set of functionality- and rendering-preserving adversarial manipulations, and a black-box optimization algorithm inspired by mutation-based fuzzing to optimally select which manipulations should be applied to evade the target detector.
    • https://github.com/advmlphish/raze_to_the_ground_aisec23
  • Fast Minimum-Norm Adversarial Attacks
  • Indicators of Attack Failure
  • MLDoctor
  • SecML
    • SecML is a Python library for Secure and Explainable Machine Learning. It is equipped with evasion and poisoning adversarial machine learning attacks, and it can wrap models and attacks from other frameworks.
    • https://github.com/pralab/secml
  • SecML Malware
    • SecML Malware is a Python library for creating adversarial attacks against Windows malware detectors. Built on top of SecML, it includes most of the attacks proposed in the state of the art.
    • https://github.com/pralab/secml_malware
  • WAF-A-MoLE
    • WAF-A-MoLE is a guided mutation-based fuzzer for ML-based Web Application Firewalls (WAFs), inspired by AFL and based on The Fuzzing Book by Andreas Zeller et al. Given an input SQL injection query, it tries to produce a semantically equivalent query that is able to bypass the target WAF. You can use this tool to assess the robustness of your product by letting WAF-A-MoLE explore the solution space and find dangerous “blind spots” left uncovered by the target classifier (a simplified sketch of this search loop follows this list).
    • https://github.com/AvalZ/WAF-A-MoLE
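
The following minimal Python sketch shows the guided mutation-based search that tools such as WAF-A-MoLE and AdvMLPhish perform: semantics-preserving mutations are applied to a payload, and the variant that the target classifier scores as most benign is kept. The mutation operators and the waf_score() stub are simplified placeholders, not the actual operators or APIs of either tool.

import random

# Toy SQL rewriting operators; the real tools apply carefully validated
# semantics-preserving mutations.
MUTATIONS = [
    lambda q: q.replace(" ", "/**/"),    # whitespace as inline comments
    lambda q: q.replace("OR", "||"),     # operator synonym
    lambda q: q.swapcase(),              # case flipping
    lambda q: q.replace("=", " LIKE "),  # comparison rewriting
]

def waf_score(query: str) -> float:
    """Hypothetical stand-in for the target WAF's maliciousness score in [0, 1]."""
    suspicious = ("or 1", "=", "--")
    return sum(token in query.lower() for token in suspicious) / len(suspicious)

def fuzz(payload: str, rounds: int = 100, seed: int = 0) -> str:
    rng = random.Random(seed)
    best, best_score = payload, waf_score(payload)
    for _ in range(rounds):
        candidate = rng.choice(MUTATIONS)(best)  # mutate the current best variant
        score = waf_score(candidate)
        if score < best_score:                   # keep only mutants that look more benign
            best, best_score = candidate, score
    return best

print(fuzz("' OR 1=1 --"))
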
Interpretability
  • Interpretable-through-prototypes deepfake detection for diffusion models
LLM Deliberation
LLM Vulnerabilities
  • LVE Repository
Privacy Auditing
Privacy-Preserving and Collaborative Learning
Technical Robustness and Safety
  • Adversarial Pruning Benchmark
  • AdversarialRecovery
    • AdversarialRecovery is a repository for robust adversarial-sample recovery, in particular for cross-domain samples (datasets, objects, and adversarial algorithms unseen during training).
    • https://github.com/Yukino-3/AdversarialRecovery
  • Adversarial Robustness Certification for Bayesian Neural Networks
  • Automated Design for Linear Bounding Functions for Sigmoidal Nonlinearities in Neural Networks
    • The code implements a robustness verification framework for neural networks with general activation functions (e.g., Sigmoid, Tanh), focusing on enhancing the quality of linear bounds in convex relaxation techniques (a generic bound-propagation sketch follows this list).
    • [URL currently not available]
  • AttackBench
  • CoDE: Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities
    • CoDE (Contrastive Deepfake Embeddings) is a novel approach that uses contrastive learning and global-local similarities to create an effective embedding space specifically for deepfake detection (a generic contrastive-loss sketch follows this list).
    • https://aimagelab.github.io/CoDE/
  • DAGER
  • FAST (FeAture SelecTion)
    • The code implements FAST (FeAture SelecTion), a method to enhance the efficiency and effectiveness of test case prioritization for deep neural networks (DNNs).
    • https://github.com/Testing4AI/FAST
  • FullCert
  • GeometricKernels
  • ModSec-AdvLearn
    • ModSec-AdvLearn is a machine-learning-based methodology that improves the detection of SQL injection attacks by Web Application Firewalls (WAFs) while addressing their vulnerability to adversarial manipulations.
    • https://github.com/pralab/modsec-advlearn
  • Nebula
    • Nebula is a tool for the dynamic analysis of Windows malware that combines diverse information from dynamic log reports by generalizing across different behavioral representations and formats.
    • https://github.com/dtrizna/nebula
  • PREMAP: A Unifying PREiMage APproximation Framework for Neural Networks
  • SecML-Torch
    • SecML-Torch (SecMLT) is an open-source Python library designed to facilitate research in the area of Adversarial Machine Learning (AML) and robustness evaluation.
    • https://github.com/pralab/secml-torch
  • SecML-Torch Encryption Plugin
  • SecML-Torch Fairness Plugin
    • An open-source Python plugin for the SecML-Torch library that introduces a set of methods for analyzing and mitigating discriminatory bias in machine learning models.
    • https://github.com/simoneminisi/secml-fair
  • SecML-Torch Interpretability Plugin
  • Sigma-zero
  • TaskTracker
  • Uncertainty Adversarial Robustness
  • Understanding Certified Training with Interval Bound Propagation
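
As a taste of the verification techniques above, the following minimal NumPy sketch implements interval bound propagation (IBP), the simplest bound-propagation primitive behind entries such as “Understanding Certified Training with Interval Bound Propagation”: every input inside the box [l, u] is guaranteed to map into the returned output interval. The weights and the perturbation budget are random illustrations, not taken from any ELSA tool.

import numpy as np

def ibp_linear(l, u, W, b):
    """Propagate the input box [l, u] through x -> W @ x + b."""
    center, radius = (u + l) / 2.0, (u - l) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # |W| maps the box radius exactly
    return new_center - new_radius, new_center + new_radius

def ibp_relu(l, u):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(5, 3)), rng.normal(size=5)
x = rng.normal(size=3)
eps = 0.1  # L-infinity perturbation budget (illustrative)
l, u = ibp_linear(x - eps, x + eps, W, b)
l, u = ibp_relu(l, u)
print("certified output bounds:", list(zip(l.round(3), u.round(3))))
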
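Similarly, to illustrate the contrastive-embedding idea behind CoDE, here is a generic InfoNCE loss in NumPy: embeddings of two views of the same image are treated as positives and pulled together, so real and fake images settle into separable regions of the embedding space. This is a textbook contrastive loss, not CoDE's actual global-local training objective.

import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (n, d) L2-normalized embeddings of two views of the same n images."""
    logits = z1 @ z2.T / temperature             # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # diagonal entries are the matched pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 64))
z1 = z / np.linalg.norm(z, axis=1, keepdims=True)
z2 = z + 0.05 * rng.normal(size=z.shape)         # a slightly perturbed second "view"
z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
print("contrastive loss:", info_nce(z1, z2))
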

Deliverables

The ELSA work packages on “Technical Robustness” and “Privacy and Infrastructures” have created comprehensive documents (deliverables) that describe and reference the software, models and related publications created by the ELSA partners within the respective work packages.

Please find the deliverables here:

This list will keep growing over the course of the ELSA project.
You can also learn more about ELSA research on our publications website.