2nd Workshop on Formal Verification of Machine Learning (WFVML 2023)

About This Workshop

As machine learning-based systems are deployed in safety-critical applications such as autonomous driving, medical imaging, and cyber-security, characterizing their behavior not only in the average case but also in the worst case becomes essential. Most existing research, however, treats machine learning models such as deep neural networks as black boxes and quantifies their performance with simple empirical metrics such as mean accuracy. Accuracy alone is not sufficient to assure that models conform to even basic safety or robustness specifications. To fill this gap, formal verification algorithms for machine learning aim to formally prove or disprove desired properties of machine learning models, including safety, fault tolerance, fairness, robustness, and correctness.

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.