In its first year, the network of excellence ELSA – European Lighthouse on Secure and Safe AI has set an important course to make the European Union a beacon of secure and trustworthy artificial intelligence. Groundbreaking research, close exchange with industry representatives, strategic research orientation, policy advice at the highest level – the growing network addresses the pressing issues that artificial intelligence poses for society in all its dimensions.
“I am thrilled with the progress we have already made in our first year,” says Professor Dr. Mario Fritz, CISPA Faculty and coordinator of ELSA, at the consortium’s one-year anniversary meeting at the end of September. This first year together, in which top experts in the fields of machine learning and artificial intelligence have shared their knowledge and experience, is just the beginning. “We still have a lot of work to do to make artificial intelligence more robust, secure and trustworthy so that it can be used for the benefit of society,” says Fritz.
ChatGPT and co. make big waves
Large language models have been, and will continue to be, a topic of intense interest to many researchers in the ELSA network. With the launch of ChatGPT and other chatbots, they became accessible to the general public. According to Fritz, the modern learning technology behind them must be made sustainable and secure. “As we have seen, however, the models still have enormous weaknesses.” This spring, Fritz and his team, together with CISPA faculty Prof. Dr. Thorsten Holz and the Saarbrücken-based IT company ‘sequire technology’, uncovered critical weaknesses that make the models vulnerable to manipulation and attack. Following the publication of their paper, the German Federal Office for Information Security (BSI) issued a detailed position paper on the topic entitled “Large AI language models – opportunities and risks for industry and authorities”.
Strategic positioning of the networks
Just as many bright minds join forces within ELSA, the European networks of excellence (NoEs) are also joining forces with one another. Together, they have drawn up a Joint Research Agenda that shows how they intend to jointly pave the way for the safe use of artificial intelligence. ELSA is focusing on three major challenges: developing technically robust and safe AI systems, enabling privacy-friendly and robust collaborative learning, and developing human control mechanisms for the ethical and safe use of AI. The specific use cases that ELSA is focusing on are health, autonomous driving, robotics, cybersecurity, media, and document intelligence. The ELSA network pursues a fundamental, transparent and interdisciplinary approach. The strategic research agenda published by ELSA in November gives interested parties an overview of what ELSA’s work on these challenges will look like in concrete terms.
Putting AI safely “on the road” – ELSA sets standards
In order to test the technologies and methods developed by ELSA researchers under real-life conditions and to assess their readiness for use, ELSA published a benchmarks platform in 2023. This platform is used to share data and metrics within the network and to publish “competitions” on the six ELSA use cases. This ensures that the network makes measurable progress and that research activities remain constantly linked to real needs and applications.
In May, ELSA also called on small and medium-sized enterprises and innovative start-ups to apply for funding and to work together with ELSA researchers on methods, benchmarks and software solutions, bringing them into industrial application. Six start-ups were recently selected by a panel of experts. They will each receive around 60,000 euros in EU funding through ELSA and will work on specific projects together with selected consortium members. In January 2024, ELSA will announce which young companies impressed the jury with their proposals in the industry call.
Successful start
Building a network of this size is no easy task. However, the year 2023 has shown that ELSA has enormous potential to successfully tackle the complex challenges of modern AI solutions. ELSA consortium member Battista Biggio, Associate Professor at the University of Cagliari and co-founder of the cybersecurity company Pluribus One, says: “ELSA’s work in the field of safe AI is progressing at a rapid pace, especially in terms of testing, verification and certifiable robustness of AI.” However, political and social will is also needed to put the safety of AI systems at the top of the agenda. “A lot is currently being invested in the further development of artificial intelligence, but at least as much, if not significantly more, should be invested in the safety of these technologies,” says Fritz. This requires a joint effort. In 2023, the ELSA network clearly showed that these efforts are worthwhile.