Developed by leading experts in machine learning, the research agenda published today outlines how ELSA will tackle the major challenges of secure and trustworthy artificial intelligence.
Artificial intelligence (AI), often based on machine learning techniques, permeates our everyday lives. It holds enormous potential in areas as important as the early detection and treatment of diseases, autonomous driving, and the defense against cyberattacks. But where there is light, there is also shadow: if we cannot develop these technologies in such a way that the data they process remains protected, that the systems are secure and robust against attacks, and that their decisions remain comprehensible, their use might do more harm than good to society.
As an EU-funded network of excellence, ELSA promotes research into fundamentally safe AI methods, addressing the three major challenges defined in the research agenda: 1) developing technically robust and safe AI systems, 2) enabling privacy-preserving and robust collaborative learning, and 3) developing human control mechanisms for the ethical and safe use of AI. ELSA focuses on use cases in health, autonomous driving, robotics, cybersecurity, and media and document intelligence. It pursues a fundamental, transparent, and interdisciplinary approach.
“Much is currently being invested in the further development of artificial intelligence, but at least as much, if not significantly more, should be invested in the security of these technologies. With this strategic research agenda, we are defining and addressing the greatest challenges on the path to trustworthy and secure artificial intelligence. The agenda was developed by leading experts in machine learning and artificial intelligence from across Europe and will bring us much closer to our goal of making Europe a beacon of trustworthy and secure artificial intelligence,” says ELSA coordinator and CISPA faculty member Professor Mario Fritz.