Grand Challenges

ELSA's progress in both methodology and deployment is driven by grand challenges that arise across all use cases. Overcoming them will be essential to removing obstacles to the take-up and deployment of AI. These grand challenges are a first step towards refining the strategic research agenda of ELISE; both the overarching goals and the number of challenges will evolve as the project progresses, in collaboration with all stakeholders.

Robustness guarantees and certification

AI technology has been shown to be susceptible to manipulation at both training and test time. Various schemes have been proposed to harden AI systems against such attacks, but only recently have rigorous guarantees and certificates been developed that can rule out certain classes of attack. While the foundations have been laid, developing practical techniques that cope with the full breadth of complex attacks remains highly challenging. Although we seek the best possible robustness, such properties will not be achievable in an absolute sense, particularly when we zoom in on isolated components of complex systems built on AI technology. Hence, novel methodologies and systems are needed that offer resilience in the form of mitigation strategies when individual components fail, avoiding catastrophic system failure. Research is also needed to address the vulnerability of AI and deep learning to adversarial attacks. New methodologies to evaluate and assess adversarial robustness must be defined that work under noisy and uncertain real-world conditions, different from those considered at training time, and that cope with fake data and misinformation. The grand challenge is not only to improve robustness but also to certify it, and to measure the capabilities and limits of the approaches, including the conditions under which certification is valid.
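
As a concrete illustration of one family of certification techniques, the sketch below outlines probabilistic certification via randomised smoothing: predictions are aggregated under Gaussian input noise and a certified L2 radius is derived from a lower confidence bound on the top-class probability. The classifier interface, noise scale, sample count and confidence level are illustrative assumptions, not part of the challenge itself.

# A minimal sketch of probabilistic robustness certification via randomised
# smoothing. The classifier interface, sigma, sample count and confidence
# level are illustrative assumptions.
import numpy as np
from scipy.stats import beta, norm

def certify_smoothed(classifier, x, sigma=0.25, n_samples=1000, alpha=0.001):
    """Return (predicted_class, certified_L2_radius) for input x, or
    (None, 0.0) if the smoothed classifier abstains.

    `classifier(batch)` is assumed to return one integer label per input.
    """
    noise = np.random.normal(0.0, sigma, size=(n_samples,) + x.shape)
    labels = classifier(x[None, ...] + noise)        # predictions under noise
    counts = np.bincount(labels)
    top_class = int(np.argmax(counts))

    # Clopper-Pearson lower confidence bound on the top-class probability.
    k = int(counts[top_class])
    p_lower = beta.ppf(alpha, k, n_samples - k + 1)

    if p_lower <= 0.5:
        return None, 0.0                             # no certificate: abstain
    return top_class, sigma * norm.ppf(p_lower)      # certified L2 radius

# Illustrative use with a toy classifier that thresholds the mean pixel value.
toy = lambda batch: (batch.reshape(len(batch), -1).mean(axis=1) > 0.5).astype(int)
label, radius = certify_smoothed(toy, x=np.full((8, 8), 0.7))

Such a certificate holds only with high probability and only for the perturbation model it encodes, which is precisely why measuring when a certification is valid is itself part of the challenge.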

Private and robust collaborative learning at scale

Modern AI technologies require massive amounts of data, the availability of which can be a significant bottleneck. Often this scarcity is due to technological, legal, privacy and confidentiality constraints that, while ensuring respect for the informational privacy of individuals and their fundamental right to data protection, prevent data from being shared across different parties. Our aim is to develop a technological platform that enables such distributed data to be used at scale, robustly and securely, with provable privacy, thus opening up significant new opportunities. Training a machine learning model on distributed data has recently been popularised as federated learning. In collaborative learning, we go further and aim to allow all parties to train models for their own needs while safely using relevant data from other parties. In such scenarios, we must deal with different actors holding different incentives, different levels of trust and different privacy-utility preferences (cf. Biswas et al., 2021). While such distributed and possibly decentralised (cf. Koloskova et al., 2019) training scenarios enable novel and innovative training regimes, they are not yet well understood. Providing guarantees of overall effectiveness under a secure and reliable learning paradigm that participants are willing to engage in is highly challenging. Decentralising the data helps privacy but is not sufficient on its own: intermediate updates can leak information to other parties in the computation, and the final model can also leak sensitive information. Differential privacy can address both challenges, but more fundamental research is needed to understand the privacy-utility trade-offs achievable with various architectures and secure multi-party computation primitives.
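
To make the privacy-utility tension concrete, the sketch below outlines a single round of differentially private federated averaging, in which each client's update is clipped to bound its sensitivity and Gaussian noise is added before aggregation. The clipping bound, noise multiplier and toy local update are illustrative assumptions, not a reference implementation of the platform envisaged here.

# A minimal sketch of one round of differentially private federated averaging.
# The helper names, clipping bound and noise multiplier are illustrative
# assumptions; privacy accounting over many rounds is omitted.
import numpy as np

def local_update(global_weights, client_data, lr=0.1):
    """Placeholder for a client's local training; returns a model delta.

    Here the 'model' is a toy linear regressor trained by one gradient step
    on (X, y); any local optimiser could be substituted.
    """
    X, y = client_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return -lr * grad                                 # delta to the global model

def dp_fedavg_round(global_weights, clients, clip_norm=1.0, noise_multiplier=1.1):
    """Aggregate clipped, noised client deltas into a new global model."""
    deltas = []
    for data in clients:
        delta = local_update(global_weights, data)
        # Clip each client's contribution to bound its sensitivity.
        norm = np.linalg.norm(delta)
        deltas.append(delta * min(1.0, clip_norm / (norm + 1e-12)))
    # Gaussian noise calibrated to the clipping bound; combined with privacy
    # accounting across rounds this yields (epsilon, delta)-DP guarantees.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / len(clients),
                             size=global_weights.shape)
    return global_weights + np.mean(deltas, axis=0) + noise

# Illustrative round with three synthetic clients and a 5-dimensional model.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = dp_fedavg_round(np.zeros(5), clients)

Even in this simplified form, the choice of clipping bound and noise scale directly trades model utility against the strength of the privacy guarantee, which is the trade-off the grand challenge asks us to understand at scale.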

Human-in-the-loop decision making: Integrated governance to ensure meaningful oversight

Ensuring meaningful human oversight of AI systems that is demonstrably in accordance with core European values remains a key obstacle to widespread take-up and deployment across Europe. Achieving safe and secure AI is a particularly challenging and demanding task in relation to machine learning systems for several reasons: i) impact on humans: AI systems both rely upon (for human-in-the-loop systems) and directly affect humans who, as individuals worthy of dignity and respect, must be capable of understanding and evaluating the proposed outputs of AI systems, including whether those outputs are normatively justified by reasons rather than determined stochastically; ii) the complexity, sophistication and opacity of the underlying AI systems (Pasquale, 2015) can preclude establishing the safety and security of the system and its impacts; iii) the interaction between the AI system and its surrounding socio-technical context, including interaction within and between humans, is complex, dynamic and inherently difficult to predict. Yet AI systems are now widely used to inform and automate decisions and actions with significant consequences for individuals, including in safety-critical and human-rights-critical contexts, ranging from medical diagnostic tools (Babic et al., 2021) and autonomous vehicles (Eliot, 2018; Soares and Angelov, 2020) to biometric identification and verification systems that inform decisions to allow or deny access to critical resources and opportunities. Addressing these problems requires the development of methods that can be integrated into interpretable and accountable legal and ethical governance architectures, enabling lay users to regard such systems as trustworthy. Our aim is to investigate the adequacy of existing technical methods and governance mechanisms, and to develop new techniques, mechanisms and analytical approaches that can provide the foundations for demonstrable, evidence-based assurance mechanisms capable of safeguarding multiple dimensions of safety and security that otherwise remain under threat, including epistemic security and the safety and security of property, persons and human identity.
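
As one narrow, purely illustrative instance of human-in-the-loop decision making, the sketch below defers an automated decision to a human reviewer whenever the model's confidence falls below a threshold. The threshold, the Decision record and the gating rule are assumptions for illustration; meaningful oversight of the kind described above requires far richer governance than a single confidence score.

# A minimal sketch of a confidence-based deferral gate for human-in-the-loop
# oversight. The threshold and the Decision record are illustrative
# assumptions, not a prescribed oversight mechanism.
from dataclasses import dataclass
from typing import Optional, Sequence
import numpy as np

@dataclass
class Decision:
    label: Optional[int]     # automated decision, or None when deferred
    deferred: bool           # True when routed to a human reviewer
    confidence: float        # top-class probability used by the gate

def decide_or_defer(probs: Sequence[float], threshold: float = 0.9) -> Decision:
    """Apply the model's decision only when it is confident; otherwise defer."""
    probs = np.asarray(probs, dtype=float)
    top = int(np.argmax(probs))
    conf = float(probs[top])
    if conf < threshold:
        return Decision(label=None, deferred=True, confidence=conf)
    return Decision(label=top, deferred=False, confidence=conf)

# Illustrative use: the second case falls below the threshold and is deferred.
print(decide_or_defer([0.02, 0.95, 0.03]))
print(decide_or_defer([0.40, 0.35, 0.25]))

A gate of this kind only becomes meaningful oversight when it is embedded in the interpretable, accountable governance architectures described above, with documented reasons, audit trails and reviewers who are genuinely able to contest the system's outputs.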