Can XAI methods satisfy legal obligations of transparency, reason-giving and legal justification?


The purpose of this report is to explore how available technical methods can (or cannot) be integrated with, and embedded into, legal and ethical governance regimes so that the design and deployment of algorithmic decision-making systems (including those which utilize AI) serve legal, democratic and ethical values, with particular attention to legal obligations pertaining to transparency and accountability. The report concerns algorithmic decision-making (ADM) systems deployed by an organization which produce an output intended to inform, or to automate, the making of a ‘decision’ that can result in the imposition of a substantive intervention producing legal or other significant effects on the life of an affected person (a ‘Decision’). Our analysis proceeds on the basis that, in real-world practice, an ADM system is typically embedded within a larger socio-technical system and is executed via an ‘organizational decision-making system architecture’ that the organization’s members are expected to follow in carrying out their tasks and duties. Such an architecture typically identifies the formal chains of decision-making authority through which responsibility for carrying out designated tasks and duties is assigned.