How can we safeguard equality and fundamental rights against AI-generated violations?

A fusion of technical and legal expertise could help to build more trustworthy AI systems

Author: Professor Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics, University of Birmingham

Earlier this year, the European Union’s AI Act became the world’s first legally binding, comprehensive legislation on AI. Its aims include promoting “the uptake of human-centric and trustworthy AI” while protecting fundamental rights “against the harmful effects of AI systems”, and it will create legal obligations for developers of any “high-risk” AI system used in the EU, including systems developed in the UK.

As a lawyer, I’ve seen just how crucial safeguarding fundamental rights (also called human rights) is to maintaining a healthy democratic society. So how can we make sure that these rights are protected while technological innovation advances at breakneck pace? And will the mechanisms on which the AI Act (and future legislation) relies be up to that task?

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.