ELSA workshop in London on the interactions of generative AI and the creative arts

The network of excellence “ELSA – European Lighthouse on Secure and Safe AI”, which is coordinated by CISPA, met in London on March 18. At a public workshop on “Generative AI and Creative Arts”, creators, legal scholars and scientists discussed the complex dynamics of generative artificial intelligence and copyright. The event was organized by the Alan Turing Institute, Lancaster University and the University of Birmingham.

Over the past two years, generative artificial intelligence (genAI) has developed faster than regulatory efforts have been able to keep pace with. This technological progress raises a number of copyright issues, since the data sets used to train AI models also include artistic works. Under EU law, no one can be identified as the author of an AI-generated image, film or play: for copyright law to apply, works have to be a person’s “own intellectual creation.” This regulation, however, falls short for artists and creators, as they often live on the licensing fees that they receive in return for the use of their works.

These and other questions were discussed in London by Lord Tim Clement-Jones CBE, Member of the House of Lords; Jeffrey Nachmanoff, American screenwriter of blockbusters such as The Day After Tomorrow; Lilian Edwards, Professor of Law, Innovation & Society at Newcastle University and an expert in internet law; and Matt Rogerson, Acting Chief Communications and Live Officer of the Guardian Media Group. The workshop was chaired by ELSA researcher Umang Bhatt, who is Assistant Professor and Faculty Fellow at the Center for Data Science at New York University.

Politicians and legislators need to act

In their keynotes, Lord Clement-Jones and Professor Edwards shed some light on the measures that politicians and legislators might adopt to resolve the current impasse. Lord Clement-Jones emphasized the importance of solving the licensing issue on behalf of the creative professions. In his opinion, an equitable agreement is required to avoid an undesirable divide between the creative industries and the tech industry. Irrespective of national legislation, Lord Clement-Jones further suggested, it might serve all parties involved if international standards for the use of generative AI eventually emerged. Agreeing on shared principles could provide a viable framework for developers and procurers operating in an international market.

Professor Edwards seemed optimistic that the copyright question was solvable in an appropriate fashion. She referred to several initiatives that could help to establish and label the provenance of artistic works and pave the way for a feasible license agreement. Among others, she mentioned C2PA, the Coalition for Content Provenance and Authenticity. C2PA is a coalition of tech giants such as Adobe, Intel and Microsoft that is developing technologies to fight disinformation and content fraud. In February 2024, they released an open standard for labeling media, making it possible to identify the origin of an image in the metadata. However, C2PA cannot be used for text or audio files (yet).

Edwards is doubtful, however, whether it will be possible to enforce the regulations set out in the EU AI Act that concern respecting human rights and avoiding bias against minorities. Compliance with these important regulations, she suggested, is ultimately monitored not by human rights experts but by technicians at tech giants such as Google and Microsoft.

Threats to authorship and integrity

Jeffrey Nachmanoff and Matt Rogerson provided insights into the creative industry’s side of the argument. Screenwriter and director Nachmanoff, for example, explained the role that AI-generated content had played in the 2023 Hollywood strikes by both actors and screenwriters. In this context, he raised the question of what a fair licensing model might look like for artists whose intellectual property has been used to train large language models. He emphasized, for example, that generative AI is able to copy and reproduce the style of individual authors to an extent that drastically exceeds any established artistic practice of working with intertextual references.

Matt Rogerson stressed the difference between the artistic and journalistic uses of artificial intelligence. As opposed to creative works, journalism is always about facts. For this reason, it is highly problematic that generative AI is capable, for example, of freely inventing situations and contexts. To create clarity for its readership, the Guardian published three basic principles for its approach to generative AI in June 2023 (https://www.theguardian.com/help/insideguardian/2023/jun/16/the-guardians-approach-to-generative-ai). In these guidelines, the Guardian pledges, for instance: “When we use genAI, we will focus on situations where it can improve the quality of our work, for example by helping journalists interrogate large data sets […].”

“Written arts are left out to dry”

ELSA researcher Umang Bhatt, who chaired the workshop, concluded at the end of the afternoon: “We have surfaced important legal questions both from regulation and policy angles for generative AI use in journalism and film.” He also emphasized the uncertainty surrounding copyright issues for creatives operating in textual domains: “As we have discussed and as alluded to by Lilian, we urgently need a public licensing conversation for creative content, beyond licensing image data. At the moment, creatives who work in the written arts are left out to dry.”

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.