Our partners from The Alan Turing Institute kindly provided this article. Authors: Kendall Brogle and Dr. Mackenzie Jorgensen. Editors: Dr. Umang Bhatt and Dr. Adrian Weller.
Setting the Stage for a Thrilling Panel
On April 11, 2025, the Alan Turing Institute, with the support of ELSA (the European Lighthouse on Secure and Safe AI) and the European Laboratory for Learning and Intelligent Systems (ELLIS), hosted the panel “How AI is shaping journalism and journalism is shaping AI” at the Royal Institution in London, UK.
The event was organized by Dr. Adrian Weller (Cambridge and Turing), Dr. Umang Bhatt (NYU and Turing), Dr. Mackenzie Jorgensen (Turing), Kendall Brogle (Turing), and Ruth Drysdale (Turing).
The panel was moderated by Dr. Umang Bhatt, and the speakers included Dr. Tomasz Hollanek (Leverhulme Centre for the Future of Intelligence), Jessica Cecil (formerly of the Reuters Institute), Parmy Olson (Bloomberg LP), and Raphael Hernandes (the Guardian).
Throughout the panel, the speakers explored the rise of disinformation in the media, its ramifications, and the limitations of current methods of combating it. The conversation pointed to a problem even larger than false content itself: our traditional response mechanisms (e.g., fact-checking and media literacy) are too slow to keep up with the rapidly evolving virtual public square.
The Core of the Discussion
During the discussion, some key themes stood out:
Mis/Disinformation
- Misinformation and disinformation are collective, not individual, harms: Raphael Hernandes argued that even when false content appears to target only a single individual, it creates broader repercussions by diminishing societal trust in the media. Seemingly isolated incidents of misinformation and disinformation thus contribute to a collective harm: distrust of journalism.
- Power dynamics and structures shape misinformation and disinformation: These are not only problems of content; they are greatly amplified by overarching issues such as poor media literacy and extreme polarization of views. Tackling the formal structures that enable the dissemination of false information, such as regulatory inaction and algorithmic content recommendation, requires more than technical fixes; it demands a rethinking of how information is produced, circulated, and trusted. The panelists explained that disinformation is sustained by structural incentives: financial, algorithmic, and political. Those same structures, however, can be used to disincentivize harmful content. They highlighted a positive example from Brazil, where there is a push to reduce misinformation by cutting off the economic incentives to spread it.
- Policy has a crucial role in tackling misinformation and disinformation: Policy should actively create incentives to reduce harmful information, and institutions must work to reshape incentive structures and support transparent media. Regulation should not only be punitive; it should also encourage dynamic systems that prioritize public trust.
The Use of AI for Journalism
- AI usage within newsrooms varies but is growing: Many media organizations have been quick to experiment with AI tools, driven in part by the demand for efficiency. Larger, more established outlets have leaned towards more cautious approaches, shaped by comprehensive internal policies that limit risk-taking.
- A red line for AI in journalism: Parmy Olson drew a clear boundary for AI in journalism: it can assist with background tasks such as transcription, summarization, or brainstorming, but using AI to generate articles crosses the line. Publishing AI-generated articles would undermine human judgment and accountability.
- AI-generated content needs human verification: Despite wide variation in the extent to which journalists use AI, there was a clear baseline view: AI-generated content should be treated as unverified information and vetted like any other source. Editors and journalists hold the responsibility to review and validate any AI contribution before publication.
Further Topics
- Increased pushback against fact-checking (at least in the USA): The speakers shared that efforts to tackle misinformation through fact-checking are facing severe pushback. Some commented that there is a cultural shift towards apathy about misinformation. They also pointed to a rising number of punitive actions against organizations that work on fact-checking, specifically in Washington, D.C. These legal challenges reflect a broader narrative in some political circles that casts fact-checking as part of the “censorship industrial complex.”
- Finding public consensus on AI is extremely difficult: While it is possible to gauge public sentiment about AI tools, doing so meaningfully requires that individuals be well informed, and the general public is often ill-informed about AI. For consensus to be meaningful, we need broad, accessible reporting, from many sources and for diverse audiences, on how AI is used and what its impact is. Journalists therefore have an essential role in reporting on emerging technologies.
As AI continues to influence journalism and public discourse, the institutions involved, such as newsrooms and regulators, must adapt quickly. Misinformation and disinformation will not be addressed by narrow interventions; they require a cross-disciplinary approach that spans technological, economic, legal, and cultural considerations. We must work to uphold the systems that remain effective while also building new, transparent, and adaptable structures. In the age of AI, journalistic skills are needed more than ever to explain, verify, contextualize, and investigate stories and share them with the public.
We want to thank all participants and the authors of this article for sharing their insights and thoughts!