News outlets risk using fake photos without AI policies

Jennifer Dudley-Nicholson, AAP
A study found only one in three major media outlets had policies on the use of generative visual AI. (April Fonti/AAP PHOTOS)

Australian news organisations risk publishing fake images of war and other news events if they fail to introduce policies to regulate generative artificial intelligence tools, research has warned.

The study, authored by RMIT University in collaboration with QUT and Washington State University, investigated the use of generative AI technology by media outlets across seven countries, including Australia.

It found only one in three major outlets it surveyed had "formal policies governing the use of generative visual AI" and a minority had banned the tools, even though many editors expressed concern about the use of the technology.

But researchers stopped short of recommending a total ban on programs such as Midjourney and DALL-E in newsrooms, saying they were already too widespread to avoid.


The Generative Visual AI in News Organisations study, published in Digital Journalism, used interviews with photo editors from 16 major news outlets across Australia, the US, UK, France, Germany, Norway and Switzerland.

It found editors were most concerned with the risk of spreading misinformation or disinformation using generative AI, but were also worried about bias in AI-created images, the risk to photographers' jobs, copyright issues, and the challenge of identifying when generative AI technology had been used to produce an image.

"Many of the photo editors said text-to-image generators as replacements for photojournalism were 'not welcome' and 'don't belong' in news coverage," the study found.

"An exception exists, participants said, for reporting on AI images that have gone viral."

Lead researcher, RMIT senior lecturer TJ Thomson, said examples of AI-generated images accidentally published by media outlets included a picture of Pope Francis wearing a Balenciaga jacket that was created using Midjourney, and images of a Victorian MP whose clothing was artificially altered using AI.

Other examples, Dr Thomson said, included digital creations that appeared to come from conflict zones and had not been labelled appropriately.

"In the war in Gaza, we're seeing images that are AI-generated of that conflict that are being put up the Adobe Stock platform without being noted as being AI-generated and some news outlets in Australia have then republished those images without that context," he told AAP.

"If you have things that are AI-generated, that are synthetic, but that are purporting to show reality, that's problematic."

Dr Thomson said news organisations and other companies needed to develop better ways to clearly label generative-AI images, potentially using watermarks, and to set strict controls on what AI products they published.

Despite some newsrooms banning the technology, he said AI tools could be valuable behind the scenes for basic tasks and for inspiring ideas, and should not be prohibited entirely.

"Two thirds of Aussies say they use generative AI for work and that's massive," he said.

"I think you're going to see AI infiltrate every sector of our economy."

The federal government has yet to introduce rules around the use of AI but, in February, appointed an advisory group of 12 experts to identify high-risk uses of the technology and consider appropriate restrictions.
