A new study from the News, Technology and Society (NTS) Network at RMIT University examines the role of generative artificial intelligence (AI) in journalism, detailing its implications for news production, ethics, and audience perceptions. The report, “Generative AI & Journalism: Content, Journalistic Perceptions, and Audience Experiences,” brings together insights from interdisciplinary researchers, journalists, and audiences across multiple countries.
The study, conducted between 2022 and 2024, involved six distinct research activities, including interviews with journalists, audience surveys, and an industry summit with representatives from nine national news organizations in Australia. It focused on three main areas: AI-generated content, journalists’ perspectives on AI use, and audience reactions to AI in journalism.
One of the key findings is that AI is increasingly integrated into newsroom operations, from automated fact-checking and transcription to the generation of text, images, and even video. While these tools offer efficiency and cost-saving benefits, they also introduce ethical and practical challenges. Journalists interviewed for the study expressed concerns about misinformation, job displacement, and a lack of transparency around AI-generated content.
The report highlights growing industry concern over AI bias, particularly in image generation. Researchers found that AI tools often reflect biases embedded in their training data, such as favoring urban environments over rural settings or reinforcing gender and racial stereotypes. Despite efforts to counteract these biases, the study suggests that algorithmic corrections have had limited success.
For news consumers, the study found that audiences have mixed feelings about AI-generated journalism. While many respondents were comfortable with AI being used for background research, transcription, and content summarization, they were less receptive to AI-generated news articles or photorealistic images. Transparency emerged as a significant issue, with audiences expressing a strong preference for clear labeling of AI-generated content.
Another critical concern involves the potential legal and ethical ramifications of AI use in journalism. Copyright violations, deepfake imagery, and the risk of misinformation were frequently raised by both journalists and audience members. The study underscores the need for stronger regulatory frameworks and newsroom policies to ensure ethical AI use.
Despite these challenges, the report also identifies promising applications of AI in journalism. AI-assisted tools can enhance accessibility by generating automated translations and audio versions of news articles. They can also support investigative journalism by quickly analyzing large datasets, a capability that is becoming increasingly valuable in an era of digital information overload.
The report calls for a balanced approach to AI in journalism, one that leverages the technology's benefits while mitigating its risks. It recommends that news organizations develop clear policies on AI use, ensure human oversight of AI-generated content, and prioritize transparency to maintain audience trust.
The study concludes by emphasizing that while AI is reshaping journalism, its role should be to complement human journalists rather than replace them. The challenge ahead will be walking the fine line between innovation and ethical responsibility in a rapidly evolving media landscape.
Need Help?
If you’re concerned or have questions about how to navigate the AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.