AI in Journalism 2025: What’s Changing in Newsrooms and Coverage

Newsrooms are evolving with AI agents, automation, and personalized delivery—reshaping how stories are discovered, curated, and reported in real time.


Over the past five years, AI adoption in newsrooms has accelerated dramatically. Early uses of automation in 2020 have expanded into widespread deployment of AI across editorial workflows by 2025. Surveys show that about three-quarters of news organizations globally now use AI in some capacity. News leaders overwhelmingly view AI as critical: 78% of media executives say investing in AI technology will be key to journalism’s survival. The advent of powerful generative AI in 2023–24 further spurred this trend – in fact, 87% of newsroom managers report that tools like GPT have partially or completely transformed how their newsroom operates. AI is not just behind the scenes; it’s also reshaping news coverage. Outlets are experimenting with AI-generated multimedia content and interactive news formats to engage audiences in new ways. Below, we break down the core areas where AI is changing journalism today:

Automation and Efficiency Gains: 

Organizations are widely leveraging newsroom automation to handle repetitive tasks and speed up reporting. AI transcription and tagging of media, for example, are considered important by virtually all newsrooms (60% say “very important” and 36% “somewhat important” in a recent survey). Routine articles such as weather updates, finance reports, and sports recaps can be generated by AI, freeing up reporters for more complex stories. For instance, the Associated Press (AP) uses automation for thousands of earnings reports each quarter, and many local U.S. outlets use AI to produce sports game summaries. These efficiency gains mean journalists spend less time on rote work and more on meaningful reporting.
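The routine stories described above are typically produced by filling narrative templates from structured data. Here is a minimal sketch of that idea for a sports recap; the team names and field layout are illustrative, not any newsroom's actual system:

```python
def game_recap(home: str, away: str, home_score: int, away_score: int) -> str:
    """Render a routine sports recap from structured box-score data."""
    if home_score == away_score:
        return f"{home} and {away} played to a {home_score}-{away_score} draw."
    # Pick the winner and order the scoreline high-low, regardless of venue.
    winner, loser = (home, away) if home_score > away_score else (away, home)
    hi, lo = max(home_score, away_score), min(home_score, away_score)
    return f"{winner} defeated {loser} {hi}-{lo}."

print(game_recap("Rivertown FC", "Lakeside United", 3, 1))
# → Rivertown FC defeated Lakeside United 3-1.
```

Real systems add many more templates and data fields, but the principle is the same: structured input in, formulaic copy out, with no reporter time spent.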

Generative Content with Human Oversight: 

An increasing number of newsrooms now deploy generative AI for content creation – but always with human oversight. Approximately 77% of publishers surveyed in 2025 said AI-assisted content creation (e.g. drafting summaries, headlines, or even entire articles) is important, though typically “with human oversight” to ensure accuracy and tone. 

This has led to a hybrid workflow: an AI might produce a first draft or a bullet-point summary of a news story, which an editor then fact-checks and polishes. Early results are promising – news output can expand without proportional increases in staff. Major outlets like The Washington Post have used an in-house AI (“Heliograf”) to write simple news briefs (like election results and sports scores) since the mid-2010s, and AP’s newsroom has a policy of always keeping a “human in the loop” to vet AI-generated text. This balance helps maintain quality and trust even as AI takes on a larger writing role.
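The "human in the loop" policy above can be enforced in software as a hard gate between the AI draft and publication. The sketch below is a hypothetical illustration of that workflow, not AP's or the Post's actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    approved: bool = False          # set only by a human editor
    notes: list = field(default_factory=list)

def publish(draft: Draft) -> str:
    # Hard gate: no AI-generated draft reaches readers without editor sign-off.
    if not draft.approved:
        raise PermissionError("AI draft requires editor approval before publishing")
    return draft.text

d = Draft(text="AI first draft: mayoral race called for the incumbent.")
d.notes.append("Editor: vote totals verified against the official count.")
d.approved = True
print(publish(d))
```

The design point is that approval is an explicit, auditable step rather than a convention, so the review can't be skipped under deadline pressure.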

Personalization and Recommendations: 

Another area of transformation is audience-facing personalization. About 80% of media organizations are prioritizing AI-driven content recommendation systems – think personalized homepages, news app alerts tailored to user interests, and algorithmic story suggestions. In Europe, the BBC has explicitly made personalization a core goal, seeing it as part of its public service mission to deliver the right content to the right user. Personalized newsfeeds powered by AI keep readers engaged longer by showing topics they care about, whether it’s more local news for one person or more international analysis for another. However, publishers are careful to do this in a way that doesn’t create filter bubbles or violate privacy. When done responsibly, personalization exemplifies AI amplifying human ingenuity – editors set content standards and strategies, and AI helps match content to audience segments at scale.
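At its simplest, the matching step described above scores each story against a reader's interests. This toy tag-overlap recommender illustrates the idea; production systems use learned embeddings and behavioral signals, and the article data here is invented:

```python
def recommend(user_interests: set, articles: list, k: int = 3) -> list:
    """Rank articles by how many topic tags overlap the user's interests."""
    scored = sorted(
        articles,
        key=lambda a: len(a["tags"] & user_interests),  # overlap size as score
        reverse=True,
    )
    return [a["title"] for a in scored[:k]]

articles = [
    {"title": "Council votes on transit plan", "tags": {"local", "politics"}},
    {"title": "EU summit wrap-up", "tags": {"international", "politics"}},
    {"title": "High school robotics finals", "tags": {"local", "education"}},
]
print(recommend({"local", "education"}, articles, k=2))
```

Note that the editorial layer still controls what enters the candidate pool; the algorithm only decides ordering for each reader.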

New Interactive News Experiences: 

AI is also enabling novel forms of news consumption. We’re seeing the rise of conversational interfaces and multi-modal news delivery. For example, Time magazine recently let readers “chat” with an AI avatar of one of its reporters, allowing users to ask questions about an article as if they were talking to the author. 

The BBC and other broadcasters have experimented with voice assistants that can answer news questions or read personalized bulletins on demand. According to a Reuters Institute study, the top audience-facing AI initiatives that publishers plan in 2025 include text-to-audio conversion (turning written articles into spoken word via AI, planned by 75% of surveyed publishers) and AI-driven news summarization (70% planning, to provide quick briefs for readers). 

Chatbots and smarter search – where readers can query an AI about the news – are also on the agenda, with 56% of publishers including them in their plans (Newman & Cherubini, 2025). Visual journalism is evolving with AI as well: tools now exist to auto-generate infographic stories or short videos from text, expanding how news is delivered on social media. These innovations change the coverage itself, making news more accessible and interactive.

AI priorities in newsrooms (2025): Survey of global news leaders shows the percentage who rate each application of AI as important. Back-end automation and personalized recommendations top the list, followed by content creation, newsgathering, and commercial uses (data source: Reuters Institute survey in 51 countries, 2025). All categories saw increased importance compared to 2024, reflecting AI’s growing role in newsroom operations.

From Assistants to Intelligent Agents: AGD™, P.O.D.S.™, and G.U.M.M.I.™ in Journalism

As the industry embraces AI, thought leaders are redefining what sort of AI we need in journalism. The goal is moving beyond narrow tools toward intelligent agents that can work in tandem with humans across many tasks. Rather than pursuing Artificial General Intelligence (AGI) – an autonomous super-intelligence – some are advocating for Artificial General Decision-Making or AGD™, a paradigm where AI augments human decision-making across a wide range of contexts. In the media domain, AGD™ translates to AI systems that help editors and reporters make better choices, essentially acting as an ever-present consultant for journalists. 

Hand-in-hand with this is the concept of Point of Decision Systems (P.O.D.S.™), which are AI tools designed to plug into the exact moments where decisions are made in the newsroom workflow. For example, a P.O.D.S.™ might assist an editor at the point of deciding which stories to feature by providing real-time audience analytics and content recommendations. 

Finally, bringing these ideas together in practice requires new interfaces – hence Graphic User Multimodal Multiagent Interfaces (G.U.M.M.I.™). G.U.M.M.I. refers to the next generation of newsroom software where multiple AI agents (multi-agent systems) with different specialties are integrated into a seamless interface, allowing journalists to interact with them through text, voice, or visuals. Below, we break down these three concepts and their applications in journalism:

  • Artificial General Decision-Making (AGD™): AGD™, positioned as an alternative to AGI, is not a replacement for humans but an amplifier of human decision-making. In journalism, AGD integrates tools like trend analysis, fact-checking, and audience forecasting into a single decision-support system. Editors using AGD can instantly assess trending topics, competitive coverage, and audience sentiment in one view. The goal is to turn every reporter into a superuser, making sharper editorial choices faster. Rather than replacing intuition, AGD enhances it, forming the backbone of decision intelligence journalism.
  • Point of Decision Systems (P.O.D.S.™): P.O.D.S.™ are AI systems designed to assist at the exact moment a decision is made. In journalism, this might mean suggesting background links before publishing or flagging diversity gaps in sources during story assembly. Tools like those used by The New York Times, which optimize publish times based on reader data, are early P.O.D.S. examples. The difference is immediacy — instead of operating in the background, P.O.D.S. appear in context, offering guidance right when journalists need it.
  • Graphic User Multimodal Multiagent Interfaces (G.U.M.M.I.™): G.U.M.M.I.™ envisions a unified AI workspace — where reporters interact with multiple agents (like research bots, writing assistants, or translation tools) in one interface. Rather than switching between apps, users would use natural language, voice, or visual dashboards to manage tasks. A journalist could ask, “What’s the latest on this topic?” and instantly receive summarized results. This multimodal, multiagent setup turns complex workflows into intuitive actions, helping AI feel more like a colleague than a tool — embodying Klover’s vision of humanizing decision-making at scale.
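The G.U.M.M.I.™ idea of one interface fronting several specialist agents can be sketched as a simple dispatcher. The agent names and keyword routing below are illustrative only; a real interface would use intent classification rather than string matching:

```python
# Each "agent" is a placeholder callable; in practice these would be
# separate AI services (research, summarization, translation, ...).
AGENTS = {
    "research":  lambda q: f"[research-bot] top sources on: {q}",
    "summarize": lambda q: f"[summary-bot] three-bullet brief of: {q}",
    "translate": lambda q: f"[translation-bot] English rendering of: {q}",
}

def dispatch(request: str) -> str:
    """Route a natural-language request to the matching specialist agent."""
    for keyword, agent in AGENTS.items():
        if keyword in request.lower():
            return agent(request)
    return AGENTS["research"](request)  # default agent for open questions

print(dispatch("summarize the latest on the housing bill"))
```

The value for the journalist is that all specialists sit behind one prompt, so the workflow feels like asking a colleague rather than juggling apps.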

By focusing on these elements, the industry is articulating a future where AI doesn’t operate as a black box or a gimmick, but as a cooperative force woven into every aspect of journalism. These frameworks stress collaboration, context, and control. The human journalist remains at the center – setting goals, providing oversight, and ultimately making the decisions – but they are supported by an army of specialized AI helpers. It’s a vision of AI-augmented journalism that could significantly amplify human capabilities. In the next sections, we’ll see how some leading news organizations are already moving towards this vision, implementing elements of these concepts in their newsrooms.

Case Study – Augmented News Production at Reuters and Bloomberg

To understand AI’s real-world impact, let’s look at two pioneers: Reuters and Bloomberg. These major news organizations – one a global wire service based in the UK/EU, the other a finance-focused outlet in the US – have embraced AI to enhance their news production at scale. Both have taken a cyborg-like approach, pairing journalists with AI systems to combine human editorial judgment with machine speed and precision.

Reuters’ AI Systems Transform Newsgathering: 

Reuters has championed what it calls a “cybernetic newsroom,” blending human and machine strengths. Back in 2018, Reuters introduced News Tracer, an AI tool that scans the firehose of social media (700+ million tweets a day) to detect breaking news clues in real time. News Tracer uses algorithms to identify clusters of tweets about a topic and assess their credibility – checking things like the history of the Twitter source and corroborating details – then alerts Reuters journalists to potential news breaks. 

This helps Reuters reporters jump on emerging stories faster and with more confidence in what’s true. Around the same time, Reuters built Lynx Insight, an AI system that sifts through large datasets (like financial market data or election results) to spot anomalies, trends, and story ideas. Lynx Insight doesn’t publish articles itself; instead, it might flag that a particular stock’s price moved in an unusual way or that voting results in a district deviate from historical patterns, suggesting a reporter investigate further. In essence, it’s an AI research assistant. 
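The anomaly-spotting step attributed to Lynx Insight above can be as simple as an outlier test against a series' own history. This is a minimal sketch under that assumption; the z-score rule and sample data are illustrative, not Reuters' actual method:

```python
from statistics import mean, stdev

def unusual_move(history: list, today: float, threshold: float = 3.0) -> bool:
    """Flag today's value if it is a statistical outlier versus history."""
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) > threshold * sigma

daily_returns = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2]  # percent, past sessions
print(unusual_move(daily_returns, today=4.0))   # large spike worth a look
print(unusual_move(daily_returns, today=0.15))  # business as usual
```

Crucially, a flag like this is only a tip: it tells a reporter where to look, and the human decides whether there is a story.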

Importantly, Reuters has kept journalists in charge – the AI finds the needles in the haystack, but humans decide if it’s news and then do the storytelling. As Reg Chua, a Reuters Executive Editor, explained, the question they ask is “How can humans and machines best combine their strengths?” In practice, this means machines handle speed, scale, and data crunching, while humans handle context, nuance, and verification. The result has been richer reporting: Reuters can cover insights that might have been missed without AI, and do it faster than competitors, all without sacrificing accuracy. This human-machine teamwork at Reuters illustrates how a legacy newsroom can reinvent its workflows with AI – not by replacing reporters, but by giving them superpowers in information processing.

Bloomberg: 

Bloomberg News, known for its financial journalism, has similarly woven AI deeply into its operations, particularly to deal with the high volume of market data and company reports they cover daily. Bloomberg’s key AI ally is called Cyborg, a system that automatically analyzes corporate earnings releases and generates draft news stories on the results. Every quarter, thousands of companies worldwide announce earnings; Bloomberg’s Cyborg can parse those reports within seconds of release – extracting key numbers (revenue, profit, etc.), comparing them to analyst expectations, and producing a formatted news story almost instantaneously. Human reporters then quickly review that story, add any context or quotes from executives, and publish it. 

This means Bloomberg clients and readers get the news almost in real time, a huge advantage on financial markets where minutes matter. By 2019, roughly one-third of all Bloomberg news articles were produced with some degree of AI assistance, largely thanks to Cyborg. Those tended to be the short, data-driven pieces (like market updates and earnings summaries) that follow a standard formula. The benefit is twofold: speed and scale, without increasing errors. Editors ensure the automated content meets editorial standards, and the AI itself is programmed not to stray into analysis – it sticks to factual output, reducing the risk of inaccuracies. 
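The extract-compare-draft pipeline described above can be sketched in a few lines. Everything here is hypothetical – the release text, company name, and regex patterns are invented for illustration and are not Bloomberg's implementation:

```python
import re

RELEASE = ("Acme Corp reported revenue of $4,875 million and earnings "
           "per share of $1.32 for the third quarter.")

def draft_from_release(text: str, eps_consensus: float) -> str:
    """Extract key figures from a release and fill a draft story template."""
    revenue = re.search(r"revenue of \$([\d,]+) million", text).group(1)
    eps = float(re.search(r"earnings per share of \$([\d.]+)", text).group(1))
    verdict = "beating" if eps > eps_consensus else "missing"
    return (f"Acme Corp posted EPS of ${eps:.2f}, {verdict} the "
            f"${eps_consensus:.2f} consensus, on revenue of ${revenue} million.")

print(draft_from_release(RELEASE, eps_consensus=1.25))
```

Keeping the output strictly factual – numbers in, formulaic sentence out – is what lets editors sign off quickly and keeps the error risk low.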

Bloomberg has taken things a step further by developing a custom large language model, BloombergGPT, in 2023. BloombergGPT is a 50-billion-parameter AI trained specifically on financial data (Schroeder, 2023). It can perform tasks like answering finance questions, interpreting complex financial language, and even drafting longer-form analysis based on Bloomberg’s vast data reserves. This model is being integrated into Bloomberg’s newsroom tools and products (like the Bloomberg Terminal) to assist both journalists and subscribers. For example, a journalist might use BloombergGPT to quickly summarize a 100-page SEC filing into key points, or to suggest angles based on historical data patterns. Here we see a multi-agent ecosystem in action: one AI (Cyborg) creates quick news blasts, another (BloombergGPT) provides deeper analysis and language capabilities, all supervised by Bloomberg’s human journalists. 

The outcome for Bloomberg has been impressive – they maintain their reputation for ultra-fast, reliable financial news, and their journalists can focus more on high-level reporting (like why earnings are the way they are, or what executives’ statements mean for the industry) rather than spending all their time on initial numbers and drafts. Bloomberg’s approach underscores how AI can be a force multiplier: by handling the heavy lifting of data processing, AI lets human reporters cover more ground. It also shows a commitment to innovation; Bloomberg invested in its own AI R&D (building BloombergGPT) to ensure the AI tools are finely tuned to journalistic needs and domain-specific language. In summary, Bloomberg’s case demonstrates the power of AI-assisted journalism in a high-stakes, data-intensive beat – it’s a glimpse of how newsrooms can simultaneously increase output, maintain quality, and innovate new products (like AI Q&A for readers) by embracing AI as part of the team.

Case Study – BBC News Labs and the Quest for Ethical AI in the Newsroom

In Europe, the BBC offers a compelling model of how to innovate with AI while upholding ethics and public trust. As a public broadcaster, it serves a diverse audience and maintains strict editorial standards. Much of its AI work is led by BBC News Labs, an R&D team driving experimentation across content formats and newsroom workflows.

One standout initiative involved automating multi-format storytelling to reach younger, mobile-first audiences. Their prototype turned text articles into visual slideshows—complete with AI-matched icons and captions—optimized for Instagram and TikTok. Journalists retained editorial control, approving or refining the outputs before publication. Alongside this, the team also tested AI-generated audio and auto-summarization, enhancing accessibility across reading, viewing, and listening.

Crucially, all tools maintained a “journalist-in-the-loop” structure. The final word remained with humans—especially for sensitive tasks like image selection. Personalization, another major focus, is handled through AI-driven recommendation engines on the BBC website and app, but always with oversight to avoid filter bubbles and uphold its public service remit.

Beyond the tech, the BBC invested deeply in the ethical integration of AI. Through their Responsible AI team, staff were educated via workshops and collaborative design. Findings from a multi-year academic study, “Action Research at the BBC”, revealed many journalists initially saw AI as abstract and opaque—until they were given tools, definitions, and decision-making power.

To reinforce editorial responsibility, the BBC adopted internal policies mandating human review of all AI-generated outputs. This was put to the test in 2024 when BBC journalists evaluated leading generative AIs like ChatGPT and Gemini. As reported by Nieman Lab, over half of the answers contained inaccuracies, and 19% included outright factual errors. One model even inverted health guidance; another fabricated a crime detail.

These results prompted the BBC to share findings publicly and adopt a partnership-first strategy—working with AI companies to improve model performance using trusted BBC data, rather than deploying tools blindly.

The BBC’s approach represents a gold standard for ethical AI in journalism. It pairs technical innovation with editorial integrity, and by blending cutting-edge experimentation with rigorous safeguards, it offers a roadmap for others. Their stance aligns closely with Klover’s values: Empowering Innovation and Ethical AI for Economic Progress. As AI becomes more embedded in newsrooms globally, the BBC reminds us that transformation should always be done on human terms—anchored in creativity, transparency, and trust.

Empowering the Future: Decision Intelligence and Human-Centric AI in Journalism

The future of journalism is increasingly defined by decision intelligence—where AI, data, and human judgment converge to drive smarter editorial and business strategies. Leading outlets like Reuters, Bloomberg, and the BBC are proving that success comes from treating AI as a strategic partner, not just a tool. This means reporters rely on AI to surface leads, editors use analytics to guide story selection, and leadership uses predictive models to navigate audience trends and revenue decisions.

Multi-agent systems are also gaining traction. In this model, each journalist may be supported by a set of AI agents—handling tasks like data mining, trend tracking, or graphic generation. This approach boosts capacity and helps reintroduce local and investigative coverage where resources had previously been cut.

Still, AI adoption brings new risks. Research shows that 94% of audiences want transparency when AI is involved in content creation, and regulators are beginning to mandate disclosures and audit trails for AI-assisted journalism. To maintain trust, outlets may need to label AI-supported workflows clearly and verify outputs rigorously—especially as generative tools become harder to distinguish from human writing.

The outlook remains optimistic. AI already helps outlets personalize newsletters, analyze massive leaks, and produce faster investigations. As tools become more accessible, even small newsrooms could benefit from cloud-based copy editors or AI fact-checkers trained on local data.

Crucially, the next phase of AI adoption depends on journalist training. Editorial staff must become fluent in AI ethics, workflows, and oversight. Journalism schools are starting to offer courses in AI for media, while mid-career reporters are being retrained as AI-literate editors and decision-makers.

Ultimately, the newsrooms that thrive will be those that empower creativity with intelligence, scale output without sacrificing accuracy, and prioritize ethical, audience-centered applications of technology. This perfectly mirrors Klover.ai’s guiding principles: Empower Innovation, Amplify Human Ingenuity, and ensure Ethical AI for Economic Progress.

Conclusion

In 2025, AI in journalism is no longer a novelty – it’s an integral part of how news is gathered, produced, and delivered in both the U.S. and Europe. We’ve seen how AI-powered automation has made newsrooms more efficient and how intelligent systems are opening up new frontiers in news coverage, from interactive chatbots to personalized story formats. Crucially, the examples of Reuters, Bloomberg, and BBC News Labs show that the news organizations leading this revolution have done so by using AI to augment human capabilities, not replace them. The journalist remains at the heart of the process, wielding AI as a powerful tool to enhance speed, depth, and creativity.


Citations:
The Associated Press outlines its ongoing use of AI to streamline newsroom operations and improve reporting efficiency.

Axios explored the early rise of content bots in media, highlighting the growing role of automation in journalism.

A white paper from BBC R&D examines how personalisation strategies aim to deliver the right content to the right users.

The BBC Academy details the broadcaster’s Responsible AI strategy, focusing on ethical and transparent AI integration.

Bloomberg launched BloombergGPT, a 50-billion-parameter large language model trained specifically for the finance sector.

In a forward-looking analysis, Geneea discusses the evolving partnership between AI and journalism in 2025.

Granger reported how Reuters combines robotic data analysis with human storytelling for richer news coverage.

Gupta highlighted how BBC News Labs uses AI to automate content and reach younger audiences.

The International Telecommunication Union (ITU) introduced an AI skills platform designed to upskill newsroom professionals and other communicators.

Jones, Reis, and Waisbord explore action research at the BBC, evaluating how journalists interrogate the role of AI in editorial settings.

The Reuters Institute annual report by Newman and Cherubini shares predictions for journalism, media, and tech trends in 2025.

Owen at Nieman Lab details the BBC’s findings that AI tools can distort journalism, leading to errors and editorial confusion.

Panday covered how Bloomberg’s Cyborg system is transforming financial journalism through automation.

A Reuters article compares the evolution of newsroom AI to the shift from horses to cars—inevitable and transformative.

In Frontiers in Communication, Sonni offers a mini review on the digital transformation of journalism, emphasizing AI’s growing impact on newsrooms.

Trusting News provides guidance on how journalists should transparently disclose AI use in their reporting.
