
AI streamlines work, but journalists warn it demands rigorous verification and clear rules

Summary

From data errors to bias and credibility risks, Latin American newsroom leaders say AI must be used with care, clear guidelines and constant human oversight.

Artificial intelligence (AI) is capable of streamlining work in newsrooms, but it can also amplify errors and biases—and jeopardize credibility—if not used with clear guidelines. This was one of the main conclusions of the roundtable discussion “Inside the Newsroom: How AI is transforming journalism,” held online on March 26, 2026, and organized by the Knight Center for Journalism in the Americas.

Participants were Federica Ham, a multimedia journalist for Uruguayan newspaper Búsqueda; José Jasán Nieves, director of Cuban digital outlet elTOQUE; and Gisella Salmón, head of engagement at Peru’s El Comercio. The session was moderated by Claudia Báez, a digital innovator in AI and associate consultant at British media lab Fathm.

The panelists agreed that the challenge is not adopting the technology, but integrating it without sacrificing verification, ethics or audience trust.

Highlighting concrete experiences—such as the use of AI in election specials, monitoring systems and data visualization tools—they noted that automation does not replace journalistic work, but rather redefines it. In this process, they emphasized the need to strengthen fact-checking, establish internal guidelines and utilize AI to solve specific problems, rather than merely as a response to trends.

More speed, more verification

Between December 2025 and January 2026, a team from El Comercio led by Salmón published two interactive data specials focused on presenting, comparing and scrutinizing Peruvian political candidates and their proposals, all based on public documents.

During the development of those products—which involved the use of automated workflows and AI agents—the team discovered that these tools would sometimes confuse numbers or letters when extracting data, and would fill information gaps with arbitrary data, as Salmón explained.

The team learned that, while AI enables the analysis of large volumes of data, automation is not infallible, and the time gained through this automation must be invested in thorough verification work.

“AI is not magic; it is not perfect. It makes mistakes, hallucinates and has biases,” Salmón said. “Every result generated by an automated task or by passing through an AI process must be reviewed in detail.”
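One practical way to invest that saved time in verification, as the El Comercio team describes, is to check every AI-extracted value back against the source document and to flag gaps rather than accept model-invented filler. The sketch below illustrates the idea; the field names, data format and substring check are hypothetical, not El Comercio's actual workflow.

```python
# Hypothetical post-extraction check: confirm each value an AI agent
# extracted literally appears in the source document, and flag gaps
# instead of letting the model fill them with arbitrary data.

def verify_extraction(extracted: dict, source_text: str):
    """Return (verified, flagged): values found verbatim vs. values needing human review."""
    verified, flagged = {}, {}
    for field, value in extracted.items():
        if value is None or str(value).strip() == "":
            flagged[field] = "missing - do not let the model fill this in"
        elif str(value) in source_text:
            verified[field] = value
        else:
            flagged[field] = f"'{value}' not found in source - possible hallucination"
    return verified, flagged

source = "Candidate: Ana Perez. Proposals: 12. Budget: 4500000 soles."
extracted = {"candidate": "Ana Perez", "proposals": "12", "budget": "480000"}  # digit error slipped in
verified, flagged = verify_extraction(extracted, source)
# "budget" lands in flagged because 480000 never appears in the source text
```

Verbatim substring matching is deliberately crude—a production check would normalize whitespace and match whole tokens—but even this catches the digit-swap and filled-gap errors the team describes.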

However, the dangers of applying AI without supervision in newsrooms go beyond data errors. Báez stated that there are editorial, legal and community risks associated with the unsupervised use of AI.

She explained that editorial risks refer to the hallucinations and factual errors that these tools often produce; legal risks involve feeding AI models with confidential, classified or copyrighted information; and community risks include perpetuating biases and stereotypes stemming from the training of such tools using elements and criteria from the Global North.

Hence the importance of media outlets having defined rules regarding the use of AI, Báez said, citing a 2024 survey conducted by the Thomson Reuters Foundation among 200 journalists in the Global South. It revealed that 81 percent of journalists already use AI, yet only 13 percent have established policies for the use of this technology in their newsrooms.

“When a journalist lacks clear rules—has no guide, and uses [AI] without direction or a framework—credibility issues arise, where the media outlet is singled out for its lack of rigor,” Báez said.

The success of AI adoption in a newsroom is not measured by the speed of automation, but by the ability of journalists to maintain the trust of their audience, she added.

“If they use AI irresponsibly, they will erode in seconds what they have built over years—which is credibility,” Báez said.

Salmón said that El Comercio has an AI policy guide, which includes recommended uses for the technology, the products that can be generated with it and the limits of its applications, among other things.

Although it serves as an operational manual for the New Narratives division—where special projects involving AI are developed—Salmón said it is currently being extended to the rest of the newsroom. And soon, the outlet plans to make these policies public on its website for its audience, she added.

Nieves noted that elTOQUE developed its AI usage policy collectively, with its entire team, while Ham indicated that at Búsqueda, they do not yet have defined guidelines due to the rapid adoption of the technology.

“Until a few months ago, we didn’t even believe we could implement AI,” Ham said. “But we are indeed working on it because [...] we realized that it was going to be fundamental—that we couldn’t develop any more projects without having this conversation first.”

No to hype, yes to solving problems

Báez urged the roundtable attendees to overcome their fear and dive into experimenting with AI to expand and enhance their journalism. However, the panelists agreed that the integration of AI into journalistic work should not be driven by hype or the pressure of a trend, but rather as a response to concrete needs within newsrooms.

“Don’t start simply because the tool is pretty or because AI is trendy; instead, have a clear objective—move from a problem to a solution, step by step,” Salmón said. “We always start, rather, with a problem, seeking a solution by standardizing clear objectives, assessing the data we have available, and identifying a differentiating angle.”

Salmón added that at El Comercio, the use of AI is more of a cross-cutting capability designed to facilitate complex tasks and to help a story be better understood and connect more effectively with the user.

elTOQUE put this principle into practice in 2021, when it developed a system to monitor the informal foreign exchange market in Cuba in the absence of accessible official information. The system is capable of extracting foreign currency buy-and-sell listings from forums and social media, and subsequently—using a natural language processing (NLP) algorithm—calculates an average representative rate.
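The core of such a pipeline—pulling buy/sell rates out of informal posts and reducing them to one representative figure—can be sketched briefly. elTOQUE's actual methodology is more sophisticated; the message formats, regular expression and use of the median here are illustrative assumptions only.

```python
# Illustrative sketch, not elTOQUE's actual system: extract currency
# rates from informal buy/sell posts and compute a representative rate.
import re
import statistics

# Assumed post format, e.g. "Vendo USD a 320" / "Compro EUR a 330"
RATE_PATTERN = re.compile(r"(?:vendo|compro)\s+(usd|eur)\s+a\s+(\d+(?:\.\d+)?)")

def extract_listings(messages):
    """Pull (currency, rate) pairs out of informal buy/sell posts."""
    listings = []
    for msg in messages:
        for currency, rate in RATE_PATTERN.findall(msg.lower()):
            listings.append((currency, float(rate)))
    return listings

def representative_rate(listings, currency):
    """Use the median so one scam or typo listing cannot skew the rate."""
    rates = [r for c, r in listings if c == currency]
    return statistics.median(rates)

messages = [
    "Vendo USD a 320 por transferencia",
    "Compro usd a 315, La Habana",
    "vendo USD a 1000 urgente",  # outlier, likely a scam or typo
    "Vendo EUR a 330",
]
listings = extract_listings(messages)
print(representative_rate(listings, "usd"))  # 320.0
```

The median (rather than a plain average) is one simple way to keep a single fraudulent or mistyped listing from distorting the published rate, which matters when the figure is treated as a public reference.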

“What we save people with this system is essentially this: instead of joining 10 Telegram groups to check on WhatsApp roughly what the price [of currencies] is, this system now calculates it for them,” Nieves said.

He said the tool attracted 5 million users in 2025, in a country with a total population of 10 million.

“We are working toward making processes more efficient, repurposing content, leveraging the talent of the few journalists we have for what can truly be unique, and leaving to AI—under supervision—that which is repetitive and uncreative,” Nieves said.

Indispensable transparency

The currency monitoring project led to elTOQUE being accused of currency trafficking and tax evasion by the Cuban government, which characterized the tool as “a program of economic warfare organized, funded and executed directly by the United States government.”

For elTOQUE, it was important to make it clear that there was no intervention or manipulation by the team in the calculation of the rates. Nieves said that, in the face of such attacks and attempts to discredit them, transparency is the best strategy for utilizing AI while simultaneously maintaining a media outlet's credibility.

“We laid out our methodology; we laid out our data sources,” Nieves said. “You won’t manage to convince everyone, but by being consistent and transparent in your work, you will succeed in getting the majority of people to understand, and even to adopt your arguments and become your advocates.”

Ham said that transparency also entails explicitly informing audiences when AI is used in content production.

The journalist was part of the team that developed Búsqueda Dataviz, an AI application capable of creating customizable data visualizations from databases and unstructured information.

She said that the members of her newsroom who use the tool add the caption “visualization created by AI under journalistic supervision” to the generated graphics.

“This approach [...] encompasses these two perspectives: the perspective of telling the user what we are doing, and also holding the journalist accountable for supervising the product and what they are going to publish,” Ham said.

She also emphasized the importance of democratizing knowledge about AI among all members of a newsroom. Before participating in the development of Búsqueda Dataviz, Ham was responsible for producing charts and infographics for the entire publication. That, she said, created bottlenecks.

Ham and three members of the Búsqueda newsroom developed the tool as part of their participation in the Google AI Prototyping Sprint—an initiative organized by the Google News Initiative and Fathm—in October 2025. Following the tool's creation, the team's intention was to share knowledge regarding its operation with the media outlet's staff, thereby streamlining workflows.

“We wanted to focus on improving the performance of some processes and automating others, and we focused specifically on working with data and visualizations,” Ham said. “We wanted to democratize access to data visualization tools.”

The journalist said that the number of visualizations and data analyses on the Búsqueda website has increased significantly.

What can't AI do?

Nieves said that critical thinking is one of the dimensions of journalistic work that cannot be replaced by AI.

While this technology can process large volumes of data, it is not capable of interpreting complex dynamics or explaining what lies behind them—something that journalists can do, he said.

As an example, he said that—using the data generated by the elTOQUE tool—his team has reported on topics such as why the dollar reaches certain levels, what is happening in informal markets and how scams are being perpetrated within that market.

“And journalism makes it possible to explain all of this by means of an automated tool designed to extract this information, process it and return it as a public good,” he said. “It is about how you apply the critical inquiry—inherent in the journalist and their craft—specifically to explain and provide context to audiences.”

Báez added that the reporter's duties—such as fieldwork, human engagement and the acquisition of documents—as well as the editorial voice of each media outlet, are other skills that cannot be performed by an algorithm.

“What is the irreplaceable element that journalism offers in an ecosystem as challenging as the one we live in today? It is rigor and journalistic instinct,” Báez said. “AI has no idea why one story matters while another does not. That is something humans decide. Ethics and editorial judgment—that is, what I publish, what I omit, how I protect sources... these are human decisions, ethical decisions.”

Watch this discussion in Spanish and the entire webinar for free on the Knight Center’s YouTube page.

Translated by Teresa Mioli