
Can journalists rely on AI to simplify complex topics? This Argentine fact-checking service wants to know

The team at Chequeado, Argentina’s pioneering fact-checking platform, is taking on two big questions ahead of many other journalists.

How can newsrooms use generative artificial intelligence to help tell stories? And will readers like it?

To answer them, Chequeado has created an artificial intelligence lab where the team runs experiments with AI models, aided by its readers.

“The spirit of the lab is to share the knowledge we generate and contribute to the community of fact-checkers, journalists, media and anyone working with artificial intelligence,” Eduardo Ceccotti, communications director at Chequeado, told LatAm Journalism Review (LJR).

In their first experiment, an interdisciplinary team of journalists, programmers, sociologists and accountants instructed four AI language models (GPT-4, Claude Opus, Llama 3, and Gemini 1.5) to simplify six news articles about economics, statistics and elections so that a high school student could understand them. The prompt also instructed the models to keep the relevant information, not introduce new information, and respect the style, sources, and format of the original text.
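A prompt with those constraints might be structured as below. This is a hypothetical reconstruction in Python for illustration only; Chequeado has not published its actual prompt wording (which would have been in Spanish), and the template and function names here are invented.

```python
# Hypothetical sketch of a simplification prompt with the constraints
# described in the article; not Chequeado's actual prompt.
SIMPLIFY_PROMPT = """Simplify the following news article so that a
high school student can understand it.

Rules:
- Keep all the relevant information.
- Do not introduce information that is not in the original.
- Respect the style, sources, and format of the original text.

Article:
{article}
"""

def build_prompt(article_text: str) -> str:
    """Fill the template with one article's text, ready to send to a model."""
    return SIMPLIFY_PROMPT.format(article=article_text)

print(build_prompt("Inflation figures released this month show ..."))
```

The same template would be sent unchanged to each of the four models, so that differences in the outputs reflect the models rather than the instructions.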

The texts delivered by the models underwent two evaluations. First, the team at Chequeado’s AI Lab manually checked how well each model followed the instructions, rating it high, medium, or low accordingly.

The second evaluation was a survey of 15 readers, who compared the simplified texts generated by the AI models with a version written by a journalist. Over five rounds, each reader had to choose between two simplified texts or declare a tie; the pair could come from two models, or from one model and the journalist.
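A pairwise survey like this can be scored with a simple win count. The sketch below uses invented round results (the article does not publish the Lab's raw data) just to show how such choices aggregate into a ranking:

```python
from collections import Counter

# Hypothetical pairwise choices from one reader across five rounds.
# Each entry is (option_a, option_b, winner); "tie" means no preference.
rounds = [
    ("claude_opus", "journalist", "claude_opus"),
    ("gemini_1_5", "gpt_4", "gemini_1_5"),
    ("journalist", "llama_3", "journalist"),
    ("claude_opus", "gemini_1_5", "tie"),
    ("gemini_1_5", "journalist", "gemini_1_5"),
]

def tally(pairings):
    """Count wins per contestant, ignoring ties."""
    wins = Counter()
    for a, b, winner in pairings:
        if winner != "tie":
            wins[winner] += 1
    return wins

ranking = tally(rounds).most_common()
print(ranking)  # [('gemini_1_5', 2), ('claude_opus', 1), ('journalist', 1)]
```

Summing these counts across all 15 readers would give an overall preference ordering like the one the Lab reports, with ties simply dropped from the totals.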

According to these surveys, the Lab found that most users preferred texts from the Claude Opus and Gemini 1.5 models, followed by the journalist’s version, with the other two models coming in fourth and fifth.

Readers picked their favorites not only based on the content of the text, but also the format in which it was presented, Ceccotti said. Users preferred text presented in bullet points and Q&A sections.

“This is one of the first conclusions we have from these initial analyses of the Lab’s experiment,” Ceccotti said.

However, as Ceccotti points out, the texts users preferred were not the ones that best met the lab team’s requirements. For example, one model included additional information (creating summaries that were not requested), and another omitted the original sources.

Another conclusion from this experiment is the need to provide very precise prompts for the models, Ceccotti said. In this case, the Lab tried three different prompts to analyze which one offered the best results. Additionally, the speed with which these results can be obtained is an aspect the team highlights, though they always emphasize the necessity of human review afterward.

Although Ceccotti clarified that these results are not meant to be “scientific evidence,” he believes they are a starting point for gathering information that could lead to conclusions usable in newsrooms.

In mid-July, Chequeado presented these results during SIPConnect 2024, the annual conference of the Inter American Press Association (IAPA), which this year gathered to discuss the digital transformations of media, especially those caused by AI. They will also soon share a talk with the Journalism AI community, and have made the findings public on their blog and newsletter (ChequIAdo Innovación).

A lab studying efficient AI use

The team at Chequeado has been using AI for a long time. In 2016, they created a bot – called Chequeabot – to help accelerate their fight against disinformation. Now, they have many more tools to fact-check better, faster and with greater reach, Ceccotti said.

They’ve continued to wonder how they can use AI to fight disinformation, and came up with their AI Lab. They launched it with the help of the Engage fund, supported by the International Fact-Checking Network (IFCN). They competed with more than 80 other proposals to win the award.

“Their proposal to start an AI lab to apply AI to fact-checking was both exciting and impressive,” Angie D. Holan, director of the IFCN, told LJR.

“We believe this investment in Chequeado’s work will generate benefits and knowledge for the entire fact-checking community, with the ultimate goal of a better-informed public and an improved information ecosystem,” she added.

The Lab is already working on a second experiment, this time focusing on how AI models perform when creating news threads for the social network X.

Translated by Jorge Valencia