When fact-checker Talita Burbulhan was doing her daily chores, she saw a TikTok video in which a TV anchor said Brazil was on alert due to a new virus hitting the country on the eve of Carnaval week. The anchor said the virus was more aggressive than COVID-19.
The video then cut to a regular news broadcast. Below it, text read “The virus is already in Brazil.” It seemed legitimate, but as Burbulhan figured out, someone had used artificial intelligence (AI) tools to create disinformation by stitching together a fake TV anchor with a real broadcast.
“When I searched the broadcast of the media company there was no information that this was already in Brazil,” Burbulhan, a fact-checker with Estadão Verifica, told LatAm Journalism Review (LJR). “The real story even said there were no cases in Brazil and that the probability of it spreading was low. But the post didn’t show that, just the part where people shared their concerns.”
These hybrid creations – part fake, part real – are worrying because the tools are continuously getting better at creating realistic content.
Recent studies from the University of Oslo and Indiana University show AI content can be dangerous to public knowledge and political discourse. And in Brazil, a new report from Observatório Lupa finds the dissemination of fake content created with artificial intelligence has surged since 2024 – an increase the report puts at 308%.
The report, “Overview of Disinformation in Brazil,” published in early February, said this will be one of the top challenges the news media will face ahead of Brazil’s general elections in October.
Beatriz Farrugia, one of the main authors of the report, told LJR that in 2024, AI content was used primarily to spread disinformation through scams and fraud. In 2025, she said, most of it was used in national politics.
“There was a diversification of AI content, covering politics, international affairs, entertainment, and even the environment,” Farrugia said. “One example was an AI-generated image of a fake aurora borealis in Rio de Janeiro.”
According to Farrugia, these videos combine imagery and written messages in eye-catching fonts. Researchers are also keen to point out how easy such fabrications have become.
“Today, anyone can generate text, images, audio or video in seconds using features already built into platforms such as search engines, social networks, mobile apps and conversational assistants,” Fernando Ferreira, a researcher at Netlab at the Universidade Federal do Rio de Janeiro, told LJR. “Technical knowledge and infrastructure are no longer necessary: the barrier to entry has fallen dramatically, including for the production of misleading or manipulated content.”
And while creators of AI content primarily produce videos for social media, they also simulate conversations using AI-generated text and video, and make fake selfie videos that attribute statements to a politician or personality, Cauê Muraro, executive editor of G1’s Fato ou Fake, told LJR.
“This creates a sufficiently realistic scenario, making it difficult for detection tools to identify the use of AI,” Muraro said.
But even though some news organizations have dedicated fact-checking teams, Natália Leal, head of fact-checking outlet Agência Lupa, said no reporter can dismiss the possibility that any image – however realistic it may seem – could have been generated with AI.
“We are living in a time when we need to consider all issues related to AI image generation and AI content generation,” Leal told LJR.
The use of disinformation in Brazil’s national and regional campaigns, which officially start in August, is worrying not just journalists, but other sectors. The Superior Electoral Court has already voiced its concerns and is evaluating a fine of 30,000 Brazilian reais (about US$5,700) for the use of AI in creating fake news.
Farrugia, however, said any measures to combat disinformation and misinformation through official or educational communications must recognize that simply issuing a lengthy denial is unlikely to work.
“It will not go viral to the same extent as the original false narrative,” she said, underscoring a dilemma facing newsrooms and authorities alike: in an information ecosystem supercharged by AI, speed and emotion often outpace correction.