The integrity of information, foreign efforts to manipulate information, and disinformation as bait for online scams were three of the topics addressed at the fifth edition of the Global Summit on Disinformation.
Held Sept. 17 and 18, the summit was the venue for debates among nearly 40 speakers on issues that also included new strategies to confront disinformation, innovative tools against fake news and funds for projects to improve information quality.
More than 2,500 people from 118 countries attended online. The summit was organized by the Inter American Press Association, Proyecto Desconfío (Argentina), and the Fundación para el Periodismo (Bolivia), with support from more than 10 partner organizations. It was held for the first time in three languages: Spanish, English and Portuguese.
"We know we are on an upward curve in the use of AI as a firsthand source of information," says @ctardaguila at the @cumbredesinfo
As noted: People are using AI more than the news search sites themselves. pic.twitter.com/AWmwCmdSOP
— Matías Enríquez (@tutenriquez) September 17, 2025
While disinformation and misinformation remain among the top global risks for the second consecutive year, according to the World Economic Forum, fact-checkers are facing increasingly complex situations of online harassment and political pressure, said Daniel Bramatti, editor of Estadão Verifica, the fact-checking unit of the Brazilian newspaper O Estado de S.Paulo, during the summit.
Currently, groups that benefit from disinformation campaigns in electoral contexts are stepping up attacks on fact-checkers, accusing them of political bias and censorship, Bramatti added.
In light of those attacks and the resulting decline in public favorability, there’s debate about whether the term itself should be reconsidered, Bramatti said during the panel “Scope and challenges of fact-checking.”
“There are those who think it’s a lost battle, that the term fact-checking has already been negatively redefined by adversaries, by the enemies of truth,” Bramatti said. “And there are those who believe the debate is not only about image, but that disproving falsehoods is not enough to combat the harms of the information ecosystem.”
Bramatti agreed with other panelists that the fight against disinformation must shift toward a more proactive approach to the integrity of information. This concept also points to open access to public information and strengthening people’s ability to shield themselves from malicious content, according to a United Nations report presented at the summit.
“If fact-checking is the doctor treating the patient’s symptoms, information integrity is like public health,” Bramatti said, citing an analogy suggested by the generative AI platform Gemini. “It is the creation of conditions so that the entire population is healthy and resistant to ailments.”
Strengthening the integrity of information is more important today than ever, when the risks facing information ecosystems include hate speech, restrictions on press freedom and the malicious use of AI, said Charlotte Scaddan, senior adviser at the United Nations on information integrity.
“Journalism and other reliable data is being scraped. It's being summarized. It's being used to train AI without permission and without compensation,” Scaddan said. “So effective responses really do require multistakeholder collaboration and action.”
In her participation in the talk “Antidotes from journalism: Strategies against disinformation,” Scaddan presented the U.N. framework “Global Principles for Information Integrity.”
The document offers a comprehensive framework built around five principles: trust and social resilience; healthy incentives; public empowerment; independent, free and pluralistic media; and transparency and research.
To ensure the integrity of information, it is also necessary to rethink the governance of digital platforms, said Guilherme Canela, head of the section for Freedom of Expression and Safety of Journalists at UNESCO, during the panel “How to counter disinformation on climate change.”
“Based on human rights, we have to try to involve all stakeholders,” Canela said. “We have to rethink the governance of this system, and that will have impacts in many areas.”
Canela mentioned the “Global Initiative for Information Integrity on Climate Change,” designed to confront disinformation campaigns that delay and undermine environmental action.
The spread of hate and mis- & disinformation online is causing grave harm to our world.
This past June, @antonioguterres launched the UN Global Principles for Information Integrity, aiming to foster a more humane information ecosystem. https://t.co/cMjeeRSF3e#YearInReview pic.twitter.com/OOsit3vkrS
— United Nations (@UN) December 28, 2024
Thais Lazzeri, founder of FALA, a Brazilian initiative, presented the newsletter “Observatório da Integridade da Informação,” launched in April and available in Portuguese and English.
The newsletter focuses on what Lazzeri called “the production chain of lies,” involving actors and companies that systematically generate and distribute false or misleading information to benefit specific interests. Lazzeri said the project “O Mentira Tem Preço” (“Lies Have a Price”), from FALA, describes that production chain of lies in relation to climate disinformation in Brazil.
“[The production chain of lies] works to give influence, money, and power to certain groups,” Lazzeri said. “So you have a series of economic, social and political interests, and there is a scenario that must exist for that to happen.”
Disinformation today is more than just the spread of fake news. Coordinated disinformation operations by foreign actors to influence a country’s public opinion, compromise elections, sow confusion in armed conflicts or polarize societies are increasingly common.
This type of influence through disinformation is known as Foreign Information Manipulation and Interference, or FIMI, said members of a panel that addressed the impact of these operations.
“The main objective of these operations and these actors is precisely to have an impact on the perception we have about information, about events, about key or sensitive issues,” said Esteban Ponce de León, resident fellow at the Digital Forensic Research Lab of the Atlantic Council.
AI platforms are used to carry out FIMI. Ponce de León illustrated this with a co-authored study analyzing the performance of Grok, X’s chatbot, in content verification during the conflict between Israel and Iran in June of this year.
The analysis focused on posts supporting narratives favorable to Iran, he said. The results revealed that far from helping verify information, Grok tended to amplify the confusion of X users.
“One of the main patterns [of FIMI operations] is precisely trying to portray the enemy with certain connotations, to victimize, to add specific elements related to moral or religious issues; the amplification of the crisis, making it look urgent, generating those types of ‘breaking’ labels to make it go more viral,” Ponce de León said.
Ioana Belu, scientific supervisor at the Sustainability Policy Lab at the University of Cambridge, spoke about a campaign that reportedly influenced Romania’s presidential elections in November 2024 through social media content and cyberattacks, allegedly from Russia.
Romania’s Constitutional Court annulled the first round of elections after evidence of a massive disinformation campaign, making it the first country in the European Union to cancel elections citing this cause.
Belu said this case showed how influence campaigns often target vulnerable points in society. In Romania’s case, paid content on Twitter or X particularly appealed to nostalgia and the persistent discrimination suffered by the Romanian diaspora in Western Europe.

“These accounts were able to be instrumental, because they [citizens] do not trust the system anymore,” Belu said. “And unless we address the social justice issues, these accounts on Twitter will continue to fall into the trap of the Russian disinformation campaigns, because the Russian disinformation campaigns know how to target these vulnerabilities.”
Scam Empire is a finalist for @ONA’s 2025 #OJA in the Al Neuharth Innovation in Investigative Journalism category.
Our cross-border investigation with @SVT + 30 partners exposed a global scam network that stole millions from victims. Learn more about the project:… pic.twitter.com/dgqtCjvFcC
— Organized Crime and Corruption Reporting Project (@OCCRP) August 27, 2025
The summit also addressed how online financial fraud has grown.
Antonio Baquero, regional editor in Europe for the Organized Crime and Corruption Reporting Project, or OCCRP, said in the panel “Dismantling major scams” that disinformation is fundamental for scams.
“Cyber scams are the fastest-growing crime in the world. No region of the planet is outside the reach of cyber scams,” Baquero said. “Disinformation is the bait that criminal networks use to lure victims.”
Baquero spoke about OCCRP’s collaborative investigation with more than 30 media outlets, “Scam Empire,” on two networks of cyber scammers who hooked victims through fake news on social media about public figures who had supposedly multiplied their money through investment funds.
The investigation found that affiliate marketing and Meta’s digital platforms facilitated the scams by allowing the spread of ads whose content led users to fraud, Baquero said.
Raphael Ramos Monteiro de Souza, Brazil’s national prosecutor for the defense of democracy, agreed that large tech companies could be complicit in such activities. The challenge, he said, is implementing new legal frameworks that make their responsibility clear.
“Any print or traditional newspaper that had an ad on its front page, on its portal, on its cover, and that ad turned out to be fraudulent, no one would question its responsibility,” Souza said. “Translating that to the digital environment, and given the size of that market, it is a responsibility that [tech companies] cannot escape.”
Beatriz Farrugia, data analyst at Agência Lupa, spoke about “A Jornada dos Golpes” (“The Journey of Scams”), an investigation into digital scams in Brazil that revealed how criminals trick and defraud victims. The process includes the use of manipulated videos, trusted brands and public figures, and offers of immediate financial gains.
“We have identified an explosive increase in recent years in the use of deepfakes in these fraudulent contents and scams,” Farrugia said. “And so all the elements appear: the economic benefit, the video with deepfakes, the personality, someone famous to lend credibility. Practically the recipe we have identified from scammers currently in Brazil.”