
Brazil and Colombia rank lowest in identifying false content online. Fact checkers aren’t surprised

Brazil and Colombia occupy the last and penultimate positions, respectively, in a ranking of people’s ability to identify false content online.

This is according to the recently published Truth Quest report, from the Organization for Economic Co-operation and Development (OECD). It measured the ability of populations in 21 developed or developing countries to identify false or misleading content online. It also looked at which types of information are most misleading.

The authors found that in countries where trust in news found via social media is higher, the ability to identify false information is lower. They also said that people's perception of their own skills in detecting false information has no correlation with their actual ability – that is, someone who believes they know how to identify false content does not necessarily have this talent.

The study included 2,000 people from each country, with variations in age, gender, subnational region, educational level and income. Considering the entire survey, respondents correctly identified the veracity of the content 60% of the time. The countries that scored best are Finland, where 66.4% of those surveyed correctly identified the veracity of the content; the United Kingdom, with 64.4%; and Norway, with 62.8%.

Among the Latin American countries, the rates were 56.5% for Brazil and 56.9% for Colombia. Mexico, the other Latin American country included in the survey, had a success rate of 56.9% and ranked 14th. The U.S. ranked 17th, with a 58.4% success rate in identifying false content.

“While differences among the countries in the middle of the distribution may not be large, the differences among countries at both ends of the distribution are sizable and understanding why they exist is important for designing effective media literacy strategies, programmes and policies,” the study said.

Research challenges

According to its authors, the study took more than two years to conduct, and one of the main challenges in preparing it was creating a study that was “culturally neutral,” that is, one capable of working across different countries. A kind of game was developed, in which participants had to determine whether the information presented was true or false.

The database used was real, but highly controversial topics, such as the COVID-19 pandemic or heads of state, were avoided.

“We had to be sure that we didn't pick stories or content that people had seen before, that people might feel emotionally charged about, that they might feel offended by. So in our survey there were no claims about presidents, about specific people, about COVID, because those are things that everybody's kind of used to,” Molly Lesher, senior analyst at the OECD and one of the authors of the report, told LatAm Journalism Review (LJR). The other authors are Hanna Pawelec and Mercedes Fogarassy, under the direction of Audrey Plonk.

One of the questions the research aims to answer is whether people can distinguish some types of false or misleading content better than others. Five forms of false or misleading content were considered: satire, disinformation, propaganda, misinformation and contextual deceptions.

The study defines “disinformation” as false or misleading information created to intentionally deceive or cause harm, such as fake news and deepfakes. “Misinformation” is misleading content shared without the intention of deceiving or causing harm.

Contextual deceptions are pieces of true information taken out of context or distorted to manipulate the perception of an event or subject. Propaganda is understood as content propagated by governments, companies or individuals to influence attitudes and opinions, often appealing to emotions. Lastly, satire is the use of humor and exaggeration in works of art or media, which can be misinterpreted when taken out of context.

Brazil showed the smallest variation in scores across the different types of content, with a gap of just 13 percentage points between disinformation and propaganda. Propaganda was the most difficult type of content for Brazilians to identify as false: only 48% managed to do so correctly.

Overall, the study found that education and income influenced the ability to identify disinformation and satire, with notable differences between groups with higher and lower education and income. 

Brazil and the U.S. were the only countries where the score for identifying true content was higher than the general average, showing that Brazilians and Americans correctly identify true content more easily than false content.

The topics researched were divided into three major thematic areas: environment, health and international politics. There were no significant differences between the themes in the ability to identify false content.

Another of the study's findings is that it is more difficult to identify false or misleading content generated by humans than by artificial intelligence (AI). The authors also cautioned that labeling content as AI-generated can bias people's perceptions, depending on how they view AI.

“As people go forward in thinking about policy options to address this issue, labels always come up as one of the best ways to do it. But it almost distorts things, because is AI content always false? No, right? You can generate true content too. But most people think when they see that label that AI-generated stuff is always false,” Lesher said.

Countries with the highest proportions of respondents who get their news through social media have lower overall Truth Quest scores. Conversely, countries with the highest Truth Quest scores have the lowest proportions of people who get their news via social media.

In addition to the report already published, two others based on the same survey are in development. The first, scheduled for release in November 2024, will address participants' behavior during the research, including how they interact with additional context provided and how this affects their ability to judge the veracity of information.

The final one, scheduled for publication in the first half of 2025, will investigate perceptions about democracy and political affiliation, and how these factors influence the ability to discern true from false information.

No surprises for fact checkers

The positions of Brazil and Colombia in the international ranking were not surprising for people who dedicate their lives to distinguishing facts from fiction.

Journalists who work in fact-checking in both countries said that everyday situations already pointed to the populations' low ability to distinguish between true and false information. They pointed to political polarization as one of the reasons for the poor performance.

“Unfortunately, it’s no surprise. There is a significant lack of media and information literacy. Furthermore, we also live in a scenario of great political polarization that crystallizes positions. In this logic of polarization, people are consumed by confirmation bias, believing what they want to believe,” Raphael Kapa, education coordinator at Lupa, in Brazil, told LJR.

Tai Nalon, executive director of Brazil’s Aos Fatos, emphasized that each country has its own particular context, and that the concepts of disinformation and misleading information are not always the same.

“It is very complicated to compare different political scenarios. The Brazilian scenario is different from the Mexican, or even the Colombian, where there is also a lot of polarization, but the situation is still quite different because of the civil war, and the problems are not the same as in Brazil,” Nalon told LJR.

Still, she said Brazil’s poor position in the ranking was expected.

“It is still serious that Brazil appears poorly ranked. There are many people who believe in hoaxes, false cures, digital fraud, and politicians who lie to get elected. Appearing poorly in the ranking is no surprise, it’s more of a testament,” Nalon said.

Ana María Saavedra, from ColombiaCheck, also pointed to political polarization as an explanation for her country's position.

“The polarizing figure of [President] Gustavo Petro generates anger and fear on both sides. Petro publishes a lot of misinformation on social media, but at the same time a lot of misinformation is shared against him. However, there is another figure, María Fernanda Cabal, from the opposition, and the same thing happens with her: she shares misinformation, but misinformation is also shared against her,” she said.

Regarding what can be done, Nalon said that solutions need to be structural.

“I don’t think this is an individual problem, it’s a collective problem, which must be attacked in a structural and systemic way via regulation, national legislation and other control mechanisms,” she said.

Kapa agrees, and proposes actions in the short, medium and long term.

“In the short term, it is necessary to adopt measures so that false information cannot generate so much engagement. We are not in favor of removing information, even if it is false, because this could be a path to censorship. But it is important to have a disclaimer to show that the information is false and to generate less engagement,” he said.

“In the medium term, we have to think about public policies to deal with disinformation,” Kapa said. “Finally, it is necessary to discuss media education in schools. The next generations will have access to this, and we need to invest in digital education to ensure that future generations are prepared.”

Translated by Teresa Mioli