Colombian journalist Yolanda Ruiz shared on her Twitter (now known as X) account a video in which she read a column explaining how sexist insults could lead to femicides. The platform interpreted the video as incitement to hatred and suspended her account.
Cases like this, in which users are unfairly punished on social media, are frequent. It has also been documented how political satire can be interpreted as an endorsement of terrorism, or how discussion of the war in Ukraine can be flagged as graphic violent content.
For this reason, at the end of 2021 the Linterna Verde [Green Lantern] organization launched the Circuito [Circuit] project — a resource center to help content creators understand platforms' content rules and defend themselves when they are wrongly sanctioned.
"Our goal was to be a resource in the defense of freedom of expression and other human rights in social media," Alejandro Moreno, project coordinator, told LatAm Journalism Review (LJR).
Linterna Verde is an independent non-profit organization that researches how public opinion is formed on social networks. It is based in Colombia, although the team works remotely.
"By mid-2022 we broadened our focus to encompass discussions coming up on content moderation, attention economy, platform regulation and, more recently, artificial intelligence. We are an information and analysis project that from these areas assesses the impact of social media on democracy," he said.
On entering the Circuito page, two sections appear: cases and stories. According to Moreno, the cases show situations in which users are punished by mistake, either because of lack of context or because of wrong interpretations by human moderators or by the automated systems that monitor online content, as happened with Ruiz.
"These cases are important because they allow us to see how the moderation systems work, what their failures are, and at the same time expose violations of freedom of expression online and other human rights. We are especially interested in cases involving activists or journalists," Moreno said.
Circuito received support from Google in its initial stages and during its first year of operation. Currently, it operates as an internal initiative of Linterna Verde.
Circuito also has a newsletter where the team shares "articles accompanied by an update on the constantly changing rules of the platforms, which define the rules of public debate online, as they establish what can and cannot be said on social media," Moreno said.
Circuito analyzes the intersection between social networks and democracy in Latin America. One particular focus it has been working on is the tensions between platforms and anti-democratic attempts to regulate them.
For example, for a few years now in Brazil there has been talk of Bill (PL) 2630/20, called the "Brazilian Law on Internet Freedom, Accountability and Transparency" and informally known as the "Fake News PL." The bill sought to force digital platforms to submit transparency reports, conduct risk assessments and pay news organizations for the use of their content.
Circuito criticized tech companies for their very aggressive response to the bill, which highlighted these companies' concentration of power and their ability to influence public debate.
The Circuito team has also worked on making public the platforms' integrity policies in electoral matters. "In this election year in the region, with local elections in Colombia and presidential elections in Ecuador and Argentina ahead, we are going to see the effects on public debate of a significant change X recently implemented: no longer penalizing unsubstantiated fraud claims," Moreno said.
"This is a particularly important type of electoral disinformation, because it is related to narratives that have led to violent events in offline spaces, such as what happened on Capitol Hill in Washington in 2021 and in Brasilia earlier this year," he said.
For Moreno, journalism, a priority coverage area for Circuito, remains vulnerable to unfair restrictions stemming from the erroneous or disproportionate application of rules by platforms.
"At the core are structural problems — the difficulty of controlling problematic online content at scale and the way in which human moderators work (outsourced and in very precarious conditions)," Moreno said.
"In addition, we have found that platforms' third-party verification programs, which rely on media and fact-checking agencies to flag misleading information, have exposed journalists to threats and coordinated actions," he said.