
These Brazilian newsrooms are using AI to expose online hate and track federal policy

  • By Aline Gatto Boueri
  • February 23, 2026

Two Brazilian media outlets focused on gender and race coverage have launched tools that combine artificial intelligence, feminist theory and data journalism to monitor online hate speech and bills on the rights of girls, women and LGBTQ people.

Radar Antigênero, developed by the media organization Gênero e Número, is a free platform that allows users to search, classify and analyze YouTube videos that can promote gender misinformation and hate speech directed at women, girls and LGBTQ+ people. Using artificial intelligence, the tool helps visualize the flow and dissemination strategies of this type of content.

QuiterIA, from the AzMina Institute, collects and classifies legislative proposals from a feminist perspective, helping users track and evaluate initiatives that could affect girls, women and LGBTQ communities.

These tools facilitate the creation of databases that can feed into news reports and help monitor trends in public debate in the year that Brazil returns to the polls for national and regional elections. Below, a closer look at how each one works.

AI to track hate speech

Launched in September 2025, Radar Antigênero allows users to search a catalog of YouTube videos that promote hate speech against women or LGBTQ communities – commonly described as “antigender speech.” It is a collaboration between the team at Gênero e Número, an independent newsroom based in Rio de Janeiro, and the data analytics firm Novelo Data.

To develop Radar, the team conducted research using terms commonly employed in hate speech. The initial data collection captured a large volume of videos, which were transcribed using Whisper, OpenAI's open-source speech-to-text model. The transcripts were organized into a database and manually classified.

Based on this initial work, the project team assessed which keywords return content with greater impact on YouTube, whether due to reach or volume of interactions, and selected 36 channels that systematically produce anti-gender discourse.
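The impact assessment described above can be sketched in a few lines of Python. This is a minimal illustration, not the team's actual code: the video metadata, field names and sample values are all hypothetical, and it simply aggregates reach (views) and interactions (likes) per search term, then ranks terms by reach.

```python
from dataclasses import dataclass

@dataclass
class Video:
    channel: str
    keyword: str  # search term that surfaced this video
    views: int
    likes: int

# Hypothetical sample of collected video metadata (illustrative values only).
videos = [
    Video("channel_a", "term_1", 120_000, 8_000),
    Video("channel_b", "term_1", 45_000, 1_200),
    Video("channel_a", "term_2", 9_000, 300),
]

def impact_by_keyword(videos):
    """Aggregate total reach (views) and interactions (likes) per search term."""
    totals = {}
    for v in videos:
        reach, interactions = totals.get(v.keyword, (0, 0))
        totals[v.keyword] = (reach + v.views, interactions + v.likes)
    return totals

# Rank keywords by total reach, mirroring the team's impact assessment.
ranked = sorted(impact_by_keyword(videos).items(),
                key=lambda kv: kv[1][0], reverse=True)
```

In a real pipeline the metadata would come from the YouTube Data API rather than a hand-written list, and the ranking could weight interactions as well as reach.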

The result is a platform with a visual interface that allows users to search for videos from 2018 to 2026. Results display each video's producer, publication date, number of views and “likes.”

The Radar team presented the platform to a group of experts from the fields of technology, data science and gender studies to help train the AI, says Vitória Régia da Silva, executive director of Gênero e Número.

“This contributed to the formulation and validation of the methodology, which can always be adjusted because discourse is dynamic,” Silva told the LatAm Journalism Review (LJR). “It was a process of much research and testing.”

The team organized and classified the content by thematic axes, discursive strategies and central targets of the attacks.

According to data released by Radar, 65% of the videos analyzed between January 2018 and August 2025 promote traditional gender roles, 25% contain anti-feminist messages and 20% use moralistic arguments.

“At first, we thought this tool wasn’t for everyone, but we realized it can support anyone who wants to think about public policies, produce knowledge and follow narratives on the subject,” Silva said. “So it can, indeed, be for everyone.”

“Feminist AI” monitors Brazilian Congress

Trained to monitor legislative activity in the National Congress with a focus on the rights of women, girls, and LGBT+ people, QuiterIA was launched in November 2025 by feminist organization Instituto AzMina, based in São Paulo.

Named after Maria Quitéria de Jesus, a heroine of Brazilian independence, the artificial intelligence was trained using data collected since 2019 for the Elas no Congresso (Women in Congress) project. The data is available in publicly accessible spreadsheets.

According to Ana Carolina Araújo, general coordinator of QuiterIA, training with a feminist, intersectional and anti-racist perspective is fundamental to distinguishing the tool from other models.

“Most AIs that the public has access to are trained on data from the internet. So, if a dataset with several sexist statements comes along, for example, the AI tends to repeat those biases,” Araújo said. “For us, it’s very important to have a specialized perspective, that the AI can replicate a feminist perspective.”

QuiterIA uses contributions from human rights and feminist organizations to classify bills as favorable or unfavorable and to retrain the model. When the artificial intelligence contradicts expert assessments, human judgment prevails, Araújo said.
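The human-in-the-loop rule described above can be sketched as follows. This is a hypothetical illustration, not QuiterIA's code: the record format, bill IDs and labels are invented, and the sketch simply shows expert assessments overriding the model's output and the disagreements being collected as retraining examples.

```python
from typing import Optional

def resolve_label(ai_label: str, expert_label: Optional[str]) -> str:
    """Human-in-the-loop rule: an expert assessment, when present,
    always overrides the model's classification."""
    return expert_label if expert_label is not None else ai_label

def retraining_examples(records):
    """Collect (bill_id, corrected_label) pairs where experts
    disagreed with the AI, to feed back into model retraining."""
    return [(bill_id, expert) for bill_id, ai, expert in records
            if expert is not None and expert != ai]

# Hypothetical records: (bill_id, ai_label, expert_label or None).
records = [
    ("PL-001", "favorable", None),           # no expert review yet
    ("PL-002", "favorable", "unfavorable"),  # expert overrides the model
]
final_labels = [(bid, resolve_label(ai, exp)) for bid, ai, exp in records]
```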

Adriano Belisario, from the QuiterIA development team, argues that the debate on artificial intelligence needs to incorporate nuances in order to move forward.

“We have a wide range of uses, not all of which are related to Big Tech,” he said in a conversation with LJR. “There are initiatives producing artificial intelligence in contexts that are not profit-driven, but rather aimed at guaranteeing rights.”

QuiterIA also grew out of debates that feminists in the technology field were already developing. Araújo emphasizes that a feminist AI does not extract data and works with smaller models, which have less environmental impact and can run on personal computers.

“Our idea for using feminist AI is not just to prevent harm,” Araújo said. “We deliberately want to generate access to rights, generate regeneration and see transformation.”

This article was translated with AI assistance and reviewed by Teresa Mioli.

Republish this story for free with credit to LJR. Read our guidelines.