With the rapid expansion of generative artificial intelligence (AI) in recent years, many news media outlets in Latin America have developed innovative initiatives that use the technology to strengthen their journalism. But few of those efforts have focused on the information needs of Indigenous communities.
Over the past year in Peru, two generative AI projects have been developed to create content for speakers of three of the country's Indigenous languages.
Beyond helping meet the information needs of populations that speak Quechua, Aymara and Awajún, the creators of both projects highlight the initiatives' contribution to preserving those and other Indigenous languages.
The digital investigative journalism outlet Ojo Público has been known since its founding for developing innovative technological projects to enhance its coverage. But in 2023, the site broke one more barrier, using that experience to help meet the information needs of a forgotten sector of the Peruvian population: Indigenous communities.
Ojo Público developed Quispe Chequea, a tool that uses generative AI to produce fact-checked content in text and audio in Indigenous languages. The media outlet is using the tool to facilitate the production of information for small newsrooms in Indigenous communities in Peru, mainly radio stations.
“As part of the evolution of the fact-checking projects promoted by Ojo Público, we detected that there was a need to facilitate the generation of content for these communities and that regional communicators could apply verification methodology, but [they also needed] some resource that would allow them to generate content quickly, more efficiently,” David Hidalgo, executive director of the media outlet, told LatAm Journalism Review (LJR).
It was during the COVID-19 pandemic that Ojo Público began collaborating with community radio stations in at least eight regions of the Peruvian Andes and Amazon in the generation of verified content translated into Indigenous languages. The media outlet realized that Indigenous communities were the population group most vulnerable to disinformation at that time, as they did not have information in their language about the disease.
With the boom in generative AI in recent years, Ojo Público thought about applying this technology to further boost its collaboration with community media. In 2023, it brought together a multidisciplinary team that included journalists, developers, data scientists and translators to create Quispe Chequea, with support from the Google News Initiative.
The tool has two components: one generates fact-checked text and the other generates audio in Indigenous languages.
The first component works through a content manager capable of generating text using ChatGPT resources. The journalist enters pieces of information previously verified under the Ojo Público methodology, along with other necessary evidence, and the tool then drafts a text.
Once the text is generated, an editor must review it and correct or add information, as deemed necessary, until the material meets the media outlet’s verification standards.
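Ojo Público has not published the internal workings of its content manager, but a minimal sketch in Python can illustrate the kind of step described above: passing already-verified inputs to the ChatGPT API so it drafts a text that a human editor then reviews. The model name, prompt wording and data fields below are illustrative assumptions, not the outlet's actual implementation.

```python
# A minimal sketch, not Ojo Público's actual content manager: it only shows how
# verified inputs could be passed to the ChatGPT API to draft a fact-check.
# The model name, prompt wording and data fields are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

verified_inputs = {
    "claim": "Statement circulating on social media (already rated by the journalist)",
    "rating": "False",
    "evidence": [
        "Official figure published by the health ministry (link checked by the editor)",
        "Statement from the regional authority (interview conducted by the newsroom)",
    ],
}

prompt = (
    "Write a short fact-check article in Spanish based strictly on the "
    "verified data below. Do not add facts that are not listed.\n"
    f"Claim: {verified_inputs['claim']}\n"
    f"Rating: {verified_inputs['rating']}\n"
    "Evidence:\n" + "\n".join(f"- {item}" for item in verified_inputs["evidence"])
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
print(draft)  # the draft still goes to a human editor before publication
```

As the article notes, a sketch like this only speeds up drafting; the verification itself, and the final review of the generated text, remain the journalist's responsibility.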
Hidalgo emphasized that Quispe Chequea (whose name comes from Quispe, one of the most common surnames in Quechua, with a meaning that alludes to clarity and transparency) is not a tool that verifies information, but rather one that organizes the data entered by the user into a text with the characteristics of a fact-check.
“The journalist not only sits and waits for the machine to give him a check, but he has to know and master the verification process and all the guidelines it has with international standards,” he said. “If you place a wrong source or link, the platform will generate content with that data. We do not avoid the responsibility of the journalist. What this platform does is provide a solution so that text is generated very quickly.”
The audio generation comes once the editor approves the text, chooses the desired language (currently the tool is capable of generating audios in Spanish and in the Indigenous languages Quechua, Aymara and Awajún) and presses a button. The platform then translates the content and outputs a file in MP3 format that emulates a speaker's narration.
However, generating voice was not the most complicated part of developing the tool. Currently, there are multiple text-to-speech technology tools capable of recreating the human voice with AI. For this project, the team used Tacotron 2, a software developed by researchers at Google.
The complex part, according to Hidalgo, was getting Quispe Chequea to translate the texts into the Indigenous languages, for which the existing translation resources were very limited. To do this, the team had to develop a translator as part of the tool. A group of journalists from OjoBiónico, the fact-checking unit of Ojo Público, and three interpreters created a database for each language, with thousands of common phrases for each one.
Subsequently, the interpreters recorded each phrase in their own voice until obtaining a sound bank of around four hours per language. With this material, the developers "trained" the system, built on the Tacotron 2 architecture, to convert texts into audio in the three Indigenous languages. After several rounds of adjustments and training, they obtained a system capable of producing natural-sounding voices.
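Ojo Público trained its own voices on the interpreters' recordings, and those models are not public. Purely as an illustration of the pipeline such a system runs (text to mel-spectrogram to waveform), here is a sketch that uses NVIDIA's pretrained English Tacotron 2 and WaveGlow checkpoints from torch.hub; it assumes a machine with a GPU, and the English checkpoints stand in for the per-language models the team trained.

```python
# Sketch of Tacotron 2 inference with NVIDIA's pretrained checkpoints from torch.hub.
# This is not Ojo Público's system; it only illustrates the text -> mel-spectrogram
# -> waveform steps. Requires a CUDA GPU, PyTorch and scipy.
import torch
from scipy.io.wavfile import write

# Acoustic model: text to mel-spectrogram
tacotron2 = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub",
                           "nvidia_tacotron2", model_math="fp16")
tacotron2 = tacotron2.to("cuda").eval()

# Vocoder: mel-spectrogram to audio waveform
waveglow = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub",
                          "nvidia_waveglow", model_math="fp16")
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow = waveglow.to("cuda").eval()

# Helper that converts raw text into the input sequence the model expects
utils = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub", "nvidia_tts_utils")
text = "This verified text is ready to become audio."  # pretrained checkpoint is English-only
sequences, lengths = utils.prepare_input_sequence([text])

with torch.no_grad():
    mel, _, _ = tacotron2.infer(sequences, lengths)
    audio = waveglow.infer(mel)

# 22,050 Hz is the sampling rate of the pretrained checkpoints
write("output.wav", 22050, audio[0].data.cpu().numpy())
# Converting the WAV file to MP3 (for example with ffmpeg) would be a separate step.
```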
“It has been quite hard work, especially collecting the data to train this model,” Gianella Tapullima, fact-checking editor at Ojo Público, told LJR. “They were phrases that we obtained from scraping the fact-checks that we have carried out over the years [in those languages].”
The translation into Awajún, the language of the second-largest Amazonian Indigenous group in Peru, was another major effort. For translations into Quechua and Aymara, the developers of Quispe Chequea turned to Google Translate, which includes both Indigenous languages. But for Awajún, the team had to create an in-house machine translation model.
That required a database of more than 20,000 phrases in Awajún, which is considered a low-resource language, that is, one with little presence in the digital universe. The phrases were therefore taken from some of the few existing sources, such as a version of the Bible in that language, as well as stories, poems and government documents.
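The article does not say which architecture or framework Ojo Público's in-house Awajún translator is built on. As a rough sketch of how a small parallel corpus like the one described (roughly 20,000 Spanish–Awajún phrase pairs) could be used to fine-tune a pretrained sequence-to-sequence model, here is an example with the Hugging Face transformers library; the base checkpoint, file name and column names are placeholders, not the team's actual choices.

```python
# Hypothetical sketch: fine-tuning a pretrained seq2seq model on a small
# Spanish–Awajún parallel corpus. The CSV file name, the "es"/"agr" column
# names and the base checkpoint are assumptions for illustration.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "Helsinki-NLP/opus-mt-es-en"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Parallel corpus: one Spanish phrase and its Awajún translation per row
dataset = load_dataset("csv", data_files="es_agr_pairs.csv")["train"]
dataset = dataset.train_test_split(test_size=0.1)

def preprocess(batch):
    model_inputs = tokenizer(batch["es"], max_length=128, truncation=True)
    labels = tokenizer(text_target=batch["agr"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=["es", "agr"])

args = Seq2SeqTrainingArguments(
    output_dir="mt-es-agr",
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=10,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```

With a corpus this small, the quality of the sources (as with the Bible translation and government documents mentioned above) and careful evaluation by native speakers matter as much as the training setup itself.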
“There was a prior investigation by the development team about whether there was a tool adapted to languages with few resources,” Hidalgo said. “It was found that there were experiences with the development of a similar model for Sanskrit. And from there they developed it and suggested using Tacotron 2.”
Quispe Chequea has been tested with Indigenous communicators and they have approved the accuracy of the results, Tapullima said. However, the team intends to improve the audio quality, for which they are testing other software, such as FastPitch.
“It was important to us that the message be understood,” she said. “There are things, of course, that still need to be improved, and that is done by generating more data and expanding the amount of data even more.”
Currently, Ojo Público is carrying out a training program with 12 radio stations in the Amazonian and Andean regions of Peru, not only on how to use Quispe Chequea, but also on the media outlet's fact-checking methodology.
One of those stations is Radio Uno, in the city of Tacna, in southern Peru, which reaches several remote communities in the Andean region of the country, on the borders with Chile and Bolivia.
In 2023, the station began to include headlines in Aymara in its news summaries. With the help of Quispe Chequea, the station could increase content in that language and thereby benefit the Indigenous communities it reaches.
“Having this platform with a full translation of the articles in Aymara would bring the information even closer to these sectors. Radio Uno itself reaches remote areas on the AM dial, not just on FM,” Doris Rosas, editor of the station's website, told LJR. “What could be better than being able to listen to it in their native language?”
Rosas and Fernando Rondinel, manager of Radio Uno, are the members of the station's team who participate in the training on Quispe Chequea. Rosas said that her colleagues whose native language is Aymara have so far approved of the quality and accuracy of the messages the tool delivers.
Although Ojo Público had support from the Google News Initiative for the development of the tool, the challenge for this year, Hidalgo said, is to find a business model to make it sustainable and improve it with the inclusion of more languages, without charging the community media outlets usage fees.
“I think [Quispe Chequea] sets an important precedent, both in the country and in the world. There is a problem of marginalization of Indigenous peoples in the development of artificial intelligence,” Hidalgo said. “I believe that the project helps precisely to provide that diversity and inclusion of the communities, and also the preservation of the Indigenous languages in our country.”
Illariy is the presenter of the university news program “Letras TV Willakun,” on the channel of the College of Letters and Humanities of the National University of San Marcos (UNMSM), in Lima, Peru. Illariy speaks Quechua, the most widely spoken Indigenous language in the country, and thanks to that she has become a celebrity among the university community, which watches her every day on the screens of the institution's television system.
The curious thing is that Illariy does not exist in real life. She is an avatar created with AI that presents weekly news of interest to the university community.
“We found in artificial intelligence a tool to perpetuate tradition and to perpetuate the language, to avoid the extinction of the language,” Carlos Fernández, professor of Social Communication at UNMSM and leader of the Illariy creative team, told LJR.
The avatar was created after Fernández and his team produced a spot to promote the institution's Graduate Studies in Literature, made entirely with AI. For this spot, Peruvian writer José María Arguedas was recreated through generative voice and video tools.
The spot was well received at the university, so Fernández thought about doing something similar for the next season of “Letras TV Willakun,” which has presented news in Quechua for the university community since 2019. They decided to create an avatar that would appear on screen narrating the news. To do this, they used Dall-E, the tool from OpenAI (the organization that created ChatGPT) that generates images from text.
The team asked the tool to generate a woman with Andean physical features and characteristics of the region's population. In addition, they gave precise instructions about the attire she should wear. This is how Illariy was born on March 20, 2023. Her name means “dawn” in Quechua.
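The exact prompt the team gave Dall-E has not been published. Purely as an illustration, a request like the following to OpenAI's image API shows the general mechanics of describing physical features and attire in text and receiving a generated portrait in return; the model name and prompt below are assumptions.

```python
# Illustrative only: the UNMSM team's actual prompt and settings are not public.
# This sketch shows how an avatar image can be requested from OpenAI's image API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",  # placeholder model choice
    prompt=(
        "Portrait of a young Andean woman news presenter wearing traditional "
        "attire from the Peruvian highlands, studio lighting, realistic style"
    ),
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```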
The next step was to get Illariy to speak Quechua fluently on her own. To do this, the team turned to D-ID, the platform from the Israeli company of the same name capable of making still images “come to life” and speak based on text prompts.
However, they ran into a problem.
“All text-to-speech artificial intelligence platforms had the particularity of only having modern languages available. There was Spanish, French, English…, but what did not exist were Indigenous languages,” Fernández said. “Quechua is not in the equation of the large transnational artificial intelligence companies.”
The professor, an expert in emerging technologies applied to journalism, had an innovative idea to overcome the obstacle. Together with Óscar Huamán, a researcher in the university's Quechua Language Studies, he created a phonetic template of Quechua words written out using Spanish spellings.
The team fed these terms into the D-ID system to make Illariy speak. After several rounds of trial and error, they noticed that the terms, pronounced by the avatar in Nicaraguan Spanish, sounded very similar to fluent Quechua.
“So we began to look for a series of equivalences so that the avatar would sound like [it was speaking] Quechua despite using Spanish,” Fernández said. “You put in the text and because of the way Nicaraguans pronounce certain consonants, it sounded more like Quechua.”
There are already several examples of news presenters in the region generated with AI. But for Fernández, the main innovation of his project is that, through a phonetic transcription, they made it possible for Illariy to narrate the news in an Indigenous language.
Huamán, whose native language is Quechua, is the one who receives the articles in Spanish from the news team, translates them into Quechua and transfers them to the phonetic template. Then, the words from the template are entered into the system and a first audio version of the articles is generated.
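The equivalence tables Fernández and Huamán built are not public, so the mappings in the sketch below are invented placeholders. It only shows the mechanics of the workaround: respell the Quechua text using Spanish spellings so that a Spanish-language text-to-speech voice pronounces something close to Quechua.

```python
# The actual Quechua-to-Spanish equivalences developed by Fernández and Huamán
# are not published; the pairs below are invented placeholders. The sketch only
# shows the mechanics: respell the Quechua text with Spanish spellings so a
# Spanish-language text-to-speech voice pronounces something close to Quechua.
PHONETIC_MAP = {
    # hypothetical "Quechua spelling" -> "Spanish respelling" pairs
    "chh": "ch",
    "qu": "j",
    "w": "hu",
    "k": "c",
}

def respell_for_spanish_tts(quechua_text: str) -> str:
    """Apply the substitution table, longest keys first, so digraphs win."""
    result = quechua_text.lower()
    for source in sorted(PHONETIC_MAP, key=len, reverse=True):
        result = result.replace(source, PHONETIC_MAP[source])
    return result

# The respelled string would then be sent to the avatar platform with a
# Spanish (Nicaragua) voice, as described in the article.
print(respell_for_spanish_tts("Allin p'unchay kachun"))
```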
“In the editing process, I have to go in again and pass another filter, [to see if] both the message and the sound are coherent or not,” Huamán told LJR.
The linguist said that, so far, Illariy is between 80 and 90 percent accurate compared to Quechua spoken by a human.
“Due to the very structure of the language, sentences have a tonality. That's what [Illariy] is missing,” he said. “That can be improved by working with the segmentation [of words], putting them in capital letters and, in some cases, adding accents.”
In experiments, they have also managed to make the avatar speak Aymara and Awajún, although so far the weekly newscast the avatar stars in is only in Quechua.
“What we are doing here we can recreate with Awajún, with Aymara and with other dialects of Quechua, and ensure that these 48 Indigenous languages [that exist in Peru] are not lost,” Fernández said. “Languages are not only words; cultural identity is also involved.”
In almost a year, Illariy has gone from presenting university news to being present on several platforms: she now teaches Quechua on TikTok and has her own application in the GPT Store, Illariy Willarisunki (“Illariy tells you”), which generates stories in Quechua from prompts in Spanish.
The team plans to improve the avatar's image and the quality of its movements in 2024, as the character could soon move beyond the university audience to a mass one. Fernández said the UNMSM is in talks with some traditional media outlets to collaborate with Illariy.
Although the project's budget is covered by the university, the professor stressed that Illariy's production costs are close to zero, apart from the premium versions of some of the generative AI tools they used, which are relatively affordable. Other tools, he said, were used during free trial periods.
For Fernández, Illariy is also an example that refutes the belief that AI has come to replace human beings in their jobs. In the case of the UNMSM avatar, on the contrary, it has generated more work.
“She supposedly took the job of Óscar Huamán, the person who previously did the newscast,” he said. “But she ended up giving him more work, because now he not only has to translate, but now he is in charge of phonetic transcription.”