Artificial intelligence (AI) continues to be distributed unevenly between large and small newsrooms and between countries in the Global North and South. The benefits of AI are concentrated in the Global North, while many countries in the Global South face challenges such as poor infrastructure, a shortage of professionals trained in AI, and language barriers.
These are some of the main conclusions of the report "Generating Change — A global survey of what news organizations are doing with AI" by JournalismAI, the London School of Economics and Political Science's (LSE) initiative on journalism and AI.
The report, launched on Sept. 20, contains the results of a survey of 105 news organizations in 46 countries that took place between April and July 2023 about how AI is being used in newsrooms. Of these, 16 are in seven Latin American countries: the Brazilian Association of Investigative Journalism (Abraji, by its Portuguese acronym), Folha de S.Paulo, PodSonhar, and Rede Gazeta in Brazil; Chequeado, La Gaceta de Tucumán, La Nación, Perfil, and TN in Argentina; Cuestión Pública, El Tiempo, and Mutante in Colombia; El Surti in Paraguay; T13 in Chile; TV Azteca in Mexico; and Unitel in Bolivia.
Authors Charlie Beckett and Mira Yaseen define artificial intelligence as “a collection of ideas, technologies, and techniques that relate to a computer system’s capacity to perform tasks normally requiring human intelligence.” Generative AI, the technology behind tools such as ChatGPT, is defined as “a subfield within machine learning (ML), a subfield of AI in its own right, that involves the generation of new data, such as text, images, or code, based on a given set of input data.”
This newly released study expands on the research carried out by JournalismAI in 2019 and reports on the advances in the use of AI in journalism over the last four years. In particular, “the arrival of generative AI (genAI) in the last year has accelerated all these trends and created new disruptions,” emphasizes the report.
In a chapter focusing on the global disparity in the development and adoption of AI, the authors state that “to collectively benefit from AI technologies in a more equitable manner, we ought to gain a better understanding of how and why global AI inequality exists.” This requires us to “pay close attention to the challenges of AI adoption faced by the majority of the world’s population, which resides in Global South countries.” The region includes “formerly colonized countries in Africa and Latin America, as well as the Middle East, Brazil, India, and parts of Asia,” according to the report.
While AI poses a number of challenges for all sectors, including journalism, for newsrooms in the Global South “the challenges are much more pronounced,” according to the authors. “Respondents in these countries highlighted knowledge gaps, resource constraints, language barriers, as well as infrastructural, legal, and political challenges,” they said.
Respondents from the Global South cited as challenges the difficulty of finding and hiring professionals who specialize in AI; low digital literacy and the spread of disinformation among the public; and the fact that most AI tools are in English.
In addition, there are barriers to accessing these tools in some countries: ChatGPT itself, considered by the report's authors to be “the most famous publicly accessible genAI tool,” is not available in several countries, including Cuba and Venezuela. OpenAI, the developer of ChatGPT, does not support the use of the tool in some countries “most likely due to U.S. sanctions,” says the report.
Despite these difficulties, respondents from the Global South “expressed enthusiasm for building capacity in, and sharing AI expertise.” They mentioned the importance of collaboration among newsrooms facing similar challenges in order to overcome them. They also advocated collaboration between newsrooms in the Global South and North as a way to mitigate the global disparity in the use of AI in journalism.
The vast majority of respondents, 85%, said they had already experimented with generative AI in various ways, such as writing code, generating images, and creating news summaries. More than 60% of respondents expressed concern about the ethical implications of AI use for editorial quality and other aspects of journalism. “Journalists are trying to figure out how to integrate AI technologies in their work upholding journalistic values like accuracy, fairness, and transparency,” says the report.
Respondents said they use AI in newsgathering (75%), news production (90%), and distribution (80%). In newsgathering, some of the tools used perform optical character recognition (OCR), speech-to-text transcription, and text extraction, as well as automated translation. Newsrooms also use tools to detect trends and discover news, such as Google Trends, along with web scraping and data mining applications.
In news production, respondents said they use AI applications for fact-checking; editing and proofreading text; writing news summaries and headlines; and programming code. In distribution, they mentioned technologies used to expand the reach of content and deepen audience engagement, such as content personalization and recommendation systems to reach interested audiences, as well as tools to improve SEO and optimize the sharing of posts on social media.
The authors highlight the need for each newsroom to develop its own strategy for integrating AI into its processes, and suggest six steps in this direction:
1. Get informed: Look for books and resources to learn about the possibilities of using AI in newsrooms. The Knight Center for Journalism in the Americas, which publishes the LatAm Journalism Review (LJR), is open for registration for the course “How to use ChatGPT and other generative AI tools in your newsroom.”
2. Broaden AI literacy: Everyone needs to understand the components of AI that most affect journalism, because the technology will change the work of all staff, not just editorial and not just "tech" people, the authors say.
3. Assign responsibility: Someone should be responsible for monitoring developments in the use of AI, both within the organization and more broadly, and for sharing those possibilities with the team.
4. Test, iterate, repeat: Experiment with AI and scale it up, but always with human supervision and management, and always analyze the impact on the organization.
5. Draw up guidelines: Develop general or specific guidelines for the use of AI in your newsroom, knowing that they should be reviewed and may change over time.
6. Collaborate and network: There are many institutions working in this field and reflecting on the uses and possibilities of AI. Talk to them and other news organizations to find out what they are doing, and open yourself up to the new possibilities for collaboration offered by this technology.
Banner image: Jonathan Kemper / Unsplash
Featured image: Viralyft / Unsplash