Data structuring and collaborative design: Panelists at 25th ISOJ highlight uses of AI in newsrooms to improve journalism

The afternoon of the first day of the 25th International Symposium on Online Journalism (ISOJ) began with a panel on artificial intelligence (AI) in newsrooms, one of the most prominent topics in the international debate on journalism.

Trei Brundrett, consultant for the Product & AI Studio and Startups Studio at the American Journalism Project, moderated the panel “AI in the newsrooms: What is working now and how it is helping to improve journalism.” Brundrett compared the arrival of AI to the beginning of the internet and its initial impact on journalism.

“Why we have ISOJ is that the internet came along and was transformative for our industry and how we served our audiences and pursued our missions. Some folks were excited and some folks weren't, but what was important was that we learned how it was going to help us do our important work. (...) it's important that we learn about this new technology so that we are well-informed about what the challenges and dangers are, but also what the big opportunities are,” he said.

Successes and challenges

Lilian Ferreira, general manager of strategy and metrics at UOL, shared how the Brazilian site has tested and used several artificial intelligence tools to produce journalism and deliver it to its audience. Her presentation was made through a video produced with AI tools that used Ferreira's image and voice to create an audiovisual piece.

Among the applications developed by UOL is a transcription tool based on Whisper, from OpenAI. It can even transcribe interviews and speeches live, which facilitated the work of journalists covering press conferences and politicians' speeches at events, she said. The site also developed a tool that generates texts based on its own archive.

“After the text is written there is always an editor looking and seeing if everything is correct and then we always mention that the article was written based on data from other articles,” Ferreira said.

In addition to the successes, she also shared the difficulties that UOL faced in developing some tools, such as the presenter Dia.

“In the end it was very expensive for us to generate this presenter using AI,” Ferreira said. “Instead of making our work easier, she ended up giving us even more work, so we retired Dia for now.”

“We also test several tools. Don't think that the first one will be the best nor that you will find a perfect tool that meets all your needs. We don't think all the results are great, but we believe the test is always valid,” she said.

AP maps use of generative AI in newsrooms

Aimee Rinehart, senior product manager for AI strategy at The Associated Press, shared the results of a survey on generative AI conducted in December 2023. The study “Generative AI in Journalism: The Evolution of Newswork and Ethics in a Generative Information Ecosystem” was launched on April 9 and had the participation of almost 300 media professionals, mostly from North America and Europe.

“The top-level findings are that many of the explorations of AI are happening in content production,” Rinehart said.

Among respondents, 81.4% said they had some knowledge of generative AI and 73.8% said they had already used the technology in some way. Nearly half said tasks and workflows in their newsrooms had already changed because of generative AI.

According to her, there is an “unmet opportunity” for the design of new interfaces to support the work of journalists with generative AI, especially to enable the necessary supervision and checking.

“Journalists will need well-designed editing interfaces in order to effectively use generative AI for various tasks. And let's face it, we're all here because of a user interface called ChatGPT, so that really made people aware of the capabilities of AI in a very felt way and it's going to have to translate something similar into the newsroom,” Rinehart said.

Some of the research's conclusions, according to Rinehart, are that AI use policies need to be more concrete to guide more responsible use of the technology in journalism; that there are many claims about AI efficiency and tools but no concrete evidence, so more research is needed on which tasks and uses actually provide efficiency and performance gains; and that the tools themselves could be more rigorously evaluated to ensure alignment with journalistic expectations and norms.

Collaborative design

Andrew Rodriguez Calderón, data project lead at The Marshall Project, introduced himself as a member of several social groups that have been “the subject of exploitation, extraction or exclusion by both the media industry as well as the tech industry.”

“When we adopt new technologies, we bring all of that history with us and I think that it's really important to develop practices as we use this new technology that acknowledges that past,” he said.

He proposed a reflection on how collaborative design can help journalists apply AI to serve people affected by the issues being covered. Such a methodology involves understanding historical and social contexts and welcoming the contributions of affected communities so that tools created with AI have a positive impact.

Calderón cited the example of a tool developed by The Marshall Project that allows people to check which books are banned in prisons in each U.S. state, based on a survey of the states' public policies on the subject. The tool was developed in collaboration with incarcerated people, people who have been released from prison and their families, among other interest groups, who contributed at each stage of the process to build something that expands access to rights.

“AI and design and journalism together as tools in that design process can produce empowering stories and products, can create alternate pathways for accountability that are more ground up, and can also address fundamental information gaps at scale,” Calderón said.

“My main takeaway is that I don't think AI is the end-all, be-all; it is a tool, but it is not neutral. It's important that we have a framework that informs the way that we use AI as a tool and how it perpetuates and motivates our journalism,” he said.

Structuring data

Zach Seward, editorial director of AI initiatives at The New York Times, commented that generative AI enchanted the general public with the launch of tools capable of writing poems, creating images and even penning songs based on instructions given by users.

“The whole thing can be a blast, but a year later it seems clear that introducing the world to generative AI through parlor tricks like that created some distorted impressions of what the technology is good for or at least what it's best at,” he said.

For Seward, the most powerful use of generative AI is not creating entirely new text or images, but creating structures from “messy” data that already exists. Celebrated projects using generative AI in journalism, such as that of The Marshall Project cited by Calderón, have in common the fact that they are not actually generating something new, but are rather “creating summaries, extracting information and structuring data in a more usable form,” he said.

He cited other examples in this regard, both carried out by citizens and by his colleagues at The Times.

“Real life is messy. Journalism, at its best, helps people make sense of that mess. Using AI to give structure to messy data is therefore a pure form of journalism,” Seward said.

Translated by Teresa Mioli