At the “quintessential” panel of “every journalism conference,” as described by Nikita Roy, founder of the Newsroom Robots Lab, experts discussed artificial intelligence (AI) and its impact on newsrooms during the first day of the 26th International Symposium on Online Journalism (ISOJ).
“It’s not just about AI tools, but about focusing on the transformation happening because of this technology,” said Roy, who moderated a panel on the impact of AI in newsrooms. “And then, also thinking about how we design future news experiences with AI.”
Roy posed several questions that journalists and news organizations should consider when implementing AI: What should collaboration between the newsroom and product development look like? How should they address threats to audience relationships? How can they deal with misinformation and disinformation? And what new opportunities might emerge?
“This isn’t just another tool. It’s a shift in news infrastructure. It’s a shift in how newsrooms operate and evolve,” Roy said. “This conversation will help us understand how to prepare for the changes ahead, since these decisions within the system will shape and prepare us for the future of our industry.”
As vice president of editorial innovation and AI strategy at Hearst Newspapers, Tim O’Rourke leads a team working with newspapers across the United States, from small hyperlocal outlets to major metropolitan publications like the San Francisco Chronicle and the Houston Chronicle. Part of his job is figuring out how to help journalists do more meaningful local journalism using AI.
O’Rourke noted how journalism was impacted by the arrival of ChatGPT in 2022. His team then began exploring how to use the tool, evaluating its role in local journalism, and even securing funding to develop AI-driven tools.
“We kind of navigated that and figured out what's our path to the best way we can take advantage of this technology while respecting our standards, our ethics and understanding to innovate cautiously so we don't lose trust with our audiences,” O’Rourke said. “That's led us to this current period where we're enhancing and accelerating the best ideas that come out of our local newsrooms.”
The network employs between 650 and 700 journalists, and last year, it was able to directly train about 350 of them on both proprietary and industry AI tools, O’Rourke said. The goal this year is to extend training to the entire editorial team. According to their records, AI tools have been used 65,000 times, with usage increasing each month.
“We follow the principle that a human should be in the loop,” O’Rourke said. “It’s time that we reinvest right into our local journalism to allow our folks to do higher-minded work, to allow them to get deeper into their communities, to allow them to do things that maybe they didn't have the time to do previously.”
Despite the progress, O’Rourke acknowledged room for improvement. He explained how they used an AI tool to help translate articles into Spanish, but they struggled to reach Spanish-speaking audiences.
Another lesson, he said, is that some tools work better for hyperlocal media, while others are more suited for large-city newsrooms with specialized teams. “We've learned we really have to cater our tools to particular internal users to make sure it works for the ways they do their jobs,” he said.
For Uli Köppen, head of AI at Germany’s public broadcaster Bayerischer Rundfunk, discussing AI’s “real impact” on media must start with examining how AI is changing the way journalists report on AI.
Köppen said the broadcaster specializes in algorithmic accountability reporting, an effort that began at least eight years ago with an investigation into how Germany’s credit scoring algorithm operated with little transparency.
In a recent story with Wired, the team revealed how user data from various apps was being sold in ways that could track people’s movements—including employees of the German and U.S. secret services.
“For doing such stories, we need interdisciplinary teams. We have to invite programmers into our newsrooms, and we have to invite product people being able to do such stories,” Köppen said. “I’m always advocating for more teams to cover algorithmic accountability journalism because I believe it’s essential to promote AI literacy through our reporting.”
One advantage at the German public broadcaster is that AI isn’t used as clickbait but as a way to get audiences to engage with journalism. They’ve learned that tools can be personalized. For example, one feature allows users to enter their location or postal code to receive important news relevant to their area.
Another use of AI involves generating multiple versions of the same story for different platforms. For example, they can request a radio version for a specific region.
“We want our journalists to focus on investigations, analysis, and field reporting, but we don’t want to waste energy creating different versions of a story—sometimes we produce far too many,” Köppen said.
For Yahoo News, everything revolves around “how we are leveraging AI to bring more quality to our readers,” said Brooke Siegel, the company’s vice president of content. According to Siegel, Yahoo News is the No. 1 news and information website in the United States, reaching 190 million users per month. AI, then, helps them “handle that scale.”
“The strategy is staying with the humans and the AI. The way we're incorporating it is to leverage it to bring more quality premium content to our audience,” Siegel said. “And also integrating it into our newsroom so editors can focus on the creative high-impact work that they're doing and pass off some work to AI.”
To achieve this higher-quality offering, Yahoo News uses AI to personalize content for its users. “We have tens of thousands of articles that are coming into our pipeline every single day. How do I find the right story to serve to our audience? AI is helping us tackle that,” Siegel said.
With their new algorithm, they have increased the amount of time users spend on the site by 65%. The algorithm also helps editors determine which writers are best suited for certain types of stories.
However, Siegel said it’s not just about offering audiences what they want but also what they need to know. To that end, they have another AI tool that scans thousands of stories to identify the most important ones.
“Editors are reviewing that cluster of stories and choosing the right articles, the right publishers, the right angles. So that when our audience is going into that top story cluster, they're really getting an array of perspectives on a single topic that was curated for them by an editor,” Siegel said. “But we wouldn't have been able to get to that place. We couldn't have the editor sifting through those thousands of stories.”
The final word came from Juliana Castro Varón, senior design editor for AI initiatives at The New York Times, who spoke about designing AI experiences. “Not a chatbot,” she said with a laugh.
Her background is in design, and she has always been concerned with how words and language are presented, she said. That’s why she feels both “appalled” and intrigued when an organization decides to present its information through a chatbot.
A chatbot isn’t always a bad option, she said, but organizations must consider whether it truly serves their audience. Her team, for example, designs AI experiences for two groups: New York Times readers and the newspaper’s journalists.
“They both want context, a way of making sense of the facts,” Castro Varón said. “So we've established that one of the things that makes AI so powerful and so dangerous is that it can't understand meaning. But most often, the value of AI happens when we are able to not generate anything new, but rather create structure out of massive data sets.”
One example of AI’s use came during election coverage, when journalists received information about groups dedicated to spreading disinformation. The documents contained more than 500 hours of Zoom call recordings, videos, and other materials. The AI team developed a way to analyze the data and extract key insights, which were then reviewed by reporters.
“Our team believes that AI alone is pretty much worthless – at least in our profession,” Castro Varón said. But “we've learned that expertise paired with AI is a very powerful combination.”
With a touch of humor, she ended her presentation by revealing that The New York Times does, in fact, have a bot—one that analyzes the most common incorrect answers in the daily puzzle. The paper’s games team reviews the data before publishing insights.
“I like bots, but only sometimes,” she said.
For just over 20 minutes, panelists and the moderator dove deeper into topics such as the relationship between product managers and editorial teams, the responsibility news organizations bear when AI generates misinformation or disinformation, and the need for transparency about how AI is used.