As the saying goes, “If you can’t beat them, join them.” This principle seems to apply to the impact of artificial intelligence (AI) on media and journalism. AI-based tools, such as ChatGPT, were perceived as a threat to employment in the field. However, more and more journalists are discovering the benefits of this technology and leveraging it in their work.
Headline writing, translation, editing and story idea generation are some of the tasks journalists are handing to ChatGPT, an AI-based chat system developed by the company OpenAI and released in November 2022.
ChatGPT is based on the GPT (Generative Pre-trained Transformer) architecture and uses machine learning techniques to understand and generate natural language text. It has a free version as well as a paid tier, ChatGPT Plus, which offers additional features to the user.
"I have used ChatGPT and similar versions, mostly to get ideas on the structure of the story and to summarize content for metadescription or similar fields," José Rafael Peña, editor and writer at BeInCrypto, a portal specializing in Spanish-language cryptocurrency news, told LatAm Journalism Review (LJR).
The process Peña follows is not complicated. He simply logs into the tool, writes a headline and asks the chatbot to draft a blog post based on it. As the editor explains, the more information the tool is given, the better its responses will be. "If I like what I read, I go in that direction," he added.
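For newsrooms that want to script this kind of request instead of typing it into the chat window, a minimal sketch using OpenAI's Python client might look like the following. The headline, the prompt wording and the model name are placeholders invented for illustration; Peña describes working in the ChatGPT interface itself, not through the API.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical headline and context invented for this example.
headline = "Bitcoin mining grows in Paraguay thanks to cheap hydroelectric power"
context = "Audience: Spanish-speaking crypto readers. Tone: explanatory. Length: about 400 words."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model name is an assumption, not a detail from the article
    messages=[
        {"role": "system", "content": "You draft blog posts for a cryptocurrency news site."},
        # As Peña notes, the more information the prompt carries, the better the draft.
        {"role": "user", "content": f"Write a blog post for this headline: {headline}\n{context}"},
    ],
)

print(response.choices[0].message.content)  # a starting draft to accept, edit or discard
```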
A similar process is followed by Colombian journalist Juan David Garzón, who regularly uses ChatGPT to improve headlines and posts for social media. "I’ve not yet followed a strict system to know if it has in fact been effective, but it has given me good suggestions," he told LJR.
The text ChatGPT generates can be generic, but there are ways to steer the tool to write like a particular person or to follow a news outlet's style. Some journalists are already experimenting with the different writing styles ChatGPT can produce, and some technology experts are teaching others how to customize the tool.
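One common way to "program" a style, for those working through OpenAI's API rather than the chat window, is to put the outlet's rules in a system message that accompanies every request. The sketch below is a generic illustration; the stylebook text and the sentence being rewritten are invented placeholders, not any outlet's actual guide.

```python
from openai import OpenAI

client = OpenAI()

# Placeholder style rules; a real newsroom would paste its own stylebook here.
stylebook = (
    "Write in Latin American Spanish. Use short sentences. Attribute every claim. "
    "Avoid editorializing. Keep headlines under 70 characters."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # A system message repeats the outlet's style rules on every request.
        {"role": "system", "content": f"You edit copy strictly following this stylebook: {stylebook}"},
        # The user message carries the text to rewrite; past published paragraphs could be
        # added as further examples to push the model toward the house voice.
        {"role": "user", "content": "Rewrite in house style: 'The amazing new law will totally change crypto forever.'"},
    ],
)

print(response.choices[0].message.content)
```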
When asked how journalists can take advantage of the tool, ChatGPT offers the following disclaimer: "It is important to note that while ChatGPT can be a useful tool, journalists must be critical and do their own fact-checking. The information provided by the AI may not always be accurate or up to date, so it is essential to verify the data obtained through additional reliable sources."
And it's right. When you ask ChatGPT to write a feature story or a news brief, it returns a well-written result with no spelling errors, but lacking quotes (because the technology still can't interview real people). It also makes statements that may not be entirely true.
"I check everything that ChatGPT spits out before I publish it. Something that also happens a lot is that, although the tool is really powerful, it has a tendency to repeat sentence patterns, sometimes even within the same text," Garzón said.
La Silla Vacía, a well-known Colombian digital-native news outlet focused on political coverage, has experimented with ChatGPT to improve the editing process in its section called En Vivo. But the results have been 'bittersweet,' the team said. "The system identifies and corrects most errors. However, sometimes it shows that it has corrected a sentence, but when we check the 'corrected' sentence, it's exactly the same as the original.
"In addition, because we're working with a system we have not trained to follow our writing stylebook, some of its suggestions, while grammatically correct, are not relevant to our site," said Karen De la Hoz on the blog of The Generative AI in the Newsroom Project, a project led by professor and researcher Nick Diakopoulos to explore the uses of artificial intelligence in newsrooms.
ChatGPT is a language model, not a fact-checker. Outlets such as CNET, an American technology news website, have pulled back from publishing AI-generated articles because of the errors found in them. In addition, it is well documented that when ChatGPT can't find a piece of information, it can simply make it up.
A few weeks ago, the company NewsGuard, which rates the credibility of news sites, announced that it had detected at least fifty new news and information sites generated with artificial intelligence. These portals appear to exist only to turn a profit from low-quality content and heavy advertising.
Therefore, there is growing concern in the journalism community that ChatGPT could become a disinformation machine. "When using ChatGPT, fact-checking is super important, because if it's used directly from OpenAI it's not connected to the Internet. It only has information up to 2021, so it can give you outdated information," Peña said.
Despite this, when ChatGPT is asked how it can be useful for journalists, it lists data verification among its answers. "AI such as ChatGPT can help journalists verify information and data. They can use the tool to cross-check data, look up references or verify claims made by sources."
For Peña, fact-checking with ChatGPT, or any other similar tool, is a bad idea. "If you use powered-up versions with Internet access, such as Bing Chat and Bard, they give better and more current information, but they still surface the best-ranked content in search engines. AI tools are biased by default: since they are trained by humans, they carry those humans' biases," he said.
There has also been talk of ChatGPT's benefits for data journalism, since it can review code and even do data analysis and data cleanup. "I once asked ChatGPT to create a table for me by cleaning up some data, but I found errors. It does things that are not right. It's been useful as a spreadsheet accessory, that is, it's helped me find a formula to use later on my own data," said Andrés Snitcofsky, an Argentine graphic designer and member of Infobae's data unit.
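Snitcofsky's habit of double-checking what the model returns can itself be partly scripted. The sketch below is a generic illustration, not his workflow: it assumes hypothetical file and column names and uses pandas to confirm that an AI-cleaned table still matches the original in row count and totals before it is used.

```python
import pandas as pd

# Hypothetical file names and column; the point is to verify before publishing.
original = pd.read_csv("budget_raw.csv", dtype=str)  # the data handed to the chatbot
cleaned = pd.read_csv("budget_cleaned_by_ai.csv")    # the table the chatbot returned

# Rows should not silently disappear or duplicate during cleanup.
assert len(cleaned) == len(original), "Row count changed during cleanup"

# Numeric totals should survive formatting fixes (e.g., '1.234,56' -> 1234.56).
raw_total = pd.to_numeric(
    original["amount"].str.replace(".", "", regex=False).str.replace(",", ".", regex=False),
    errors="coerce",
).sum()
assert abs(cleaned["amount"].sum() - raw_total) < 0.01, "Column totals do not match"

print("Cleaned table passes basic consistency checks")
```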
ChatGPT is the most popular AI-based chat system, but it is not the only one. Sometimes demand for the tool is so high that it exceeds capacity and users are temporarily unable to access it. It is also unavailable in some countries, including, in Latin America, Venezuela and Cuba.
For these reasons, some journalists prefer alternatives such as Microsoft Bing, Notion AI, YouChat or even Pi, which can be used through WhatsApp.
"Together with my team we use some new AI-based tools like copy.ai. These can be quite basic in their free version, but through rechecking and discernment some communicational tasks can benefit," Caroll Patricia Terán, journalist and communications director for the Center for the Study of Sexual and Reproductive Rights in Venezuela, told LJR.
"I don't use ChatGPT at the moment. I am using Pi from Whatsapp, but in some conversations where I ask it for 'advice,' it has even said these are very deep topics for it, as is the case of sexual and reproductive rights, or human trafficking. Although it does throw out one or two valuable tips," Terán added.
The use of AI technologies in journalism offers opportunities, but also poses challenges and responsibilities. And while some journalists in Latin America are experimenting with these technologies, it has remained just that: experimentation. Perhaps ChatGPT is right in saying that "journalists must be critical and aware of the limitations of these tools, while taking advantage of their benefits to improve and streamline their work in a constantly evolving media environment."