Media associations in the Americas are warning that journalistic media should not repeat with generative artificial intelligence (AI) the same mistakes they made with the appearance of the Internet almost three decades ago. At that time, news content was allowed to be freely available on websites, under the belief that the advertising business model would work on the web as it did in print, radio and television.
This is the warning from representatives of the Inter-American Press Association (IAPA), the Colombian News Media Association (AMI, for its acronym in Spanish) and the National Association of Newspapers of Brazil (ANJ). These are three of the more than 25 media associations from around the world that signed the Global Principles on Artificial Intelligence, a document that seeks to guide the development and implementation of this new technology in a regulated manner and within an ethical and responsible framework for journalistic media.
“We believe that generative AI needs to be developed for the benefit of humanity and civilization,” Marcelo Rech, executive president of ANJ, told LatAm Journalism Review (LJR). “So we understand that in order to not make the same mistakes that happened with the principles of the Internet, we must have regulations that establish ethical criteria so that this development is done in a way that benefits society.”
Given the rapid growth of generative AI platforms, associations recognize that a global effort that represents a significant counterweight to the power of development companies is necessary. The World Association of News Publishers (WAN-IFRA) was in charge of convening associations that include media from around the world to draft the Principles.
The document, published in August of this year, covers issues related to intellectual property, transparency, accountability, integrity, security and sustainable development in relation to generative AI and journalism.
The Association of Argentine Journalistic Entities (ADEPA) and Grupo de Diarios América are two other associations from the continent that also signed the Principles.
Given the multiple disinformation crises that have affected most countries on the continent, the transparency of generative AI platforms and the protection of journalistic credibility are fundamental issues for the three aforementioned associations.
Platforms based on Generative Pre-trained Transformer technology, such as ChatGPT, Google Bard, or Claude, are designed to generate coherent, human-like text responses in a conversational manner. However, these platforms are trained with information from all types of Internet sources and their responses are based on language patterns, not verified facts. This often causes the information provided to include inaccuracies and misrepresentations.
“Information that has been distorted, that has been misattributed, can generate confusion, it can increase the risk of disinformation,” Carlos Jornet, president of the IAPA’s Press Freedom and Information Commission, told LJR. “Disinformation in itself is already a growing problem that is beginning to impact democratic institutions and the ability of citizens to be informed with quality standards. If this is enhanced by a robot that generates content multiplying at an increasing speed, it can have an enormous impact.”
The Global Principles on Artificial Intelligence state that there must be transparency regarding the sources of information that generative AI platforms use to create content, including clear and appropriate accountability mechanisms.
The document also indicates that AI developers should work with publishers to develop mutually beneficial attribution standards and formats, as well as to provide understandable information about how such systems work, so that users can make judgments about the quality and reliability of the information generated.
“There has to be transparency in AI systems, not only [saying] where they got things from, but that they have done the job well,” Werner Zitzmann, executive director of the AMI, told LJR. “It is very important that they are accountable, that they are not hidden behind ‘the algorithm,’ a magical, abstract thing. [...] The most important thing is that all its results are aimed at guaranteeing the credibility of the content and all this based on the fact that, if the media are sources, then we are credible for the recipient.”
Respect for intellectual property of journalistic content is another fundamental axis of the Principles, according to the media associations. The document indicates that AI developers must respect the intellectual property rights of those who generate original content and that publishers have the right to negotiate and receive fair remuneration for the use of their content on generative platforms.
“We believe that there is a use [of content] that does not respect the intellectual property rights of the media,” Jornet said. “This ends up affecting journalism, it ends up generating content that can surely lead audiences to content that was generated with a high allocation of resources, both human and economic, and then there is someone who takes advantage of them to generate income.”
A study by The Washington Post and the Allen Institute for AI published this year found that digital media rank third among the sources of information with which generative AI systems are trained. At least half of the top ten websites in that category were news outlets.
On Oct. 31, the News/Media Alliance in the United States published research that showed that AI developers are not only using content from their members’ news outlets to train their systems without authorization, but are using it widely and to a greater extent than other sources.
The study also indicated that most developers do not obtain the necessary licenses for the use of such content and do not provide any compensation to the news media.
“We defend very vehemently that the content has ownership. They have a certification of origin because they were made by a journalist, by a media outlet,” Rech said. “There is not even a credit to the origin of the source and much less remuneration, there were no agreements. We believe that the regulation must have a prior agreement releasing the content and that the content that is already there and is used [by the AI platform] without authorization must be paid for.”
Rech warned about the growing emergence of sites that take news content from serious media, rewrite it using platforms like ChatGPT and “repackage” it to present it as new, without giving any credit to the original author.
In August of this year, the anti-disinformation organization NewsGuard identified 37 websites that use chatbots to rewrite news articles from outlets such as CNN, The New York Times and Reuters. Some of these sites were apparently generated automatically, without any human intervention.
Beyond demanding that news media be paid for their content, the associations use the Principles to advocate for a model in which journalists are part of the conversation and help build a quality information environment on AI platforms.
“On the one hand, we want them to recognize the value [of journalistic work] but on the other we want to know what the purpose is. We want to put rules in place that are not just for payment purposes, but really [to know] what they are going to do with this material,” Zitzmann said. “That has to be a constructive process and not simply a utilitarian thing where they say 'I need these inputs and I'm going to make a business with this.'”
Zitzmann, Rech and Jornet agree that it will not be easy to get large technology companies to adhere to the Principles proposed by media associations. However, they do believe that the declaration of these principles is a first step to raising awareness around the issue, as well as to calling other relevant actors, such as the technology sector, academia and regulators, to join in confronting the AI platforms.
“For a large, robust group of media in the world to say ‘we agree on this’ was the best way to act efficiently to create consequences as soon as possible,” Zitzmann said. “This is a matter of power. We are the counterbalance to that. That is why it is so important that the weight of the collective is a very big counterweight to the power of a product that has capital that is invested in an unrestricted way in projects that have the potential to do good in some things, but bad in others.”
The next step, Zitzmann said, is for each media association that is a signatory to the Principles to convey to their members the importance of adopting the Principles within their own organizations. Jornet said that spaces for debate must continue to be provided to find the best path forward.
For his part, Rech said that ANJ has begun to communicate the Principles and their importance in a series of talks with affiliated media. He also said that he has participated in forums on the regulation of said technology both in Congress and before Brazilian authorities.
“Like any tool, a hammer, for example, can be used to build the most beautiful house or also to hit someone on the head,” Jornet said. “Provide education, cultural training so that people know how to communicate with AI platforms and so that they know how to use them properly, see how there is self-regulation on the part of the platforms to prevent them from being used improperly, that’s the path for me.”
In the early days of the Internet, search engines like Google or Yahoo! facilitated free access to journalistic content from traditional media that were beginning to enter digital platforms. This accustomed readers to free news content, which ultimately led to a crisis in the media business model.
“Learning from what happened with the search engine algorithm taught us, precisely, that when you leave everything open you generate a culture of free,” Zitzmann said. “If you weren't in the search engine, you didn't exist. And positioning a site so that people didn’t know it existed cost enormous fortunes in advertising.”
The possibility of something similar happening with generative AI platforms, added to the threat of disinformation and loss of credibility of journalism, has led some media to choose to block their content from these platforms.
In August of this year, The New York Times blocked the web crawler of OpenAI, the company that developed ChatGPT, so that the company could no longer access the newspaper's content to train its AI models. Additionally, the Times updated its terms of service to prohibit the scraping of its articles and images for training generative AI platforms. The New York Times was followed by The Washington Post, Reuters, CNN and ABC News, among others, with similar measures.
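In practice, this kind of blocking is typically done through a site's robots.txt file, which tells crawlers which paths they may fetch. A minimal sketch of how such a directive works is below, using GPTBot (the user agent OpenAI publicly documents for its crawler) as the example and Python's standard-library `urllib.robotparser` to check the result; the directives shown are illustrative, not any specific outlet's actual file.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt directives a publisher might serve:
# block OpenAI's documented crawler (GPTBot) from the whole site,
# while leaving other crawlers unrestricted.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# GPTBot is disallowed from every path; other crawlers are not.
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

Note that robots.txt is a voluntary convention: it signals the publisher's wishes, but compliance depends on the crawler honoring it, which is why some outlets also amended their terms of service.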
In Latin America, there is discussion among media associations about whether blocking systems like ChatGPT is the best option to protect their content. Although they have different opinions on the matter, the IAPA, ANJ and AMI all agree that drastic measures must be taken to prevent journalistic content from being used for free by generative AI platforms.
“The AI algorithm is capable of writing a monograph on a topic in a second because it has taken the information from sources. When one or many of those sources are journalistic media, which range from news media to specialized media, and all of that is free, you are going to take it, and you are going to learn for free. That's like going to college without paying tuition,” Zitzmann said.
For Rech, payment for journalistic activity should go beyond simple remuneration for the use of content. The executive president of the ANJ considers that the activities of digital platforms, including those of AI and social networks, have the side effect of contaminating the information ecosystem, which includes phenomena such as disinformation and hate speech.
This contamination, he said, has been evident in countries like Brazil, with the influence of WhatsApp on the elections won by Jair Bolsonaro; in the United States, with the disinformation generated during the Donald Trump administration; and in the Philippines, with armies of bots that manipulated information in favor of President Rodrigo Duterte.
Journalism has the professional and technical capacity to “clean up” that contamination, with content verification and quality information, and those responsible for that contamination are the ones who should pay for at least part of that cleanup, Rech said.
“I advocate that the platforms have to have a rate, a percentage of their profits, and that has to be transferred to whoever does the ‘cleaning’ of that pollution,” he said. “It's like an industry that produces, for example, shoes, and part of its waste ends up in a river. We have to pay for cleaning that river."
For Rech, this would be the safest and most sustainable way, not only to finance journalism, but to ensure that quality information prevails over disinformation.
“If global growth threatens the physical health of the planet, disinformation threatens mental health,” he said. “Whoever makes the threat, who makes fortunes, billions of dollars with it, has to pay a part in cleaning up that pollution.”
Banner: Illustration created by artificial intelligence with Canva's Magic Media