
Latin America leads in mentions of journalism in AI laws

The impact of artificial intelligence (AI) on journalism is no longer in doubt; on the contrary, more and more newsrooms and media organizations are adopting initiatives to use and manage it. From deploying the technology to save time on repetitive tasks to developing mechanisms to identify AI-generated misinformation, journalism is striving to keep pace with these advances.

The regulation of AI, however, is still a new area, and one that journalists and media outlets do not necessarily pay attention to, despite the potential impact it could have on their work.

Precisely with the aim of analyzing the possible implications of AI legislation on journalism and the news sector in general, the Center for News, Technology and Innovation (CNTI) recently published the study “Journalism’s New Frontier: An Analysis of Global AI Policy Proposals and Their Impacts on Journalism.”

CNTI reviewed 188 national and regional AI strategies, laws and policies from around the world to analyze how regulation addressed seven components that, in its view, would impact journalism. The study focused on freedom of expression, manipulated or synthetic content, algorithmic discrimination and bias, intellectual property and copyright, transparency and accountability, data protection and privacy, and public information and awareness. The researchers also identified which of these laws or proposed laws specifically mentioned “journalism.”

For Latin America and the Caribbean – one of the seven regions analyzed – they found 80 strategies, policies or laws, of which five specifically mentioned journalism or journalists. That made it the region with the most mentions of the profession, CNTI said.

“Our goal was really to understand the potential implications of AI legislation for journalism and the information space more broadly. Given that goal, mentioning journalism can often be a double-edged sword,” Jay Barchas-Lichtenstein, senior research manager at CNTI who wrote the chapter on Latin America and the Caribbean, told LatAm Journalism Review (LJR). “It’s positive in that it shows some awareness that these policies are likely to impact the journalism sector. But it can be negative in cases where it means that governments are defining what journalism is and is not.”

Barchas-Lichtenstein said that previous work by the CNTI showed a reluctance among journalists when it comes to regulating their profession. "Journalists overall don't think it's appropriate for governments to define the boundaries of either the field or its practitioners,” they said.

In Latin America, it is still unclear how journalism will be affected by these regulations, given that very few laws have been enacted, according to the CNTI. The organization sees it as positive that debate on the issue appears more prevalent in the region than elsewhere, and it recommends legislative language that does not create “new risks to the independence and diversity of a vibrant information ecosystem.”

According to the CNTI, Ecuador has one of the "most comprehensive" proposals in the region with its 2024 Organic Law for the Regulation and Promotion of AI.

The CNTI, for example, highlights Article 31 on "diversity and plurality in digital environments," which requires, among other things, that AI-based content recommendation systems facilitate "equitable access to content of public interest from local, community, and independent media outlets." Article 32 of the same law establishes "safeguards against algorithmic censorship and manipulation" by requiring greater transparency and appeal processes for those who use AI-based content moderation and curation.

Brazil has other notable legislative proposals in the region, with Bill 2338 currently being debated in the Chamber of Deputies. In fact, according to the CNTI, the country has been a leader on this issue in the region.

The bill has changed considerably since it was introduced in 2023, Barchas-Lichtenstein said, and now carries several pages of amendments and modifications. The current bill may therefore differ from what was analyzed in the report.

“With that caveat, I’m happy to point out one clear strength that is unlikely to change: it contains clear definitions of what is and is not regulated under the law,” they said.

According to the latest version analyzed for the CNTI report, the bill includes definitions of dozens of terms such as “AI system,” “general-purpose AI system,” “generative AI,” “text and data mining,” “information integrity,” among others.

“Specificity is important, especially because many of these terms don’t necessarily have widely-accepted consensus definitions,” Barchas-Lichtenstein said.

When the CNTI report was prepared, all countries in the region, with the exception of Guatemala, Nicaragua and Belize, had at least one bill or regulatory initiative that mentioned issues related to AI and journalism.

There is no “ideal” model, but there are key aspects

Emmanuel Vargas, co-director of the organization El Veinte, which works for the legal defense of freedom of expression, told LJR that the involvement of organizations and media outlets in any regulation on this issue is important due to potential implications.

“The eagerness on the part of various actors to ‘contain’ the risks of AI could lead to the imposition of useless or excessively restrictive regulations,” said Vargas, who was not part of the CNTI research but did analyze it. “There is also a risk that technology companies will promote a narrative that pushes for overly lax regulations, focused on the need to protect innovation.”

In this regard, Vargas said, regulations on AI must entail greater accountability. A first step, for example, is to identify the risks or problems that need to be regulated, thus preventing the impact of a regulation from being too broad.

“Given that AI carries certain risks of creating or reinforcing discriminatory biases, or of generating misinformation, it is important that certain regulations be in place for any type of development and deployment of these technologies when their purpose is to be part of information processes,” the lawyer said. “This includes, for example, the regulation of recommendation and moderation algorithms on social media platforms, but also their use by journalistic companies.”

Because regulation of AI in journalism could lead to censorship, Vargas suggests three factors to consider. First, using criminal law only in cases of serious violations of rights, such as the production of child pornography. Second, differentiating between the various actors who use the technology and their respective responsibilities. And third, establishing measures that make the use of the technology more transparent, including audits.

“When it comes to use by a journalistic organization, it is important that these regulations include safeguards to prevent transparency or auditing obligations from resulting in the disclosure of information protected by professional secrecy,” Vargas said.

Barchas-Lichtenstein said that the CNTI does not advocate for any particular type of legislation, but rather aims to provide data and context so that those responsible for drafting legislation can find a framework that allows them to “ask the right questions and build a path toward solutions."

“Our goal is to analyze and raise awareness of the trade-offs and potential implications for journalism and the information space, so that legislators, civil society and other stakeholders can ask better questions and pass policies that consider press freedoms and democratic values,” they said.

The report concludes with a series of recommendations detailing "key areas" that various policies should address. One of these is that policies dealing with manipulated content must account for journalistic uses to avoid censorship, such as cases where AI is used to alter an image or audio to protect a source.

Another important aspect is that the working groups on AI established by legislators should include news producers, product teams and engineers, along with AI technologists, researchers, civil society and other relevant stakeholders. For the CNTI, this point is crucial, given that they are legislating "a technology that is constantly evolving and has wide-reaching consequences.”

The recent report is part of a broader series of research conducted by the CNTI on journalism and AI. Prior to the analysis of AI policies, the CNTI published a report on regular users of AI chatbots and their relevant information habits and needs.


This article was translated with AI assistance and reviewed by Teresa Mioli
