Basic principles of journalism are key to identifying authenticity of visual content

For the first time in its 21-year history, the International Symposium of Online Journalism (ISOJ) was held online only in 2020. To watch this panel, click here. To watch other panels, click here.

The growing prevalence of cheap fakes and deep fakes could well become a larger problem for the journalism industry, which is why reporters should know how to detect them, panelists said during a discussion at the International Symposium of Online Journalism (ISOJ) on July 24.

ISOJ2020: How to fight deepfake and cheapfake videos

Claire Wardle

“I’m not losing sleep over the threat of deep fakes right now. I am actually more worried about someone denying something that actually happened by saying, ‘that’s a deep fake,’” said moderator Claire Wardle, U.S. director of First Draft. As technology advances, she said, the threat of deep fakes could become a serious issue by 2022. However, she added, the obsession with deep fakes has actually made people take manipulated images and videos more seriously.

Christina Anagnostopoulos

Christina Anagnostopoulos, senior producer at Reuters, said that although 96% of deep fakes are used in pornographic content and disproportionately affect women as a method of harassment, journalists should still educate themselves on how to detect these types of videos. As users’ tech literacy increases, it is becoming easier to bring this kind of fake content, which is quite harmful and easily scalable, into the social media world.

“Practice is really important for when we prepare, so the more we familiarize ourselves … the easier it will be to identify them if and when they come up,” Anagnostopoulos said.

To help explain the issue of deep fakes, Anagnostopoulos offered five categories of media manipulation, with examples for some: lost context, edited, staged, CGI modified and synthetic (deep fake). The vast majority of issues Reuters sees, she said, falls within the lost context and edited categories.

Lost context: A trailer for the 2011 film Contagion shared as if it were for an upcoming film, offered as proof of a conspiracy in which the government and Hollywood knew the pandemic would hit this year.

Edited: A video of a comedian and voice actor impersonating Donald Trump’s voice over actual Fox and Friends clips. It was viewed over a million times.

Imposter content: Screenshots and clips of the film World War Z shared as if they were MSNBC footage. The video was doctored and never actually aired on MSNBC.

“These examples are often rooted in some level of truth and then taken further into harmful territory,” Anagnostopoulos said of these trends. “They stay within what might be believable so that users fall for it, but they take it one step further.”

Rhona Tarrant

Rhona Tarrant, U.S. editor for Storyful, agreed with Wardle that shallow fakes and deep fakes are an issue, but talked about a more common form of manipulation. Storyful works with newsrooms to verify footage and conduct online visual investigations of open-source content.

Tarrant emphasized that those who seek to spread disinformation “don’t need a high degree of sophistication.” An issue she sees the most occurs when videos that are real are taken out of context, she said, which “has the potential to do a lot of damage.”

The success of this type of manipulation comes from a user’s ability to create doubt, she explained. “Rather than to completely convince you of another narrative, just inserting the doubt in your mind or exploiting people’s suspicion or distrust or tendency to believe narratives that fit into their own world view when it comes to a breaking news story,” she said.

Technology isn’t the sole answer to verifying videos’ authenticity, she added, saying that journalists should also rely on the basic principles of journalism: verifying time, date and source. Tarrant wrapped up by saying that journalists also had a responsibility to be conscious of how they spoke about the issue of deep fakes, so as not to create further mistrust with the public.
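To make that last point concrete, here is a minimal sketch, not something shown at the panel, of one small step in verifying a video’s time and date: pulling a clip’s embedded creation timestamp with the widely available ffprobe tool. The file name is hypothetical, and platforms routinely strip or rewrite this metadata, so it can only supplement, never replace, checking the source directly.

```python
import json
import subprocess

def video_metadata(path: str) -> dict:
    """Read container-level metadata from a video file using ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    info = video_metadata("clip.mp4")  # hypothetical file name
    tags = info.get("format", {}).get("tags", {})
    # Social platforms often strip or rewrite creation_time, so treat it as
    # one clue toward time/date verification, never as proof on its own.
    print("creation_time:", tags.get("creation_time", "not present"))
```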

Matthew Wright

Matthew Wright, director for research at the Rochester Institute of Technology’s Global Cybersecurity Institute, spoke about the technological advances being made to detect deep fakes, as well as their limitations.

He spoke about the partnership between Facebook, Microsoft and other institutions to present the Deepfake Detection Challenge, whose goal was to produce technology that can detect when artificial intelligence has been used to alter a video to mislead. According to Wright, no team was able to surpass 70% accuracy, and his own team’s submission failed for technical reasons. However, he added, Facebook’s own submission also failed because the company couldn’t get its software to work on its own data set.

“In any case, there's a lot of deep fakes that are getting past even the best state of the art systems,” he added.

Wright also highlighted the Global Cybersecurity Institute’s deep fake detection tool, which was most effective on forward-facing videos in which there was only one face. It runs into problems, he said, where there are multiple poses and faces. There is still a margin of error with the software, he explained. One thing they could do, he added, was to provide better uncertainty values.

“If we just told the tool to run right now, it will not only give an answer, it will say it’s very confident in its answer no matter what kind of situation you put it in,” he said, adding that a solution to this could be to tell the machine to factor in issues such as bad lighting or multiple faces. “In situations like that the tool can throw up its hands and say, ‘look I don’t know. I don’t think I can give you some kind of answer, but you should really be careful with this answer.’”
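As a rough illustration of that abstention idea, the hypothetical sketch below is not Wright’s tool; the score, threshold and flags are assumptions. It simply shows how a detector could decline to give a confident answer when a clip has multiple faces, poor lighting or a borderline score.

```python
def verdict(fake_score: float, n_faces: int, low_light: bool,
            threshold: float = 0.5, margin: float = 0.15) -> str:
    """Answer only when the footage and the score support a confident call."""
    # Hard cases the panel flagged: more (or fewer) than one face, poor lighting.
    hard_case = n_faces != 1 or low_light
    # Scores sitting close to the decision threshold are also unreliable.
    near_boundary = abs(fake_score - threshold) < margin
    if hard_case or near_boundary:
        return "UNCERTAIN: treat this clip with caution"
    return "LIKELY FAKE" if fake_score > threshold else "LIKELY REAL"

print(verdict(fake_score=0.52, n_faces=3, low_light=False))  # UNCERTAIN
print(verdict(fake_score=0.91, n_faces=1, low_light=False))  # LIKELY FAKE
```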
