Chinese disinformation campaign targets the U.S. using fake people on a fake news program
By Adam Satariano and Paul Mozur
In one video, a news anchor with perfectly combed dark hair and a stubbly beard outlined what he saw as the United States’ shameful lack of action against gun violence.
In another video, a female news anchor heralded China’s role in geopolitical relations at an international summit meeting.
But something was off. Their voices were stilted and failed to sync with the movements of their mouths. Their faces had a pixelated, video-game quality, and their hair appeared unnaturally plastered to their heads. The captions were filled with grammatical mistakes.
The two broadcasters, purportedly anchors for a news outlet called Wolf News, are not real people. They are computer-generated avatars created by artificial intelligence software. And late last year, videos of them were distributed by pro-China bot accounts on Facebook and Twitter, in the first known instance of “deepfake” video technology being used to create fictitious people as part of a state-aligned information campaign.
“This is the first time we’ve seen this in the wild,” said Jack Stubbs, the vice president of intelligence at Graphika, a research firm that studies disinformation. Graphika discovered the pro-China campaign, which appeared intended to promote the interests of the Chinese Communist Party and undercut the United States for English-speaking viewers.
“Deepfake” technology, which has progressed steadily for nearly a decade, has the ability to create talking digital puppets. The A.I. software is sometimes used to distort public figures, like a video that circulated on social media last year falsely showing Volodymyr Zelensky, the president of Ukraine, announcing a surrender. But the software can also create characters out of whole cloth, going beyond traditional editing software and expensive special effects tools used by Hollywood, blurring the line between fact and fiction to an extraordinary degree.
With few laws to manage the spread of the technology, disinformation experts have long warned that deepfake videos could further erode people’s ability to discern reality from forgeries online, and could be misused to set off unrest or ignite a political scandal. Those predictions have now become reality.
Although the use of deepfakes in the recently discovered pro-China disinformation campaign was ham-handed, it opens a new chapter in information warfare. In recent weeks, another video using similar A.I. technology was uncovered online, showing fictitious people who described themselves as Americans, promoting support for the government of Burkina Faso, which faces scrutiny for links to Russia.
A.I. software, which can easily be purchased online, can create “videos in a matter of minutes and subscriptions start at just a few dollars a month,” Mr. Stubbs said. “That makes it easier to produce content at scale.”
Graphika linked the two fake Wolf News presenters to technology made by Synthesia, an A.I. company based above a clothing shop in London’s Oxford Circus.
The five-year-old start-up makes software for creating deepfake avatars. A customer simply needs to type up a script, which is then read by one of the digital actors made with Synthesia’s tools.
A.I. avatars are “digital twins,” Synthesia said, that are based on the appearances of hired actors and can be manipulated to speak in 120 languages and accents. It offers more than 85 characters to choose from with different genders, ages, ethnicities, voice tones and fashion choices.
One A.I. character, named George, looks like a veteran business executive with gray hair and wears a blue blazer and a collared shirt. Another, Helia, wears a hijab. Carlo, another avatar, has a hard hat. Samuel wears a white lab coat like the ones worn by doctors. (Customers can also use Synthesia to create their own avatars based on themselves or on others who have granted them permission.)
The company’s software is mostly used by customers for human resources and training videos, where an unpolished production quality is sufficient. The software, which costs as little as $30 a month, produces videos in minutes that could otherwise take several days and would require hiring a video production crew and human actors.
The entire process is “as easy as writing an email,” Synthesia said on its website.
Victor Riparbelli, Synthesia’s co-founder and chief executive, said those who used its technology to create the avatars discovered by Graphika had violated its terms of service. Those terms state that the company’s technology should not be used for “political, sexual, personal, criminal and discriminatory content.” Mr. Riparbelli declined to share information about the people behind the Wolf News videos, but he said their accounts had been suspended.
Mr. Riparbelli added that Synthesia has a four-person team dedicated to preventing its deepfake technology from being used to create illicit content, but said misinformation and other material that did not include outright hate speech, slurs, explicit words and imagery could be hard to detect.
“It’s very difficult to ascertain that this is misinformation,” he said after being shown one of the Wolf News videos. He said he took “full responsibility for anything that happens on our platform,” and called on policymakers to set clearer rules about how the A.I. tools could be used.
Identifying disinformation will become only more difficult, Mr. Riparbelli said. Eventually, he added, deepfake technology will become sophisticated enough to “build a Hollywood film on a laptop without the need for anything else.”
Graphika linked Synthesia to the pro-China disinformation campaign by tracing the two Wolf News avatars to other innocuous training videos online featuring the same characters. On its website, Synthesia called the two avatars “Anna” and “Jason.”
[Video: How the same A.I.-generated avatar appeared in marketing and disinformation campaigns.]
The avatars are reading a script that has been typed into Synthesia’s software. With the characters’ pixelated faces and robotic voices, it does not take long to notice something is off.
In the video supporting Burkina Faso’s new government, Anna also made an appearance. “Let us all remain mobilized behind the Burkinabe people in this common struggle,” she said in a robotic monotone. “Homeland or death, we shall overcome.”
Deepfake videos have proliferated for years. Kendrick Lamar used the technology in a music video last year to morph into Kanye West, Will Smith and Kobe Bryant. Pornography websites have faced criticism for showcasing videos in which the technology had been used to illicitly copy the likenesses of famous actresses.
In China, A.I. companies have been developing deepfake tools for more than five years. In a 2017 publicity stunt at a conference, the Chinese firm iFlytek made a deepfake video of the U.S. president at the time, Donald J. Trump, speaking in Mandarin. IFlytek has since been added to a U.S. blacklist that limits the sale of American-made technology for national security reasons.
Meta, the owner of Facebook, Instagram and WhatsApp, said it had deleted at least one account affiliated with the pro-China deepfake videos after being contacted by The New York Times. The company, which declined further comment, does not allow video and other media that is manipulated with the intent to mislead. Twitter did not respond to requests for comment.
Graphika said it discovered the deepfake videos while following social media accounts linked to a pro-China misinformation campaign known as “spamouflage.” In these campaigns, political spam accounts plant content online and then use other accounts that are part of a network to amplify the material across platforms.
Researchers said the use of deepfake technology was more notable than the actual impact of the videos, which were not seen by many people. The two videos featuring the so-called Wolf News anchors were posted at least five times between Nov. 22 and Nov. 30 by five accounts, according to Graphika. The posts were then re-shared by at least two more accounts, which appeared to be part of a pro-China network.
Mr. Stubbs said disinformation peddlers would continue experimenting with A.I. software to produce increasingly convincing media that is hard to detect and verify.
“What we’re seeing today is another sign of things to come,” he said.
Credit: The New York Times