Deepfakes Are Good for More Than Just Manipulating Voters

With Israel's general election taking place Tuesday, researcher Dov Greenbaum outlines the dangers, consequences, and potential legitimate uses of deepfakes

Dov Greenbaum | 13:06, 17.09.19

Deepfakes and fake news are in the news again, and with Israel's general election being held Tuesday, it is important to revisit how these technologies can influence electorates.


This past summer, two mobile apps stood out in the news: Zao and FaceApp. The Chinese face-swapping app Zao, created by NASDAQ-listed company Momo Inc., took the Chinese iOS app store by storm and earlier this month became the most downloaded free app in China. The app, which allows users to replace celebrity faces with their own in iconic film scenes, has helped democratize the creation of deepfakes, allowing anyone to pretend to be famous.


But it is not all fun and games. These sorts of apps popularize the problematic technology of deepfakes and push advancements in the underlying technology. Zao also raised significant privacy concerns. In particular, its original user agreement, which was eventually changed after a public outcry, allowed Zao to use all uploaded faces for marketing purposes. Comparable privacy concerns were raised by the similarly viral face-aging Russian app, FaceApp, which demanded that its users grant it perpetual, irrevocable access to their uploaded images.

Deepfakes. Photo: Shutterstock

Earlier this month, Facebook, in conjunction with Microsoft, announced a multi-million-dollar Deepfake Detection Challenge, which it hopes will spur the industry to develop tools that can detect imagery altered by artificial intelligence and posted on social media.


This is not the first time this summer that Facebook has engaged with deepfakes in earnest. In July, U.S. House Intelligence Committee Chairman Adam Schiff demanded that Facebook take action against deepfakes on its site. Facebook's efforts in this space are commendable, but also a bit disconcerting, as Facebook will develop its own dataset of expertly crafted deepfake videos to train its AI software.


From the old-school media side, the British Broadcasting Corporation (BBC), a 96-year-old public service broadcaster, has partnered with technology companies to develop an early warning system and an educational program for seeking out fake news and deepfakes, with heightened alertness around elections.


Even the U.S. government, under the auspices of the Defense Advanced Research Projects Agency (DARPA), is seeking help in developing software, called Semantic Forensics (SemaFor), to determine whether or not a video has been manipulated by AI. DARPA is also developing similar software called MediFor, which seeks to find fake still images, as well as text such as that produced by the GPT-2 algorithm, which recently wrote convincingly about the discovery of a herd of unicorns.


As DARPA explains, its "attribution algorithms will infer if multi-modal media originates from a particular organization or individual. Characterization algorithms will reason about whether multi-modal media was generated or manipulated for malicious purposes. These SemaFor technologies will help identify, deter, and understand adversary disinformation campaigns."


However, while elections are clearly a time for extra vigilance in the pursuit of fake news, it might be just as important, or possibly even more important, to look for fake news long before elections, when voters' suspicions are less attuned to fake news stories.


Researchers have recently uncovered an insidious concern associated with fake news: the ability to create seemingly realistic false memories, which can then be used to corroborate additional fake news items. Like a mole stationed in an adversarial organization long before it is needed, these memories could be implanted long before an election, only to be triggered around election time to influence voters.


Even more problematic than the false memories themselves are the psychological realities within which they operate. It has long been established that we are all cognitive misers when it comes to thinking. That is especially true where complicated ideas, such as the many issues arising during an election, are involved: our brains tend to take mental shortcuts through unconscious inferences to help us manage the deluge of data presented to us daily. These mental shortcuts, when based on fake news, become strongly held beliefs that we are not always willing to let go of, especially when they align with our political predispositions.


Consider, for example, the well-known Mandela effect, the unexplained phenomenon of broadly shared false memories, named after its most famous iteration: the belief that South African politician Nelson Mandela died in jail, a "fact" many people would swear to. These false memories are so entrenched that a large segment of the population would rather believe in a supernatural alteration of the space-time continuum than in the possibility that they are misremembering.


There are other ongoing political concerns with fake news: have we already destroyed the value of the term fake news, and our faith in the media, by overusing the term and effectively allowing anyone to claim that anything is fake news? Further, have we created a liar's dividend by entrenching a default of disbelief, allowing people to escape responsibility for real misbehavior by claiming that reports of it are, in actuality, fake news?


As in all areas where law and technology overlap, it is imperative that we define the primary concern with fake news and deepfakes. What should we regulate? What is and what is not a problematic use of these technologies? For example, are there legitimate uses of deepfake technology that technology companies should consider when deciding whether to allow it on their platforms, rather than banning it outright? Is satire or parody employing deepfake technology fake news?


One possible legitimate use for deepfakes could be in advertising, where the technology could deliver content tailored and optimized for each target demographic, or even each individual. Aren't you, as a consumer, more likely to respond to an advertisement featuring a celebrity talking directly to you, rather than to everyone?


Perhaps the most important question is whether deepfake technology is even the source of the problem we are trying to solve, in which case it should be severely curbed, or whether something more prevalent, insidious, and foundational is at play. To wit, how valuable is deepfake technology, really, to the phenomenon of fake news, when ordinary photographs and videos shown out of context can be used to create realistic fake news, in what are known as cheap fakes?


In the end, while deepfakes may not be the only way to change our minds, they are one of the few proven ways. Other methods, including drugs and torture, have been tried without success, one of the most famous cases being the CIA's Project MKUltra, recently back in the news.


Dov Greenbaum, JD, PhD, is the director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies and a professor at the Harry Radzyner Law School, both at the Interdisciplinary Center (IDC) Herzliya.
