By Bruce Sussman
Fri | Sep 6, 2019 | 8:26 AM PDT

"Video doesn't lie."

For decades now, that has been the refrain of law enforcement, lawyers, activists, and media—as this video from 2011 shows:

[Embedded video: "Video Doesn't Lie"]

The "truth" of video has led to arrests, convictions, revolts, and regime change.

But over the last few years, we ran into a little problem.

It turns out that video can be made to lie through AI, and we have almost no way to detect it. From fake Obama to fake Putin, deepfakes, as they are known, are really, really good. And they are getting better as the underlying AI improves.

Plus, through the power of social media, these AI-created deepfakes can spread very rapidly. They can change lives before people figure out they are lies.

Facebook and Microsoft launch 'Deepfake Detection Challenge'

For several years, we've been hearing about concerns over deepfakes at SecureWorld conferences across North America.

Now, a group of tech giants, universities, and an AI collaborative is creating the "Deepfake Detection Challenge." It has a $10 million budget and will award grants and prizes to those who develop ways to detect AI-created fake videos.

"The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer," says Facebook Chief Technology Officer Mike Schroepfer.

"The Deepfake Detection Challenge will include a data set and leaderboard, as well as grants and awards, to spur the industry to create new ways of detecting and preventing media manipulated via AI from being used to mislead others."

How does the 'Deepfake Detection Challenge' work?

The challenge has a website, which is largely a placeholder until the competition gets going in October 2019.

However, there are a few FAQs posted, like these:

Q: When will the challenge begin?
A: The challenge will launch in late 2019 with the release of a dataset.

This will be a custom-created dataset, which is a starting point for the challenge.

Q: How does the challenge work?
A: Participants can download the created dataset for training models. Entrants will also submit code into a black box environment for testing. We’ll be opening the challenge for submissions later this year and the guidelines and dataset license will be available at that time.

Q: Are you using user data from social media or video platforms in the dataset?
A: No user data from social or video platforms will be included in the training dataset. We are constructing a new dataset specifically for this challenge.

Q: How will the challenge be judged and a winner selected?
A: We are going to be providing a test mechanism that enables teams to score the effectiveness of their models against one or more black box test sets from our founding partners.
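The FAQ doesn't say which scoring metric the challenge will use, but a common way to score a detection model of this kind is binary cross-entropy (log loss) over its predicted probabilities on a held-out test set. Here's a minimal sketch, with hypothetical labels and predictions, of what that scoring could look like — the metric choice and data are assumptions, not details from the challenge:

```python
import math

def log_loss(labels, probs, eps=1e-15):
    """Mean binary cross-entropy: lower scores mean better detection."""
    total = 0.0
    for y, p in zip(labels, probs):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(labels)

# Hypothetical black-box test set: 1 = deepfake, 0 = authentic
labels = [1, 0, 1, 1, 0]
# A model's predicted probability that each clip is a deepfake
probs = [0.9, 0.2, 0.7, 0.6, 0.1]

print(round(log_loss(labels, probs), 4))  # prints 0.2603
```

A metric like this rewards confident correct predictions and heavily penalizes confident wrong ones, which is why black-box test sets (hidden from participants) matter: they keep teams from tuning to the exact videos being scored.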

And here's a question I was thinking about while writing this story: what about the bad guys? Won't they benefit from knowing how we're detecting deepfakes so they can evade those detection methods?

Q: How are you protecting against adversaries who will try to access the code and data?
A: We will be gating access to the training dataset so only researchers accepted into the challenge can access it. Each participant will need to agree to terms of use on how they use, store, and handle the data, and there are strict restrictions on who else the data can be shared with.

Who is involved with the Deepfake Challenge?

Facebook, the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and University at Albany-SUNY are all part of the challenge.

At least $10 million has been committed for the competition.

UC Berkeley professor Hany Farid, from the Department of Electrical Engineering & Computer Science, says this type of investment has to happen to attack the deepfake problem:

"This will require investments across the board, including in industry/university/NGO research efforts to develop and operationalize technology that can quickly and accurately determine which content is authentic."

And Phillip Isola, Assistant Professor of Electrical Engineering & Computer Science at MIT, sums it up like this:

"Technology to manipulate images is advancing faster than our ability to tell what’s real from what's been faked. A problem as big as this won't be solved by one person alone. Open competitions like this one spur innovation by focusing the world's collective brainpower on a seemingly impossible goal." 

An impossible goal that just may be achieved through competition and cash.

Read: Facebook's announcement of the Deepfake Detection Challenge

Bookmark: Deepfake Detection Challenge Homepage
