By SecureWorld News Team
Thu | Jun 28, 2018 | 4:22 AM PDT

Imagine the leader of a country who dies or is killed in office, while those around the leader keep it secret to maintain their own power.

No one suspects a thing, and for good reason: the leader is seen delivering a crucial speech from the White House, the Kremlin, or Parliament.

The only catch? The video of the speech is fake—it was created by artificial intelligence (AI). And it looks so real, you'd never know the difference.

I know it sounds like Hollywood, but a new Oxford University study paints a picture of AI in which this is possible within the next five years.

AI will soon be good enough to completely fake our likeness in video, photos, and speech. And that will permanently alter the cyber attack surface, taking it to a new level.

The Oxford study—in conjunction with the Electronic Frontier Foundation, OpenAI, and other partners—is called "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation."

Faking human likeness with artificial intelligence

The authors of the report say AI can now generate synthetic images of a human so convincingly that it's nearly impossible to spot a fake. Look at these synthetic images created by AI in each of the last four years.

[Image: AI-generated synthetic faces from the report, one per year (AI-oxford-synthetic-faces-report.png)]

That is an amazing progression, is it not? And would you ever suspect the 2017 image was not an actual photo but instead a creation of AI? Of course not.

And here is a chilling finding from the report: AI is now nearly perfect both at classifying real images and at creating images that look real.

"... the performance of the best AI systems has improved from correctly categorizing around 70% of images to near perfect categorization (98%), better than the human benchmark of 95% accuracy. Even more striking is the case of image generation. AI systems can now produce synthetic images that are nearly indistinguishable from photographs, whereas only a few years ago the images they produced were crude and obviously unrealistic."

AI powers fake video

And it goes beyond images to video. AI is becoming capable of making it look like you are standing in front of your corporate offices, admitting you hacked your own company in an inside job. The admission and the video are fake, but because we trust video as "real," you'd likely be fired and your InfoSec career would be down the tubes.

Imagine the power to inflame political tensions with fake video. It will make what the Russians did in 2016 look like child's play.

AI will make it possible to perfectly imitate your voice

Now, let's go beyond AI-created images and videos. The Oxford report talks about the ability of artificial intelligence to imitate your voice so well it would fool your own mother. That's because AI is not limited to mere human strengths.

"The property of being unbounded by human capabilities implies that AI systems could enable actors to carry out attacks that would otherwise be infeasible. For example, most people are not capable of mimicking others’ voices realistically or manually creating audio files that resemble recordings of human speech. However, there has recently been significant progress in developing speech synthesis systems that learn to imitate individuals’ voices (a technology that’s already being commercialized). There is no obvious reason why the outputs of these systems could not become indistinguishable from genuine recordings, in the absence of specially designed authentication measures. Such systems would in turn open up new methods of spreading disinformation and impersonating others."

And this ability reminds me of something I heard in a recent SecureWorld web conference on business email compromise: always be suspicious of an email from your CEO or CFO asking for money to be transferred somewhere, unless you also hear it confirmed in their own voice.

Even that will soon be a problem. AI will make it possible for spear-phishing attackers to send you an email from your "CEO" and a matching voicemail from that executive as well. How, exactly, will employees avoid falling for that?
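One plausible answer is to stop treating any inbound channel, voice included, as proof of identity, and to confirm requests out-of-band instead. Here is a minimal, hypothetical sketch of that idea in Python; the directory, names, numbers, and fields are all illustrative assumptions, not anything from the report.

```python
# Hypothetical sketch: a transfer request is honored only after a callback
# to a number from the company directory, never to contact details supplied
# in the (possibly forged) request itself.
from dataclasses import dataclass

# Numbers on file, maintained independently of any incoming message.
DIRECTORY = {"ceo@example.com": "+1-555-0100"}

@dataclass
class TransferRequest:
    sender: str           # claimed email address (untrusted)
    callback_number: str  # number supplied in the request (untrusted)
    amount: float

def approve(req: TransferRequest, confirmed_via: str) -> bool:
    """Approve only if confirmation happened on the directory number."""
    on_file = DIRECTORY.get(req.sender)
    # An inbound voicemail proves nothing once voices can be synthesized;
    # the confirming call must go OUT to the number on file.
    return on_file is not None and confirmed_via == on_file

req = TransferRequest("ceo@example.com", "+1-555-9999", 250_000.0)
print(approve(req, confirmed_via=req.callback_number))  # False: attacker's channel
print(approve(req, confirmed_via="+1-555-0100"))        # True: directory callback
```

The design point is simple: an attacker can fake a voice on any channel they initiate, but they cannot answer a phone number they don't control.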

Artificial intelligence and cybersecurity are intersecting

The report emphasizes that AI and cybersecurity are at a critical juncture, one where AI developers should consider the malicious uses of the beneficial technology they are developing. The report admits, though, that it will be difficult to stop AI from being misused:

"Since some tasks that require intelligence are benign and others are not, artificial intelligence is dual-use in the same sense that human intelligence is. It may not be possible for AI researchers simply to avoid producing research and systems that can be directed towards harmful ends."

The paradox of artificial intelligence

The report also talks about some of the types of attacks that can be carried out against AI systems, and in doing so it uncovers a paradox: AI is frail, in its own way. First, though, the threats:

"These include data poisoning attacks (introducing training data that causes a learning system to make mistakes), adversarial examples (inputs designed to be misclassified by machine learning systems), and the exploitation of flaws in the design of autonomous systems’ goals. AI & Security Threats are distinct from traditional software vulnerabilities (e.g. buffer overflows) and demonstrate that while AI systems can exceed human performance in many ways, they can also fail in ways that a human never would."

You really should spend some time reading the Oxford report on artificial intelligence because we've only scratched the surface here. It is 100 pages long and full of spine-chilling scenarios.

So who wins in the future?

Humans and InfoSec teams trying to protect our organizations with AI? Or hackers who use AI to perform attacks with superhuman abilities? 

"We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be." 

Oxford researchers won't take sides here, but how about you?

Let us know in the comments below if you have any guesses on who wins on the AI-powered battlefront over the next few years.
