Computers have become absolutely foundational to the world we live in. Not long ago, they were the machines of science fiction, and the real thing was something of a novelty. But they’ve become much more important, often proving many of the books and movies right. And, much like in science fiction, they’ve been getting smarter and smarter, and we’re slowly teaching them to learn. Through machine learning algorithms, computers are beginning to mimic rudimentary human behavior, and though something like a sentient AI is still far off, AI is already introducing potentially dangerous and problematic abilities. So, just what is AI? And what kind of dangers do we face from it, both now and in the future?
What is Artificial Intelligence?
In recent decades, computers have rapidly become one of the key players in the modern world. As their processing power has advanced, new abilities have emerged, and Artificial Intelligence (AI) in particular has gained traction in recent years. However, it may surprise some to learn that the idea of AI, and much of the science behind it, isn’t actually all that new. Alan Turing, the father of computer science, was among the first to develop the idea of a neural network, which is the basis for many machine learning programs today. The term “Artificial Intelligence” was coined in 1956, when AI became a formal field of research. So while we haven’t seen widespread use of AI until recently, the idea has been around for a while. In fact, even some ancient societies had myths and stories about intelligent machines. In the past 70 years, following the development of computers, AI has become a common theme in movies, books, and TV.
Now, at this point we’re still pretty far from any sentient machines; you won’t see any HAL 9000s or Lt. Commander Datas for a good while yet. But while an evil sapient computer is still yet to come, AI is starting to have some other pretty concerning applications. In recent years, “deep fake” has started to become a household term. Deep fakes are the popular name for the output of a relatively new machine learning technique called a Generative Adversarial Network (GAN). This technique can be applied to a data set (whether text, audio, or video) and used to generate entirely new data. The method was originally used for research, but in 2017 it gained wider attention. A Reddit user with the alias “deepfakes” (the name that would eventually become the popular term for these creations) built GANs using TensorFlow, a free and open-source machine learning library developed by Google. He used these GANs to edit pornographic videos, pasting the faces of celebrities onto the bodies of the women in those videos.
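The adversarial idea at the heart of a GAN is that two models train against each other: a generator learns to produce data that a discriminator can no longer tell apart from the real thing. A toy one-dimensional sketch of that tug-of-war, with simple affine models standing in for the neural networks (every number and name here is illustrative, not taken from any real deep-fake tool):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps 1-D noise to "fake" samples via a single affine transform.
g_w, g_b = 0.1, 0.0
# Discriminator: logistic regression trying to tell real samples from fakes.
d_w, d_b = 0.1, 0.0

lr = 0.05
for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # "real" data: drawn from N(4, 1)
    noise = rng.normal(0.0, 1.0, size=32)
    fake = g_w * noise + g_b               # generated data

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label                   # dLoss/dlogit for cross-entropy
        d_w -= lr * np.mean(grad * x)
        d_b -= lr * np.mean(grad)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    noise = rng.normal(0.0, 1.0, size=32)
    fake = g_w * noise + g_b
    p = sigmoid(d_w * fake + d_b)
    grad = (p - 1.0) * d_w                 # chain rule through D's logit
    g_w -= lr * np.mean(grad * noise)
    g_b -= lr * np.mean(grad)

# Over training, the generator's output drifts toward the real data's mean.
print(round(float(g_b), 2))
```

Real deep-fake systems replace these one-parameter models with deep convolutional networks trained on images, but the adversarial feedback loop is the same.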
The privacy concerns related to deep fakes are obvious from the very first use of the technology. Imagine if anyone could create an embarrassing video of you just by downloading the contents of your social media pages. The truly scary part is this: anyone already can. The technology is more accessible than you might realize. You don’t have to be a graduate computer science student at MIT, a master programmer, or a hacker; with some amateur technical skill, anyone can make a deep fake.
The process of creating a deep fake video is simple in theory: gather a large number of photos (preferably hundreds or thousands, though it can be done with fewer), feed them to an AI, and boom! You’ve got yourself a completely fabricated, photorealistic video of someone. The actual process is slightly more complicated, but not nearly as hard as you might think. It helps that the original creator made an app, called “FakeApp,” which can be used to create one of these videos. The app has been taken down from the App Store and the Google Play Store, but you can still sideload it onto an Android device or a jailbroken iOS device, and there are also versions developed for Windows 10. Once it’s installed, you give it a set of images or a video to learn from, train it on those images, and then use that training to create a new video. There are other, more advanced ways to do it, but that’s the gist.
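Under the hood, face-swapping tools in this family commonly pair one shared encoder with a separate decoder per person: both people’s faces are compressed through the same encoder, and the swap comes from decoding one person’s frames with the other person’s decoder. A toy linear sketch of that shared-encoder idea (the dimensions and random “face” vectors here are stand-ins for aligned face crops, not a real face model):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, code = 16, 4

# Stand-ins for flattened, aligned face crops of person A and person B.
faces_a = rng.normal(size=(200, dim))
faces_b = rng.normal(size=(200, dim))

enc = rng.normal(scale=0.1, size=(dim, code))    # ONE shared encoder
dec_a = rng.normal(scale=0.1, size=(code, dim))  # decoder for person A
dec_b = rng.normal(scale=0.1, size=(code, dim))  # decoder for person B

lr = 0.01
for _ in range(500):
    # Train each decoder (and the shared encoder) to reconstruct its person.
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc                 # encode
        err = z @ dec - faces           # reconstruction error
        dec -= lr * z.T @ err / len(faces)
        enc -= lr * faces.T @ (err @ dec.T) / len(faces)

# The "swap": run a frame of A through the shared encoder, then B's decoder.
swapped = (faces_a[:1] @ enc) @ dec_b
print(swapped.shape)  # → (1, 16)
```

Because the encoder is shared, it learns features common to both faces (pose, expression, lighting), while each decoder learns to paint one specific identity back on; decoding A’s features with B’s decoder is what produces the swap.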
What’s the danger?
So what is the problem here, really? It does make privacy a bit of an issue, but could it actually be used to hurt someone? The answer is: absolutely yes. There are a number of ways a deep fake could be used to do harm. In the most innocent cases it can be humorous, but applied correctly, deep fakes can do tremendous damage. One of the “lesser” harms is reputation damage: deep fakes could be used as another form of cyberbullying, for example by making videos of a victim doing something embarrassing. In more extreme cases, deep fakes could pose problems for criminal justice systems.
A person could take a video of someone committing a crime and, using a deep fake, superimpose someone else’s face onto that video. That video could then be used as evidence in a court case, and because the technology is so new, no one knows to check for it. Sometimes it’s pretty obvious when a video has been deep faked: when smaller data sets are used to create the image or video, the fakery is often more apparent, and it’s also easier to spot when the person is talking or otherwise animated. So what kind of checks are in place? At this point, since the technology is rather new, fakes should be somewhat simple to identify, as we’ll discuss in a moment, but the technology will only get better over time.
The other big threat is political in nature. In this day and age, fake news is becoming an increasingly prevalent issue, and the development of deep fakes will make it even more pressing. In fact, in May 2018, a video circulated on Facebook of Donald Trump making a statement on climate change in Belgium. Citizens were quick to respond, thinking the video was genuine, when in fact it wasn’t: it had been made by Socialistische Partij Anders (Sp.a), a Belgian political party, as a way of grabbing attention. The real kicker is that Sp.a’s intention wasn’t to fool anyone. All they wanted to do was get people’s attention and point them to an online petition; they assumed people would be able to tell the video was fake. If you watch it, you’ll find it’s a fairly low-quality fake: the lips don’t quite match up, the face looks oddly inexpressive, and it slightly changes proportion a few times. It’s likely this video was made with a fairly small data set of images and videos. But it still fooled many people. Imagine the damage a higher-quality deep fake of a public figure making an official speech could do. Even if it was later flagged as non-genuine, the harm could be irreparable.
The next step to consider is this: how can we identify these videos? If you know what to look for, at least for the moment, it isn’t actually too difficult, especially when they’re done poorly. The “Donald Trump” video above is a good example to practice on, and even in better-made videos you can look for the same kinds of issues: the face looks a bit too smooth or too wrinkly, there isn’t enough expression, the proportions of the head change oddly, the lighting doesn’t quite match, or the person blinks too little or too much. But here’s the underlying problem: deep fakes aren’t always done this poorly. Even well-made fakes have small inconsistencies, but the larger the data set used, the better the result, and as time goes on this technology is only going to produce more photorealistic videos and images. On top of that, most people don’t consider that a video might be faked while they’re viewing it; they simply don’t pay enough attention to notice.
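The blinking cue, in particular, can be automated: early deep fakes often blinked far less than real people, so one heuristic researchers have explored is flagging clips whose blink rate falls outside the normal human range. A minimal sketch, assuming a per-frame “eye openness” signal has already been extracted (a real system would compute this from facial landmarks; the thresholds and rates below are illustrative):

```python
def count_blinks(eye_openness, closed_thresh=0.2):
    """Count transitions from open eyes to closed eyes in the signal."""
    blinks, closed = 0, False
    for value in eye_openness:
        if value < closed_thresh and not closed:
            blinks += 1       # eye just closed: one new blink
            closed = True
        elif value >= closed_thresh:
            closed = False    # eye reopened
    return blinks

def looks_fake(eye_openness, fps=30, min_blinks_per_min=5):
    """Humans blink roughly 15-20 times a minute; far fewer is suspicious."""
    minutes = len(eye_openness) / fps / 60
    rate = count_blinks(eye_openness) / minutes
    return rate < min_blinks_per_min

# A 10-second clip with one blink (~6/min) vs. one with no blinks at all.
real_clip = [1.0] * 150 + [0.1] * 5 + [1.0] * 145
fake_clip = [1.0] * 300
print(looks_fake(real_clip), looks_fake(fake_clip))  # → False True
```

A check this simple is easy to defeat once forgers know about it (just train on footage that includes blinks), which is exactly the arms-race dynamic described above.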
What is being done about this?
Now, the obvious question after going over all the ways AI can be used for nefarious purposes is: how is society addressing these problems? Is there any kind of regulation in place to ensure that, for example, deep fakes aren’t used to falsify evidence in a court case? But the other question to consider is how far regulation should go. In the majority of cases so far, deep fakes have been used for relatively innocent purposes, like creating a humorous video with someone else’s face placed on another body, and the only way to really ensure nothing worse happens would be an all-out ban. But is that practical?
The other problem is that, in general, this technology can be used for good. GANs, the technology behind deep fakes, were originally used for research at the university level. They aren’t just used to generate fake videos; they can also generate data for a number of different applications. The potential to improve image searching and tagging is immense, and the movie and TV industries could make great use of this tech. So realistically, we should keep it around.
But even if we didn’t want to keep this technology, it would be practically impossible to regulate. As it stands, there are very few regulations on software, mainly because it is so difficult to regulate. How do you stop people from using machines to DDoS servers or spread viruses? Computers are, by design, machines that can be programmed for whatever purpose the user needs; while some are more suited to certain purposes, at their core they can be adapted to many different things. It doesn’t take much programming skill to do so, either. In most cases, using an AI like a deep fake generator is as simple as downloading a neural network from a GitHub repository and running it with whatever parameters the user sees fit. Since they have practical purposes, these tools can often be downloaded easily and with little risk to the user. A computer could conceivably be programmed to recognize certain code as an AI and refuse to run it, but for a moderately skilled programmer it would be a simple matter to remove the restriction, bypass it, or just wipe the computer and install an OS without any such software. You just can’t ban AI; it won’t work.
But there might be other solutions. Perhaps we could tag videos from security cameras with a key or watermark, or encrypt their footage to make it difficult to access and edit. Realistically, a determined person would find a way around these measures, but they would deter some. The best thing we can do is become informed: look at online videos, pictures, and even audio clips with a more critical eye, watch for the signs described above, and understand that what’s on the internet might not be the truth, especially if the website it’s coming from isn’t a verified source.
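The “tag footage at the source” idea can be sketched with a message authentication code: the camera signs each recording with a secret key it holds, and any later edit to the file invalidates the tag. A hypothetical minimal version using Python’s standard library (the key and the file bytes here are purely illustrative; a real deployment would keep the key in tamper-resistant hardware):

```python
import hashlib
import hmac

# Illustrative secret; in practice this would live in the camera's hardware.
CAMERA_KEY = b"secret-key-stored-in-camera-hardware"

def sign_footage(video_bytes: bytes) -> str:
    """Produce the authentication tag the camera attaches to a recording."""
    return hmac.new(CAMERA_KEY, video_bytes, hashlib.sha256).hexdigest()

def is_untampered(video_bytes: bytes, tag: str) -> bool:
    """Verify the tag; editing even one byte changes the digest."""
    return hmac.compare_digest(sign_footage(video_bytes), tag)

original = b"\x00\x01frame-data..."
tag = sign_footage(original)
print(is_untampered(original, tag))            # → True
print(is_untampered(original + b"edit", tag))  # → False
```

This only proves the file matches what the camera signed; it can’t stop someone from re-filming a doctored video, which is why informed, skeptical viewers remain the last line of defense.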
Deep fakes are here, and they’re here to stay. This technology, and the workings behind it, aren’t going away. It may be dangerous in the wrong hands, but there are also amazing and incredible applications it could be used for. Now, more than ever, people will need to learn that they can’t trust everything they see on the internet.