Quantum and classical computers handle time differently. What does that mean for AI?

As humans, we take time for granted. We’re born with an innate understanding of the passage of events because it’s essential to our survival. But AI suffers from no such congenital condition. Robots do not understand the concept of time.

State-of-the-art AI systems only understand time as an implicit construct (we program them to output time relative to a clock) or as an explicit mathematical representation (we use the time it takes to perform certain calculations to inform their understanding of the passage of events). But an AI has no way of grasping the concept of time itself as we do.

Time doesn’t exist in our classical reality in a physical, tangible form. We can check our watch or look at the sun or try to remember how long it’s been since we last ate, but those are all just measurements. The actual passage of time, in the physics sense, is far less settled.

In fact, researchers have shown that time’s arrow – a bedrock concept in the classical view of time – doesn’t really apply on quantum computers. Classical physics is subject to a concept called causal asymmetry. Basically, if you throw a bunch of confetti in the air and take a picture when each piece is at its apex, it’ll be easier for a classical computer to determine what happens next (where the confetti is going) than what happened before (which direction each piece would travel going backwards through time).

Quantum computers can perform both calculations with equal ease, indicating they do not suffer from causal asymmetry. Time’s arrow is only relevant to classical systems – which the human mind appears to be, even though our brains are almost certainly quantum constructs.

Things get most interesting when you add artificial intelligence to the mix. As mentioned previously, AI doesn’t have a classical or quantum understanding of time: time is irrelevant to a machine.

But experts such as Gary Marcus and Ernest Davis believe an understanding of time is essential to the future of AI, especially as it relates to “human-level” artificial general intelligence (AGI). The duo made that case in an op-ed for The New York Times.

While their argument is intended as a sweeping indictment of relying on bare-bones deep learning systems and brute force to achieve AGI, it also serves as a litmus test for where the computer science community stands when it comes to AI.

Currently, we’re building classical AI systems in the hope that they’ll one day be robust enough to mimic the human mind. This is a technology endeavor, meaning computer experts are continuously pushing the limits of what modern hardware and software can do.

The problem with this approach is that it’s creating a copy of a copy. Quantum physics tells us that, at the very least, our understanding of time is likely different from what might be the ultimate universal reality.

How close can robots ever come to imitating humans if they, like us, only think in classical terms? Perhaps a better question is: what happens when AI learns to think in quantum terms while we humans are still stuck with our classical interpretation of reality?


How programmers are using AI to make deepfakes — and even detect them


In 2018, a big fan of Nicolas Cage showed us what The Fellowship of the Ring would look like if Cage starred as Frodo, Aragorn, Gimli, and Legolas. The technology he used was a deepfake, a type of application that uses artificial intelligence algorithms to manipulate videos.

Deepfakes are mostly known for their capability to swap the faces of actors from one video to another. They first appeared in late 2017 and quickly rose to fame after they were used to modify adult videos to feature the faces of Hollywood actors and politicians.

In the past couple of years, deepfakes have caused much concern about the rise of a new wave of AI-doctored videos that can spread fake news and enable forgers and scammers.

The “deep” in deepfake comes from the use of deep learning, the branch of AI that has become very popular in the past decade. Deep learning algorithms roughly mimic the experience-based learning capabilities of humans and animals. If you train them on enough examples of a task, they will be able to replicate it under specific conditions.

The basic idea is to train a set of artificial neural networks, the main component of deep learning algorithms, on multiple examples of the actor and target faces. With enough training, the neural networks will be able to create numerical representations of the features of each face. Then all you need to do is rewire the neural networks to map the face of the actor onto the target.

Autoencoders

Deep learning algorithms come in different formats. Many people think deepfakes are created with generative adversarial networks (GANs), a type of deep learning algorithm that learns to generate realistic images from noise. And it’s true that there are variations of GANs that can create deepfakes.

But the main type of neural network used in deepfakes is the “autoencoder.” An autoencoder is a special type of deep learning algorithm that performs two tasks. First, it encodes an input image into a small set of numerical values. (In reality, it could be any other type of data, but since we’re talking about deepfakes, we’ll stick to images.) The encoding is done through a series of layers that start with many variables and gradually become smaller until they reach a “bottleneck” layer. The bottleneck layer contains the target number of variables.

Next, the neural network decodes the data in the bottleneck layer and recreates the original image.

During the training, the autoencoder is provided with a series of images. The goal of the training is to find a way to tune the parameters in the encoder and decoder layers so that the output image is as similar to the input image as possible.
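To make the idea concrete, here is a minimal sketch of an image autoencoder in PyTorch. The layer sizes, the 64x64 face crops, and the training step are illustrative assumptions for this article, not the architecture of any particular deepfake tool.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, bottleneck=128):
        super().__init__()
        # Encoder: flatten a 64x64 RGB face crop and squeeze it down
        # to a small "bottleneck" vector of numerical values.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, bottleneck),
        )
        # Decoder: expand the bottleneck vector back into an image.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
            nn.Unflatten(1, (3, 64, 64)),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(batch):
    # Training goal: make the reconstruction as close as possible
    # to the original input image.
    optimizer.zero_grad()
    reconstruction = model(batch)
    loss = loss_fn(reconstruction, batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Real deepfake tools use convolutional layers and more elaborate losses, but the encode-bottleneck-decode structure is the same.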

The narrower the problem domain, the more accurate the results of the autoencoder become. For instance, if you train an autoencoder only on images of your own face, the neural network will eventually find a way to encode the features of your face (mouth, eyes, nose, etc.) in a small set of numerical values and use them to recreate your image with high accuracy.

You can think of an autoencoder as a super-smart compression-decompression algorithm. For instance, you can run an image through the encoding part of the neural network and keep only the compact bottleneck representation for storage or fast transfer over a network. When you want to view the image, you only need to run the encoded values through the decoding half to reconstruct it.
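Carrying over the model from the sketch above (the tensor shapes are the same illustrative assumptions), the compression-decompression idea looks like this:

```python
face = torch.rand(1, 3, 64, 64)   # stand-in for a cropped face image
code = model.encoder(face)        # 128 numbers instead of 12,288 pixel values
restored = model.decoder(code)    # decoded back into a 64x64 image
```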

But there are other things that the autoencoder can do. For instance, you can use it for noise reduction or generating new images.

Deepfake autoencoders

Deepfake applications use a special configuration of autoencoders. In fact, a deepfake generator uses two autoencoders, one trained on the face of the actor and another trained on the target.

After the autoencoders are trained, you swap their decoder halves, and something interesting happens. The autoencoder of the target takes video frames of the target and encodes the facial features into numerical values at the bottleneck layer. Then, those values are fed to the decoder layers of the actor autoencoder. What comes out is the face of the actor with the facial expression of the target.

In a nutshell, the autoencoder grabs the facial expression of one person and maps it onto the face of another person.
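In practice, tools such as Faceswap typically train one shared encoder with two decoders so the bottleneck values from both faces live in the same space; the sketch below follows the two-autoencoder description above, with the swap happening at generation time, and reuses the illustrative Autoencoder class from earlier.

```python
# One autoencoder per person, reusing the Autoencoder class above.
actor_ae = Autoencoder()   # assumed already trained on the actor's face crops
target_ae = Autoencoder()  # assumed already trained on the target's face crops

def swap_face(target_frame):
    # Encode the target's facial expression into bottleneck values...
    expression = target_ae.encoder(target_frame)
    # ...then decode those values with the actor's decoder, producing
    # the actor's face wearing the target's expression.
    return actor_ae.decoder(expression)
```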

Training the deepfake autoencoder

The concept behind deepfakes is very simple. But training one requires considerable effort. Say you want to create a deepfake version of Forrest Gump that stars John Travolta instead of Tom Hanks.

First, you need to assemble the training datasets for the actor (John Travolta) and the target (Tom Hanks) autoencoders. This means gathering thousands of video frames of each person and cropping them to show only the face. Ideally, you’ll include images from different angles and lighting conditions so your neural networks can learn to encode and transfer different nuances of the faces and the environments. So you can’t just take one video of each person and crop the frames; you’ll have to use multiple videos. There are tools that automate the cropping process, but they’re not perfect and still require manual effort.
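As a rough sketch of that cropping step, the snippet below walks through a video with OpenCV, detects faces with the bundled Haar cascade, and saves fixed-size crops. The frame-skipping interval, crop size, and output naming are assumptions for illustration; real deepfake pipelines use stronger detectors, face alignment, and plenty of manual clean-up.

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(video_path, out_dir, every_nth=5):
    # out_dir is assumed to exist already.
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        index += 1
        if index % every_nth:
            continue  # skip most frames to avoid near-duplicate crops
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            crop = cv2.resize(frame[y:y + h, x:x + w], (256, 256))
            cv2.imwrite(f"{out_dir}/face_{saved:05d}.jpg", crop)
            saved += 1
    cap.release()
    return saved
```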

The need for large datasets is why most deepfake videos you see target celebrities. You can’t create a deepfake of your neighbor unless you have hours of videos of them in different settings.

After gathering the datasets, you’ll have to train the neural networks. If you know how to code machine learning algorithms, you can create your own autoencoders. Alternatively, you can use a deepfake application such as Faceswap, which provides an intuitive user interface and shows the progress of the AI model as the training of the neural networks proceeds.

Depending on the type of hardware you use, the deepfake training and generation can take from several hours to several days. Once the process is over, you’ll have your deepfake video. Sometimes the result will not be optimal and even extending the training process won’t improve the quality. This can be due to bad training data or choosing the wrong configuration of your deep learning models. In this case, you’ll need to readjust the settings and restart the training from scratch.

In other cases, there are minor glitches and artifacts that can be smoothed out with some VFX work in Adobe After Effects.

In any case, at their current stage, deepfakes are not a one-click process. They’ve become a lot better, but they still require a good deal of manual effort.

Detecting deepfakes

Manipulated videos are nothing new. Movie studios have been using them in cinema for decades. But previously, they required tremendous effort from experts and access to expensive studio gear. Though still not trivial, deepfakes put video manipulation within everyone’s reach. Basically, anyone who has a few hundred dollars to spare and the patience to go through the process can create a deepfake from their own basement.

Naturally, deepfakes have become a source of worry and are perceived as a threat to public trust. Government agencies, academic research labs, and social media companies are all engaged in efforts to build tools that can detect AI-doctored videos.

Facebook is looking into deepfake detection to prevent the spread of fake news on its social network. The Defense Advanced Research Projects Agency (DARPA), the research arm of the U.S. Department of Defense, has also launched an initiative to stop deepfakes and other automated disinformation tools. And Microsoft has recently launched a deepfake detection tool ahead of the U.S. presidential elections.

AI researchers have already developed various tools to detect deepfakes. For instance, earlier deepfakes contained visual artifacts such as unblinking eyes and unnatural skin color variations. One tool flagged videos in which people didn’t blink or blinked at abnormal intervals.
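A toy version of that blink heuristic might look like the following, assuming a separate facial-landmark step (not shown) has already produced an eye-openness score per frame; the thresholds are illustrative guesses, not values from any published detector.

```python
def looks_suspicious(eye_openness, fps=30, closed_below=0.2):
    """Flag a clip whose blinks are absent or oddly spaced.

    eye_openness: per-frame eye-aspect-ratio values from a landmark detector.
    """
    blink_frames = [i for i, v in enumerate(eye_openness) if v < closed_below]
    if not blink_frames:
        return True  # nobody goes a whole clip without blinking
    # Gaps between separate blink events, in seconds (ignore consecutive
    # frames that belong to the same blink).
    gaps = [(b - a) / fps
            for a, b in zip(blink_frames, blink_frames[1:]) if b - a > 1]
    # People typically blink every few seconds; very long gaps are a red flag.
    return any(g > 30 for g in gaps)
```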

Another more recent method uses deep learning algorithms to detect signs of manipulation at the edges of objects in images. A different approach is to use blockchain to establish a database of signatures of confirmed videos and apply deep learning to compare new videos against the ground truth.
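The signature idea, stripped to its bare bones, is a look-up against a registry of known-good videos. The sketch below uses exact file hashes purely for illustration; a real system would store perceptual fingerprints (which survive re-encoding) in a database or on a blockchain, as the research described above proposes.

```python
import hashlib

confirmed_signatures = set()  # hypothetical registry of verified videos

def register(video_bytes: bytes) -> None:
    # Record the signature of a video confirmed to be authentic.
    confirmed_signatures.add(hashlib.sha256(video_bytes).hexdigest())

def is_confirmed(video_bytes: bytes) -> bool:
    # An unknown signature doesn't prove a fake, but a match proves provenance.
    return hashlib.sha256(video_bytes).hexdigest() in confirmed_signatures
```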

But the fight against deepfakes has effectively turned into a cat-and-mouse chase. As deepfakes constantly get better, many of these tools lose their effectiveness. As one computer vision professor told me last year: “I think deepfakes are almost like an arms race. Because people are producing increasingly convincing deepfakes, and someday it might become impossible to detect them.”

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.

Containment algorithms won’t stop super-intelligent AI, scientists warn

A team of computer scientists has used theoretical calculations to argue that algorithms could not control a super-intelligent AI.

Their study addresses what Oxford philosopher Nick Bostrom calls the control problem: how do we ensure super-intelligent machines act in our interests?

The researchers conceived of a theoretical containment algorithm that would resolve this problem by simulating the AI’s behavior, and halting the program if its actions became harmful.

The study found that no single algorithm could calculate whether an AI would harm the world, due to the fundamental limits of computing, the same kind of undecidability that underlies the halting problem.

This type of AI remains confined to the realms of fantasy — for now. But the researchers note the tech is making strides towards the type of super-intelligent systems envisioned by science fiction writers.

“There are already machines that perform certain important tasks independently without programmers fully understanding how they learned it,” said study co-author Manuel Cebrian of the Max Planck Institute for Human Development.

“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

You can read the study paper in the Journal of Artificial Intelligence Research.
