When you play video games, you are still aware that the world in which you play is not the ‘real’ world in which you live. Your eyes find ways to distinguish between what is computer-generated imagery and what isn’t. If we looked at a series of images, we could pick out the real from the fake. But what if we couldn’t? What if our deep learning algorithms became so advanced that they could imitate the art of Van Gogh? Would you believe it? Would you trust it?
This is precisely what some machine learning programmers are trying to achieve. They want to create AI that can essentially copy and recreate the real world in such detail that the human eye may not be able to tell fact from fiction. To make this possible, they use Generative Adversarial Networks, or GANs for short.
How do GANs work?
A GAN is a pair of neural networks, an architecture first developed by Ian Goodfellow back in 2014. The two networks essentially compete against each other for the best outcome in a game-like series of back and forth moves. One way to explain this would be to think about vegan meat producers trying to imitate the real thing. Stay with me on this one.
So, let’s say the vegan meat producers are the generators, and they are trying to persuade the general public, the discriminators, that their meat is just like real meat. First, the vegan producers would identify some characteristics of real meat and incorporate them into their recipe. Then they would pass the result over to the discriminators. Now, the general population (the discriminators) already knows what real meat should be like, so they would be able to tell pretty easily that the meat is fake. They would send it back, stating that it is fake. When it comes back to the vegan producers, they work out what else makes meat, well, meat. The cycle continues over numerous iterations, until the generators, the vegan meat makers, can pass their product off as real meat.
Back to the technical wording: a GAN works by one neural network, the generator, creating new data that the other neural network, the discriminator, classifies as real or fake. This happens over many iterations and can take many hours before it succeeds. Both networks learn from each other: as the generator learns to mimic the characteristics of the real data, the discriminator learns more about the nuances that divide the real from the fake. The generator is given a label, in our case fake meat, and tries to make something that fits that label; the data can be images, sound, or text. The discriminator is basically the opposite: it already knows the characteristics of real meat, and puts labels on the data it receives.
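To make that loop concrete, here is a toy, illustrative sketch in Python with NumPy (my choice of library, not something from the article). The ‘real meat’ is just a stream of numbers drawn around 4, the generator is two learnable scalars that reshape random noise, and the discriminator is a one-feature logistic regression. Real GANs use deep networks and careful tuning, and even a toy loop like this can oscillate rather than cleanly converge, so treat it as a sketch of the alternating updates, not a recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a normal distribution centred at 4.
def sample_real(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator: turns noise z into a + b * z (two learnable scalars).
gen = {"a": 0.0, "b": 1.0}
# Discriminator: logistic regression, D(x) = sigmoid(w * x + c).
disc = {"w": 0.0, "c": 0.0}

lr = 0.05
for step in range(2000):
    z = rng.standard_normal(32)
    fake = gen["a"] + gen["b"] * z
    real = sample_real(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(disc["w"] * x + disc["c"])
        grad_logit = p - label            # gradient of cross-entropy loss w.r.t. logit
        disc["w"] -= lr * np.mean(grad_logit * x)
        disc["c"] -= lr * np.mean(grad_logit)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    fake = gen["a"] + gen["b"] * z
    p = sigmoid(disc["w"] * fake + disc["c"])
    grad_logit = p - 1.0                  # the generator wants the "real" label
    grad_fake = grad_logit * disc["w"]    # chain rule back through the discriminator
    gen["a"] -= lr * np.mean(grad_fake)
    gen["b"] -= lr * np.mean(grad_fake * z)
```

The two inner updates are the whole adversarial trick: the discriminator’s gradient step sharpens its real-versus-fake boundary, and the generator’s step uses that same discriminator, via the chain rule, to nudge its fakes toward whatever currently looks real.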
In a way, this is how criminals and the law interact: as the criminal gets better at fooling the law, the law gets better at detecting the criminal. There is much potential for GANs, from chatbots to video and game development, but there is something that worries me. Will humans still be able to distinguish fact from fiction?
What if generative adversarial networks become so advanced that they can fool the human eye? Think about the innovation of video capture for insurance claims: the validity of the video itself could be compromised. In an already hyperbolic society, do we really want technology that can mimic the truth? Worse yet, do we seriously want to make it so believable that we cannot distinguish it from reality?