elsehow

GANs move the human's problem from NP to P

Nick Merrill
Dec 2, 2020

Here I claim that GANs, and perhaps machine learning in general, let humans outsource problems that are in NP (e.g., folding a protein, writing a symphony) to a computer by instead solving a task that is in P (designing a machine learning pipeline). This post uses GANs, generative adversarial networks, to illustrate the point.

How do GANs work?

You may have heard of deepfakes. The concept is that a computer can generate something extremely realistic—perhaps making someone say something they didn't say.

People make deepfakes using GANs, or generative adversarial networks. GANs are a fancy way of talking about two machine learning models locked in a competitive match against each other.

To illustrate how GANs work, imagine an art critic and an art forger engaged in a one-on-one competition. The art forger wants to fool the art critic by producing some excellent art forgeries, say, some new Georgia O'Keeffes.

Now, we probably know intuitively that the art critic's job is a bit easier than the forger's. As they say, "everyone's a critic." That saying hints at an age-old observation: criticism is in P, whereas producing something miraculous is in NP. (See my previous discussion for background on P and NP.)

So here's the game. Our forger, whose job is in NP, is going to try to make some Georgia O'Keeffe paintings. We, the outside observers, are going to mix the forger's paintings up with some real paintings. Then the critic, whose job is in P, is going to try to figure out whether each of these paintings is real or fake.

The first round passes, and the forger does quite poorly. The critic is able to pick out all of the forgeries. But this failure is instructive: both the forger and the critic can evaluate how well they did and learn from their mistakes. The art forger is going to get a little bit better at forging art, and the art critic is going to get a little bit better at distinguishing real Georgia O'Keeffes from fake ones.

Now, they're going to keep doing this, round after round, learning from their mistakes, until finally, at some point, the critic is just guessing at chance. The critic can no longer determine which pieces are real and which are fake. At this point, we can say that the forger is as good as he can be. The critic has effectively trained the forger to be good enough to fool anyone. [1]
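To make the game concrete, here's a minimal sketch of that training loop in PyTorch. Everything here is illustrative: the model sizes, dimensions, and hyperparameters are assumptions, not a canonical implementation.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not canonical values).
LATENT_DIM, DATA_DIM = 64, 784  # e.g., 28x28 images, flattened

# The forger: turns random noise into a candidate "painting."
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)

# The critic: scores how likely a sample is to be real.
critic = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=2e-4)

def training_round(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1. The critic learns to separate real paintings from forgeries.
    noise = torch.randn(batch_size, LATENT_DIM)
    forgeries = generator(noise).detach()  # don't update the forger here
    c_loss = (bce(critic(real_batch), real_labels)
              + bce(critic(forgeries), fake_labels))
    c_opt.zero_grad()
    c_loss.backward()
    c_opt.step()

    # 2. The forger learns to make the critic call its forgeries "real."
    noise = torch.randn(batch_size, LATENT_DIM)
    g_loss = bce(critic(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return c_loss.item(), g_loss.item()

# Repeat round after round. Training "ends" when the critic's
# accuracy on forgeries hovers around chance (~50%).
```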

From NP to P

GANs tackle problems of generation, which are in NP. But to do so, they require a critic, whose job is in P. We have a generator (the forger) we're trying to train, whose problem is in NP, and we have a critic whose problem is a little easier: it's in P.

The magic of GANs is that they allow us, the human programmers, to get a computer to do a problem that's in NP. But all we, the operators, have to do is solve a problem that's in P: just make a good critic!

To illustrate this point, let's say we notice something coming out of the GAN that isn't quite right: the generated faces have artifacts that give them away, allowing us to pick them out from the real faces. Now, we can use that information to improve our critic a bit, manually if need be. That problem is in P. Once we do that, we can get the computer to take on the NP problem of generating a better face, but we, the humans, did not have to do anything in NP! We only had to solve a P problem.
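As a sketch of what "improving the critic manually" could look like, continuing the PyTorch sketch above: suppose we notice that forgeries carry telltale high-frequency artifacts. We could wrap the learned critic with a hand-written check. The artifact_score heuristic below is entirely hypothetical, a stand-in for whatever clue we actually spotted.

```python
import torch

def artifact_score(images):
    # Hypothetical hand-written heuristic: measure high-frequency
    # energy by differencing neighboring pixel values. A real
    # artifact detector would be more sophisticated.
    diffs = images[:, 1:] - images[:, :-1]
    return diffs.abs().mean(dim=1, keepdim=True)

def improved_critic(images):
    # Combine the learned critic's score (from the sketch above)
    # with the hand-written check: a high artifact score drags
    # the "realness" score down.
    learned = critic(images)
    penalty = artifact_score(images)
    return torch.clamp(learned - penalty, 0.0, 1.0)
```

Writing that check is a P-style task for us; retraining the generator against the improved critic is where the NP work happens, and the computer does it.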

So what?

There's no way around a problem's NP-ness. Whether producing an art forgery or a symphony, some things just are in NP. But as a human programmer, sometimes one only has time to do a problem in P.

GANs illustrate a type of workaround. They let us, the humans, complete a problem in NP while only solving a problem in P ourselves. This theme could be general to machine learning! [2]

Now what?

Back in the days before modern machine learning, there was good-old-fashioned AI (GOFAI). Hubert Dreyfus described why it was overhyped.

Today, there is no great critique of machine learning. But there sure is a lot of hype. I think before a critique, you first have to describe what's going on. And I think what's going on is that, suddenly, people can outsource problems that are in NP. At least at the highest level.

Now, go find a good critique!

Footnotes:

[1] Now, maybe our critic isn't so perfect. Maybe after a while we're going to notice that the forger leaves some telltale clues. In the case of deepfakes of faces, we do see some reliable artifacts in the images. Maybe we can use those human observations to improve the critic.

But, as soon as we have a better critic, we can immediately take this critic back to the forger and play the game again. We can do this forever, making a better and better forger until there are no clues left. A perfect forger. 

Phrasing that a little bit more precisely, we can train a generator function to produce synthetic samples that are drawn from the same distribution as real samples. In reality, we'll never find the exact same distribution; we can never actually collect all photos of faces in the entire world, and even if we did, it would be out of date in a second. But, we can approximate this distribution so well that the end result is a generator whose outputs are indistinguishable from reality to even the best critics.
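For reference, this goal is usually written as the minimax objective from Goodfellow et al.'s original 2014 GAN paper, where $G$ is the generator (forger) and $D$ is the discriminator (critic):

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

At the optimum, $G$'s output distribution equals $p_{\text{data}}$ and the best possible $D$ outputs 1/2 everywhere: guessing at chance, exactly the stopping point described above.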

[2] As Ben Barad said in conversation about this, "It's a democratization of the ability to do arbitrary approximation instead of solving directly, which is an oldddd mathematical approach." All I'd add is that I think the brain does arbitrary approximations; that machinery can itself produce systems that solve directly.
