How Neural Networks generate Visual Art from inspiration

Since my last blog post on Google Translate, I have been reading through earlier articles on Google's Research Blog. Their work on generative AI particularly caught my eye: attempts at building models that create art and imagery using deep learning.

Announced back in 2015, DeepDream has fascinated a lot of people with its ability to interpret images in unexpected ways, 'dreaming up' complicated visuals where none exist.

[Image: an example of DeepDream's 'dreamed-up' imagery]

Speaking of creating beautiful pictures, apps like Prisma and DeepForger have transformed user-provided photos in the manner of famous art styles to produce some stunning work.

[Image: a photograph re-rendered in the style of a Picasso painting]

In this post, I attempt to give an intuitive explanation of this paper: A Neural Algorithm of Artistic Style by Gatys, Ecker and Bethge. The aim of this work is pretty similar to what Prisma actually does, i.e. combining the content of one image with the artistic style of another to fabricate a new image. Along the way, we will also get a glimpse of how DeepDream works.

Convolutional Neural Networks

Before we delve into creating images, let's get a high-level understanding of how deep learning typically understands them. Convolutional Neural Networks (CNNs) are the state of the art when it comes to image analysis. Assuming you know what a basic Neural Network is, here's a simplified depiction of a Convolutional Network:

[Image: a simplified Convolutional Network – two Convolution + Pooling layers followed by a fully-connected classifier]

Layers 1 & 2 are what make CNNs special; the final ‘classifier’ is just a standard fully-connected network.

Layers 1 and 2 each perform two different operations on their input:

  1. Convolution
  2. Pooling

In the Convolution step, we compute a set of Feature Maps using the previous layer. A Feature Map typically has the same dimensions as the input 'image', but there's a difference in the way its neurons are connected to the preceding layer. Each neuron is only connected to a small local area around its position (see image). What's more, the set of weights that every neuron uses is the same. This set of shared weights is also called a filter.

[Image: a Feature Map neuron connected to a small local region of the previous layer via a shared filter]

Intuitively, you can say that each node in the Feature Map is essentially looking for the same concept, but in a limited area. This gives CNNs a very powerful trait: the ability to detect features irrespective of their position in the actual image. Since every neuron is trained to detect the same entity (shared weights), one or another of them will fire if the corresponding object happens to be in the input – irrespective of its exact location. Also worth noting: neighboring neurons in the Map analyze partially overlapping portions of the previous layer, so we haven't really done any hard 'segmentation'.
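To make the idea of a shared filter concrete, here is a minimal NumPy sketch (purely illustrative, not taken from the paper) that builds a single Feature Map by sliding one filter over an image; the filter values and image size are arbitrary:

```python
import numpy as np

def feature_map(image, filt):
    """Build one Feature Map by sliding a single shared filter over the image.
    Every output neuron uses the same weights (the filter), but only looks
    at a small local patch of the input around its own position."""
    h, w = image.shape
    fh, fw = filt.shape
    out = np.zeros((h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + fh, j:j + fw]
            out[i, j] = np.sum(patch * filt)   # inner product with the shared weights
    return out

# A toy 3x3 filter that responds to vertical edges, applied to a random 'image'
image = np.random.rand(8, 8)
vertical_edge = np.array([[1., 0., -1.],
                          [1., 0., -1.],
                          [1., 0., -1.]])
print(feature_map(image, vertical_edge).shape)   # (6, 6)
```

Because the same filter is reused at every position, a vertical edge anywhere in the image produces a strong response somewhere in the map – which is exactly the translation-invariance described above.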

In the set of Feature Maps at a particular level, each one looks for its own concept, learnt during training. As you go higher up the layers, these sets of Maps look for progressively higher-level objects. The first set (in the lowest layer) might look for lines, circles and curves, the next one might detect the shapes of eyes, noses, etc., while the topmost layers will ultimately understand complete faces (an over-simplification, but you get the idea). Something like this:

[Image: Feature Maps at successive layers detecting edges and curves, then parts like eyes and noses, then complete faces]

Pooling – You can think of Pooling as a sort of compression operation. What we basically do is divide each Feature Map into a set of non-overlapping ‘boxes’ and replace each box with a representative based on the values inside it. This representative could either be the maximum value (called Max-Pooling) or the mean (called Average-Pooling). The intuition behind this step is to reduce noise and retain the most interesting parts of the data (or summarize it) to provide to the next layer. It also allows the future layers to analyze larger portions of the image without having to increase filter size.

[Image: a Pooling operation summarizing non-overlapping regions of a Feature Map]
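Here is a small NumPy sketch of Max-Pooling over non-overlapping 2x2 boxes (the box size is an arbitrary choice for illustration):

```python
import numpy as np

def max_pool(feature_map, box=2):
    """Divide the map into non-overlapping box x box regions and keep only
    the maximum value of each region (Max-Pooling)."""
    h, w = feature_map.shape
    h, w = h - h % box, w - w % box                 # drop any ragged border
    trimmed = feature_map[:h, :w]
    # group the map into (row-block, box, col-block, box) and take each block's max
    return trimmed.reshape(h // box, box, w // box, box).max(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool(fm))
# [[ 5.  7.]
#  [13. 15.]]
```

Swapping `.max(...)` for `.mean(...)` gives Average-Pooling instead.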

Typical CNNs used in deep learning have multiple such Convolution + Pooling layers, each caring less and less about the actual pixel values and more and more about the general content of the image. In a typical scenario, Feature Maps at Layer N+1 take inputs from all the compressed/pooled maps of Layer N. Moreover, the number of Feature Maps at each layer is not a constant, and is usually decided by trial and error (as are most design decisions in Machine Learning).
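Stacked up, the whole thing might look like the toy network below (a hedged sketch in TensorFlow/Keras; the layer count and the 16/32/64 Feature Maps per layer are arbitrary choices, not the architecture used in the paper):

```python
import tensorflow as tf

# A toy stack of Convolution + Pooling layers, followed by a small classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', padding='same',
                           input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(32, 3, activation='relu', padding='same'),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 3, activation='relu', padding='same'),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),                      # the final 'classifier' part
    tf.keras.layers.Dense(10, activation='softmax')
])
model.summary()
```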

Recreating the Content of an Image

Neural networks in general have a very handy property: the ability to work in reverse (well, sort of). Basically, "How do I change the current input so that it yields a certain output?" Let's see how.

Consider a CNN C, trained to recognize animals in input images. Given a genuine photo of a dog, the CNN might be able to classify it correctly by virtue of its convolutional layers and the final classifier. But now suppose I show it an image of just…clouds. Forget the final classifier, the intermediate layers are more interesting here. Since C was originally trained to look for features of animals, that is exactly what it will try to do here! It might interpret random clouds and shapes as animals/parts of animals – a form of artificial pareidolia (the psychological phenomenon of perceiving patterns where none exist).

You can actually visualize what a particular layer of the CNN interprets from the image. Suppose the original cloud-image was I_c:

[Image: the cloud photograph I_c]

Say that at a certain level l of C, the Feature Maps gave an output F_l based on I_c.

What we will do now, is provide C with a white-noise image I_n:

[Image: the white-noise image I_n]

This sort of works like a blank slate for C, since it contains no real information to interpret (though C can still 'see' patterns, albeit very vaguely). Now, using Gradient Descent, we can make C modify I_n so that it yields an output close to F_l at level l.

What this essentially does is iteratively shift the pixel values of I_n until its output at l is similar to that of I_c. One key point: even at the end of this process, I_n will not really become the same as I_c. Think about it – you have recreated I_c based on the CNN's interpretation of I_c, which involves a lot of intermediate convolutions and pooling. The higher the level l you choose for re-creating the image, the deeper the pareidolia based on the CNN's training – or the more 'abstract' the interpretations.
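The loop itself is quite short. Here is a hedged TensorFlow sketch, assuming `layer_model` is a model that maps an input image to the Feature Map outputs F_l at the chosen level (both `layer_model` and `recreate` are names made up for illustration):

```python
import tensorflow as tf

def recreate(layer_model, content_image, steps=500, lr=0.01):
    """Optimize a white-noise image until its activations at level l
    match those of the content image."""
    target = layer_model(content_image)                            # F_l for I_c (fixed)
    noise = tf.Variable(tf.random.uniform(content_image.shape))    # I_n, the thing we optimize
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            current = layer_model(noise)                           # F_l for the evolving I_n
            loss = tf.reduce_mean(tf.square(current - target))
        grad = tape.gradient(loss, noise)
        opt.apply_gradients([(grad, noise)])   # shift the pixels of I_n, not the network's weights
    return noise
```

Note that the CNN's weights never change here; Gradient Descent only touches the pixels of I_n.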

In fact, this is pretty similar to what DeepDream does for understanding what a deep CNN has ‘learnt’ from its training. The cloud image I showed earlier was indeed used with a CNN trained to recognize animals, leading to some pretty weird imagery:

[Images: DeepDream's animal-like interpretations of the cloud photograph]

Now, the paper we use as reference wants to recreate the content of an image pretty accurately, so how do we avoid such misinterpretation of shapes? The answer lies in using a powerful CNN trained to recognize a wide variety of objects – like VGGNet, developed by Oxford's Visual Geometry Group (VGG). VGGNet is freely available online, pre-trained and ready to use (TensorFlow example).
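For instance, a pre-trained VGG19 and one of its intermediate layers can be pulled out in a few lines with tf.keras (the choice of 'block4_conv2' as the content layer is just one reasonable option):

```python
import tensorflow as tf

# Pre-trained VGG19 with the final classifier chopped off; we only need
# the convolutional Feature Maps.
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False   # the network's weights stay fixed; only the image will change

content_layer = 'block4_conv2'
feature_extractor = tf.keras.Model(inputs=vgg.input,
                                   outputs=vgg.get_layer(content_layer).output)
```

A model like `feature_extractor` is exactly what the `layer_model` placeholder in the earlier sketch stands for.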

Recreating the Style of an Image

In the last section, we saw that the output from Feature Maps at a certain level (F_l) could be used as a ‘goal’ to recreate an image with conceptually similar content. But what about style or texture?

Intuitively speaking, the style of an image is not so much about the actual objects in it as about the co-occurrence of features/shapes across the overall visual (Reference). This idea is quantified by the Gramian matrix with respect to the Feature Maps: G(F_l).

Suppose we have n different Feature Maps at level l of CNN C. G(F_l) is a matrix of dimensions n x n, with the element at position [i, j] being the inner product between (the flattened) Feature Maps i and j. Quoting an answer from this Stack-Exchange question, "the inner product between x and y is indicative of how much of y could be described using x". Essentially, in this case, it quantifies how similar the activation patterns of Feature Maps i and j are ("do triangles and circles occur together in this image?").

Thus, G(F_l) is used as the Gradient-Descent ‘goal’ instead of F_l while re-creating the artistic style of a photo/image.
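Computing G(F_l) is a one-liner once the Feature Maps are flattened. A minimal sketch (the (height, width, n) layout matches what a tf.keras layer outputs for a single image):

```python
import tensorflow as tf

def gram_matrix(feature_maps):
    """G(F_l): flatten each of the n Feature Maps into a vector, then take
    all pairwise inner products, giving an n x n matrix."""
    n = feature_maps.shape[-1]
    flat = tf.reshape(feature_maps, (-1, n))        # shape (height*width, n)
    return tf.matmul(flat, flat, transpose_a=True)  # shape (n, n); entry [i, j] = <F_i, F_j>
```

In practice the matrix is often divided by the number of elements in each map, so that layers of different sizes contribute comparably.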

The following stack shows style (not content) recreations of the painting Composition VII by Kandinsky. As you go lower, the images are based on progressively higher/deeper layers of the CNN:

[Image: style reconstructions of Composition VII from progressively deeper layers of the CNN]

As you will notice, higher layers tend to reproduce more complex and detailed strokes from the original image. This could be attributed to the capture of more high-level details by virtue of feature-extraction and pooling in the Convolutional Network.

Combining Content and Style from two different Images

That brings us to the final part – combining the above two concepts to achieve something like this:

[Image: the content of a photograph combined with the artistic style of a painting]

Gradient Descent always considers a target ‘error function’ to minimize while performing optimization. Given two vectors x and y, let this function be denoted by \Lambda(x, y).

Suppose you want to generate an image that has the content of image I_c in the style of image I_s. The white-noise image you start out with is I_n. Let F^{I} be the output given by a certain set of Feature Maps based on image I.

Now, if you were only looking to recreate content from I_c, you would be minimizing:

\Lambda(F^{I_n}, F^{I_c})

If you were only interested in the style from I_s, you would minimize:

\Lambda(G(F^{I_n}), G(F^{I_s}))

Combining the two, you get a new function for minimizing:

\alpha*\Lambda(F^{I_n}, F^{I_c}) + \beta*\Lambda(G(F^{I_n}), G(F^{I_s}))

\alpha and \beta are basically the weights you give to the content and style respectively.
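Plugging the pieces together, the combined objective looks something like the sketch below (it reuses the `gram_matrix` sketch from the previous section, and `features` is an assumed function returning F^{I} for an image; in the paper the style term is actually summed over several layers, but one layer keeps the sketch short):

```python
import tensorflow as tf

def total_loss(features, noise_img, content_img, style_img, alpha=1.0, beta=1e3):
    """Weighted sum of a content term and a style term."""
    F_n, F_c, F_s = features(noise_img), features(content_img), features(style_img)
    content_loss = tf.reduce_mean(tf.square(F_n - F_c))          # Lambda(F^{I_n}, F^{I_c})
    style_loss = tf.reduce_mean(
        tf.square(gram_matrix(F_n) - gram_matrix(F_s)))          # Lambda(G(F^{I_n}), G(F^{I_s}))
    return alpha * content_loss + beta * style_loss
```

Minimizing this with the same pixel-level Gradient Descent loop as before yields an image with I_c's content rendered in I_s's style.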

The tiles shown below depict output from the same convolutional layer, but with higher values of \alpha / \beta as you go to the right:

[Image: outputs from the same layer, with increasing \alpha/\beta from left to right]

Pretty cool, isn’t it?


7 thoughts on “How Neural Networks generate Visual Art from inspiration”

  1. Hi Sachin,
    Thanks for the pretty good blog. I have a question as below:
    Can I regard G(F_l) as the feature that represents the style info of an image, and F_l as the content info of the image?

      1. Recently, I came across one problem: I have the G(F_l) of one artwork, but it is a matrix. How can I transform it into a one-dimensional vector and still keep its ability to represent style?

      2. You can just append all the rows one after the other (see the short sketch after this thread). The information in the data will remain the same. Just remember to do the same across all layers :-).

      3. I found that the paper used 5 layers to calculate the gradient and added them up. I do not need to calculate the gradient, I just want to get the matrix that represents the style. Do I need to pick the matrices of all 5 layers, or can I just pick the matrix of one layer, such as conv5_1?
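In case it helps, here is a quick NumPy sketch of the flattening suggested in the second reply above (the 64 x 64 size is just a stand-in):

```python
import numpy as np

gram = np.random.rand(64, 64)      # stand-in for G(F_l) at one layer
style_vector = gram.flatten()      # rows appended one after the other
print(style_vector.shape)          # (4096,)
# With several layers, flatten each layer's matrix and concatenate the results.
```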
