Categories: Projects

Dipping the feet in the game design pond

For my wife’s birthday this year, I created a prototype of a small game-like 3D environment that she could “walk around” in using the keyboard and mouse. The idea was to have an “exhibit” for each year we have known each other, consisting of a few photos from that year as well as a short text describing major events that happened during the year.

Unfortunately, I am terrible at planning things and started a bit too late so I did not finish the game in time for the birthday.

With the help of Unity, I managed to finish the project eventually. This post is about that journey as well as some observations on Unity and game design in general.

Note: I usually make my stuff available online, e.g. on GitHub. Not this time though. This was a personal thing I created for my wife. I’m reproducing the images for this blog post with her permission :-)

Focus on the content

The prototype was created for the browser and used three.js for most of the 3D stuff. I am a fan of WebGL, and I had some prior experience from working on Photo Amaze and Zombie Hugs, so it seemed like a natural choice.

Here are two screenshots from the initial prototype:

The thick fog was added to limit the initial view and ensure that only one piece of text was visible at a time; the rest remained hidden in the fog until the player started moving forward. The fog then cleared up once the first exhibit was reached, making the scene more open.

I found photos for three out of ten exhibits and wrote the initial intro text, but the rest remained unfinished. One reason for this was that I quickly started obsessing over details like shadows and lighting instead of focusing on the core content, which was selecting good photos and writing the right text.

Missing the birthday deadline had both good and bad consequences. I was very disappointed and embarrassed that I only had an early prototype to show, but it also allowed me to step back and rethink the project.

Beyond the browser

An appealing aspect of using three.js is that everything has to be defined in code. This provides a lot of control, but I quickly realized that I was not able to iterate and tweak the experience very fast. I had wanted to get my feet wet with a more full-fledged game engine for a while, so this was a good opportunity to do that.1

After researching the pros and cons of different game engines as well as experimenting briefly with Unity, Unreal Engine, Godot and Babylon.js (spending way too long trying to make grass out of fur), I ended up sticking with Unity, because it has a native Linux editor and good platform support.2

Unity is easy to get started with and includes lots of helpful tools out of the box. For example, I had good initial impressions of the built-in terrain editor and tree creator, and it seemed very easy to set up a basic outdoor environment. The Unity asset store also has a generous offering of free assets, including a ready-made first-person controller which is very handy.

Once you scratch the surface, though, it becomes apparent that Unity is neither perfect nor complete, and that creating games is not easy. Some of the challenges I faced were fun, while others were frustrating.

The first (fun) obstacle I came across was creating the photo exhibits.

3D modeling the photo exhibit

Concept art for the “photo exhibit”… I only drew it for this blog post though :-)

As you can see in the prototype above, an “exhibit” has three wall-like structures, each with a photo. I wanted the pictures to protrude a bit from the walls, making it look like a white canvas is hanging on the wall, with the photo “painted” on top of it. The drawing on the right illustrates the idea.

I thought it would be super easy to do this in Unity. Just create two cubes (a canvas cube and a wall cube)3, flatten and stretch them a bit, and put them next to each other so they overlap.

Actually, this worked ok, but there were two problems:

  1. There was a weird flicker where the cubes touched each other.
  2. When adding the photo to the canvas cube, it showed the photo on all sides, not just the front.

I fixed the second problem by putting a “quad” — a flat surface — on top of the canvas cube (or rather, next to it). The wall structure thus consisted of three 3D objects that were technically separate from each other, and it did not look good. There was still a weird flicker, and it also felt like the wrong way to solve the problem.

Wall structure with photo canvas, created in Unity using two cubes and a quad. There is a flickering artifact at the edge of the photo, framed in red.

So I hit an early roadblock: Either I had to define the wall structure programmatically, or I had to make my own 3D model. I opted for the latter choice.

After going through some basic tutorials for Blender, I was able to create the wall structure and learn a thing or two about 3D modeling along the way. This is the result in all its simple glory:

Blender render of a wall structure aka. “photo exhibit”. I used different colors to indicate that separate materials and textures can be used for different parts of the structure.

Even though I only did very basic stuff in Blender, it felt like a big win to be able to make basic models. I also created an exhibit sign and a cylinder with one open end (to simulate a tunnel or tube). All models can be found here.

Free models are great

Besides the photos and text, I decided to also create a “display” for each exhibit. This consisted of a 3D model or effect that was either a direct or indirect reference to the year the exhibit was for. For example, I used a Big Foot model standing on top of Mt Saint Helens for the year when we visited the area.

Big Foot standing on top of a model of Mt Saint Helens erupting.

Using pre-made models was a fun and easy way to make the exhibits a bit more interesting. It took some time to find the right model, and it sometimes needed tweaking after import, but it gave me the opportunity to include visuals I could not have created on my own.

For the record, here is the list of the models I used:

All models are licensed under CC BY except for MtHelens (CC BY-NC-ND) and 15Legend (CC BY-NC).

All the models were found on Sketchfab, an online community with a lot of 3D models available either for free or for purchase. It was a nice discovery!

An extra dimension

Besides downloading models, I also researched the possibility of adding models to the scene by simply scanning my environment or specific objects.

A technique known as photogrammetry makes it possible to turn multiple photos into 3D models. I played around with an open-source tool called Meshroom which is amazingly simple to work with. Just add a lot of photos from different viewing angles, wait a few hours, and a finished 3D model comes out.

A scan of a birch log from the forest made its way into the scene:

Photogrammetry scan of a fallen birch log. The rough/spiky surface is the result of reducing the model complexity by removing polygons. The light reflections are unnatural but I kept them because they look kind of fun.

I did not get outstanding results, but it is worth noting that I took the photos with my bad phone camera and spent very little time making sure I got good shots from all angles.

It is mind-blowing that it is possible to go from 2D photos to a 3D model, and I will definitely revisit photogrammetry in the future.

Creating fake rain

A small feature I had fun creating was a super simple rain effect. There are numerous weather system plugins available for Unity (some are free), and there was even a “hose” effect available in the standard assets that kind of did the trick (it simulates spraying water). But I needed a more uniform downpour, and I really just needed something simple.

The effect was created by taking a bunch of small particles, giving them a blue/white gradient color, and applying gravity to them; that is basically it.

A simple rain effect using a Unity particle system.

I reused a texture from a water surface effect in the standard assets to give the raindrops a blue-ish appearance. The tails on the raindrops are automatically created by the particle system when using a render setting called “stretched billboard”. A bit of noise was added to the movement of the raindrops, so the rain does not fall straight down but looks slightly more natural and chaotic.

After playing around with the particle speed and size, I got the look and feel I wanted. I was expecting this to be much more complicated, so it was a nice surprise that the process was fairly straightforward.

Designing for the player

The most enjoyable aspect of creating this game-like experience was going through our old photos to find a few that represented each year as well as thinking about the various events that happened throughout the year. It was a nice trip down memory lane.

Although the photos and text tell a story that is sequential in nature, the question was whether they necessarily had to be experienced sequentially as well.

I considered two ways to handle progression through the game:

  1. Limit the initial environment with something like walls and corridors, guiding the player from exhibit to exhibit.
  2. Make the environment completely open, allowing the player to freely visit each exhibit in any order and with no restrictions.

The first option, limiting the player, would give me more control over the player’s movements and the “narrative” (if there was such a thing) of the experience, but it also felt like it would constrain the player. This can sometimes be a good technique to control pacing (a lot of games do this), but here it seemed unnecessarily constricting.

So I decided to go for option 2, the open environment, but I still wanted to provide some guidance to help the player navigate the scene. I did this by creating a dirt path that leads through the grass between the exhibits. I thought it was a nice, obvious and non-constricting way to guide the player a bit:

Aerial top view showing the entire scene. The player starts in the center. The gray lines are dirt paths that go from exhibit to exhibit.
Example of a dirt path, this one leading between the 1st and 2nd exhibit.

During the first 5-60 seconds of the game, the player is presented with the movement keys and the purpose of the game in a series of three welcome messages that show up on the screen as 3D text.

I wanted to be absolutely sure that the player could not miss the information, especially the movement keys. The way I achieved this was to add a constraint to the otherwise open environment at the initial stage of the game.

If you look closely at the aerial top view above, you might notice a long green shape at the center of the scene. This is actually a cylinder (or tunnel) floating 50 meters above the ground. The player starts the game inside the tunnel, and can only move forward and backward, ensuring that the information is difficult (but not impossible) to miss.

Furthermore, during the first 1-2 seconds, the camera is actually fixed in place, showing the movement keys while the start menu is fading out.

To make the cylinder/tunnel slightly more interesting, I painted it a bright green and used a normal map from a tree bark texture to give the impression of walking inside a tree trunk.

When the player steps over the edge of the cylinder, they land near the first exhibit.4

The player starts the game inside a green cylindrical shape, and is presented with the movement keys and other information further ahead.

The launch

I hope the above sections have provided at least some idea of how my little game-like experience turned out. I have not described everything, and there were even a few more ideas that did not make their way into the game at all, but I decided to stop the project when the core content was in a state I was satisfied with.

And then it was time to launch it, i.e. get my wife to play the game. I really wanted to see her reaction while playing, but I let her go through it by herself at her own pace.

I got quite emotional about it actually. Having revisited the memories of nice moments from the past while working on the project, I was already on a trip of nostalgia. Showing the game to my wife was the culmination of that journey, and when I heard a giggle coming from her room, I shed a little tear.

Moving forward

Even for a simple game-like experience like the one I created, there are still many little decisions that go into making it. Thinking through these decisions, playing around with solutions and seeing the result is often rewarding and interesting, and I can totally understand the appeal of working professionally with games and similar creative endeavors.

I also have a newfound appreciation for how long it takes to produce game content. Even though I am an amateur in everything that has to do with game design (except for writing code), and my project was extremely small in scope, it is still easy to see why it takes so long to create games, and why people specialize in modeling, programming, animation, sound design etc. instead of trying to do everything.

I do not think this is the last time I will dabble with creating games. I hope to be able to combine aspects of my professional work-life (data science/ML/AI) with game creation. That would be a win-win for a side-project indeed.

Continue on page 2 if you are interested in reading a bit more about my experience with Unity. If this does not sound interesting, you can just stop reading here. Thank you for making it this far :-)

Categories: Code

2 + 2 with neural networks

Some weeks ago, I was at a get-together with my old university friends that we call “the hack day”. It usually revolves around drinking lots of coffee and soft drinks, eating loads of chips and candy, and working on the occasional masterpiece project like Zombie Hugs.

At the end of the day, one of us (I cannot remember who now) mentioned how useful it would be to have a neural network that could add numbers together.

The remark was meant as a joke, but it got me thinking, and on the way home on the train, I pieced together some code for creating a neural network that could perform addition on two numbers between 0 and 9. Here’s the original code.

Warning: The rest of this post is probably going to be a complete waste of your time. The whole premise for this post is based on a terrible idea and provides no value to humanity. Read on at your own risk :-)

Making addition more interesting

It is worth mentioning that it is actually trivial to make a neural network add numbers. For example, if we want to add two numbers, we can construct a network with two inputs and one output, set the weights between input and output to 1, and use a linear activation function on the output, as illustrated below for 20 + 22:

Illustration of neural network for adding two numbers, with an example for 20 + 22.
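To make this concrete, here is a minimal sketch of such a fixed-weight “adder” in Python/numpy (my own illustration for this post, not code from the original experiment):

```python
import numpy as np

# Two inputs, one output, both weights fixed to 1, linear (identity) activation.
# The network adds by construction rather than by learning.
weights = np.array([1.0, 1.0])

def predict(a, b):
    # output = 1 * a + 1 * b
    return float(np.dot(weights, [a, b]))

print(predict(20, 22))  # 42.0
```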

It is not really the network itself that performs addition. Rather, it just takes advantage of the fact that a neural network uses addition as a basic building block in its design.

Things get more interesting if we add a hidden layer and use a non-linear activation function like a sigmoid, thereby forcing the output of the hidden layer to be a list of numbers between 0 and 1. The final output is still a single number, which is a linear combination of the outputs of the hidden layer. Here is a network with 4 hidden nodes as an example:

Example of neural network with one hidden layer with four nodes (h1-h4).
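In Keras, this kind of model could be defined roughly as follows (a sketch of the setup described above; the layer sizes and optimizer are illustrative and not necessarily what my original code used):

```python
from tensorflow import keras

model = keras.Sequential([
    # Hidden layer h1-h4: the sigmoid squashes each node's output into (0, 1).
    keras.layers.Dense(4, activation="sigmoid", input_shape=(2,)),
    # Output: a single number, a linear combination of the hidden outputs.
    keras.layers.Dense(1, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")
```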

When we ask a computer to perform 2 + 1, the computer is really doing 10 + 01 (2 is 10 in binary and 1 is 01). I had this thought in the back of my mind that the neural network might “discover” an encoding in the hidden layer which was close to the binary representation of the input numbers.

For example, for the 4-node hidden layer network illustrated above, we could imagine the number from input1 being encoded in h1 and h2 and the number for input2 being encoded in h3 and h4.

For 2 + 1, the four hidden nodes would then be 1, 0, 0 and 1, and the final output would convert binary to decimal (2 and 1) and add the numbers together to get 3 as result:

Example calculation of 2 + 1 with assumed binary representation in hidden layer.

Since the hidden nodes are restricted to be between 0 and 1, it seemed intuitive to me that a binary representation of the input would be a very effective way of encoding the data, and the network would thus discover this effective encoding, given enough data.

False assumptions

To be honest, I did not think this through very thoroughly. I should have realized that:

  1. The sigmoid function can produce any decimal number between 0 and 1, thus allowing for a very wide range of values. Many of these would probably work well for addition, so it is unlikely it would “choose” to produce zeros and ones only.
  2. It is unclear how a linear combination of input weights would produce the binary representation in the first place.

That second point is important. For the 2-bit (2 nodes) encoding, we would have to satisfy these equations (where S(x) is the sigmoid function and w1 and w2 are the weights from the input node to the 2-node hidden layer):

Input number | Binary encoding | Equations
-------------|-----------------|----------
0            | 0,0             | S(w1 · 0) ≈ 0, S(w2 · 0) ≈ 0
1            | 0,1             | S(w1 · 1) ≈ 0, S(w2 · 1) ≈ 1
2            | 1,0             | S(w1 · 2) ≈ 1, S(w2 · 2) ≈ 0
3            | 1,1             | S(w1 · 3) ≈ 1, S(w2 · 3) ≈ 1

Which weights w1 and w2 would satisfy these equations? Without providing a formal proof, I actually think this is impossible. For example, S(w2 · 1) ≈ 1 and S(w2 · 2) ≈ 0 cannot both be satisfied: the first requires w2 to be a large positive number, the second requires 2 · w2 to be a large negative number, and w2 and 2 · w2 always have the same sign.
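A quick numerical check illustrates the point (my own sketch): because the sigmoid is monotonic, S(w2 · 2) always lies on the same side of 0.5 as S(w2 · 1), only further from it.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# No matter which w2 we try, S(w2 * 1) ≈ 1 and S(w2 * 2) ≈ 0 never hold together.
for w2 in [-10.0, -1.0, 0.5, 5.0, 50.0]:
    print(f"w2={w2:6.1f}  S(w2*1)={sigmoid(w2 * 1):.3f}  S(w2*2)={sigmoid(w2 * 2):.3f}")
```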

The Experiment

Regardless of the bad idea and false assumptions, I still went ahead and ran the following experiment:

  • Use two input numbers.
  • Use 1, 2, 4, 8 or 16 nodes in the hidden layer.
  • Use mean squared error (MSE) on the predicted sum as loss function.
  • Generate 10,000 pairs of numbers and their sum for training data.
    • Use 20% of samples as validation data.
  • Allow the sum of the two numbers to be at most 4, 8 or 16 bits large (i.e. 16, 256 and 65536).
  • Train for at most 1000 epochs.

When measuring accuracy, the predicted number is rounded to the nearest integer and is either correct or not. For example, if the network says 2 + 2 = 4.4, it is considered correct, but if it says 2 + 2 = 4.6, it is considered incorrect. 20% accuracy thus means that it correctly adds the two numbers 20% of the time on a test dataset.
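For reference, here is a rough sketch of what one of these runs could look like in Keras, including the rounding-based accuracy metric (my reconstruction for this post; hyperparameters and names are illustrative, and the actual experimentation code is linked at the end):

```python
import numpy as np
from tensorflow import keras

max_sum, hidden_nodes = 16, 2

# Generate 10,000 pairs of numbers whose sum is at most max_sum.
a = np.random.randint(0, max_sum // 2 + 1, size=10_000)
b = np.random.randint(0, max_sum // 2 + 1, size=10_000)
X, y = np.stack([a, b], axis=1).astype("float32"), (a + b).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(hidden_nodes, activation="sigmoid", input_shape=(2,)),
    keras.layers.Dense(1, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, validation_split=0.2, epochs=1000, verbose=0,
          callbacks=[keras.callbacks.EarlyStopping(patience=20)])

# A prediction counts as correct if it rounds to the true sum
# (evaluated on the training pairs here for brevity).
pred = np.round(model.predict(X, verbose=0).ravel())
print("accuracy:", np.mean(pred == y))
```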

Here is a summary of the accuracy and error of these models:

Number of hidden nodes | Maximum sum | Accuracy on test data | Error (MSE)
-----------------------|-------------|-----------------------|------------
1                      | 16          | 20%                   | 7
1                      | 256         | 1%                    | 1487
1                      | 65536       | 0%                    | 1,247,552,766
2                      | 16          | 93%                   | 0.0
2                      | 256         | 2%                    | 296
2                      | 65536       | 0%                    | 1,214,443,325
4                      | 16          | 93%                   | 0.0
4                      | 256         | 1%                    | 856
4                      | 65536       | 0%                    | 1,206,124,445
8                      | 16          | 93%                   | 0.0
8                      | 256         | 6%                    | 85
8                      | 65536       | 0%                    | 1,150,550,393
16                     | 16          | 96%                   | 0.0
16                     | 256         | 6%                    | 48
16                     | 65536       | 0%                    | 1,028,308,841

There are a few things that are interesting here:

  1. The 1-node network cannot add numbers at all.
  2. Networks with 2 or more hidden nodes get high accuracy when adding numbers with a sum of at most 16.
  3. All networks perform poorly when adding numbers with a sum of at most 256.
  4. All networks have abysmal performance for numbers with a sum of at most 65536.
  5. Adding more hidden nodes improves performance most of the time.

Here is a plot of the validation loss for the different networks after each epoch. Training can stop early if the performance does not improve, which explains why some lines are shorter than others:

Validation loss during training of networks for adding two numbers.

Exploring prediction errors

Let us look at the prediction error for each pair of numbers. For example, the 1-node network trained on sums up to 16 has an overall accuracy of 20%. When we add 2 + 2 with this network, we get 6.42, so the error is 2.42 in this case. If we try a lot of combinations of numbers, we can plot a nice 3D error surface like this:

The prediction error for a 1-node hidden layer model trained on sums up to 16.
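An error surface like this can be plotted with a few lines of matplotlib; a sketch, assuming a trained `model` like the one sketched earlier (here the 1-node variant trained on sums up to 16):

```python
import numpy as np
import matplotlib.pyplot as plt

a_values = np.arange(0, 9)
b_values = np.arange(0, 9)
A, B = np.meshgrid(a_values, b_values)
X = np.stack([A.ravel(), B.ravel()], axis=1).astype("float32")

pred = model.predict(X, verbose=0).ravel()
error = np.abs(pred - (A.ravel() + B.ravel())).reshape(A.shape)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(A, B, error)
ax.set_xlabel("a")
ax.set_ylabel("b")
ax.set_zlabel("absolute error")
plt.show()
```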

It looks like the network is good at predicting numbers where the sum is 8 (the valley in the chart), but not very good at predicting anything else. The network is probably overfitting to numbers that sum to 8, because such sums are overrepresented in the training data.

Adding an extra node brings the accuracy up above 90%. The error surface plot for this network also looks better, but both networks struggle with larger numbers:

The prediction error for a 2-node hidden layer model trained on sums up to 16.

When predicting sums up to 256, the 1-node hidden layer model shows the same error pattern, i.e. a valley (low error) for sums close to 130. In fact, the network only ever predicts values between 78 and 152 (this cannot be seen from the graph), so it really is a terrible model:

The prediction error for a 1-node hidden layer model trained on sums up to 256.

The 2-node hidden layer network does not do much better for sums up to 256, which is expected since the accuracy is just 2%. But it looks fun:

The prediction error for a 2-node hidden layer model trained on sums up to 256.

As can be seen in the table above, even the 16-node hidden layer network only had 6% accuracy for sums up to 256. The error plot for this network looks like this:

The prediction error for a 16-node hidden layer model trained on sums up to 256.

I find this circular shape to be quite interesting. It almost looks like a moat around a hill. The network correctly predicts some sums between 51 and 180, but there is an error bump in the middle.

For example, for the sum 120, 60 + 60 is predicted as 128.8 (error = 8.8), but 103 + 17 is predicted as 119.9 (error = 0.1) which is correct when rounded. The error curve for numbers that sum to 120 is essentially a cross section of the 3D plot where the hill is more visible:

Prediction error for numbers that sum to 120, for the 16-node hidden layer model trained on sums up to 256.

I have no idea why this specific pattern emerges, but I find it interesting that it looks similar to the 2-node network when predicting sums up to 16 (the hill and the moat). A more mathematically inclined person could probably provide me with some answers.

Finally, for the networks that were trained on sums up to 65536, we saw abysmal performance in all cases. Here is the error surface for the 16-node network which was the “best” performing one:

The prediction error for a 16-node hidden layer model trained on sums up to 65536.

The lowest error this network gets on a test set is for the sum 3370 + 329 = 3699, which the network predicts as 3745.5 (error = 46.5).

In fact, the network mostly just predicts the value 3746. As the two input numbers a and b get larger, the hidden layer always produces 1’s and 0’s (or values very close to them), so the final output is always the same. This already starts happening when a and b are larger than around 10, which probably indicates that the network needs more time to train.

The inner workings of the network

My initial interest was in how the networks decided to represent numbers in the hidden layer of the network.

To keep things simple, let us just look at the 2-node hidden layer network trained on sums up to 16, since this network produced mostly correct sum predictions.

What actually happens when we predict 2 + 2 with this network is illustrated below. The number above an edge in the graph is the weight between the two nodes it connects. There are a total of 6 weights in this network (4 from the input layer to the hidden layer and 2 from the hidden layer to the output layer):

The calculation of 2 + 2 for the 2-node hidden layer network trained with sums up to 16.

One thing that might be of interest is the pair of final weights, 22.7 and -23.5. The way the network sums numbers is to treat the first hidden node as contributing positively to the sum and the second hidden node as contributing negatively. And the two weights have almost the same magnitude.

It turns out that the 4-node hidden layer network works the same way. Here, there are 4 weights between the hidden layer and the output layer, and these are (rounded) 8, 8, 9 and -25. So we still have the large negative weight, but the positive weighting is now split between three hidden nodes with lower values that sum to 25. When calculating 2 + 2, the output of the hidden layer is 0.6, 0.6, 0.6 and 0.4, which matches the pattern from the 2-node network.

The same goes for the 8-node network. The 8 output weights are 3, 3, 3, 3, 4, 5, 5 and -25 (the positive numbers sum to 26). When predicting 2 + 2, the hidden layer outputs 0.6, …, 0.6 and 0.4, same as before.
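These weights and hidden activations are easy to inspect; a small sketch, assuming the Keras model from the earlier snippets:

```python
import numpy as np

W1, b1 = model.layers[0].get_weights()  # input -> hidden weights and biases
W2, b2 = model.layers[1].get_weights()  # hidden -> output weights and bias

x = np.array([[2.0, 2.0]])
hidden = 1 / (1 + np.exp(-(x @ W1 + b1)))  # sigmoid activations of the hidden layer
output = hidden @ W2 + b2                  # linear output, i.e. the predicted sum
print("hidden:", hidden, "output:", output)
```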

Once again, I am a bit stumped as to why this could be, but it seems that for this particular case, these networks find a similar solution to the problem.

Conclusion?

If you made it this far, congratulations! I have already spent way more time on this post than it deserves.

I learned that using neural networks to add numbers is a terrible idea, and that I should spend some more time thinking before doing. That is at least something.

The experimentation code can be found here.

Categories: Projects

Generating cartoon avatars with GANs

You might have heard of Deepfakes, which are images or videos where someone’s face is replaced by another person’s face. There are various techniques for creating Deepfakes, one of them being Generative Adversarial Networks (GANs).

A GAN is a type of neural network that can generate realistic data from random input data. When used for image generation, a generator network creates images and tries to fool a discriminator network into believing that the images are real. The discriminator network gets better at distinguishing between real and fake images over time, which forces the generator to create better and better images.

I wanted to play around with GANs for a while, specifically for generating small cartoon-like images. This post is a status update for the project so far.

Here is the code, and here are 16 examples of images generated by the current state of the network:

16 cartoon faces generated by a GAN

DCGAN Tutorial and drawing ellipses

There are many online tutorials on how to create a GAN. One of them is the DCGAN tutorial from the Tensorflow authors. This tutorial was my starting point for creating and training a GAN using the DCGAN (deep convolutional GAN) architecture.
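To give an idea of what such a setup looks like, here is a rough sketch of a small DCGAN-style generator and discriminator in Keras/TensorFlow (my own illustration; the layer sizes and image size are made up for the example and are not the exact architecture from the tutorial or my code):

```python
import tensorflow as tf
from tensorflow import keras

latent_dim = 100

# The generator turns random noise into a 32x32 RGB image.
generator = keras.Sequential([
    keras.layers.Dense(8 * 8 * 64, activation="relu", input_shape=(latent_dim,)),
    keras.layers.Reshape((8, 8, 64)),
    keras.layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
    keras.layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"),
])

# The discriminator outputs a single logit: "real" or "fake".
discriminator = keras.Sequential([
    keras.layers.Conv2D(32, 4, strides=2, padding="same", input_shape=(32, 32, 3)),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Conv2D(64, 4, strides=2, padding="same"),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Flatten(),
    keras.layers.Dense(1),
])

bce = keras.losses.BinaryCrossentropy(from_logits=True)

def losses(real_images, noise):
    fake_images = generator(noise, training=True)
    real_logits = discriminator(real_images, training=True)
    fake_logits = discriminator(fake_images, training=True)
    # The discriminator wants real -> 1 and fake -> 0 ...
    d_loss = bce(tf.ones_like(real_logits), real_logits) + \
             bce(tf.zeros_like(fake_logits), fake_logits)
    # ... while the generator wants its fakes to be classified as real.
    g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    return d_loss, g_loss
```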

In the tutorial, the authors train the GAN to generate hand-written digits, based on the famous MNIST dataset. Instead of creating hand-written-number-lookalikes, I wanted to see if I could generate simple shapes like these ellipses:

Color ellipses used for input

I thought these shapes would be a trivial task for the GAN to generate, but I was of course mistaken.

After implementing the DCGAN network based on the DCGAN tutorial, my first attempt that actually did something produced colors in some kind of shape, but not actual ellipses.

A note on the images shown throughout this post: Let’s say we have 10 thousand images in our dataset (in this case, 10 thousand images of an ellipse). One epoch consists of running through all these images once, and a network is trained for 50 epochs. At the end of each epoch, an image is captured based on 16 sample inputs to the generator. These inputs stay the same during training. Thus, when the network is done training, we have 50 images (one for each epoch), each showing 16 generated samples, and we are ideally interested in seeing these samples get more realistic over time.
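A sketch of that snapshot idea, assuming a `generator` like the one in the earlier sketch (names and sizes are illustrative):

```python
import tensorflow as tf
import matplotlib.pyplot as plt

latent_dim = 100  # must match the generator's input size
fixed_noise = tf.random.normal([16, latent_dim])  # the same 16 inputs for every epoch

def save_epoch_image(generator, epoch):
    images = generator(fixed_noise, training=False)  # shape (16, height, width, 3)
    fig, axes = plt.subplots(4, 4, figsize=(4, 4))
    for img, ax in zip(images, axes.ravel()):
        ax.imshow((img.numpy() + 1) / 2)  # map tanh output from [-1, 1] to [0, 1]
        ax.axis("off")
    fig.savefig(f"epoch_{epoch:03d}.png")
    plt.close(fig)
```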

The video below shows the evolution of one of these training sessions. The video is stitched together from the 50 epoch images. Notice that at the beginning of training, the output of the generator is just a gray blob produced from the random input data. Over time, some colors emerge, until training collapses at the end and the generator just produces white backgrounds :-)

First attempt at making ellipses with a GAN

Ellipses in opaque black and white

Taking a step back and reviewing the tutorial again, I took note of a few things that I did not pay attention to initially:

  1. The tutorial uses white, opaque digits on a black background. I was using unfilled (not opaque) ellipses on a white background.
  2. The images are only black and white (grayscale). I was using many colors.
  3. The MNIST dataset consists of 60 thousand examples. I was using a few hundred images.

If the goal of the generator is to fool the discriminator, but the images of ellipses are actually mostly white background with a little bit of color, it makes somewhat intuitive sense that the generator ends up just drawing white backgrounds as seen in the video above.

With this in mind, I created 10 thousand opaque white ellipses on a black background, just to prove that the network was indeed working. Here are some examples:

Opaque ellipses, black and white
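Generating that kind of dataset is straightforward; here is a sketch using Pillow (my assumption of the approach; the image size and ellipse positions are illustrative):

```python
import os
import random
from PIL import Image, ImageDraw

os.makedirs("ellipses", exist_ok=True)

for i in range(10_000):
    img = Image.new("L", (32, 32), color=0)  # grayscale, black background
    draw = ImageDraw.Draw(img)
    x0, y0 = random.randint(0, 12), random.randint(0, 12)
    x1, y1 = x0 + random.randint(8, 18), y0 + random.randint(8, 18)
    draw.ellipse([x0, y0, x1, y1], fill=255)  # opaque white ellipse
    img.save(f"ellipses/{i:05d}.png")
```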

The result from doing this was much better, and the generator ended up creating something that resembles circles:

Second attempt at making ellipses with a GAN

Wow, I created a neural network with 1 million parameters that can generate white blobs on a black background *crowd goes wild and gives a standing ovation*.

Sarcasm aside, it is always a good feeling when the network finally does something within a reasonable timeframe (it took about a minute to train this network).

Deeper, wider, opaque, color

After the “success” of the black and white ellipses, I started reviewing some tips on how to tweak a GAN (see references at the bottom of post). Without going into too much detail, I basically made the neural network slightly deeper (more layers) and slightly wider (more features) and switched back to using random colors for the ellipses, while keeping them opaque.

Here are some examples of the input ellipses:

Opaque ellipses, with color

After training the network with these images, it was interesting to see the 16 generated samples converge to colored blobs and then change dramatically between epochs. I think this is what is known as “mode collapse” and is a known issue/risk when training GANs:

Each iteration of [the] generator over-optimizes for a particular discriminator, and the discriminator never manages to learn its way out of the trap. As a result the generators rotate through a small set of output types. This form of GAN failure is called mode collapse.

Google Developers, Common Problems with GANs

Mode collapse is most obvious when viewing the epoch images individually, so rather than stitch them together into a video, I have included 50 images below. Notice that after about 20-25 epochs, the output starts to resemble colored ellipses, and all epochs after that do not seem to improve much:

I must admit, I think there’s a certain beauty to these generated images, but to be honest, they are still just randomly colored blobs, and they could be generated with much simpler algorithms than this beast of a neural network.

Generating cartoon avatars

Instead of continuing to tweak the ellipses-generating network, I wanted to see if I could generate more complex images. My original idea was to generate cartoon-like images, and to my great delight, Google provides the Cartoon Set, a dataset consisting of thousands of cartoon avatars, licensed under the CC-BY license.

You have already seen an example result of using this dataset at the top of this post. Here are the 50 epoch images from training the network on the small version of the dataset (10 thousand images). Notice that the network starts to create face-like images after just a few epochs, and then starts cycling the style of the faces, probably due to the above-mentioned mode collapse:

This is as far as I have gotten for now. I would like to create a little web app for generating these images in the browser, but that will have to wait for another day. It would also be nice to be able to provide the facial features (hair color, eye color, etc.) as inputs to the network and see how that performs.

To keep my motivation up though, I think I need to switch gears and try something else for now. This was fun! :-)


References

A search for “DCGAN Tensorflow” yields many useful results, a lot of which I have skimmed as well, but the above are the primary resources.