Hello World is a Dream

Generating song lyrics using neural networks… that sounds cool! Those were my thoughts on an evening in the recent past, so I started experimenting with the idea.

I decided to name the neural network that came out of this project Gene Lyrica One, a very AI-sounding name (it is “the year of AI” after all). But before we get into the details of how Gene was born, let’s set the stage for this blog post with Gene’s unique take on the classic “hello world”:

hello world is a dream
i know when i been like your love
and i can’t go home

i can’t cry
i don’t want to see
i don’t know why i can’t run
i got me
i got the way you know
i want to feel my love
i want to go
i need you
and i want to feel so good
i want to get it and i can’t see

i’m gonna stop if i had it
i can’t really want to see

Gene Lyrica One

The neural network is given some initial text, called the seed text, and then the network creates new lyrics based on this text. As mentioned above, the seed text for these lyrics was “hello world”, which, given the subject matter, makes sense on multiple levels.

If you want to create your own lyrics, you can try it out here,1 and the code that generated the network can be found on GitHub.

In the following sections, I will describe the process that led to Gene Lyrica One, including more lyrics from Gene as well as other networks that were part of the experiment.

I have no clue

i have no clue
i want to lose the night
and i’m just a look in your mind
but i know what you want to do
all the way
i gave you love
and i can think you want to

Gene Lyrica One

Generating lyrics is not a walk in the park, and I have not quite cracked the nut yet. To be honest, I would say I generally have no clue what I am doing.

I know where I started though: To get a feeling for how to “predict language” with a neural network, I created neural networks to generate text based on two different techniques:2

  1. Given a sequence of words, predict another full sequence of words.
  2. Given a sequence of words, predict just one word as the next word.

The second kind of model (sequence-to-single-word) was the easiest for me to understand, both conceptually and practically. The idea is this: for an input sentence like “all work and no play makes jack a dull boy”, we can split the sentence into small chunks that the neural network can learn from, for example “all work and no” as input and “play” (the next word) as output. Here is some code that does just that.
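In spirit, that code does something like the following minimal sketch (the function name and the short sequence length here are my own, not taken from the project):

```python
def make_training_pairs(sentence, seq_length=4):
    """Split a sentence into (input sequence, next word) training pairs."""
    words = sentence.split()
    pairs = []
    for i in range(len(words) - seq_length):
        input_words = words[i:i + seq_length]  # e.g. ["all", "work", "and", "no"]
        next_word = words[i + seq_length]      # e.g. "play"
        pairs.append((input_words, next_word))
    return pairs

pairs = make_training_pairs("all work and no play makes jack a dull boy")
# First pair: (['all', 'work', 'and', 'no'], 'play')
```

Each pair becomes one training example: the network sees the input words and is asked to predict the next word.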

With the basic proof-of-concept architecture in place, I started looking for a dataset of song lyrics. One of the first hits on Google was a Kaggle dataset with more than 55 thousand song lyrics. This felt like an adequate amount so I went with that.

New Lines

new lines
gotta see what you can do

oh you know

Gene Lyrica One

Lyrics consist of a lot of short sentences on separate lines, and while the texts on each line are often related in content, they do not necessarily follow the same flow as the prose in a book.

This led to two specific design decisions for creating the training data. First, newline characters (\n) are treated as words on their own, which means that a “new line” can be predicted by the network. Second, the length of the input sequences should not be too long since the context of a song is often only important within a verse or chorus. The average length of a line for all songs happens to be exactly 7 words, so I decided to use 14 words for the input sequences to potentially capture multiline word relationships.
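In code, the newline handling could look something like this (a sketch of my reading of the approach; the helper name is mine, not from the project):

```python
SEQ_LENGTH = 14  # roughly two "average" lines of 7 words each

def tokenize_lyrics(lyrics):
    """Split lyrics into tokens, keeping newline characters as words of
    their own so the network can learn to predict a new line."""
    tokens = lyrics.replace("\n", " \n ").split(" ")
    return [t for t in tokens if t]  # drop empty strings from repeated spaces

print(tokenize_lyrics("hello world\ngotta see what you can do"))
# ['hello', 'world', '\n', 'gotta', 'see', 'what', 'you', 'can', 'do']
```

The (input, next word) pairs are then built exactly as in the earlier sketch, just with the sequence length set to 14.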

A few other decisions worth mentioning:

  • Words are not pre-processed. This means that e.g. running, runnin, and runnin’ will be treated as three different words.
  • Words are not removed or cleaned. For example, the word “chorus” sometimes appears in the dataset to mark the beginning of the song’s chorus.

Well Known With a Twist

well known with a twist for the bed

i got the

oh oh

what you want to do
i’m goin’ down

Gene Lyrica One

The first attempt at training the network yielded some funny results. Because there were hundreds of thousands of parameters to tune in the network, training was extremely slow, so I initially tested it on just the first 100 songs in the dataset. Because of alphabetical ordering, these all happened to be Abba songs.

The final accuracy of the network was somewhere around 80%. One way to interpret this is to say that the network knew 80% of the Abba songs “by heart”. Thus, the network was creating “Abba songs with a twist”. For example, it created the verse:

so long see you baby
so long see you honey
you let me be

Baba Twist

The Abba song “So long” has the phrase “so long see you honey” so it improvised a little bit with the “so long see you baby” (“you baby” appears in a different Abba song “Crying Over You” which probably explains the variation). Or how about:

like a feeling a little more
oh no waiting for the time
if you would the day with you
’cause i’m still my life is a friend
happy new year
happy new year
happy new year
……
[many more happy new years] :-)

Baba Twist

which is probably “inspired” by the Abba song “Happy New Year”. The network was overfitting the data for Abba, which turned out to be fun, so this was a promising start.

Too much information

too much information
i can’t go

Gene Lyrica One

With decent results from Baba Twist (the Abba-network), it was time to try training the network using all 55 thousand songs as input data. I was excited and hopeful that this network would be able to create a hit, so I let the training process run overnight.

Unfortunately, my computer apparently could not handle the amount of data, so I woke up to a frozen process that had only finished running through all the songs once (this is called one epoch, and training often requires 50 or more epochs for good results).

Luckily, the training process automatically saves checkpoints of the model at certain time intervals, so I had some model, but it was really bad. Here is an example:

i don’t know what i don’t know

i don’t know what i don’t know

i don’t know what i don’t know

Tod Wonkin’

Not exactly a masterpiece, but at least Tod was honest about its situation. Actually, “I don’t know what I don’t know” was the only text Tod produced, regardless of the seed text.

In this case, I think there was too much information for the network. This feels a bit counter-intuitive, since we usually want more data, not less, but for a small hobby project like this, it probably made sense to reduce the data size a bit to make the project more manageable and practical.

Famous Rock

famous rock are the dream

chorus

well i got a fool in my head
i can be

i want to be
i want to be
i want to be
i want to be

Gene Lyrica One

After the failure of Tod Wonkin’, I decided to limit the data used for training the network. I theorized that it would be better to only include artists with more than 50 songs, and to keep the overall number of artists small, because that would potentially create some consistency across songs. Once again, this is a case of “I have no clue what I’m doing”, but at least the theory sounded reasonable.

A “top rock bands of all time” list became the inspiration for which artists to choose. In the end, there were 20 rock artists in the reduced dataset, including the Beatles, the Rolling Stones, Pink Floyd, Bob Dylan, etc. Collectively, they had 2689 songs in the dataset and 16389 unique words.

The lyrics from these artists are what created Gene Lyrica One.

It took some hours to train the network on the data, and it stopped by itself when it was no longer improving, with a final “accuracy” of something like 22%. This might sound low, but high accuracy is not desirable, because the network would just replicate the existing lyrics (like Baba Twist). Instead, the network should be trained just enough that it makes sentences that are somewhat coherent with the English language.
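In Keras, the checkpoint saving mentioned earlier and this stop-when-no-longer-improving behaviour are typically handled with the ModelCheckpoint and EarlyStopping callbacks. A hedged sketch (the monitored metric, file name and patience value are guesses, not the project's exact settings):

```python
from keras.callbacks import ModelCheckpoint, EarlyStopping

callbacks = [
    # Save the model whenever the monitored metric improves, so that a
    # crashed or interrupted run still leaves a usable model on disk.
    ModelCheckpoint("lyrics-model-{epoch:02d}.h5",  # placeholder file name
                    monitor="loss", save_best_only=True),
    # Stop training once the metric has not improved for a few epochs,
    # instead of running for a fixed number of epochs.
    EarlyStopping(monitor="loss", patience=3),
]

# model.fit(X, y, epochs=100, callbacks=callbacks)
```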

Gene Lyrica One felt like an instant disappointment at first, mirroring the failure of Tod Wonkin’ by producing “I want to be” over and over. At the beginning of this post, I mentioned Gene Lyrica One’s “Hello World” lyrics. Actually, the deterministic version of these lyrics is:

hello world is a little man

i can’t be a little little little girl
i want to be
i want to be
……
[many more “i want to be”]

Gene Lyrica One

At least Gene knew that it wanted to be something (not a little little little girl, it seems), whereas Tod did not know anything :-)

The pattern of repeating “I want to be” was (is) quite consistent for Gene Lyrica One. The network might produce some initial words that seem interesting (like “hello world is a little man”), but it very quickly gets into a loop of repeating itself with “i want to be”.

Adding Random

adding random the little

you are

and i don’t want to say

i know i don’t know
i know i want to get out

Gene Lyrica One

The output of a neural network is deterministic in most cases. Given the same input, it will produce the same output, always. The output from the lyric generators is a huge list of “probabilities that the next word will be X”. For Gene Lyrica One, for example, the output is a list of 16389 probabilities, one for each of the available unique words.

The networks I trained were biased towards common words like “I”, “to”, “be”, etc. as well as the newline character. This explains why both Gene Lyrica One and Tod Wonkin’ got into word loops. In Gene’s case, the words in “I want to be” were the most likely to be predicted, almost no matter what the initial text seed was.

Inspired by another Kaggle user, who in turn was inspired by an example from Keras, I added some “randomness” to the chosen words in the output.3 The randomness could be adjusted, but adding too much of it would produce lyrics that do not make sense at all.
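The usual trick for this, going back to the Keras text-generation example, is “temperature” sampling over the predicted word probabilities. The sketch below is my own rendition of the idea, not the exact code from the notebook I borrowed from:

```python
import numpy as np

def sample_word_index(probabilities, temperature=1.0):
    """Pick the index of the next word from the network's output.

    A low temperature stays close to the most likely word (deterministic
    behaviour), a high temperature makes the choice more random."""
    preds = np.asarray(probabilities, dtype="float64")
    preds = np.log(preds + 1e-8) / temperature  # small epsilon avoids log(0)
    preds = np.exp(preds)
    preds = preds / np.sum(preds)                # back to a probability distribution
    return np.random.choice(len(preds), p=preds)
```

With the temperature close to zero this falls back to the deterministic “i want to be” loop; turned up too far, it produces the nonsense mentioned above.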

All the quotes generated by Gene Lyrica One for this post have been created using a bit of “randomness”. For most of the sections above, the lyrics were chosen from a small handful of outputs. I did not spend hours finding the perfect lyrics for each section, just something that sounded fun.

The final trick

the final trick or my heart is the one of a world

you can get out of the road
we know the sun
i know i know
i’ll see you with you

Gene Lyrica One

A few months ago, TensorFlow.js was introduced, bringing machine learning into the browser. It is not the first time we have seen something like this, but I think TensorFlow.js is a potential game changer, because it is backed by an already-successful library and community.

I have been looking for an excuse to try out TensorFlow.js since it was introduced, so for my final trick, I thought it would be perfect to see if the lyrics generators could be exported to browser versions, so they could be included more easily on a web page.
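The conversion itself happens on the Python side with the tensorflowjs package, roughly like this (file and directory names are placeholders, and the exact API may differ between versions of this young library):

```python
import tensorflowjs as tfjs
from keras.models import load_model

# Convert the trained Keras model into the format that TensorFlow.js
# can load in the browser.
model = load_model("gene-lyrica-one.h5")               # placeholder file name
tfjs.converters.save_keras_model(model, "web/model")   # placeholder output dir
```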

There were a few roadblocks and headaches involved with this, since TensorFlow.js is a young library, but if you already tried out my lyrics generator in the browser, then that is at least proof that I managed to kind-of do it. And it is in fact Gene Lyrica One producing lyrics in the browser!

This is the end

this is the end of the night

i was not every world
i feel it

Gene Lyrica One

With this surprisingly insightful observation from Gene (“I was not every world”), it is time to wrap up for now. Overall, I am pleased with the outcome of the project. Even with my limited knowledge of recurrent neural networks, it was possible to train a network that can produce lyrics-like texts.

It is ok to be skeptical towards the entire premise of this setup though. One could argue that the neural network is just an unnecessarily complex probability model, and that simpler models using different techniques could produce equally good results. For example, a hand-coded language model might produce text with better grammar.

However, the cool thing about deep learning is that it does not necessarily require knowledge of grammar and language structure — it just needs enough data to learn on its own.

This is both a blessing and a curse. Although I learned a lot about neural networks and deep learning during this project, I did not gain any knowledge regarding the language structure and composition of lyrics.

I will probably not understand why hello world is a dream for now.

But I am ok with that.


Is it Mila?

One of the great things about the Internet is that people create all sorts of silly, but interesting, stuff. I was recently fascinated by a deep learning project where an app can classify images as “hotdog” or “not hotdog”. The project was inspired by a fictional app that appears in HBO’s show Silicon Valley, and it was organized by an employee at HBO.

The creator of the app wrote an excellent article outlining how the team approached building the app: from gathering the data, through designing and training a deep learning neural network, to building an app for the Android and iPhone app stores.

Naturally, I thought to myself: perhaps I can be silly too. So I started a small project to try and classify whether an image contains my dog Mila or not. (Also, the architecture for the hotdog app is called DeepDog, so as you can see, it is all deeply connected!)

The is-mila project is not as large and detailed as the hotdog project (for example, I am not building an app), but it was a fun way to get to know deep learning a bit better.

The full code for the project is available on GitHub, and feel free to try and classify a photo as well.

A simple start

One of the obstacles to any kind of machine learning task is to get good training data. Fortunately, I have been using Flickr for years, and many of my photos have Mila in them. Furthermore, most of these photos are tagged with “Mila”, so it seemed like a good idea to use the Flickr photos as the basis for training the network.

Mila as a puppy

I prepared a small script and command-line interface (CLI) for fetching pictures via the Flickr API. Of course, my data was not as clean as I thought it would be, so I had to manually move some photos around. I also removed photos that only showed Mila from a great distance or with her back to the camera.
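The fetching part of the CLI boils down to something like this sketch, using the flickrapi package (the API key, secret and user id are placeholders, and the details are my reconstruction rather than the repo's exact code):

```python
import flickrapi

API_KEY = "<flickr-api-key>"        # placeholder
API_SECRET = "<flickr-api-secret>"  # placeholder

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET, format="parsed-json")

# Search one user's photostream for photos tagged "mila" and ask
# Flickr to include a medium-sized image URL for each result.
response = flickr.photos.search(
    user_id="<flickr-user-id>",  # placeholder
    tags="mila",
    extras="url_m",
    per_page=500,
)

for photo in response["photos"]["photo"]:
    print(photo.get("url_m"))
```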

In the end, I had 263 photos of Mila. There were many more “not Mila” photos available of course, but I decided to use only 263 “not Mila” photos as well, so the training sets for the two classes “Mila” and “not Mila” had equal size. I do not really want to discuss overfitting, data quality, classification accuracy, etc. in this post, but those are all interesting topics for another time.

For the deep learning part, I used Keras, a deep learning library that is a bit simpler to get started with than e.g. TensorFlow. In the first iteration, I created a super-simple convolutional neural network (CNN) with just three convolutional layers and one fully-connected layer (and some MaxPooling and Dropout layers in between).
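In Keras, such a network typically looks something like this (the image size and layer widths are illustrative guesses; the actual architecture is in the repo):

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    # Three convolutional blocks that learn increasingly abstract features
    Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    # One fully-connected layer, with dropout to fight overfitting
    Flatten(),
    Dense(64, activation="relu"),
    Dropout(0.5),
    # Single sigmoid output: the probability that the photo is Mila
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
```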

Training this network was faster than I thought and only took a few minutes. In my latest run, the accuracy settled at around 79% and the validation accuracy (i.e. for photos that were not used to train the network) at 77%, after 57 epochs of roughly six seconds each. This is not very impressive, but for binary classification, anything noticeably above 50% accuracy is at least better than a coin flip.

Finally, I created a simple website for testing the classification. I did not bother using a JavaScript transpiler/bundler like Babel/Webpack, so the site only works in modern browsers. You can try the simple classification here if you like.

The results from this initial experiment were interesting. In the validation set, most of the photos containing Mila were correctly classified as Mila, and a few were classified as not Mila for no obvious reasons. For example, these two images are from a similar setting, with similar lighting, but with different positioning of Mila, and they are classified differently:

Mila, correctly classified
Mila, incorrectly classified as not Mila

Perhaps more surprising though are the false positives, the photos classified as Mila when they do not have Mila in them. Here are some examples:

Sports car, classified as Mila
Rainbow crosswalk, classified as Mila
Goats, classified as Mila

Mila is certainly fast, but she is no sports car :-)

As of writing this, I am still uncertain what the simple network sees in the photos it is given. I have not investigated this yet, but it would be an interesting topic to dive into at a later stage.

Going deeper

A cool feature of Keras is that it comes with a few pre-trained deep learning architectures. In an effort to improve accuracy, I tried my luck with a slightly modified MobileNet architecture, using weights pre-trained on the ImageNet dataset, which contains a big and diverse set of images.

The Keras-provided MobileNet network is 55 layers deep, so it is quite a different beast from the “simple” network outlined above. But by freezing the weights of the existing network layers and adding a few extra output layers as needed for my use case (binary classification of “Mila” and “not Mila”), the complexity of training the network was reduced, since there were fewer weights to adjust.
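In Keras, that transfer-learning setup looks roughly like this (the input size and the added head layers are guesses, not the exact code from the repo):

```python
from keras.applications.mobilenet import MobileNet
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

# Pre-trained MobileNet without its original ImageNet classification head
base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the pre-trained weights

# A small trainable head for the binary "Mila" / "not Mila" decision
x = GlobalAveragePooling2D()(base.output)
x = Dense(64, activation="relu")(x)
output = Dense(1, activation="sigmoid")(x)

model = Model(inputs=base.input, outputs=output)
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
```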

After training the network for 48 epochs of about 18 seconds each, the training accuracy settled around 97% and the validation accuracy at 98%. The high accuracy was surprising and felt like an excellent result! For example, the Mila pictures shown above were now both correctly classified, and the sports car and rainbow crosswalk were no longer classified as being Mila. However, the goats were still “Mila”, so something was still not quite right…

You can try out the network here if you like.

At this point, I had a hunch that the increased accuracy of MobileNet was mainly due to its ability to detect dogs in pictures (and the occasional goat). Unfortunately, it was worse than that: photos of dogs, cats, birds, butterflies, bears, kangaroos and even a squirrel were all classified as being Mila.

It seemed I had not created a Mila detector, but an animal detector. I had kind of expected a result like this, but it was still a disappointing realization, and this is also where the story ends for now.

Sneaky squirrels and other animals

To summarize, I tried to create an image classifier that could detect Mila in photos, but in the current state of the project, this is not really possible. Writing this blog post feels like the end of the journey, but there are still many tweaks and improvements that could be made.

For example, it would be interesting to know why the “simple” network saw a rainbow crosswalk as Mila, and it would be nice to figure out how to improve the quality of the predictions for the MobileNet version such that it does not just say that all pets are Mila. One idea could be to clean the training data a bit more, e.g. by having more pets in the “not Mila” photo set or perhaps restrict the Mila photos to close-ups to improve consistency and quality in that part of the data.

One thing is for sure: there is always room for improvement, and working on this project has been a nice learning experience so far. As an added benefit, I managed to mention squirrels in a (technical) blog post, and I will leave you with a picture of the sneaky “Mila” squirrel:

Sneaky squirrel, classified as Mila

(I like squirrels. A lot. It was all worth it just for the squirrel.)

Generating music with deep learning

Automatic, machine-generated music has been a small interest of mine for some time now. A few days ago, I tried out a deep learning approach for generating music… and failed miserably. Here’s the story about my efforts so far, and how computational complexity killed the post-rock.

The spark of an idea

When Photo Amaze was created in 2014, I thought it would be fun to have some kind of ambient music playing while navigating through the 3D maze. But I did not want to play pre-recorded music. I wanted it to be automatically generated on-the-fly, based on the contents of the pictures in the maze.

That was the spark. A picture is worth a thousand words, so why can’t it be worth a few seconds of music as well? For example, take a look at this picture:

Mountain with running water in the foreground

In my head, this picture produces sounds with an emotional impact, like the ambient sound of a running water stream or the whistle of the wind picking up speed over the mountain.

Since I can make a connection between photo and music, perhaps a machine could do this automatically as well. This is not a novel idea, but it is a nut that has yet to be cracked, and it was an intriguing idea to start exploring.

Hard-coded music mappings

Modern browsers have all the ingredients necessary for doing both image analysis and sound generation. There are numerous JavaScript libraries for analyzing and manipulating pictures, and the Web Audio API makes it possible to create synthesized sound in a fairly straightforward way. Thus, it made sense to start here.

The first experiment I did was to create a more-or-less fixed mapping between an image’s content and some kind of sound output. The high-level idea of the implementation was to simply map brighter colors to brighter sound notes. The steps to produce the output sound were something like this:

  • Find at most 200 “feature pixels” in the input image using trackingjs.
  • For each found “feature pixel”:
    • Calculate the average of the pixel’s three RGB color values. This produces a single number per pixel between 0 and 255.
    • Normalize the pixel value from 0-255 to 20-500. This produces a base frequency for the output sound.
    • Create a sine wave oscillator using the Web Audio API for the pixel value.
    • Combine the oscillators into a single sound output.
    • While playing the sound, randomize the frequency of each oscillator slightly over time.

Using this approach, an image would be turned into a randomly changing output sound consisting of about 200 sine waves, each with a frequency between 20 and 500 Hz.
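The core of the mapping is a simple linear rescaling. The original implementation ran in the browser with JavaScript and the Web Audio API, but the arithmetic amounts to this (shown in Python purely for illustration):

```python
def pixel_to_frequency(r, g, b):
    """Map one 'feature pixel' to a sine oscillator frequency.

    Brighter pixels (higher average RGB value) become higher notes,
    scaled linearly from the 0-255 range into 20-500 Hz."""
    brightness = (r + g + b) / 3.0               # 0..255
    return 20 + (brightness / 255.0) * (500 - 20)

print(pixel_to_frequency(0, 0, 0))        # 20.0 Hz for a black pixel
print(pixel_to_frequency(255, 255, 255))  # 500.0 Hz for a white pixel
```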

Here is an example output using the mountain from above as the input image (the red dots mark the found “features” of the image).

That might not sound terrible, until you realize that the sound is basically the same for any input image:

Mila might be a monster dog, but that output is just too dark :-)

There were a ton of problems with this implementation, to the point that it was actually outright silly. For example, the “feature pixel” selection mostly found edges/corners of the image, and using just 100 pixels as input corresponded to only about 0.01% of all available pixels in the test images. Another problem was how the final pixel value was calculated as the average of the pixel’s red, green and blue values. Some colors arguably have more impact on the viewer than others, but that is not captured when taking the average.

Even with all its problems, the first experiment was a good first step, considering I did not know where to start at all. It is possible that with a lot of tweaks, new ideas and time, this approach could start producing more interesting soundscapes. However, the downside of the approach is that the music creation would always be guided by the experimenters: the humans. And I wanted to remove them from the equation.

Machine learning to the rescue

The second experiment ended before it even really started. It was clear that some kind of machine learning was needed to move forward, and it seemed that an artificial neural network might be the solution.

This was the idea:

  • Use every pixel of the input image as a single input node of the neural network.
  • Treat every output node as a single sound sample.

For the purposes of this blog post, everything that happens between input and output nodes of the network is largely hidden magic. With that in mind, here is how the network would look (P1 – Pm are the input pixels and S1 – Sn are the output samples):

Mapping a photo’s input pixels (P1-Pm) to a soundwave output (S1-Sn).

To get an idea of the size of the network, consider this: the mountain test image from above is 1024 by 683 pixels, so the network would have 699,392 input nodes when using images of that size. Digital sound is just a collection of amplitudes in very tiny pieces called samples. The most commonly used sampling rate for music is 44.1 kHz, which means that every second of digital music consists of 44,100 individual samples. For a neural network of this design to produce a five-second sound, it would thus require 220,500 output nodes.
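A quick back-of-the-envelope calculation shows why this design never stood a chance on my hardware (assuming, for simplicity, a single fully-connected layer straight from pixels to samples):

```python
input_nodes = 1024 * 683    # 699,392 pixels in the mountain test image
output_nodes = 44_100 * 5   # 220,500 samples for five seconds of 44.1 kHz audio

weights = input_nodes * output_nodes
print(f"{weights:,}")       # 154,215,936,000 weights, on the order of 1.5 * 10^11
```

That is roughly 150 billion weights before adding a single hidden layer, which hints at the computational-complexity wall described later in this post.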

The intentions were good, but the implementation never happened. After having the initial idea, I fired up Python and tried to simply read and write sound files, but it did not go so well, and the weekend was nearly over, and, “oh, a squirrel!”… and the code was never touched again.

Machine learning is great, but the motivation was suddenly lacking, and the project was put on ice. This was about two years ago, and the project was not revived until quite recently.

AI art

Deep learning has been steadily on the rise in recent years, often outperforming other machine learning techniques in specific areas such as voice recognition, language translation and image analysis. But deep learning is not limited to “practical” use cases. It has also been used to create art.

A well-known example of “AI Art” is Google’s Deep Dream Generator. The software is based on an initial project called DeepDream which produced images based on how it perceived the world internally. Some of its images were even shown and sold at an art exhibition.

The A.I. Duet project shows another interesting use case for deep learning: the creator of the project, Yotam Mann, trained a model that can produce short sequences of piano notes based on the note input of a human. So if I played C-D-E, the software might respond with F-G-A, although the result would most likely be slightly more interesting than that.1

A.I. Duet is impressive, but it still has a big limitation: it only works for specific notes for a specific instrument. So while the result is amazing, what I really want is more complex arrangements and raw audio output. Even so, the above examples show that deep learning is a powerful and versatile machine learning technique, and it is now finally becoming more feasible than ever to achieve the goal of creating music using AI.

Mila drawn with Deep Dream
Although it is not really relevant to this blog post, I could not help myself: here is the Mila image from above processed with default settings of Deep Dream. It is slightly disturbing to see that… chicken?… coming out of her left paw. Thank you for ruining my sleep, robot!

The bleeding edge, where the story ends

While doing some research on the latest state of the art for machine-generated sound, I stumbled upon yet another Google project called WaveNet. In an interesting blog post, the authors of WaveNet discuss how their research can be used to improve text-to-speech quality, but what is really exciting to me is that they also managed to produce short piano sequences that sound natural (there are some examples at the bottom of their blog post).

The big surprise here is that the piano samples are not just based on specific notes. They are raw audio samples generated from a model trained with actual piano music.2

Finally! A tried and tested machine learning technique that produced raw audio. Reading about WaveNet marked the beginning of my final experiment with music generation, and is the entire reason this blog post exists.

I found an open source implementation of WaveNet, and to test the implementation, I wanted to start simple by using just one sound clip. For this purpose, I extracted an eight-second guitar intro from the post-rock track Ledge by Seas of Years3:

My hope was that by training the model with a single sound clip, I would be able to reproduce the same or a very similar clip to the original to validate that the model produced at least some sound. If this was possible, I would be able to train the model with more sound clips and see what happens.

Unfortunately, even with various tweaks to the network parameters, I could not manage to produce anything other than noise. Sharing an example of the output here is not even appropriate, because it would hurt your ears. The experiment ended with an early failure.

So what was the problem? I soon realized that even with this fairly simple example, I had been overly optimistic about the speed at which I would be able to train the model. I thought that I could train the network in just a few minutes, but the reality was very different.

The first warning sign showed itself pretty quickly: every single step of the training process took more than 30 seconds to complete. In the beginning, I did not think much about this. Some machine learning models actually start producing decent results within the first few steps of training, so I was hoping the same would be true here. However, after doing more research on WaveNet, it became clear that training a WaveNet model did not just require a few learning steps; it required several thousand. Clearly, training WaveNet on my machine was completely unfeasible, unless I was willing to wait more than a month for any kind of result.
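A rough back-of-the-envelope makes the problem concrete (the step counts below come from the text above and from the NSynth quote further down, not from measurements of my own):

```python
seconds_per_step = 30

# "several thousand" steps, the low end of what a WaveNet model seems to need
print(5_000 * seconds_per_step / (60 * 60 * 24))    # ~1.7 days

# the ~200k iterations quoted for the NSynth WaveNet model below
print(200_000 * seconds_per_step / (60 * 60 * 24))  # ~69 days
```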

Where do we go from here?

Machine learning has been rapidly evolving in recent years, propelled by software libraries like TensorFlow, and the technology is more accessible than ever for all kinds of developers. But there is also another side of the coin: in order to use the state of the art, we are often required to have massive amounts of computing power at our disposal. This is probably why a lot of high-profile AI research and projects are produced by companies like Google, Microsoft and IBM, because they have the capacity to run machine learning at a massive scale. For lone developers like me that just want to test the waters, it can be difficult to get very far because of the complexities of scale.

As a final example to illustrate this point, consider NSynth, an open source TensorFlow model for raw audio synthesis. It is based on WaveNet, and on NSynth’s project page it says:

The WaveNet model takes around 10 days on 32 K40 gpus (synchronous) to converge at ~200k iterations.

Training a model like that would cost more than $5,000 using Google Cloud resources4. Of course, it is possible that a simpler model could be trained faster and cheaper, but the example still shows that some technologies are most definitely not available for everyone. We live in a time where there is great access to many technological advances, but the availability is often limited in practice, because of the scale at which the technologies need to operate.

So where do we go from here? Well, computational complexity killed my AI post-rock for now, but I doubt that it will take long before significant progress is made in this field. For now, I will enjoy listening to human-generated music. In a way, it is reassuring that machines cannot outperform us in everything yet.


  1. The video explaining how A.I. Duet works is quite good. 

  2. Describing how WaveNet works is beyond the scope of this blog post, but the original paper for WaveNet is not terribly difficult to read (unlike most other AI research). 

  3. Seas of Years’s album The Ever Shifting Fields was one of my favorite post-rock albums of 2016. I recommend a listen. 

  4. I used Google’s pricing calculator with 4 machines, each with 8 GPU cores. 

Apache Beam MongoDB reader for Python

The Apache Beam SDK for Python is currently lacking some of the transforms found in the Java SDK. I created a very minimal example of an Apache Beam MongoDB read transform for Python that might be useful for someone else looking for an answer.
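The minimal example is in the linked repo; the gist of it is a PTransform that wraps a Create + ParDo around a pymongo query, roughly like this (class and step names here are illustrative, not necessarily identical to the actual code):

```python
import apache_beam as beam
from pymongo import MongoClient


class _MongoReadFn(beam.DoFn):
    """Query a MongoDB collection and emit each document as an element."""

    def __init__(self, uri, db_name, collection, query=None):
        self._uri = uri
        self._db_name = db_name
        self._collection = collection
        self._query = query or {}

    def process(self, _):
        client = MongoClient(self._uri)
        try:
            for document in client[self._db_name][self._collection].find(self._query):
                yield document
        finally:
            client.close()


class ReadFromMongo(beam.PTransform):
    """A MongoDB 'read transform' built from Create + ParDo, since the
    Python SDK has no native MongoDB source (at the time of writing)."""

    def __init__(self, uri, db_name, collection, query=None):
        super(ReadFromMongo, self).__init__()
        self._read_fn = _MongoReadFn(uri, db_name, collection, query)

    def expand(self, pbegin):
        return (pbegin
                | "Seed" >> beam.Create([None])
                | "ReadDocuments" >> beam.ParDo(self._read_fn))


# Usage sketch:
# with beam.Pipeline() as pipeline:
#     docs = pipeline | ReadFromMongo("mongodb://localhost:27017", "mydb", "mycollection")
```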

I will update this post in the future if/when the Apache maintainers include support for MongoDB in the SDK. I know I could contribute to the project directly, but I don’t have time for it right now unfortunately :-)