Hello World is a Dream

Generating song lyrics using neural networks… that sounds cool! Those were my thoughts on an evening in the recent past, so I started experimenting with the idea.

I decided to name the neural network that came out of this project Gene Lyrica One, a very AI-sounding name (it is “the year of AI” after all). But before we get into the details of how Gene was born, let’s set the stage for this blog post with Gene’s unique take on the classic “hello world”:

hello world is a dream
i know when i been like your love
and i can’t go home

i can’t cry
i don’t want to see
i don’t know why i can’t run
i got me
i got the way you know
i want to feel my love
i want to go
i need you
and i want to feel so good
i want to get it and i can’t see

i’m gonna stop if i had it
i can’t really want to see

Gene Lyrica One

The neural network is given some initial text, called the seed text, and then the network creates new lyrics based on this text. As mentioned above, the seed text for these lyrics was “hello world”, which, given the subject matter, makes sense on multiple levels.

If you want to create your own lyrics, you can try it out here, 1 and the code that generated the network can be found on GitHub.

In the following sections, I will describe the process that led to Gene Lyrica One, including more lyrics from Gene as well as other networks that were part of the experiment.

I have no clue

i have no clue
i want to lose the night
and i’m just a look in your mind
but i know what you want to do
all the way
i gave you love
and i can think you want to

Gene Lyrica One

Generating lyrics is not a walk in the park, and I have not quite cracked the nut yet. To be honest, I would say I generally have no clue what I am doing.

I know where I started though: To get a feeling for how to “predict language” with a neural network, I created neural networks to generate text based on two different techniques:2

  1. Given a sequence of words, predict another full sequence of words.
  2. Given a sequence of words, predict just one word as the next word.

The second kind of model (sequence-to-single-word) is the one that conceptually and practically was easiest for me to understand. The idea is this: For an input sentence like “all work and no play makes jack a dull boy”, we can split the sentence into small chunks that the neural network can learn from. For example “all work and no” as input and “play” (the next word) as output. Here is some code that does just that.
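
In spirit, the chunking looks something like this minimal sketch (not the exact code linked above):

```python
# Split a sentence into (input sequence, next word) training pairs.
sentence = "all work and no play makes jack a dull boy"
words = sentence.split()

seq_length = 4  # length of the input sequences in this toy example
pairs = []
for i in range(len(words) - seq_length):
    input_seq = words[i:i + seq_length]  # e.g. ["all", "work", "and", "no"]
    next_word = words[i + seq_length]    # e.g. "play"
    pairs.append((input_seq, next_word))

for input_seq, next_word in pairs:
    print(" ".join(input_seq), "->", next_word)
```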

With the basic proof-of-concept architecture in place, I started looking for a dataset of song lyrics. One of the first hits on Google was a Kaggle dataset with more than 55 thousand song lyrics. This felt like an adequate amount so I went with that.

New Lines

new lines
gotta see what you can do

oh you know

Gene Lyrica One

Lyrics consist of a lot of short sentences on separate lines, and while the texts on each line are often related in content, they do not necessarily follow the same flow as the prose in a book.

This led to two specific design decisions for creating the training data. First, newline characters (\n) are treated as words on their own, which means that a “new line” can be predicted by the network. Second, the length of the input sequences should not be too long since the context of a song is often only important within a verse or chorus. The average length of a line for all songs happens to be exactly 7 words, so I decided to use 14 words for the input sequences to potentially capture multiline word relationships.
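
A minimal sketch of how the training data can be prepared under these two decisions (the variable names and the printable stand-in for the newline token are illustrative):

```python
# Treat "\n" as a word of its own and build 14-word input sequences,
# with the word that follows each sequence as the prediction target.
lyrics = "hello world\ni know when i been like your love\nand i can't go home\n"

NEWLINE = "\\n"  # printable stand-in so the newline survives the whitespace split
words = lyrics.replace("\n", f" {NEWLINE} ").split()

seq_length = 14
inputs, targets = [], []
for i in range(len(words) - seq_length):
    inputs.append(words[i:i + seq_length])  # 14 consecutive words (possibly spanning lines)
    targets.append(words[i + seq_length])   # the next word, which may be the newline token
```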

A few other decisions worth mentioning:

  • Words are not pre-processed. This means that e.g. running, runnin, and runnin’ will be treated as three different words.
  • Words are not removed or cleaned. For example, the word “chorus” sometimes appears in the dataset to mark the beginning of the song’s chorus.

Well Known With a Twist

well known with a twist for the bed

i got the

oh oh

what you want to do
i’m goin’ down

Gene Lyrica One

The first attempt at training the network yielded some funny results. Because there were hundreds of thousands of parameters to tune in the network, training was extremely slow, so I initially tested it on just the first 100 songs in the dataset. Because of alphabetical ordering, these all happened to be Abba songs.

The final accuracy of the network was somewhere around 80%. One way to interpret this is to say that the network knew 80% of the Abba songs “by heart”. Thus, the network was creating “Abba songs with a twist”. For example, it created the verse:

so long see you baby
so long see you honey
you let me be

Baba Twist

The Abba song “So Long” has the phrase “so long see you honey”, so it improvised a little bit with “so long see you baby” (“you baby” appears in a different Abba song, “Crying Over You”, which probably explains the variation). Or how about:

like a feeling a little more
oh no waiting for the time
if you would the day with you
’cause i’m still my life is a friend
happy new year
happy new year
happy new year
……
[many more happy new years] :-)

Baba Twist

which is probably “inspired” by the Abba song “Happy New Year”. The network was overfitting the data for Abba, which turned out to be fun, so this was a promising start.

Too much information

too much information
i can’t go

Gene Lyrica One

With decent results from Baba Twist (the Abba-network), it was time to try training the network using all 55 thousand songs as input data. I was excited and hopeful that this network would be able to create a hit, so I let the training process run overnight.

Unfortunately, my computer apparently could not handle the amount of data, so I woke up to a frozen process that had only finished running through all the songs once (this is called one epoch, and training often requires 50 or more epochs for good results).

Luckily, the training process automatically saves checkpoints of the model at certain time intervals, so I had some model, but it was really bad. Here is an example:

i don’t know what i don’t know

i don’t know what i don’t know

i don’t know what i don’t know

Tod Wonkin’

Not exactly a masterpiece, but at least Tod was honest about its situation. Actually, “I don’t know what I don’t know” was the only text Tod produced, regardless of the seed text.

In this case, I think there was too much information for the network. This feels a bit counter-intuitive, since we usually want more data, not less, but for a small hobby project like this, it probably made sense to reduce the data size a bit to keep things manageable and practical.

Famous Rock

famous rock are the dream

chorus

well i got a fool in my head
i can be

i want to be
i want to be
i want to be
i want to be

Gene Lyrica One

After the failure of Tod Wonkin’, I decided to limit the data used for training the network. I theorized that it would be better to only include artists with more than 50 songs and have a smaller number of artists in general, because it would potentially create some consistency across songs. Once again, this is a case of “I have no clue what I’m doing”, but at least the theory sounded reasonable.

A “top rock bands of all time” list became the inspiration for what artists to choose. In the end, there were 20 rock artists in the reduced dataset, including the Beatles, the Rolling Stones, Pink Floyd, Bob Dylan and others. Collectively, they had 2689 songs in the dataset and 16389 unique words.

The lyrics from these artists are what created Gene Lyrica One.

It took some hours to train the network on the data, and it stopped by itself when it was no longer improving, with a final “accuracy” of something like 22%. This might sound low, but high accuracy is not desirable, because the network would just replicate the existing lyrics (like Baba Twist). Instead, the network should be trained just enough that it makes sentences that are somewhat coherent with the English language.
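
I will not go through the full model code here (it is on GitHub), but as a rough idea, the setup in Keras can look something like the sketch below. The layer sizes are made up for illustration, and the “stopped by itself when it was no longer improving” part is handled by an early-stopping callback:

```python
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.callbacks import EarlyStopping, ModelCheckpoint

vocab_size = 16389  # unique words in the reduced rock dataset
seq_length = 14     # input sequence length chosen earlier

model = Sequential()
model.add(Embedding(vocab_size, 100, input_length=seq_length))  # word embeddings (size is an assumption)
model.add(LSTM(128))                                            # recurrent layer (size is an assumption)
model.add(Dense(vocab_size, activation="softmax"))              # one probability per word
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    EarlyStopping(monitor="loss", patience=5),                        # stop when the loss stops improving
    ModelCheckpoint("gene.h5", monitor="loss", save_best_only=True),  # keep checkpoints along the way
]

# X: integer-encoded 14-word sequences, y: one-hot encoded next words (prepared as described above)
# model.fit(X, y, batch_size=128, epochs=100, callbacks=callbacks)
```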

Gene Lyrica One felt like an instant disappointment, mirroring the failure of Tod Wonkin’ by producing “I want to be” over and over. At the beginning of this post, I mentioned Gene Lyrica One’s “Hello World” lyrics. Actually, the deterministic version of these is:

hello world is a little man

i can’t be a little little little girl
i want to be
i want to be
……
[many more “i want to be”]

Gene Lyrica One

At least Gene knew that it wanted to be something (not a little little little girl, it seems), whereas Tod did not know anything :-)

The pattern of repeating “I want to be” was (is) quite consistent for Gene Lyrica One. The network might produce some initial words that seem interesting (like “hello world is a little man”), but it very quickly gets into a loop of repeating itself with “i want to be”.

Adding Random

adding random the little

you are

and i don’t want to say

i know i don’t know
i know i want to get out

Gene Lyrica One

The output of a neural network is deterministic in most cases. Given the same input, it will produce the same output, always. The output from the lyric generators is a huge list of “probabilities that the next word will be X”. For Gene Lyrica One, for example, the output is a list of 16389 probabilities, one for each of the available unique words.

The networks I trained were biased towards common words like “I”, “to”, “be”, etc. as well as the newline character. This explains why both Gene Lyrica One and Tod Wonkin’ got into word loops. In Gene’s case, the words in “I want to be” were the most likely to be predicted, almost no matter what the initial text seed was.

Inspired by another Kaggle user, who in turn was inspired by an example from Keras, I added some “randomness” to the chosen words in the output.3 The randomness could be adjusted, but adding too much of it would produce lyrics that do not make sense at all.
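
The trick is to reweight the predicted probabilities with a “temperature” and then draw the next word from the adjusted distribution, instead of always picking the most likely word. A minimal sketch, close to the sampling helper in the Keras text-generation example:

```python
import numpy as np

def sample_word_index(preds, temperature=1.0):
    """Pick the index of the next word from the network's probability output.

    temperature < 1.0 makes the choice more conservative (close to argmax),
    temperature > 1.0 makes it more random."""
    preds = np.asarray(preds, dtype="float64")
    preds = np.log(preds + 1e-8) / temperature
    probs = np.exp(preds) / np.sum(np.exp(preds))
    return int(np.argmax(np.random.multinomial(1, probs, 1)))

# Toy example with a distribution over five "words":
toy_preds = [0.5, 0.2, 0.15, 0.1, 0.05]
print(sample_word_index(toy_preds, temperature=0.5))
```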

All the quotes generated by Gene Lyrica One for this post have been created using a bit of “randomness”. For most of the sections above, the lyrics were chosen from a small handful of outputs. I did not spend hours finding the perfect lyrics for each section, just something that sounded fun.

The final trick

the final trick or my heart is the one of a world

you can get out of the road
we know the sun
i know i know
i’ll see you with you

Gene Lyrica One

A few months ago, TensorFlow.js was introduced, which brings machine learning into the browser. It is not the first time we have seen something like this, but I think TensorFlow.js is a potential game changer, because it is backed by an already-successful library and community.

I have been looking for an excuse to try out TensorFlow.js since it was introduced, so for my final trick, I thought it would be perfect to see if the lyrics generators could be exported to browser versions, so they could be included more easily on a web page.

There were a few roadblocks and headaches involved with this, since TensorFlow.js is a young library, but if you already tried out my lyrics generator in the browser, then that is at least proof that I managed to kind of do it. And it is in fact Gene Lyrica One producing lyrics in the browser!
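
For reference, the conversion itself is a small step once the Keras model is trained. A minimal sketch using the tensorflowjs Python package (the file names are just examples, not the actual paths in the project):

```python
from keras.models import load_model
import tensorflowjs as tfjs

# Load a trained Keras model and write out the files TensorFlow.js can
# load in the browser (a model.json plus binary weight files).
model = load_model("gene.h5")
tfjs.converters.save_keras_model(model, "web/model")
```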

This is the end

this is the end of the night

i was not every world
i feel it

Gene Lyrica One

With this surprisingly insightful observation from Gene (“I was not every world”), it is time to wrap up for now. Overall, I am pleased with the outcome of the project. Even with my limited knowledge of recurrent neural networks, it was possible to train a network that can produce lyrics-like texts.

It is ok to be skeptical towards the entire premise of this setup though. One could argue that the neural network is just an unnecessarily complex probability model, and that simpler models using different techniques could produce equally good results. For example, a hand-coded language model might produce text with better grammar.

However, the cool thing about deep learning is that it does not necessarily require knowledge of grammar and language structure — it just needs enough data to learn on its own.

This is both a blessing and a curse. Although I learned a lot about neural networks and deep learning during this project, I did not gain any knowledge regarding the language structure and composition of lyrics.

I will probably not understand why hello world is a dream for now.

But I am ok with that.


Is it Mila?

One of the great things about the Internet is that people create all sorts of silly, but interesting, stuff. I was recently fascinated by a deep learning project where an app can classify images as “hotdog” or “not hotdog”. The project was itself inspired by a fictional app from HBO’s show Silicon Valley, and it was organized by an employee at HBO.

The creator of the app wrote an excellent article outlining how the team approached building it: from gathering data, through designing and training a deep neural network, to shipping apps to the Android and iOS app stores.

Naturally, I thought to myself: perhaps I can be silly too. So I started a small project to try and classify whether an image contains my dog Mila or not. (Also, the architecture for the hotdog app is called DeepDog, so as you can see, it is all deeply connected!)

The is-mila project is not as large and detailed as the hotdog project (for example, I am not building an app), but it was a fun way to get to know deep learning a bit better.

The full code for the project is available on GitHub, and feel free to try classifying a photo as well.

A simple start

One of the obstacles to any kind of machine learning task is to get good training data. Fortunately, I have been using Flickr for years, and many of my photos have Mila in them. Furthermore, most of these photos are tagged with “Mila”, so it seemed like a good idea to use the Flickr photos as the basis for training the network.

Mila as a puppy

I prepared a small script and command-line interface (CLI) for fetching pictures via the Flickr API. Of course, my data was not as clean as I thought it would be, so I had to manually move some photos around. I also removed photos that only showed Mila from a great distance or with her back to the camera.
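
The script itself is not very exciting. A minimal sketch of the idea, calling the Flickr REST API directly (the API key, user id and paths are placeholders):

```python
import os
import requests

API_KEY = "your-flickr-api-key"  # placeholder
USER_ID = "your-flickr-user-id"  # placeholder

# Search my own photo stream for photos tagged "mila" and ask for a
# medium-sized image URL for each result.
response = requests.get("https://api.flickr.com/services/rest/", params={
    "method": "flickr.photos.search",
    "api_key": API_KEY,
    "user_id": USER_ID,
    "tags": "mila",
    "extras": "url_m",
    "format": "json",
    "nojsoncallback": 1,
})

os.makedirs("photos", exist_ok=True)
for photo in response.json()["photos"]["photo"]:
    url = photo.get("url_m")
    if url:
        with open(os.path.join("photos", f"{photo['id']}.jpg"), "wb") as f:
            f.write(requests.get(url).content)
```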

In the end, I had 263 photos of Mila. There were many more “not Mila” photos available of course, but I decided to use only 263 “not Mila” photos as well, so the two classes “Mila” and “not Mila” had equal size. I do not really want to discuss overfitting, data quality, classification accuracy, etc. in this post, but those are interesting topics for another time.

For the deep learning part, I used Keras, which is a deep learning library that is a bit simpler to get started with than e.g. TensorFlow. In the first iteration, I created a super-simple convolutional neural network (CNN) with just three convolutional layers and one fully-connected layer (and some MaxPooling and Dropout layers in between).
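
As a rough illustration (the exact layer sizes in the repo may differ), such a network can be defined in Keras like this:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

# Three small convolutional blocks followed by one fully-connected layer.
# Filter counts, image size and dropout rates are illustrative choices.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(64, activation="relu"),
    Dropout(0.5),
    Dense(1, activation="sigmoid"),  # probability that the photo is "Mila"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```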

Training this network was faster than I thought and only took a few minutes. In my latest run, the accuracy settled at around 79% and validation accuracy (i.e. for photos that were not used to train the network) at 77% after 57 epochs of roughly six seconds each. This is not very impressive, but for balanced binary classification, anything meaningfully above 50% accuracy is at least better than a coin flip.

Finally, I created a simple website for testing the classification. I did not bother using a JavaScript transpiler/bundler like Babel/Webpack, so the site only works in modern browsers. You can try the simple classification here if you like.

The results from this initial experiment were interesting. In the validation set, most of the photos containing Mila were correctly classified as Mila, and a few were classified as not Mila for no obvious reasons. For example, these two images are from a similar setting, with similar lighting, but with different positioning of Mila, and they are classified differently:

Mila, correctly classified
Mila, incorrectly classified as not Mila

Perhaps more surprising though are the false positives, the photos classified as Mila when they do not have Mila in them. Here are some examples:

Sports car, classified as Mila
Rainbow crosswalk, classified as Mila
Goats, classified as Mila

Mila is certainly fast, but she is no sports car :-)

As of writing this, I am still uncertain what the simple network sees in the photos it is given. I have not investigated this yet, but it would be an interesting topic to dive into at a later stage.

Going deeper

A cool feature of Keras is that it comes with a few pre-trained deep learning architectures. In an effort to improve accuracy, I tried my luck with a slightly modified MobileNet architecture, using weights pre-trained on the ImageNet dataset, which contains a big and diverse set of images.

The Keras-provided MobileNet network is 55 layers deep, so it is quite a different beast from the “simple” network outlined above. But by freezing the weights of the existing network layers and adding a few extra output layers as needed for my use case (binary classification of “Mila” and “not Mila”), the complexity of training the network was reduced, since there were fewer weights to adjust.
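
In Keras, this roughly amounts to the sketch below. The size of the extra layer is an assumption for illustration, not necessarily what the project uses:

```python
from keras.applications import MobileNet
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

# Load MobileNet with ImageNet weights but without its classification head.
base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the pre-trained layers so only the new head is trained.
for layer in base.layers:
    layer.trainable = False

x = GlobalAveragePooling2D()(base.output)
x = Dense(64, activation="relu")(x)         # small extra layer (size is an assumption)
output = Dense(1, activation="sigmoid")(x)  # probability that the photo is "Mila"

model = Model(inputs=base.input, outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```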

After training the network for 48 epochs of about 18 seconds each, the training accuracy settled around 97% and validation accuracy at 98%. The high accuracy was surprising and felt like an excellent result! For example, the Mila pictures shown above were now both correctly classified, and the sports car and rainbow crosswalk were no longer classified as being Mila. However, the goats were still “Mila”, so something was still not quite right…

You can try out the network here if you like.

At this point, I had a hunch that the increased accuracy of MobileNet was mainly due to its ability to detect dogs in pictures (and the occasional goat). Unfortunately, it was worse than that, and photos of dogs, cats, birds, butterflies, bears, kangaroos and even a squirrel were all classified as being Mila.

It seemed I had not created a Mila detector, but an animal detector. I had kind of expected a result like this, but it was still a disappointing realization, and this is also where the story ends for now.

Sneaky squirrels and other animals

To summarize, I tried to create an image classifier that could detect Mila in photos, but in the current state of the project, this is not really possible. Writing this blog post feels like the end of the journey, but there are still many tweaks and improvements that could be made.

For example, it would be interesting to know why the “simple” network saw a rainbow crosswalk as Mila, and it would be nice to figure out how to improve the quality of the predictions for the MobileNet version such that it does not just say that all pets are Mila. One idea could be to clean the training data a bit more, e.g. by having more pets in the “not Mila” photo set or perhaps restrict the Mila photos to close-ups to improve consistency and quality in that part of the data.

One thing is for sure: there is always room for improvement, and working on this project has been a nice learning experience so far. As an added benefit, I managed to mention squirrels in a (technical) blog post, and I will leave you with a picture of the sneaky “Mila” squirrel:

Sneaky squirrel, classified as Mila

(I like squirrels. A lot. It was all worth it just for the squirrel.)

University is what you make of it

Being a developer in a position far removed from academia, I am often confronted with the question of whether my university degree was worth the effort or not. Or to put it more mildly: would I be where I am today without it? I usually arrive at the same conclusion: yes, it was definitely worth it for me. And here is an important thing to keep in mind about higher education: it is what you make it out to be.

Anecdotally, I know both sides of the education opinion spectrum very well. When I was growing up, higher education was the most important thing in the world, and people who did not go through university were frowned upon. I have also often heard the song of how companies hunger for computer science graduates, and how good it is to have a Master’s degree and not “just” a Bachelor’s degree.

On the other hand, I have met many people who told me that education is a waste of time. I also know at least a handful of professional developers who are self-taught, and some of them wear that as a badge of honor, sometimes also dismissing education outright and calling it useless.

I reject the mentality of both these extremes, and at least statistics like the 2017 Stack Overflow Survey seem to indicate that the industry as a whole has a more nuanced view of education. According to the survey, 76.5% of all professional developers have a Bachelor’s degree or higher, which means that roughly one in four professional developers does not. At the same time, 32% (almost a third of all developers) respond that education is not very important, but most of the responses are grouped around the middle, with education being “somewhat important”.

Education or not, neither is right or wrong, and I think it is important to have a balanced view of this. However, I do not want to dismiss the feelings involved here. I would be lying if I said it did not affect me when I was a mid-twenties graduate without professional experience, and I saw much younger self-taught programmers with better business opportunities than myself. But then I realize that they probably did not build a neural network for image classification by hand, nor did they have the opportunity to discuss computer ethics with like-minded peers. And those things gave me immense joy. Likewise, I can sympathize with feelings of the opposite, although it would be disingenuous of me to presume what those feelings are.

The outcome in both cases is the same: it is easy to feel doubt and resentment. From my point of view, this comes in the shape of “why the hell did I waste time in university”, and “how come they got by without a degree?”. When these feelings emerge, they have to be put to rest quickly, because they are not helpful, and most importantly, they are missing the point.

Because in the end, when it comes to professional development, like many other parts of life, there is no right or wrong path to take. Higher education is not a measure of success, but it should not be dismissed either. University can be a tremendously rewarding experience, and the outcome is what you make of it, if you want it.

… and let’s not forget the parties…


Photo by Ian Schneider on Unsplash.

Dig the Data, StoreGrader edition

A new Dig the Data was published yesterday. It has some data insights from StoreGrader which is an app I have been working on for a while now.

For this edition of Dig the Data, I wanted to create a nice looking interactive infographic, and I wanted to combine both static and interactive elements. My previous Dig the Data visualization was quite minimal, but had full interactivity. However, it lacked a bit of the feeling of “niceness” that some static graphics can provide (as well as the magic touch of a designer, which I am not). A good example of this “niceness” is the first Dig the Data, where the entire visualization is a static image created by my colleague Julia.

This time, I teamed up with Maria to create a visualization that combines both static insights (with a bit of animation) as well as interactive graphs to explore.

I am very pleased with the result, and you can check out the post here.

A year of stock and fund trading

A year ago, I started saving up by investing on the fund and stock market. In this post, I will share some of what I have learned, my strategy (or lack thereof) and my results from a year of fund and stock trading. I will skip a lot of small details and instead provide a high-level overview of what I have been doing.

Disclaimer: This is not financial advice. It is purely informational. I probably don’t need to write this disclaimer since I’m not a business, but now that I’ve done it, you have no excuse to blame me if you follow the same strategy as me and lose all your money :-)

Starting out

The financial market is very overwhelming at first, because there are so many terms and weird financial instruments. To me, stocks and funds are the easiest to grasp, because they are the most intuitive. Owning a stock is like owning a small part of a company. Owning a share of a fund is like owning a small part of an investment portfolio. It thus made sense to focus on stocks and funds to begin with.

Even with a limited scope, it took many hours of reading to learn about indicators such as the P/E ratio and the Sharpe ratio, as well as to find tips on different strategies for investment. My bank had some excellent and digestible articles (in Swedish, sorry) that were very helpful.

The first basic conclusion from this research was this: when investing, spread the risk. This is colloquially known as “don’t put all your eggs in one basket”. There are different ways of spreading the risk, and I decided to follow three common suggestions:

  • Invest at regular intervals over time instead of irregular one-time payments. This ensures that investments are made during both economic ups and downs.
  • Invest in multiple regional markets (e.g. North America, Europe, Asia). Trouble in one market might be offset by an economic boom in another.
  • Invest mostly in funds. Funds generally hold 50 or more stocks in their portfolios, which is good for spreading the risk over many companies.

I also wanted to get my feet wet trading stocks, so I decided to split my monthly savings into 80% for buying funds and 20% for stocks. That has been a bad idea so far, but I will get back to that later.

Finding the right fund

Learning about fund trading is one thing, actually starting to buy them is another. The first task was to figure out the regional composition of the funds. Reading various blog posts and using my bank’s automatic fund portfolio generator, I got an overall idea of what the composition could look like. I ended up with this distribution:

Pie chart: regional composition of funds.

A few notes on the chosen distribution (and perhaps specific for my situation):

  • It might seem curious that I put 10% in Sweden, but this is simply because I live here :) Interestingly, the portfolio generator from the bank suggested investing a full 25% in Sweden. I decided this was too much, but 10% is still a pretty big chunk for a single country.
  • “Emerging Markets” is a blanket term that actually often covers multiple regions, for example China and Brazil.
  • Asia/Pacific funds often mostly exclude Japan which has its own category of funds.

With the regional composition figured out, the next step was deciding what specific funds to invest in. This was not as easy.

Passive vs Active Funds — a brief digression

Feel free to skip this section if you know what active and passive portfolio management is and/or if you are not interested in listening to me rant about it. I included extra information on this topic, because I think it is important and interesting. There will be a conspiracy theory as well.

It is useful to mention the two primary portfolio management styles for the funds I’m interested in: active vs passive. The distinction was very confusing to me at first, so it might be helpful to first re-iterate what they mean:

  • Passively managed funds are also known as index funds. They follow a specific market index (e.g. S&P 500, MSCI World Index), and their stock portfolio is a more-or-less direct reflection of that index.
  • Actively managed funds have investors that make active decisions (hence the name) on what to buy and sell with the goal of outperforming “the rest of the market”. This is a little vague, but it often means that they try to beat a specific index or simply “outperform the market”.

I think “passive” is a bit of a misnomer. From my perspective, “passive” means something that just “takes care of itself” and that is not the case of passive funds. Most of them are still managed by an actual human that buys and sells the stocks in the fund’s portfolio.

Besides the management style, the most important distinction between a passive and an active fund is the management fee. Passive funds are usually much cheaper than active funds. The management fee for a typical passive index fund on my trading platform ranges from 0.2%-0.5% while active funds range from 1%-2%.

This difference does not sound like a lot until it is visualized on a graph, so allow me to do that now. Assume you save $100 per month over 25 years, i.e. the total investment is $30,000. Assume also that you have a 7% rate of return each year. The graph below shows the value of the investment over time for different management fees.

Fund value over time with a 7% return rate and various management fees. I generated the graphs in this sheet.

The result is quite interesting. Without any fees, the $30,000 investment ends up at about $81,007 (a 170% increase). That’s pretty good, but probably unrealistic since most funds have at least a small fee. With a cheap fund (0.2% fee), the money would have reached $78,490. Still pretty good! With a more expensive fund (1% fee), the money would have reached $69,299.

Think about this for a second. With just 1% in fees, the total outcome is 12% lower ($9,191) than if the fee had been just 0.2%. For a fund with 2% in fees, the outcome is a whopping 24% lower. The reason for this is that fees are double-bad. Not only do they dig into the investment each year, but that lost capital is also lost potential for future investment. So the effect is much worse over time, as seen on the graph.
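
For the curious, here is a small sketch that closely reproduces these numbers, assuming the fee is simply subtracted from the annual return and the balance compounds monthly:

```python
def final_value(monthly=100.0, years=25, annual_return=0.07, annual_fee=0.0):
    """Value of saving `monthly` for `years`, with the fee subtracted from
    the annual return and the contribution added at the end of each month."""
    monthly_rate = (annual_return - annual_fee) / 12
    value = 0.0
    for _ in range(years * 12):
        value = value * (1 + monthly_rate) + monthly
    return value

for fee in (0.0, 0.002, 0.01, 0.02):
    print(f"fee {fee:.1%}: ${final_value(annual_fee=fee):,.0f}")
```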

At this point, it is relevant to ask the big question: do active funds have a higher rate of return than passive funds? This is literally a million-dollar question, and I don’t know the answer. I can appeal to your intuition and share some articles though.

First, how well should an active fund perform for it to be a better choice than an index fund? In rough numbers, a fund with a 1.5% fee needs to be 1.3 percentage points better than an index fund with a 0.2% fee. Let’s say the index fund has a rate of return of 7%, like in the example above. The active fund then needs a rate of return of 8.3% just to break even with the index fund.

Here is a follow-up question: do you think it is possible for an active fund to outperform the market for 25 years straight? Of course not, it is just not possible. Or at least: it is not mathematically possible for all active funds to do this.

Don’t take my word for it though. Here are a few articles that I found: Almost no one can beat the market, Mission Impossible: Beating the Market Forever, How Many Mutual Funds Routinely Rout the Market: Zero.

The only people that say it is possible to beat the market are the fund managers. And for good reason. If a billion-dollar fund takes out a 2% fee, that is a very hefty salary for potentially only a handful of managers. In fact, it is a salary so overwhelmingly high that it becomes very difficult to trust that they always have their investors’ best interest in mind.

This is actually my biggest issue with active funds, and I admit it is ideological and not necessarily rational. I hate the thought of some rich manager investing my money with absolutely no risk for themselves. If a manager under-performs, they might lose their job, but I lose all my money. If I invest in an index fund and the market crashes, I cannot really blame the market.

This section has already gotten a bit out of hand, but I think I promised a conspiracy theory at the beginning of this section so let me end with that: could the term “passive fund” be cooked up by the financial elite to make it sound less attractive? We are often told to have active lifestyles, make active choices in life, participate actively in our community etc. It is very rare that “passive” is a good thing. So when choosing a fund to invest in, we might subconsciously be turned off by “passive” funds. Think about it. It all makes sense! :-)

Low cost funds for the win

It should be clear by now that I have a preference towards passive index funds. However, that is not to say that active and passive funds cannot coexist. There are a few fairly cheap active funds out there, so when I finally had to actually choose funds for my portfolio, I decided to take a somewhat mixed approach based on the following rules of thumb:

  • Keep fees as low as possible, preferably below 1%.
  • Invest in about two funds per region. At least one passive and possibly one active.
  • Only consider funds with a sustainability profile/statement.
  • Look at past performance, Sharpe ratio and Morningstar rating, but do not get hung up on them.

I ended up with 12 funds initially last year, and have changed out two funds and added another, so I now invest in 11 funds monthly, but I still have 13 funds in the portfolio. I could share the entire portfolio at this point, but I could not find any good tools for sharing a Swedish fund portfolio (besides typing them out in Excel), and the information will not really be helpful for anyone living outside Sweden anyway.

Here are some numbers though…

  • The average management fee for my portfolio is 0.47%. The lowest fee is 0% and the highest is 1.8% (I recently changed this for my monthly transfer). Without the most expensive fund, the average fee is 0.35%.
  • The investment has grown between 6% and 10%. The number varies quite a lot during any given month.
  • The highest growth for a single fund is 14.4% and is a fund for the Emerging Markets region.
  • The lowest growth for a single fund is 6.3% and is a fund for the North America region.

A note on growth: The numbers I quote above are the actual growth in terms of how much extra money I would get if I sold all funds right now. The growth of the portfolio in terms of the market itself is calculated by my bank to be 25.9% in one year. Honestly, I don’t understand how this number is calculated, but it looks pretty on a graph when compared to the Dow Jones World Index.

Hopefully, my fund portfolio will continue to grow steadily. Anything above 5% is a win for me, so I am currently very happy with how it is going.

Not a stock trader

While my funds have done well, my stocks have not. I currently have a few shares in 19 different companies. My buying strategy has been a bit random, and I have tried out different approaches to trading stocks. Here are a few highlights:

  • The Cash Dividends. Many big companies pay dividends. This sounded fun, so I bought some stock in a few boring, stable companies like Nordea (a bank), Telia (a phone/telecommunication company) and Knowit (an IT consultancy). I got my first payouts a few months ago, and it was actually fun to get extra pizza money during the early months of the year! Besides the dividends, the stock prices of these companies are also the most stable, with the exception of Knowit, which has increased 124% in one year and is pretty much the only reason why my portfolio is not showing red numbers (yet).
  • The Stock Emissions. When companies want extra money for whatever reason (e.g. more research, new investments or paying off loans), they will often issue new stock. I participated in one of these emissions for a company called BrainCool that produces and sells equipment for medical cooling of the brain. So far, the stock has not been very cool though. The value of my stock is down 48% as of this writing.
  • The IPO. I participated in the initial public offering (IPO) for a company called Isofol Medical that produces a drug used during colorectal cancer treatment that potentially works better than existing drugs. The value of the stock fell immediately after the IPO and my stock is now down 20% so it was also not a good investment.
  • The company in trouble. I purchased stock in Eltel without realizing the company was in obviously bad shape. I basically just looked at their (previously) fairly generous dividend payouts, and failed to pay attention to the numerous scandals and troubles the company was going through. Because of their problems, they decided not to pay any dividends to investors this year. But not only that, the stock price itself has dropped 51% since I bought it.

Overall, my stock portfolio has increased 0.21% in value and that includes cash dividends. Let’s round that to 0%, and the only redeeming factor is that at least I did not lose any money (yet).

The last story in the list above is the best example of my incompetence as a stock trader, and why I should probably just stick to funds. After all, I will always be an amateur trader, and I just do not have the time and skills to really analyze the market and make informed decisions about my stock purchases.

The path forward

A year ago, I started at $0 and grew my portfolios slowly using a monthly money transfer. As you can imagine, this means my total investment is still quite small. This also explains why I have been willing to take some risks and experiment a bit with stock trading. Losses suck, but they are not catastrophic when the investment is small. However, as my savings keep growing, there is more at stake. My investments are probably going to be a significant portion of my future pension, so having fairly stable growth from funds is probably better than trying my luck with stock trading, even though the potential yields from stocks are much higher.

I have not fully decided whether to give up on stock trading completely though, because there is a certain allure to the idea of owning a part of many companies directly as well as the dream of the 5,000% stock increase.

How to shape the path forward is still an open question for me. Writing this post has helped gather my thoughts on the subject, but I have not concluded anything yet so I will probably keep doing what I am doing. Money decisions are always difficult :-)


Photo by rawpixel on Unsplash