80/20 and data science

“20% of the task took 80% of the time”. If you have ever heard someone say that, you probably know about the so-called 80/20 rule, also known as the Pareto principle.

The principle is based on the observation that “for many events, roughly 80% of the effects come from 20% of the causes”. The Wikipedia article mentions, among a bunch of other examples, how in some companies “80% of the income comes from 20% of the customers”.

In contrast to this, most of the references to the 80/20 rule that I hear in the wild are variations of the (often sarcastic) statement at the beginning of the post, and it is also this version that is more fun to talk about.

Estimating the finishing touch

In software development, the 80/20 rule often shows up when the “finishing touches” of a task are forgotten or their complexity is underestimated. For example, a developer might forget to factor in the time it takes to add integration tests to a new feature, or underestimate the difficulty of optimizing a piece of code for high performance.

In this context, the 80/20 rule could, to a certain extent, be seen as the result of poor task management, but it is worth noting that it is not always that simple. Things get in the way, like when the test suite refuses to run locally, or the optimized code cannot work without blocking the CPU in a single-threaded language, forcing the developer to take a different approach to the problem (this is purely hypothetical, of course…).

Related to this, Erik Bernhardsson recently wrote an interesting treatise on the subject of why software projects “take longer than you think”, and I think it is worth sneaking in a reference. Here is the main claim from the author:

I suspect devs are actually decent at estimating the median time to complete a task. Planning is hard because they suck at the average.

Erik Bernhardsson, Why software projects take longer than you think – a statistical model

The message here resonated quite well with me (especially because of the use of graphs!). The author speaks of a “blowup factor” (actual time divided by estimated time) for projects, and if his claims are true, there could be some merit to the idea that the last 20% of a task can easily “blow up” and take 80% of the time.
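To get a feel for the argument, here is a minimal, purely illustrative sketch in Python. It assumes that the blowup factor follows a lognormal distribution (a heavy-tailed model in the spirit of the post); the parameters are made up for the example.

```python
import numpy as np

# Illustrative assumption: the "blowup factor" (actual / estimated time) is
# lognormally distributed, so most tasks land close to the estimate, but a
# few blow up badly. The parameters are made up for this sketch.
rng = np.random.default_rng(42)
blowup = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

print(f"median blowup factor: {np.median(blowup):.2f}")  # ~1.0, so single estimates look fine
print(f"mean blowup factor:   {blowup.mean():.2f}")       # ~1.6, so the total still overruns
```

The median sits close to one, so any individual estimate looks reasonable, but the mean is dragged up by the tail, which is exactly the gap between estimating a single task and planning a whole project.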

Dirty data

A common perception of data science is that most of the time spent “doing data science” goes into creating models. For some data scientists, this might be the reality, but for the majority of those I have spoken to, preparing data takes up a significant amount of time, and it is not the most glamorous work if you are not prepared for it.

I recently gave an internal talk at work where I jokingly referred to this as the 80/20 rule of data science: 80% of the time is spent on data cleaning (the “boring” part), and 20% on modeling (the “fun” part).

This is not really an 80/20 rule, unless we rephrase it as “80% of the fun takes up only 20% of the time” or something like that.

When it comes to deploying models in production, the proportions sometimes shift even further. The total time spent on a project might be 1% on modeling and 99% on data cleaning and infrastructure setup, but it’s the 1% (the model) that gets all the attention.

The future of data cleaning

In the last couple of years, loads of startups and tools have emerged that do automatic machine learning (or “AutoML”), i.e. they automate the fun parts of data science, while sometimes also providing convenient infrastructure for exploring data.

If we assume that the 80/20 rule of data science is correct, these tools are thus helping out with 20% of data science. However, the first company that solves the problem of automatically cleaning and curating dirty data is going to be a billion-dollar unicorn. Perhaps the reason that we have not seen this yet is that dealing with data is actually really difficult.

For now, I suspect that the “80/20 rule of data science” will continue to be true in many settings, but that is not necessarily a bad thing. You just gotta love the data for the data itself :-)

Neurons spike back

Neurons Spike Back (https://neurovenge.antonomase.fr/) was featured in the latest Data Science Weekly newsletter. I would normally pass on such long articles, but the history of AI is interesting, so I gave it a shot. Reading the paper felt like a marathon, and I only completed it through sheer force of will, lots of coffee, and the fact that it is cold and raining outside.

The article is very difficult to read (at least for me), not because it is filled with theory, but because the language is dry and academic, and it tries to condense decades of AI research history into a fairly short article (given the topic) that covers two opposing schools of research and thought: connectionist and symbolic AI.

Despite my warning above, if you have the patience, the article is a fairly good overview of how we ended up where we are today, with deep learning dominating the state of the art in many fields of AI.

I found it particularly interesting to learn about the Mark I Perceptron, a hardware neural network constructed in the 1960s for simple object recognition. It is a good reminder that the concepts we use today have been around for a very long time, and I often find that knowing a bit of the history behind them helps me understand what we are doing in the present.

AI computing requirements

Whenever there is a new announcement or breakthrough in AI, it always strikes me how far out of reach the results are for individuals and small organizations to replicate. Machine learning algorithms, and especially deep learning with neural networks, are often so computationally expensive that they are infeasible to run without immense computing power.

As an example, OpenAI Five (OpenAI’s Dota 2-playing bot) used 128,000 CPU cores and 256 GPUs, training continuously for several months:

In total, the current version of OpenAI Five has consumed 800 petaflop/s-days and experienced about 45,000 years of Dota self-play over 10 realtime months.

OpenAI blog post “How to Train Your OpenAI Five”

Running a collection of more than a hundred thousand CPU cores and hundreds of GPUs for ten months would cost several million dollars without discounts. Needless to say, a hobbyist such as myself would never be able to replicate those results. Cutting-edge AI research like this comes with an implicit disclaimer: “Don’t try this at home”.
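As a rough sanity check on that figure, here is a back-of-envelope sketch. The hourly prices are placeholder assumptions (real cloud prices vary a lot by provider, hardware, and discounts), but the order of magnitude lands in the millions either way.

```python
# Back-of-envelope cost estimate for a training run of this size.
# The hourly prices are illustrative assumptions, not real quotes.
cpu_cores = 128_000          # CPU cores, from the OpenAI blog post
gpus = 256                   # GPUs, from the OpenAI blog post
hours = 10 * 30 * 24         # roughly 10 months of continuous training

cpu_price_per_core_hour = 0.005  # assumed preemptible-style price, USD
gpu_price_per_gpu_hour = 0.50    # assumed price, USD

cpu_cost = cpu_cores * hours * cpu_price_per_core_hour
gpu_cost = gpus * hours * gpu_price_per_gpu_hour
print(f"CPU: ${cpu_cost:,.0f}, GPU: ${gpu_cost:,.0f}, total: ${cpu_cost + gpu_cost:,.0f}")
# -> CPU: $4,608,000, GPU: $921,600, total: $5,529,600
```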

Even on a smaller scale, it is not always possible to run machine learning algorithms without certain trade-offs. I can sort a list of a million numbers in less than a second, and even re-compile a fairly complex web application in a few seconds, but training a lyrics-generating neural network on less than three thousand songs takes several hours to complete.

Although a comparison between number sorting and machine learning seems a bit silly, I wonder if we will ever see a huge reduction in computational complexity, similar to going from an algorithm like bubble sort to quicksort.
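To make the sorting analogy concrete, here is a small, purely illustrative timing sketch (exact numbers depend on your machine). It sorts a million numbers with Python’s built-in Timsort and compares that to an O(n²) bubble sort on just one percent of the data.

```python
import random
import time

def bubble_sort(xs):
    """Plain O(n^2) bubble sort; only run it on small inputs."""
    xs = list(xs)
    n = len(xs)
    for i in range(n):
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

data = [random.random() for _ in range(1_000_000)]

start = time.perf_counter()
sorted(data)  # built-in Timsort, O(n log n)
print(f"built-in sort, 1,000,000 numbers: {time.perf_counter() - start:.2f} s")

start = time.perf_counter()
bubble_sort(data[:10_000])  # only 1% of the data
elapsed = time.perf_counter() - start
print(f"bubble sort, 10,000 numbers: {elapsed:.2f} s")

# O(n^2): 100x more data means roughly 10,000x more time, i.e. hours, not seconds.
print(f"extrapolated bubble sort, 1,000,000 numbers: ~{elapsed * 100**2 / 3600:.1f} hours")
```

The jump from seconds to hours in the extrapolation is roughly the kind of gap I am wishing an algorithmic breakthrough would close for neural network training, although there is no guarantee such a shortcut exists.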

Perhaps it is not fair to expect to be able to replicate the results of a cutting-edge research institution such as OpenAI. Dota 2 is a very complex game, and reinforcement learning is an area of research that is developing fast. But even OpenAI acknowledges that recent improvements to their OpenAI Five bot are primarily due to increases in available computing power:

OpenAI Five’s victories on Saturday, as compared to its losses at The International 2018, are due to a major change: 8x more training compute. In many previous phases of the project, we’d drive further progress by increasing our training scale.

OpenAI blog post “How to Train Your OpenAI Five”

It feels slightly unnerving to see that the potential AI technologies of the future are currently only within reach of a few companies with access to near-unlimited resources. On the other hand, the fact that we need to throw so many computers at mastering a game like Dota should be comforting for those with gloomy visions of the future :-)