With the rise of machine learning MOOCs and cheaper compute power, it’s becoming much easier for data enthusiasts to explore the depths of machine learning and the data science toolkit. Sprinkle in the continuous success stories surrounding machine learning in popular news, and data laymen begin to develop an appetite for data science. Before I go on, I need to be clear that I believe the growing supply of self-taught data scientists is overall a phenomenal development for the field of data science. It will only lead to advancements down the road as more people are included. What I am focused on here are those models that are practically copy-pasted from the online course you just passed.
“Give a small boy a hammer, and he will find that everything he encounters needs pounding. It comes as no particular surprise to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled.” — Abraham Kaplan
Just because you learned TensorFlow doesn’t mean the next project you work on requires a 12-layer neural network. The increased appetite for machine learning, AI, and all the other data science buzzwords in the news often picks up even more steam in the business world, especially in large corporations. It propagates a feeling of “If I don’t use ML, my company will be left in the dust.” Pair that with analysts now learning to create deep learning models in a few lines of Python, and we quickly forget the fundamentals of good old-fashioned analysis.
A beautifully presented PowerPoint covering the latest neural network, decision tree, or regression model an analyst created does not, by itself, say anything about the usefulness of the model. Yet there seems to be a sense of credibility granted to models generated by a machine over those created by a human, regardless of performance, at least in the business world.
For a consumer, it’s fantastic to know that a website will learn what you like and make relevant recommendations based on your feedback. It doesn’t matter that the model recommends The Godfather even though you have only been watching The Office, Parks and Rec, and Reno 911. The model knows you, it’s learning, it just made a cute mistake… right?
Managers and executives in most corporations today often know enough about machine learning to follow along during the impressive presentation, but rarely enough to recognize a bad model. This is not a shot at management. They weren’t hired to evaluate models; they were hired to evaluate people. They know Johnson is a quality analyst, he has the best intentions, and heck, he’s even been taking those machine learning courses on Coursera!
More stock is put into the fact that the model exists than into what the model produces. Now, this is clearly a generalization that does not apply to the more technically inclined industries. And it’s much easier for a business to see the ill effects of a neural network that fails to catch manufacturing defects than of one that misclassifies customer sentiment. Sure, we nailed 82% accuracy during testing, and when we scrubbed the results everyone agreed things were looking good. But once the model has been pushed into production, what company is going to spend the time or money to keep tabs on it?
So long as it’s still spitting out results, that’s less manual work for another employee. On top of that, there is likely some new KPI tracking the volume the model produces, so naturally everything is going great. Let’s put Johnson on the next initiative; he does great work!
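Keeping tabs on a production model doesn’t have to be heavyweight. Here is a minimal, hypothetical sketch: periodically hand-label a small sample of the model’s live output and compare the hit rate against the accuracy reported at test time. The function name, sample data, and tolerance threshold below are all illustrative assumptions, not anything from a real system.

```python
# Hypothetical sketch: compare live accuracy on a hand-reviewed sample
# of production predictions against the accuracy measured at test time.

def check_model_drift(reviewed_sample, test_accuracy=0.82, tolerance=0.05):
    """reviewed_sample: list of (prediction, human_label) pairs taken
    from a manually reviewed slice of production output."""
    if not reviewed_sample:
        raise ValueError("need at least one reviewed prediction")
    hits = sum(1 for pred, label in reviewed_sample if pred == label)
    live_accuracy = hits / len(reviewed_sample)
    # Flag drift if live accuracy falls meaningfully below test accuracy.
    drifted = live_accuracy < test_accuracy - tolerance
    return live_accuracy, drifted

# e.g. 30 reviewed cases, of which the model got 20 right
sample = [("defect", "defect")] * 20 + [("defect", "ok")] * 10
acc, drifted = check_model_drift(sample)
print(f"live accuracy {acc:.2f}, drifted: {drifted}")
```

Even a crude check like this, run monthly on a few dozen cases, is the difference between knowing the model still works and merely assuming it does.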
I often find these models end up becoming a dependency, or the baseline for a business process. Employees eventually shift away from the manual work that was actually more effective, albeit likely more limited in scope. Yes, a machine is now scanning 1,000 events and I only have to review the 5 it flags, whereas I was manually reviewing 30 cases before. That sounds like a fantastic improvement, right?
If your manual process was hot garbage and the model is slightly cooler garbage, then okay, maybe it’s better. But it’s more likely your manual process was actually decently effective at identifying the cases you were looking for. Plus, you were much more intimately familiar with the output, the process, and its capabilities. Every incorrect case you review is time wasted, and generally people don’t like wasting their own time on meaningless tasks. That pressure produces small optimizations to the manual process over time, leaving it more specific and accurate than the general ML model.
Again, I am focused on the “easy button” models that make their way to production. Models that are given the proper time, resources, and oversight, and that are pointed at the right problem, will nearly always outperform a manual human process. Because the shift away from manual work is often gradual at larger companies, it’s rare that any attribution is paid to the model itself or to its developer. Instead, management may see the decline in performance and seek a new model to solve what looks like a new business problem, further perpetuating the cycle. Hopefully the next model can at least take some lessons from the first.
Make sure the new problem you are handed can truly benefit from machine learning. If so, then absolutely give it a shot, but be critical of the results. If it’s not looking great, then at least you tried and likely learned something in the process. Otherwise, don’t forget about basic analysis.
Pivot tables get a bad rap with the rise of big data, but let’s be real: critical business processes are still running off them, and done right, they can be more insightful than most machine learning models.
Now, a count and sum of sales aren’t going to spur new actions on their own, but you just attempted to come up with new features for your fancy new model, right? Let’s pop those puppies in and start looking around. Yes, I get that this is exploratory analysis. But the results of exploratory analysis are often far more insightful and actionable than spinning up a Jupyter Notebook that attempts regression on uncorrelated variables.
As slow as humans are compared to machines, we can very easily see that Feature 1 seems to be way out of whack in Dimension B. Follow that thread down through more exploration and you will be more likely to uncover the actual cause of the problem, rather than implementing a model to help you mitigate it.
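To make the “Feature 1 in Dimension B” idea concrete, here is a sketch of that kind of pivot-table exploration using pandas. The DataFrame, column names, and numbers are all invented for illustration; the point is how little code it takes to spot the anomaly.

```python
import pandas as pd

# Toy data; "dimension" and the feature columns are made-up names
# mirroring the "Feature 1 in Dimension B" example above.
df = pd.DataFrame({
    "dimension": ["A", "A", "B", "B", "C", "C"],
    "feature_1": [10, 12, 95, 102, 11, 9],
    "feature_2": [5, 6, 5, 7, 6, 5],
})

# A plain pivot table: the mean of each feature by dimension.
pivot = pd.pivot_table(df, index="dimension",
                       values=["feature_1", "feature_2"],
                       aggfunc="mean")
print(pivot)
# Dimension B's feature_1 average is roughly 10x the other dimensions,
# which is the thread worth pulling before reaching for a model.
```

Five lines of aggregation surface the question “why is Dimension B so different?”, which is usually more actionable than a model that quietly learns around the anomaly.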
- MOOCs are great, more people learning data science is great, but not everyone who passes a class should push models to production.
- Just because a model is live doesn’t mean it’s always better than the old-school manual process. Spreadsheets aren’t always the enemy.
- Don’t attempt to develop an ML model to address a problem that could be resolved through fundamental/exploratory analysis.
- If a model is relevant, be critical of the results and accept that sometimes it won’t return meaningful ones. Data science is supposed to hit a lot of dead ends; if you haven’t found any, you aren’t doing it right.