Automation and AI can be applied to innovate products, automate processes, and improve customer experiences, often by making predictions. Those predictions can produce tremendous value for the business, but predicting the future is inherently difficult.
The components of the automation tasks may be doing something relatively straightforward, such as using OCR (optical character recognition) for transcription into a database. Or they may be more complex, like using chatbots and your model’s predictions to route customer inquiries in real time.
Even the most well-executed automation and AI can get confused by exceptions, edge cases, and unusual data, and when that happens, a person must be involved to process data, moderate a situation, or make a decision.
Also, machine learning models will produce predictions that vary in their confidence, often producing what are called propensity scores. You can act on both high and low scores with confidence, but you also have to deal with what CloudFactory’s LinkedIn Live guest Dean Abbott likes to call the “squishy middle,” where the scores indicate that either outcome is likely.
As with so many things in business, something similar to an 80/20 ratio applies. That is, you are likely to discover you can automate about 80% of a process but will need people to be involved about 20% of the time.
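To make this routing concrete, here is a minimal Python sketch of score-based triage. The 0.2 and 0.8 thresholds, and the roughly 80/20 split they imply, are illustrative assumptions rather than values from the article; you would tune them to your own tolerance for error.

```python
# Minimal sketch of propensity-score routing. The 0.2 / 0.8 thresholds are
# illustrative assumptions, not recommendations.

def route_case(propensity: float) -> str:
    """Decide who handles a case based on the model's propensity score."""
    if propensity >= 0.8:   # confident the event will happen: automate
        return "automate: treat as positive"
    if propensity <= 0.2:   # confident the event will not happen: automate
        return "automate: treat as negative"
    return "human review"   # the "squishy middle": either outcome is likely

for score in [0.05, 0.35, 0.55, 0.91]:
    print(f"{score:.2f} -> {route_case(score)}")
```

Cases that fall between the thresholds go to a person; everything else is handled automatically.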
Data scientists are enthusiastic puzzle solvers, and they enjoy their work. Given “just a little more time,” they can figure out some of the edge cases and increase the comprehensiveness of the model.
However, to unlock the full potential of automation and AI, you must strategically apply people alongside them. That is the best way to get value now, while you concurrently improve the model. Sometimes called humans in the loop, these people must have a good understanding of your rules and your domain but, perhaps surprisingly, they rarely have to be subject matter experts.
In this article, the first in a series of three, we explore how you can make strategic choices about those people in the process, so you can generate the best outcomes for your organization.
The first phase of the well-known Cross-Industry Standard Process for Data Mining (CRISP-DM) is business understanding. It is critical to begin with an assessment of the current situation from a non-technical business perspective. It is too early to worry about algorithms or model accuracy.
In this phase, it is better to ask straightforward questions, such as:
- What processes are taking too long and creating a backlog?
- Is there an identifiable subset of cases that is relatively easy to process or whose likely outcome is easy to predict?
- Are there other cases that are ill-defined or difficult to predict?
Unless your organization is a startup, you have mature existing processes, some aspects of which are working well while others are creating challenges. At this stage, it’s important to consider which tasks and decisions benefit from ML predictions and which make more sense to continue to route to people.
James Taylor, CEO of Decision Management Solutions, describes it this way: “Companies that focus on the decision they want to improve before doing their analytic work are much more likely to succeed in operationalizing an analytic or data-driven approach.”
So there you have it — for better outcomes, start with the decision you want to improve. Across the industry, we’ve all seen enough automation and ML projects fail that we know a few things about what not to do.
Here are three common mistakes modelers make that you’ll want to avoid:
The first mistake is taking on too many outcomes at once. It’s best not to disrupt existing people or processes in more than one area at a time, so if you are targeting multiple outcomes, it is often best to start by modeling just two and keep the third routed to a person or an existing process. Experienced modelers also know that predicting three or more outcomes complicates every phase of the process. Go to market with the simpler version first.
There is a classic example from the earliest days of the practical application of machine learning. When predicting telecommunications customer loyalty, a distinction is often made between voluntary churn, when a customer leaves for a competitor, and involuntary churn, when a customer is lost to non-payment. If you have a successful collections process in place, keep it, and use the model only to reduce voluntary churn. Let the existing involuntary churn processes remain intact.
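As a hypothetical illustration of that split, the sketch below keeps involuntary churn (non-payment) with the existing collections process and builds the modeling set from the remaining customers. The column names and values are invented for the example, not taken from any real project.

```python
import pandas as pd

# Hypothetical data: column names and values are invented for illustration.
customers = pd.DataFrame({
    "tenure_months": [3, 24, 60, 12, 8],
    "monthly_spend": [20, 55, 80, 35, 25],
    "churn_type":    ["voluntary", "none", "none", "involuntary", "voluntary"],
})

# Involuntary churn keeps its existing collections process; exclude it from the model's scope.
to_collections = customers[customers["churn_type"] == "involuntary"]
modeling_set = customers[customers["churn_type"] != "involuntary"].copy()

# Binary target for the model: did the customer leave voluntarily?
modeling_set["voluntary_churn"] = (modeling_set["churn_type"] == "voluntary").astype(int)

print(f"{len(to_collections)} case(s) stay with collections; "
      f"{len(modeling_set)} case(s) go to the voluntary-churn model")
```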
The second mistake is rushing to modeling. Modelers with a lot of field experience learn early not to throw all of the variables into the model at once. You have to start with a solid foundation of input variables and a first-draft model, and then iterate by adding complexity over time.
In the LinkedIn Live session, Dean Abbott described iteration as working with the “data elements you bring into the model and the features you create from those data elements.” In other words, iteration effort should not be spent simply tweaking the model, but rather should involve starting with an initial dataset and adding width — that is, more and more variables — with each iteration.
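One way to picture “adding width” is the sketch below, which re-fits the same simple model on progressively wider sets of input variables. The dataset and the three feature groups are stand-ins chosen for convenience, not anything from the article.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in dataset; each "iteration" simply widens the set of input variables.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

iterations = [
    list(X.columns[:5]),   # iteration 1: a handful of core variables
    list(X.columns[:15]),  # iteration 2: add another group of variables
    list(X.columns),       # iteration 3: all variables prepared so far
]

for i, cols in enumerate(iterations, start=1):
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    score = cross_val_score(model, X[cols], y, cv=5).mean()
    print(f"iteration {i}: {len(cols):2d} variables, mean CV accuracy {score:.3f}")
```

The point is the shape of the workflow: each iteration brings in more data elements and derived features rather than endlessly tuning the same model on the same inputs.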
Data preparation takes time, and a lot of it. So the strategic approach is to deploy first with your structured data and start to earn ROI, while routing unstructured data to existing processes and letting the data scientists work on the next iteration. If you wait for perfect, you’ll never deploy. Dean described this tendency in a powerful way:
“It’s really easy for researchers, for all of us who are trying so hard to build the best model possible, to just put our heads down in the weeds, where you can spend months. There’s no end to interesting questions to answer about the data,” he said.
How do you support the data science team in their efforts? Plan for an integrated system in which human processing and computer processing work together. Not switching prematurely to 100% automation takes the pressure off the first iteration and lets you put it into production sooner.
The third mistake is relying on a single model. Deployment is like triage: real-world solutions rarely involve a single model; instead, multiple models route cases to different processes. If your model is working well but some processes aren’t, build a separate model to deal with them. This mistake is closely related to the first: building a first-iteration model that is too complicated is a mistake, but when you are able, you should add complexity, and that often takes the form of adding models.
For instance, during our LinkedIn Live interview, Dean shared a data science example related to tracking medical causes when Special Forces trainees dropped out of a training program. On this project, they initially addressed the primary mission: to predict the trainee’s likelihood to pass or fail training.
Once they had established a working model, they tried to predict a third outcome: dropouts due to medical reasons. Eventually, they added more complexity to address “roll over” cases, trainees who graduated but in a different cohort from the one in which they began training.
Can you imagine if they had tried to take on all of this complexity on the first pass? They would have never finished the project.
One of my favorite examples of this third mistake in my career came from a project in customs and border control inspections. A member of the project team, there as a Subject Matter Expert (SME), described it elegantly. He envisioned a different risk model for each of the three physical lanes that passengers pass through in the actual building: the X-ray lane, the canine lane, and the “mud on boots” lane.
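Purely as one way to read that vision, and not the project’s actual implementation, a per-lane routing sketch might look like the following. The passenger attributes and scoring rules are invented placeholders standing in for trained risk models.

```python
# Toy risk "models", one per inspection lane. Real models would be trained;
# these rules and the passenger attributes are invented placeholders.

def xray_risk(p):          # risk that an X-ray inspection is warranted
    return 0.9 if p["undeclared_goods"] else 0.1

def canine_risk(p):        # risk relevant to a detection-dog inspection
    return 0.8 if p["carrying_organics"] else 0.2

def mud_on_boots_risk(p):  # agricultural / contamination risk
    return 0.7 if p["visited_farm"] else 0.1

LANE_MODELS = {
    "x-ray": xray_risk,
    "canine": canine_risk,
    "mud on boots": mud_on_boots_risk,
}

def route_passenger(p):
    """Send the passenger to the lane whose risk model scores them highest."""
    scores = {lane: model(p) for lane, model in LANE_MODELS.items()}
    return max(scores, key=scores.get), scores

passenger = {"undeclared_goods": False, "carrying_organics": True, "visited_farm": True}
lane, scores = route_passenger(passenger)
print(f"route to the {lane} lane; scores: {scores}")
```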
That SME had the right idea. We continue to run human processes with human expertise, and machine learning models make those processes more efficient. If we get caught up in the math and forget about the organizations these models serve, we get distracted from our real mission.
In the two articles that follow, we will elaborate on this strategy with a focus on how to build complete systems with people and machines working together. We will look at some real-world applications of automation and ML and explore the challenges development teams encountered in building them. We’ll also share how they solved those challenges. Check out our LinkedIn Live session, featuring SmarterHQ’s Chief Data Scientist Dean Abbott.
This is the first article in a three-part series.