Lessons from cloud computing foundational services
You don’t have to look far back in time. Cloud computing isn’t that old, yet it says a lot about how technology gets commoditized. Look closely at the major cloud providers and a clear pattern emerges: there is little differentiation at the bottom of their stacks.
Take cloud storage or compute, for instance. Is there really such a big delta between what AWS, Google, and Microsoft offer? Beyond pricing nuances, the services are fundamentally the same. Typically, an enterprise picks an offering on the basis of cost (cheapest wins), management approval, or historical momentum (it was grandfathered in).
It’s no big secret that cloud service providers keep a close eye on usage trends in their larger ecosystems. Over time, they have moved up the tech stack, often developing solutions that effectively replace software previously offered by third parties (eating other companies’ lunch).
And so we are also witnessing a trend of AI commoditization. Challenges that once required dedicated teams of data scientists can now be purchased and consumed on-demand, as a service.
Look across any of the big cloud providers and you’ll see plenty of AI services that work out of the box. They abstract away the complexity of building ML models and the prerequisite knowledge for domains such as speech recognition, text analytics, and image recognition, among others. And as fully managed services, these AI capabilities require no DevOps on the customer’s part. At the risk of oversimplifying: for speech-to-text, you just upload audio files and click “transcribe”, then retrieve the text output for whatever downstream analytics you’d like to run.
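That “upload, transcribe, retrieve” workflow can be sketched in a few lines. Everything here is illustrative: real providers each have their own SDKs and endpoints, so the managed service is stood in for by a stub function.

```python
from typing import Callable, Dict

def transcribe_all(audio_files: Dict[str, bytes],
                   transcribe: Callable[[bytes], str]) -> Dict[str, str]:
    """Send each audio blob to a transcription backend and collect the text.

    `transcribe` stands in for a managed speech-to-text call; in practice
    it would be an HTTP request made through a cloud provider's SDK.
    """
    return {name: transcribe(audio) for name, audio in audio_files.items()}

# Offline stand-in for the managed service, just to show the shape of the API:
fake_service = lambda audio: f"<transcript of {len(audio)} bytes>"

texts = transcribe_all({"call_001.wav": b"\x00" * 16}, fake_service)
print(texts["call_001.wav"])  # <transcript of 16 bytes>
```

The point is how thin the customer-side code is: all the modeling complexity lives behind that one callable.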
So what does this all mean?
To a great extent, companies that can take advantage of AI capabilities will have an edge over their competition. If you’re an insurance company, for instance, and you want to understand how efficient your customer service centers are, you can run massive call transcription workloads and analyze them for call-driver patterns. Or if you’re a big media company and you’d like to better analyze the content of a video and annotate various frames, you can do so by submitting your video to a video recognition service.
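As a minimal sketch of that call-driver analysis, the downstream step over a batch of transcripts can be as simple as tagging each one against a keyword taxonomy. The categories and keywords below are invented for illustration; a real taxonomy would come from your domain experts.

```python
from collections import Counter

# Hypothetical call-driver taxonomy for an insurance call center.
CALL_DRIVERS = {
    "billing": ["invoice", "charge", "refund"],
    "claims": ["claim", "accident", "damage"],
    "cancellation": ["cancel", "terminate"],
}

def tag_call_drivers(transcripts):
    """Count how many transcripts touch each driver category."""
    counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for driver, keywords in CALL_DRIVERS.items():
            if any(k in lowered for k in keywords):
                counts[driver] += 1
    return counts

calls = [
    "I'd like a refund on this charge.",
    "I need to file a claim after the accident.",
    "Please cancel my policy.",
]
print(tag_call_drivers(calls))
# Counter({'billing': 1, 'claims': 1, 'cancellation': 1})
```

Keyword matching is deliberately crude here; the same loop could call a managed text-classification service per transcript instead.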
But if AI is increasingly available, accessible, and affordable to companies across the board, then the competitive edge of being “AI-enabled” starts to dissolve. If everyone is equipped with the same capabilities, aren’t all companies back at square one in terms of competitive parity?
Uncovering the real long-term value
Although I work in AI, I have always believed that it is not the “great differentiator” for consumers and businesses in the long run. If you’re a social media influencer with the only application that can use facial recognition to apply cool unicorn filters, you have a competitive edge over the next influencer who doesn’t have access to augmented reality. Your content may be more interesting or appealing to your audience.
Or if you’re a video game company with access to speech recognition technology for real-time content moderation of your gamer community, you’ll have an advantage over a competitor whose online games are getting trashed by trolls running rampant.
Ultimately, various forms of AI will be built in and expected as standard in all applications. In other words, we’ll see a convergence toward AI parity in performance, much as most electric drills are more or less the same even when made by different manufacturers (e.g. Black & Decker, DeWalt, Milwaukee, Ryobi, etc.). For lack of a better term, AI will become mundane.
And by deduction, if AI as a capability becomes a commodity, then differentiation (sustainable value) for companies must derive from something else: something proprietary, something not everyone has equal access to.
Data is and will be king
For AI models to be effective and efficient, they need data. And not just lots of data, but huge amounts of relevant data. If you’ve studied chemistry or manufacturing, you’ll know the concept of a “limiting reagent”: the bottleneck, whether in raw material or process.
In AI, the limiting reagent is data. It doesn’t matter if you have armies of capable data and applied scientists; with no data, there is nothing to build or improve AI models with. It doesn’t matter if you have swarms of engineers ready to deploy models if those models can’t continuously improve through data ingestion, selection, and training. And it doesn’t matter if you simply have a lot of cash, because the most relevant training data often can’t be bought or synthetically generated for optimal model development.
For businesses, this means it won’t matter if you’re merely capable of buying AI-as-a-service to augment your business decisions. What matters is whether you possess proprietary, relevant data that continues to boost the efficacy of the now-abundant AI tools. Can you keep feeding these readily available AI capabilities with meaningful and valuable data to augment their utility?
AI is a bit like a child. What you feed it and how frequently you feed it determines its development. If data is nutrition, then you’ll need not only quantity, but also quality.
Future-proofing your business’ competitive edge
Companies that can be truly data-centric will have a lasting edge over their competitors. So a key strategy today is to get your data in order. That sounds obvious, even backwards, because at this point you’re saying, “We’ve had data warehouses, object stores, databases, visualization, and query languages for a long time now! It’s solved!”
Not so fast, buddy.
These are just tools. Being data-centric doesn’t just mean you have the technology to organize your information. While great strides have been made, there are actually still a lot of problems in what I call the “interstitial space between data”.
What I mean by this is that data in most organizations is still largely fragmented and hard to use, even when it is organized. This is often a function of inconsistent data schemas and formats, varying levels of access, dislocated storage silos, interdepartmental conflict, and so on. How many times at work have you encountered a case where you know you have “some sort of data you need somewhere, but just don’t know how to get it without asking a colleague”?
A real data-centric strategy implies that you not only have the technology and tools to organize your data, but have also made data accessibility and consistency a priority deeply integrated into your culture and communication. Only then can you weave a cohesive data fabric, rather than a patchwork of superficially connected datasets.
Every company today claims to be “data-centric”. But I think the real strategy is “data unification”: not only unifying existing data, but also building an active, sustainable pipeline for sourcing new, relevant data from your users. Big companies like Facebook, Google, and Netflix are exceedingly good at gathering user data to augment their AI models and capabilities. But those are standout examples. Most companies are actually terrible at data collection and management.
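To make “unifying existing data” concrete, here is a tiny sketch of the first step: mapping two silos’ inconsistent schemas onto one canonical record. The field names, formats, and records below are entirely invented for illustration.

```python
from datetime import datetime

# Two silos describing the same customer under different, inconsistent schemas.
crm_record = {"CustomerID": "00042", "FullName": "Ada Lovelace", "signup": "2021-03-01"}
billing_record = {"cust_id": 42, "name": "Ada Lovelace", "joined_on": "01/03/2021"}

def to_canonical(record, mapping, date_format):
    """Rename fields per `mapping` and normalize types into a canonical schema."""
    out = {canon: record[src] for src, canon in mapping.items()}
    out["customer_id"] = int(out["customer_id"])  # unify id type (str vs int)
    out["signup_date"] = datetime.strptime(
        out["signup_date"], date_format
    ).date().isoformat()  # unify date format to ISO 8601
    return out

a = to_canonical(crm_record,
                 {"CustomerID": "customer_id", "FullName": "name", "signup": "signup_date"},
                 "%Y-%m-%d")
b = to_canonical(billing_record,
                 {"cust_id": "customer_id", "name": "name", "joined_on": "signup_date"},
                 "%d/%m/%Y")
assert a == b  # the two silos now agree on a single canonical record
```

Trivial as it looks, most of the organizational pain lives in agreeing on that mapping in the first place, which is a cultural problem as much as a technical one.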
Companies that pave the unglamorous path to achieving this today will see the dividends pay out in the future. If AI is the gun, data is your ammunition. And business? Well… business has always been, and always will be, war, no matter how we package it optically.