I don’t recall my college roommate ever mentioning the term “AI” or “artificial intelligence.” For the better part of two years, though, he talked about neural networks non-stop.
For the past several years, almost every new or re-invented company we read about has been an AI company. I’m not sure how that’s even possible, yet here we are.
There is an emerging discussion about what constitutes good and bad AI — not just from an ethical perspective but also from a functional perspective.
Unfortunately, a lot of AI-branded capabilities simply do not work.
Poorly performing AI cannot predict correctly, cannot classify correctly, is vulnerable from a cybersecurity perspective, is not durable or reliable, and its outcomes lead to bad decision-making. Even more concerning, most organizations find it difficult to diagnose or quantify AI performance at all. The issues that cause machine learning models to perform poorly over time are more complex than improperly tagged training data or model drift alone; they demand robust situational awareness.
How reliable are the neural networks classifying and predicting outcomes? Most organizations aren’t sure; they’re just glad they have AI.
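To make that question concrete, here is a minimal sketch of two basic reliability measures an organization could compute for a binary classifier on a holdout set: accuracy and expected calibration error (ECE). This is not any particular vendor’s method; it uses only NumPy, and the predicted probabilities and labels below are synthetic placeholders standing in for a real model’s output.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Average gap between predicted confidence and observed
    accuracy, weighted by how many predictions fall in each bin."""
    confidences = np.where(probs >= 0.5, probs, 1.0 - probs)
    predictions = (probs >= 0.5).astype(int)
    correct = (predictions == labels).astype(float)
    bins = np.linspace(0.5, 1.0, n_bins + 1)  # binary confidence is always >= 0.5
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if hi == 1.0:  # include the upper edge in the last bin
            mask = (confidences >= lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Synthetic stand-ins: in practice, probs comes from your model's
# holdout predictions and labels from ground truth.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
probs = np.clip(labels * 0.7 + rng.normal(0.15, 0.2, size=1000), 0.0, 1.0)

accuracy = ((probs >= 0.5).astype(int) == labels).mean()
print(f"holdout accuracy:           {accuracy:.3f}")
print(f"expected calibration error: {expected_calibration_error(probs, labels):.3f}")
```

The point of pairing the two metrics: a model can score well on accuracy while being badly calibrated, meaning its confidence scores don’t track how often it is actually right. That is exactly the kind of failure that looks like working AI on a dashboard but quietly leads to bad decision-making downstream.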