As an aspiring data scientist, I feel confident telling others that a Hollywood-style artificial intelligence (AI) nightmare, where machines rise up against humans and attack them, is preposterous. Instead, I propose a more probable AI doomsday scenario: one where judges are advised by criminal-sentencing AIs that bestow harsher sentences on people of color, fueling mass incarceration; college admissions officers and recruiters are replaced by admissions and hiring algorithms that disproportionately favor white males, dismantling meritocracy; and law enforcement is aided by facial recognition and surveillance technology that misclassifies darker-skinned individuals as criminals, enabling human rights abuses. The continued expansion of AI is inevitable; however, if we are not careful, artificial intelligence will learn systemic racism and embed it into society under the guise of fairness and equality.
A core component of modern AI technologies is machine learning (ML): the ability of programs to learn rules from prior experience. Massive amounts of historical data are fed to ML models, which identify patterns in the data and then use those patterns to make future decisions. This “training” data reflects the biases of the humans who collect and label it; biased data means algorithms can learn biased patterns. Structural racism is a constitutive feature of our society, past and present, so generating massive amounts of data completely free of bias is impractical.
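The mechanism described above can be sketched with a toy example. Everything here is invented for illustration (the data, the groups, and the counting "model" are hypothetical, not any real hiring system): a naive frequency-based model trained on biased historical decisions simply reproduces the bias it was shown.

```python
# Hypothetical records: (qualified, group, hired). In this invented
# history, equally qualified candidates from group "B" were hired
# far less often than those from group "A".
training_data = [
    (True,  "A", True),  (True,  "A", True),  (True,  "A", True),
    (True,  "B", False), (True,  "B", False), (True,  "B", True),
    (False, "A", False), (False, "B", False),
]

def train(data):
    """'Learn' P(hired) for each (qualified, group) pair by counting."""
    counts = {}
    for qualified, group, hired in data:
        hires, total = counts.get((qualified, group), (0, 0))
        counts[(qualified, group)] = (hires + hired, total + 1)
    return {key: hires / total for key, (hires, total) in counts.items()}

model = train(training_data)

# The "pattern" the model found is just the historical bias:
print(model[(True, "A")])  # 1.0  -> qualified A candidates always hired
print(model[(True, "B")])  # 0.33 -> equally qualified B candidates rarely hired
```

Nothing in the training step is malicious; the model faithfully summarizes the data it was given, which is exactly the problem.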
If we accept that algorithms must be trained on biased data, then it becomes necessary to identify and weed out the algorithms learning racist patterns. This is a tough task: AI/ML algorithms are black boxes. We feed them input and they provide decisions, but we have no way of determining why they make the decisions they do. Consider an example: does a surveillance algorithm classify a Black man as a criminal for committing a crime or for being Black? In most situations, we have no way of knowing. This lack of interpretability is dangerous: how can we act on decisions whose justification we do not know?
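A minimal sketch of why the "for committing a crime or for being Black?" question can be unanswerable from the outside (all data and rules invented for illustration): when a legitimate feature and a protected attribute are perfectly correlated in the training data, two models that fit that data equally well can encode entirely different rules, and only off-distribution cases reveal the difference.

```python
# Invented training set: inputs are (committed_crime, dark_skinned),
# output is whether the person was flagged. The two input features are
# perfectly correlated here, as they might be in biased historical data.
training = [((1, 1), 1), ((1, 1), 1), ((0, 0), 0), ((0, 0), 0)]

def model_crime(x):
    return x[0]  # rule A: flag based on the behavior

def model_skin(x):
    return x[1]  # rule B: flag based on the protected attribute

# Both rules "explain" the training data perfectly...
assert all(model_crime(x) == y == model_skin(x) for x, y in training)

# ...but they disagree on a person who committed no crime:
new_case = (0, 1)
print(model_crime(new_case))  # 0 -> not flagged
print(model_skin(new_case))   # 1 -> flagged
```

A black-box learner could have internalized either rule, and its training accuracy alone cannot tell us which.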
AI’s potential to learn biased patterns, combined with its lack of interpretability, marks it as a technology that can be used to perpetuate discrimination. Yet AI is commonly touted as an objective remedy for the irregular, prejudiced behavior of humans. We perceive AI/ML algorithms as objective because technology and numbers seem impersonal and logical: algorithms have no emotions, desires, or biased subconscious to skew their decision-making as humans do. This illusion of objectivity has spurred a shift in power. Presidents, legislators, and judges could lean on AI/ML algorithms to make more “objective” decisions and squash concerns of unfairness and inequality. These algorithms may come to define how, and to whom, American core values such as justice, fairness, and equality of opportunity apply.
Left uncriticized, AI will learn the prejudiced behavior of humans and repackage it under a veil of unchallengeable objectivity. Racism will not disappear; rather, it will shift to subtle, insidious forms that are harder to identify and challenge. There will be little recourse for the oppressed in such a world. We can challenge the judgments and intentions of humans, but who dares challenge objective, intellectually superior algorithms whose decisions are beyond our understanding?
Do not complacently believe in AI’s objectivity. Algorithms must be held to the same standards as humans because they learn human behavior. Until we can better interpret AI/ML algorithms and determine how they make decisions, we should be cautious of and criticize the rapid rise of AI to positions of power.