Not answering the “but should we” question in AI applications can have disastrous consequences
A new study reignited my long-standing problem with facial recognition technology. In it, scientists claim that facial recognition can expose the political orientation of individuals. The study by Michal Kosinski, "Facial recognition technology can expose political orientation from naturalistic facial images," argues that the faces of liberals and conservatives consistently differ (at least in the US, Canada, and the UK). Spoiler alert: it cannot.
It may be that by selecting participants carefully, the authors had a controlled data set that could be considered appropriate for this research. However, I beg to differ on two levels. Firstly, plenty of studies have pointed out how biased this technology is. Not only is it discriminatory, but facial recognition has also come under scrutiny lately, especially after Clearview AI's facial recognition app was ruled illegal in Canada. Secondly, and this concerns me more: why was this study even conducted in the first place?
While scientists determine whether something is possible, engineers focus on the how. It's their job to figure out ways to turn an idea into reality. But as is so often the case, the question of whether technologies should be used for specific applications takes a back seat. And that, I have long argued, is wrong.
What I mean by this comes through clearly in some of the statements in the study's discussion. For example, Kosinski writes that an "algorithm's ability to predict our personal attributes from facial images could improve human–technology interactions by enabling machines to identify our age or emotional state and adjust their behavior accordingly." At first glance, this sounds terrific. Wouldn't it be great to have a smart coffee machine that knows when I'd prefer a cappuccino over a latte macchiato?
Or, maybe not. It could be more harmful than beneficial: What if the machine begins to trick me into preferring latte macchiatos over cappuccinos? How can researchers be sure that they aren’t lured and deceived by the systems they are inventing? The study goes on to say that “the algorithms would likely quickly learn how to extract relevant information from other features — an arms race that humans are unlikely to win.” Why is this arms race a given?
Humanizing algorithms is dangerous, especially when we are talking about facial recognition technologies. As history unfolds, we as a world community have to understand who ultimately decides which technologies are developed further and which are not: a few powerful corporations. They determine what technology is worth developing and what the goal of that development will be.
This most certainly cannot be in the interest of all of us. Think of software that knows exactly how we tick politically: Why is this an application we want? How is it useful? It's not. Don't be too optimistic: some applications are great, while others are useless, dangerous, problematic, or based on wrong assumptions.
New technologies powered by AI models are powerful tools for good or for evil. Or, as David Watson put it in a recent article, “The choice, as ever, is ours.”