Why “Computational Cognition” might be more accurate, and how that matters
As I have observed in other essays, our human languages give us the power to communicate by means of discrete concepts, but that requires us to make sharp distinctions between those concepts, leaving us with disagreements over their boundaries. The ideas behind words like “life”, “consciousness”, “intelligence”, and “god” are just a few examples. We share the words, but we don’t agree on the ideas behind them, especially at the edges. That can result in miscommunication and ultimately pointless arguments.
“Artificial Intelligence” is a phrase that has caused a lot of misunderstanding and unnecessary controversy. It invokes the ideas we associate with the words “artificial” and “intelligence”, and how those ideas (networks of nerve cells in our brains) combine. We don’t all understand them the same way, but for most of us who use English, “artificial” stands in contrast to “natural”, and “intelligence” stands in contrast to “unintelligent”.
It’s the use of the word “intelligence” in the phrase that is most misleading. Many of the recently developed computer programs that are referred to by that phrase don’t involve what we usually associate with “intelligence”. Those programs succeed at matching complex patterns — identifying faces, recognizing spoken phrases, or detecting cancerous lesions in medical images — processes that people can do, or be trained to do, with parts of the brain that we don’t normally associate with “intelligence”. (Dogs can reportedly detect cancers by smell. Is that a sign of “intelligence”? Most people, intelligent or not, can recognize faces, yet some people who are considered intelligent suffer from prosopagnosia, an impairment of the ability to recognize faces.)
The word “artificial” in the phrase is less problematic. It invokes a distinction between “human-made” and “natural”, and the computer programs that I am aware of that do complex pattern-matching and other useful things fit that distinction. So far they have been constructed by people. But systems that will involve a combination of biological and human-made components, and that can improve themselves without human intervention, are probably not far off.
The phrase “artificial intelligence” has been a useful marketing term for several decades. Besides suggesting intelligence in certain computer programs, it also suggests the intelligence of the people who have created those programs. It has convinced people and organizations to spend money on projects so described — money that they might not otherwise have spent on those projects. I think it has seemed intelligent to do that. (How flattering to the buyer to be buying an “intelligent” product!) Of course, “artificial” has a negative connotation (“It ain’t natural.”); for some people that can work against its marketing appeal.
What might be a more accurate term for what people are currently trying to describe with the phrase “artificial intelligence”? One that comes quickly to mind for me is “computational cognition”. I think it’s an improvement, though still not perfect. “Cognition” invokes a broader idea than “intelligence” does, though it doesn’t necessarily invoke perception as well. And “computational” invokes the idea of computers, of course, so it’s narrower than “artificial”. Maybe “artificial cognition” would be better, but I like the alliteration of “computational cognition”, which might make it a better marketing phrase(!).
Anyway, my purpose in this essay is not just to criticize the phrase “artificial intelligence” specifically. It’s also to use it as a recent example of how our human languages work both to help us and to mislead us.