In his recent book Human Compatible, leading AI researcher and computer scientist Stuart Russell describes the dangers of “overly intelligent” algorithms and proposes a solution based on incorporating uncertainty into the algorithms’ “understanding” of human preferences. Russell is puzzled that statisticians, control theory researchers, and operations researchers haven’t thought of this:
“In all the work on utility maximization, the utility function, the reward function, and the loss function are known perfectly. How could this be? How could the AI community (and the control theory, operations research, and statistics communities) have such a huge blind spot for so long, even while embracing uncertainty in all other aspects of decision making?” (p. 176)
In my recent work ‘Improving’ prediction of human behavior using behavior modification, I stumbled upon a similar arid land when trying to use statistical notation to describe the combination of two operations used by digital platforms: prediction and modification. The lack of language puzzled me. Moreover, my work has received surprised reactions from statisticians.
Why the surprise?
The answer has to do with the term ‘control’ — what it means to these different communities, and the role control plays in their algorithms and models.
Let me share my insights about the meaning of ‘control’ in different research communities, drawn from my journey as a trained statistician collaborating with researchers in social science, human-computer interaction, and machine learning, while also keeping an eye on what operations research and industrial engineering colleagues are talking about.