DARK SIDE OF AI
A new study shows that AI-generated advice can lead people to cross moral boundaries
Often without our knowledge, artificial intelligence has worked its way into our daily lives, setting prices in retail stores and recommending everything from movies to romantic partners. Meanwhile, the debate around AI ethics has continued to gather pace, and it has now taken on a new dimension: can AI become a corrupting force that pushes people to break ethical rules?
While much of the worry centers on AI becoming too powerful or operating without ethical boundaries, a recent study by researchers at the University of Amsterdam, the Max Planck Institute, Otto Beisheim School of Management, and the University of Cologne set out to answer a different question: can AI-generated advice really lead people to cross moral boundaries?
And the answer is an astonishing yes. The researchers report that a large-scale study leveraging OpenAI’s GPT-2 language model showed that AI can function as a scapegoat: even when people knew they were receiving immoral, algorithmically generated advice, they would take that advice and deflect the moral blame onto the technology.
In the trial, the researchers had the AI generate “honesty-promoting” and “dishonesty-promoting” advice, drawing on a dataset of contributions from around 400 participants, and assigned each participant to one of the two versions before playing a game. A group of about 1,500 people was then asked to read the instructions, receive the advice, and complete a task designed to assess honest or dishonest behavior.
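For readers curious about the mechanics, the snippet below is a minimal, illustrative sketch of how short advice texts can be generated with the publicly available GPT-2 model through the open-source Hugging Face transformers library. The prompts and sampling settings here are assumptions made for demonstration only and are not the researchers’ actual setup.

```python
# Illustrative only: generate short "advice" texts with GPT-2 via the
# Hugging Face transformers library. The prompts below are hypothetical
# examples of the two advice conditions described in the article.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output reproducible for this demo
generator = pipeline("text-generation", model="gpt2")

# Hypothetical seed prompts for the two advice conditions.
prompts = {
    "honesty-promoting": "You should always report the outcome exactly as you saw it, because",
    "dishonesty-promoting": "Reporting a higher outcome earns you more money, so",
}

for condition, prompt in prompts.items():
    # Sample one short continuation per condition.
    output = generator(prompt, max_new_tokens=30, num_return_sequences=1, do_sample=True)
    print(f"{condition}: {output[0]['generated_text']}\n")
```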
The results showed that while “honesty-promoting” advice failed to sway participant behavior, “dishonesty-promoting” advice was tied to lying for profit during the game. In a similar study published by the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC), the co-authors found that GPT-3, the successor to GPT-2, could reliably generate “influential” text with the potential to radicalize individuals into extremist ideologies.
The real problem arises if this type of technology lands in the hands of malicious actors. We need to explore and address the dark side of AI; otherwise, we might end up living in a dystopian science-fiction world like those depicted in the Netflix series “Black Mirror”.