In 2019, Goldman Sachs was investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting them lower credit limits than men on the Apple Card.1
Optum was investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients.2
The unprecedented events of 2020 have brought artificial intelligence (AI) under legal and societal scrutiny: governments across the globe are leveraging AI in public-sector healthcare to respond to the COVID-19 pandemic, while the use of digital, AI-enabled customer interactions grows as customers seek contactless engagement with organizations of all kinds.
Until a few years ago, discussions of “AI ethics” were limited to academia and non-profit organizations. Today, tech giants such as Google, Facebook, Twitter, and Microsoft are putting together frameworks and governance models to tackle the ethical problems that arise from collecting and analyzing enormous amounts of data to drive everyday business algorithms, such as deciding what to advertise and whom to target.
As per the latest report, “AI and the Ethical Conundrum,”3 by the Capgemini Research Institute, which surveyed over 800 organizations and 2,900 consumers, customers increasingly trust AI and are willing to reward positive AI engagements; yet while organizations are ethically aware, their progress on the ethical dimensions is patchy and can cost them customers’ trust.
According to the European Commission, the ethics of Artificial Intelligence is a subfield of applied ethics and technology that is concerned with the ethical issues generated by the design, development, implementation, and usage of AI.
According to the ethics guidelines for Trustworthy AI issued by the European Commission High-Level Expert Group on AI, AI systems should abide by seven principles throughout their lifecycles.
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy & Data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental wellbeing
7. Accountability
Ethical AI can be assessed along four key dimensions:
1. Explainability: AI systems that can explain how they work in a language people can understand
2. Transparency: AI systems that work in a clear, consistent, and understandable manner
3. Fairness: AI systems and data are designed and tested to ensure fair treatment of all customer groups
4. Auditability: AI systems that can be audited from an ethical standpoint to provide assurance that the outputs can be trusted
However, barring “explainability,” most other dimensions of AI ethics remain underpowered or are failing to evolve.
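As an illustration of the fairness dimension, one simple check is to compare a model’s positive-outcome rates across demographic groups and flag a large gap as potential disparate impact. The data, group names, and tolerance below are purely hypothetical, a minimal sketch rather than a production fairness audit.

```python
# Hypothetical fairness check: compare a model's positive-outcome rates
# across demographic groups (a demographic-parity style comparison).

def approval_rate(decisions):
    """Fraction of positive (approved) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest gap in approval rates across groups, plus the per-group rates."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative credit-limit decisions (1 = approved for a higher limit)
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% approved
}

gap, rates = parity_gap(decisions)
if gap > 0.2:  # hypothetical tolerance chosen for the example
    print(f"Potential disparate impact: rates {rates}, gap {gap:.2f}")
```

Real audits would go further (confounders, statistical significance, multiple fairness definitions), but even this coarse signal is something an auditability process can record and act on.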
The stakes are high: biased AI systems can produce a wide range of discriminatory outcomes, including:
1. Employment Discrimination
2. Differential access to Insurance & Benefits
3. Housing & Education Discrimination
4. Credit Discrimination (E.g. not presenting certain credit offers to members of certain groups)
5. Differential access to Goods & Services (E.g. product discounts based on “ethnic affinity”)
6. Narrowing of Choice for Groups (E.g. advertisements based solely on past “clicks”)
7. Filter/Network Bubbles (E.g. algorithms that promote only familiar news and information)
8. Stereotype Reinforcement
9. Confirmation Bias (E.g. all-male image search results for “CEO”, all-female results for “nurse”)
10. Increased Surveillance (E.g. disproportionate predictive policing of certain neighborhoods)
11. Disproportionate Incarceration (E.g. incarceration of certain groups at higher rates based on historic policing data)
Racial bias in healthcare risk-assessment algorithms and gender bias in the credit-limit algorithms of credit card and loan issuers have come under the spotlight and face major backlash from all stakeholders.
Outcomes that are biased and unfair to certain groups typically originate either in biased data used to train the AI algorithm or in developers’ lack of sensitivity to demographic parameters during the design and development of the AI system.
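One common way biased training data shows up is under-representation: a group makes up far less of the training set than of the population the model will serve. A minimal, hypothetical audit of that imbalance might look like this (group labels, shares, and the tolerance are invented for illustration):

```python
from collections import Counter

def representation_gaps(samples, population_shares):
    """Compare group shares in training data against expected population shares.

    `samples` is a list of group labels; `population_shares` maps each group
    to its expected fraction. Returns group -> (observed - expected) share.
    """
    counts = Counter(samples)
    total = len(samples)
    return {g: counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

# Illustrative training labels vs. an assumed population makeup
training_groups = ["a"] * 800 + ["b"] * 150 + ["c"] * 50
expected = {"a": 0.6, "b": 0.3, "c": 0.1}

gaps = representation_gaps(training_groups, expected)
under = [g for g, d in gaps.items() if d < -0.05]  # hypothetical 5-point tolerance
print("Under-represented groups:", under)
```

Representation is only one source of data bias (label bias and proxy variables are others), but it is one of the cheapest to measure before training begins.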
The COVID-19 pandemic and the accompanying change in consumer behavior have also disrupted the functioning of AI algorithms in the short term.
The new inputs, and the lack of sufficient training data from similar situations in the past, affected many preexisting AI systems. Organizations facing this issue are redesigning their AI and including factors suited to the new reality, which leads to less transparency than in the pre-pandemic situation, at least in the short term.
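This kind of distribution shift can be caught early with a simple drift monitor that compares a feature’s recent values against its pre-pandemic training baseline. The feature, values, and alert threshold below are illustrative assumptions, not a prescribed monitoring design.

```python
from statistics import mean, stdev

def drift_score(baseline, recent):
    """Shift of the recent mean from the baseline mean,
    measured in baseline standard deviations."""
    return abs(mean(recent) - mean(baseline)) / stdev(baseline)

# Illustrative feature values: e.g., weekly in-store visits per customer
pre_pandemic = [10, 12, 11, 9, 13, 10, 11, 12]
during_pandemic = [2, 1, 3, 0, 2, 1]

score = drift_score(pre_pandemic, during_pandemic)
if score > 3:  # hypothetical alert threshold
    print(f"Input drift detected: {score:.1f} standard deviations from baseline")
```

A drift alert does not say what to do, but it tells teams when a model is operating outside the conditions it was trained for, which is exactly the transparency gap the pandemic exposed.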
Most organizations realize that failing to operationalize data and AI ethics not only hurts their bottom line by exposing them to reputational, legal, and regulatory risks, but also wastes resources in product development and deployment.
Despite knowing the cost of non-compliance with ethical AI practices, these organizations still grapple with the issue through ad-hoc discussions on a per-product basis.
This approach is neither sustainable nor scalable. Problems arising from a callous attitude toward the negative impact of AI grow by orders of magnitude when third-party vendors are involved.
As with all risk-management strategies, an operationalized framework for identifying and tackling the ethical risks of AI is imperative for every organization.
1. Leverage existing infrastructure for AI ethics: Identify existing infrastructure (such as a data governance board) that a data and AI ethics program can use to bubble up the concerns of “on the ground” product owners and managers.
2. Create an AI ethical-risk framework relevant to your industry: For example, in retail e-commerce, product choices are presented by recommendation engines running in the back end, which often leads to associative bias through the stereotyping of populations.
3. Leverage learnings from other industries: Change how you think about ethics by taking cues from successes in health care, which stem from the practices of privacy, self-determination, and informed consent developed by medical practitioners, regulators, and others.
4. Optimize guidance and tools for product managers: Provide product-level guidance that helps product managers evaluate every dimension of AI ethics applicable to any AI product (explainability, transparency, fairness, and auditability).
5. Build organizational awareness
6. Formally and informally incentivize employees to play a role in identifying AI ethical risks
7. Monitor impacts and engage stakeholders