In a recent post I offered a few predictions on how the Biden Administration may change the AI playing field. The main takeaway is that AI regulation is coming sooner than you think, and that you had better start preparing by implementing internal AI governance. If you operate in North America, it’ll help you to get ahead of regulators and competition. If you’re doing business in Europe, you’ll need it to be compliant with European laws.
Algorithmic Impact Assessments (AIAs) and tools like AI registers are a simple way to get started with documenting your AI. Given recent developments, though, you may need to add another tool to your AI governance toolbox: a Human Rights Impact Assessment (HRIA). Why? Let’s look at the debate raging in Europe around AI regulation, and see what lessons we can learn. Some of them may soon apply to us, since the US approach shares similarities with the approach being pursued in Europe.
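As a quick aside before we get to that debate: to make the documentation point concrete, here is a minimal sketch of what one entry in an internal AI register might capture. The field names and example values are my own illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: a minimal record for one AI/ML system in an internal AI register.
# The field names are assumptions, not a standard schema.
@dataclass
class AIRegisterEntry:
    system_name: str            # e.g., "Resume screening assistant"
    purpose: str                # the business decision the system supports
    owner: str                  # accountable team or role
    data_sources: List[str]     # where training and inference data come from
    affected_groups: List[str]  # who is impacted by the system's outputs
    risk_level: str             # e.g., "low" or "high", per your internal criteria
    human_oversight: str        # how and when a human can review or override
    assessments_completed: List[str] = field(default_factory=list)  # e.g., ["AIA", "HRIA"]

# Hypothetical example entry
entry = AIRegisterEntry(
    system_name="Resume screening assistant",
    purpose="Rank incoming applications for recruiter review",
    owner="Talent Acquisition / Data Science",
    data_sources=["historical hiring records", "applicant-submitted resumes"],
    affected_groups=["job applicants"],
    risk_level="high",          # recruitment is explicitly flagged as high-risk (see below)
    human_oversight="A recruiter reviews every ranked shortlist before anyone is contacted",
    assessments_completed=["AIA"],
)
```

Even a lightweight record like this makes it much easier to answer the questions an AIA or HRIA will ask later.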
Both the US and the European approaches favor selective, targeted regulation, focusing on specific sectors rather than “one size fits all”. And both say that the purpose, context and scope of technology application matter. For example, highly controversial facial recognition may not be high-risk if it is used to unlock a car, says Elham Tabassi, Chief of Staff, Information Technology Laboratory at NIST. It really depends on the use case.
AI regulation: risk-based or values-based?
The European debate was prompted by the European Commission’s White Paper on Artificial Intelligence, which went into public consultation in February 2020 and is generally understood as the foundation for the upcoming European legislative proposal on AI, expected in Q1 2021. The paper has drawn many responses, including a highly critical open letter from more than 60 civil society organizations.
At the heart of the debate is the question of whether AI regulation should be risk-based or rights-based, and whether a risk-based approach can adequately protect human rights in light of the many violations documented so far.
The Commission reasons that a risk-based approach will ensure a regulatory intervention that is proportionate, balanced and focused. Such an approach will protect citizens and consumers, without imposing undue burden on organizations deploying AI/ML or stifling innovation in this space. The approach will also help lawmakers and oversight bodies to focus on the areas where harms are most likely to occur. (And it will presumably help them to better manage resources.)
The Commission’s intent was reiterated last week by Lucilla Sioli, its Director for Artificial Intelligence and Digital Industry, who noted that the Commission wants to establish a “flourishing AI market”. Sioli was speaking at “The Just AI Transition: Where Are The Opportunities and Threats?”, an event organized by EURACTIV, an independent pan-European media network specializing in EU policies.
The Commission’s White Paper proposed two criteria to determine whether an AI system is high-risk:
1) If it “is employed in a sector where […] significant risks can be expected to occur […] For instance, healthcare; transport; energy and parts of the public sector.”
2) “The AI application in the sector in question is, in addition, used in such a manner that significant risks are likely to arise.”
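As a rough illustration only, here is how those two cumulative criteria might be expressed as an internal screening rule. The sector list and the “significant risk” judgement below are placeholders I have chosen, not the Commission’s definitions:

```python
# Illustrative sketch of the White Paper's two cumulative criteria.
# The sector list and the "significant risk" judgement are placeholders,
# not the Commission's definitions.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

def is_high_risk(sector: str, use_poses_significant_risk: bool) -> bool:
    """Return True only if both criteria are met:
    (1) the application operates in a sector where significant risks can be expected, and
    (2) it is used in a manner likely to give rise to significant risks."""
    return sector.lower() in HIGH_RISK_SECTORS and use_poses_significant_risk

# Example: a back-office scheduling tool in a hospital might meet criterion (1)
# but not criterion (2), so it would not be classified as high-risk.
print(is_high_risk("healthcare", use_poses_significant_risk=False))  # False
```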
The Commission also noted that some applications could be considered high-risk, even if they don’t meet the above criteria. For example, using AI in recruitment would qualify, as well as “remote biometric identification and other intrusive surveillance technologies”. So, what’s wrong with this approach, you might ask?
What is the purpose of (AI) law?
The first point of contention is the purpose of the law around AI/ML: is it to establish standards, maintain order, resolve disputes, and protect liberties and rights, or is it to promote AI uptake “while addressing the risks associated with certain uses of this new technology”? The Commission’s White Paper states that it has both objectives — the Commission is “committed to enabling scientific breakthrough, to preserving the EU’s technological leadership and to ensuring that new technologies are at the service of all Europeans — improving their lives while respecting their rights.”
In response, Access Now, a non-profit that focuses on defending digital rights and one of the signatories of the open letter above, wrote that “[t]he uptake of any technology, particularly in the public sector, should not be a standalone goal and it is not of value in itself”. It went on to say that “AI is not the right solution in all cases and should not be viewed as a panacea”.
This is also the advice I offer to many of InfoTech’s clients: not all business challenges can or should be solved with AI. AI is only one tool in your business toolkit, and it should not replace other approaches (including some low-tech ones).
Rather than emphasizing the adoption of AI, said Access Now, European regulators and lawmakers should “ensure that the technology is trustworthy by mitigating risks. […] the EU should earn people’s trust for its AI initiatives by putting the protection of fundamental rights ahead of concerns about global competitiveness in AI. The primary objective should be to avoid individual and societal harms, not to mitigate them.” (Ibid.)
Which brings us to the next point of contention.
Should AI regulation be reactive or proactive?
The Commission outlined several requirements, in addition to existing legislation, that it is looking to develop for high-risk AI applications. (These requirements may be further fleshed out as standards.) They include ensuring that training data used in AI/ML systems complies with EU safety rules, disclosing systems’ capabilities and limitations, and protecting human autonomy through human oversight.
All of this is good and much needed, say the proposal’s critics, but it is not enough. A risk-based approach would leave too much room for interpretation and corporate legal maneuvering. Meanwhile, automated decision-making systems are already harming people, communities and the environment, and they are also violating fundamental human rights. These rights, however, are “non-negotiable and they must be respected regardless of a risk level associated with external factors.” (original emphasis)
The open letter above urges the Commission to create “clear regulatory red lines to prevent uses of artificial intelligence which violate fundamental rights”. The letter states that “it is vital that the upcoming regulatory proposal establishes in law clear limitations as to what can be considered lawful uses of AI” (original emphasis). In other words, certain AI use cases should be either completely banned or legally restricted as incompatible with democratic society. Specifically:
· Biometric surveillance
· Predictive policing
· AI in criminal justice, e.g., risk assessment tools
· Immigration and border control
· AI for social scoring and in systems deciding access to social rights and benefits, such as welfare, education, and employment.
The open letter also calls on the Commission to “unequivocally address” AI uses that:
· exacerbate existing structural discrimination;
· restrict access to healthcare, social security and other essential services;
· enable surveillance of workers and violate their rights;
· facilitate large-scale manipulation of public opinion and human behavior (e.g., “nudging”, deepfakes) and “associated threats to human dignity, agency, and collective democracy.”
A quick refresher on human rights
According to the UN Human Rights Office of the High Commissioner, “Human rights are rights we have simply because we exist as human beings — they are not granted by any state. These universal rights are inherent to us all, regardless of nationality, sex, national or ethnic origin, color, religion, language, or any other status. They range from the most fundamental — the right to life — to those that make life worth living, such as the rights to food, education, work, health, and liberty.” In other words, human rights are universal, inalienable, indivisible and interdependent:
· Universal: They apply to every single human being; we are all equally entitled to all of our human rights.
· Inalienable: No one can take away an individual’s human rights “except in specific situations and according to due process” — for example “if a person is found guilty of a crime by a court of law”.
· Indivisible and interdependent: All human rights have equal status; “one set of rights cannot be enjoyed fully without the other”; and violation of one right may negatively impact other rights.
Human rights were codified in the Universal Declaration of Human Rights (UDHR), adopted by the UN General Assembly in 1948 without a dissenting vote, and in subsequent documents that together make up the International Bill of Human Rights.
What do human rights have to do with AI?
If you haven’t been following the news about the many abuses and harms inflicted by AI and automated decision-making systems, the open letter and the two articles by Access Now (quoted above) are a good place to start.
Or you can take a look at the 2019 report by Amnesty International “Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights”. This report explains crisply how Google’s and Facebook’s pervasive surveillance machinery violates core human rights, such as the right to dignity, autonomy, and privacy; the right to control information about ourselves; and the right to a space where we can freely express our identities.
If you are up for a long read, here are a few books that I recommend:
· “Weapons of Math Destruction” by Cathy O’Neil
· “Automating Inequality” by Virginia Eubanks
· “Invisible Women” by Caroline Criado Perez
· “Biased” by Jennifer Eberhardt
· “The Black Box Society” by Frank Pasquale
· “The Age of Surveillance Capitalism” by Shoshana Zuboff
There are also many videos. One is Kate Crawford’s NIPS 2017 keynote “The Trouble with Bias”. Another is this hilarious 3-minute comedy sketch on YouTube “Scottish Voice Recognition Elevator — ELEVEN!”, which illustrates how biased speech recognition systems punish even native speakers.
AI technologies are not the first to raise ethical concerns
I’d like to revisit the assertion by Access Now that “Our rights are non-negotiable and they must be respected regardless of a risk level”. Military applications of AI aside (a topic that deserves its own space), we need to ensure that we are building systems that benefit all of humanity. This goal requires proceeding carefully with some technologies and use cases, and perhaps even banning them. Human cloning, for example, is banned in 70 countries, for good reason. Still, cloning of other species is allowed, including commercial cloning of pets and livestock. Cloning of endangered species may actually ensure their survival.
We may also need to augment the list of human rights with new ones for the age of AI, for example:
· The right to disclosure: being informed about how one’s data is used, how technology is developed and how it is impacting individuals, communities, the society and the environment;
· The right to opt-out: having a choice to use a low-tech path (where possible) or to interact with a human instead, and still be able to meaningfully participate in the economy and society;
· The right to redress when dealing with an automated decision making system;
· The right to data agency;
· …
What do these concerns imply for your organization?
Access Now argues that “the burden of proof [should] be on the entity wanting to develop or deploy the AI system to demonstrate that it does not violate human rights via a mandatory human rights impact assessment (HRIA). This requirement would be for all applications in all domains, and it should apply to both the public and private sector, as part of a broader due diligence framework.”
Whether regulators agree with this proposal remains to be seen. In any case, I stand with Access Now and the more than 60 organizations that signed the open letter: HRIAs are foundational for building ethical, safe and responsible AI. Moreover, depending on the nature of your business and the regions you operate in, you may already be conducting such assessments. Extending them to your AI/ML projects should be a natural next step.
Not sure how to get started with HRIAs? Consult this excellent guide by the Danish Institute for Human Rights. And here’s Nestle’s Experience Assessing Human Rights Impacts In Its Business Activities.
HRIAs will help you keep your AI and your organization out of jail (paraphrasing James Taylor’s Expert Panel at the Predictive Analytics World 2020 conference), so you may sleep better at night. They will also help your organization establish a competitive edge and promote trust, which is the foundation of business and society.