Reading the Cambridge Handbook of the Law of Algorithms (edited by Woodrow Barfield, 2021) felt to me like a trip on a long-range aircraft.
I departed from the good old days of law, enjoyed 360-degree coverage of the social and legal challenges algorithms create, and arrived in a brave new world.
Here I post an incomplete list of questions the Handbook inspired me to reflect on.
Grey Area
Many would like to regulate — but frequently disregard why.
For example, one may regulate to rectify the problems that poor algorithms create, to justify algorithmic decision-making processes, or to protect individuals from objectification.
- What is — and should be — the legal status of algorithms?
Basically, the legal status is the starting point of almost every legal discussion about algorithms.
Product, service, legal entity, contract, agent — this is an open list of plausible options.
The classification determines, e.g., who, under what conditions, and to what extent should be held liable for damages caused by algorithms.
Free Speech
- Is an algorithm speech?
The speech-like dimension of algorithms may appear counter-intuitive, but it opens a fresh view on the scope of the human right to free speech.
Speech protections for algorithms could mean protections for —
- algorithms themselves, on the theory that each algorithm is itself speech
- the content that algorithms produce
- those who use algorithms as a prop or an illustration, or
- speakers whose exercise of free speech significantly relies on algorithms, or for whom algorithms act as advisors.
It also remains disputable what constitutes the speech — a particular line of code, a set of lines of code, the code's output, or something else.
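A toy example (hypothetical, not from the Handbook) makes the candidates concrete: any of the following layers could plausibly be claimed as the 'speech'.

```python
# Hypothetical program used only to separate the candidate 'speech' layers.
greeting = "Hello, world"    # a particular line of code
message = greeting.upper()   # ...one line within a set of lines of code...
print(message)               # running the program produces the output: HELLO, WORLD
```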
- How to qualify search algorithms in view of free speech?
See Search King, Inc. v. Google Technology, Inc. — a case in which Search King alleged that Google had intentionally lowered its PageRank.
The court ruled — in favour of Google — that —
- Google’s PageRank is an ‘opinion,’ and Google has no obligation to express any opinion other than the one it wants to express, and
- Google has a right to express the opinion of its choice — even if it generated this opinion through the use of algorithms rather than by just making a mental judgement.
Sir Evidence
- What is the admissibility test for computer-generated evidence?
I can hardly see how practice could deny the usefulness of software in building an evidence base. The question is how to ensure software outputs are admitted in proceedings.
As regards the term ‘computer-generated’: electronic storage does not make data computer-generated.
For example, data associated with cell phone use or credit card swipes would likely qualify as computer-generated evidence, while a document prepared by a person and then electronically stored in a computer would likely not.
- If the admissibility test is whether software/hardware operates properly, what does it essentially mean to show that software/hardware operates properly?
- How to describe software/hardware?
Both the form and extent of the description remain disputable.
To describe software/hardware, one can use bare metal code, assembly language, source code, propositional logic, decision trees, logic diagrams, etc.
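By way of a toy illustration (a hypothetical rule of my own, not an example from the Handbook), the same simple behaviour can be described as propositional logic, as a decision tree, or as source code:

```python
# One hypothetical rule, described at different levels:
#
# Propositional logic:  alarm <-> (temperature > threshold)
# Decision tree:        temperature > threshold? yes -> alarm; no -> no alarm
#
# Source code:
def should_alarm(temperature: float, threshold: float = 8.0) -> bool:
    """Return True when the measured temperature exceeds the threshold."""
    return temperature > threshold
```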
At the same time, explainability has limits.
For example, humans cannot explain to the full extent how neural networks make their decisions.
For the sake of illustration, check these quotes from The Dark Secret at the Heart of AI, W. Knight, MIT Technology Review (11 April 2017):
‘We can build these models,’ Dudley says ruefully, ‘but we don’t know how they work.’
‘[…] It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.’
- Can hearsay objections apply to computer-generated evidence?
Check this quote from the Postmodernism Generator webpage:
‘The papers produced are perfectly grammatically correct and read as if written by a human being; any meaning found in them, however, is purely coincidental.’
- How to cross-examine software/hardware?
One of the options — validation tests. But what counts as validation and what makes one trust validation tests?
Even with competent cross-validation, the performance of the program in the ‘field’ may not match that in the lab.
Those who know nothing — or inexcusably little — about the underlying mechanisms can hardly reasonably trust validation tests.
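For the sake of illustration, here is a minimal sketch (made-up data, scikit-learn assumed) of how a model can score well under cross-validation on 'lab' data and still degrade on 'field' data whose distribution has drifted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 'Lab' data: two well-separated classes.
X_lab = np.vstack([rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
y_lab = np.array([0] * 500 + [1] * 500)

model = LogisticRegression()
print(f"lab (5-fold CV) accuracy: {cross_val_score(model, X_lab, y_lab, cv=5).mean():.2f}")

# 'Field' data: the same labels, but class 0 has drifted toward class 1.
X_field = np.vstack([rng.normal(0.5, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
y_field = np.array([0] * 500 + [1] * 500)

model.fit(X_lab, y_lab)
print(f"field accuracy: {model.score(X_field, y_field):.2f}")
```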
- Do benefits of using computer-generated evidence outweigh associated risks?
For example, one needs to mitigate the risk of attributing greater authority to machines because they are ostensibly free of bias and errors, and the risk of relying on software that reproduces human biases.
Agent Smith
- Can algorithms act as agents for companies for the purpose of contract formation?
This covers the case where a company uses algorithms to select a counterparty or negotiate contract terms on its behalf. (Algorithmic outputs thereby effectively substitute for human judgement.)
- Can a company be bound by decisions machine-learning algorithms make on the company’s behalf?
Some claim machine-learning algorithms are too complicated to bind a company that uses them.
The rationale is that companies cannot foresee the outputs of machine-learning algorithms and cannot precisely reverse-engineer their reasoning.
The decision-making procedures such algorithms employ are rarely — if ever — human-intelligible before the program runs, and can rarely — if ever — be parsed after it runs.
Others argue that even a very complex, opaque algorithm can be an agent for a company.
Agree Algorithm
- Can algorithms execute legal agreements?
Why not, if the algorithm is in a verifiable state — a state that can be proven to the court that would enforce the agreement.
- Can legal agreements be expressed as algorithms?
Again, why not, if there is no conceptual difference between an agreement that sets out procedures to manage performance and an algorithm. (A toy sketch follows the list below.)
In effect, writing contracts and writing software involve very similar skills —
- both require thorough case analysis, accounting for various possible future states, addressing them with logic and structure, and then releasing an attempt at a solution into an uncertain world
- doing both clearly and manageably is rare
- generating both can be automated
- producing both relies on significant reuse of prior work, and
- both (may) have bugs.
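To make that parallel concrete, here is a hypothetical sketch (my own, not the Handbook's) of a simple delivery clause expressed as an algorithm:

```python
# Hypothetical clause: 'the buyer pays the price on delivery; every full
# week of delay reduces the price by 1%, down to a floor of 90%.'
def amount_due(price: float, weeks_late: int) -> float:
    """Compute the payment owed under the hypothetical delivery clause."""
    discount = min(0.01 * weeks_late, 0.10)  # 1% per week, capped at 10%
    return price * (1 - discount)

print(amount_due(1000.0, 0))   # 1000.0 (delivered on time: full price)
print(amount_due(1000.0, 3))   # 970.0  (three weeks late: 3% off)
print(amount_due(1000.0, 20))  # 900.0  (the 90% floor applies)
```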
At Service
- Can algorithms be used by state agencies?
It depends on whether algorithms meet the core principles of administrative law — like due process, equal protection, privacy, transparency, and limitations on the delegation of authority.
These principles were initially designed for a human decision-making space. Time will tell how they adapt to machine-learning algorithms.
In the case of machine-learning algorithms, it is likely not possible to tell that any particular variable provided the ‘basis’ for any particular decision, to prove that discriminatory intent is present, or to supply causal reasoning for the algorithms' outputs.
- In what cases will state agencies be authorized to employ algorithms?
Before employing algorithms, agency officials should ask what problem they are trying to solve, why, and how they will solve it.
Also agency officials should check whether there are —
- adequate auditing, testing, and evaluation procedures
- due process protections for affected individuals
- procedures for challenging both proper and improper functioning of algorithms, and
- oversight of the choice, operation, and updating of software and systems.
- How should state agencies reason about algorithms?
Some suppose agencies should reason about algorithms in the same way they reason about other machines.
For example, an agency need not reason in detail about the inner workings of a thermometer to justify imposing a penalty on a food manufacturer for failing to store perishable products at a cold temperature — it need only show that the thermometer reads the temperature accurately.
Following this logic, it would suffice for the agency to disclose operational information about its algorithms.
Others argue such disclosures might not suffice — due to poor technical literacy, laypeople may not be able to extract useful knowledge from operational information.
Instead of — or in addition to — disclosures, agencies should provide reasons for any and all decisions they make by use of algorithms.
Markets + Algorithms = ?
- How to mitigate the algorithm-related competitive harm?
Check United States v. David Topkins — a case about using software to carry out algorithmic price-setting in line with the conspirators’ agreement in the e-commerce context.
- What are the legal implications of algorithm-determined markets?
First, markets can potentially be displaced.
Check this quote from Uber’s former CEO:
“[W]e are not setting the price, the market is setting the price … [W]e have algorithms to determine what the market is.”
Second, price discrimination of consumers by sellers can intensify.
Price discrimination may be privately beneficial, but is very likely socially harmful.
Creative Enough
- Will patents survive if there is nothing non-obvious for algorithms?
- Will patents survive if algorithms become the average worker?
There is no limit to how sophisticated algorithms can become. So, it may be that every invention will one day be obvious to commonly used algorithms.
Once inventive algorithms become the average worker, the average worker becomes inventive. As inventive outputs occur in the ordinary course and become routinized, routine inventions no longer qualify as inventive enough to be protected by patents.
As a result, the non-obviousness standard will become far higher than it is today.
To become a patent-protected inventor, one will need to do more than routine invention — for example, invent a skilled algorithm that can outperform standard algorithms.
- Will patents survive if inventorship can be programmed without human involvement?
Algorithms can ultimately automate knowledge work and render human researchers redundant.
Regardless of patent protection, algorithms stand a good chance of replacing humans once the former are significantly more efficient than the latter.
As regards reduced or eliminated human involvement: automation through algorithms may generate innovation with net societal gains, but it may also have side effects — like unemployment, financial disparities, and decreased social mobility.
- Will patent law survive in the algorithmic era?
Some argue that setting a non-obviousness standard too high would reduce the incentives for innovators to invent and disclose.
(Traditionally, patent law rewards inventors for their inventions to foster creativity and subsequent creations. It works perfectly for human inventors, but might turn out to be inappropriate for inventive algorithms.)
Yet, patents are not the only means of promoting product commercialization.
Alternative forms of intellectual property protection include, e.g., market exclusivity, tax incentives, grants, and prizes.
Also those who choose not to disclose their discoveries may rely on trade secret protection.
- Should patent applicants be required to disclose the role of algorithms in the inventive process?
As a matter of fact, humans are already augmented by algorithms in some cases.
Sue Algorithms
- Do traditional liability regimes suit thinking algorithms?
The problem is that traditional liability regimes do not focus on algorithms.
- Why and how should a reasonableness standard apply to thinking algorithms?
Reasonable algorithm = reasonable human?
Or reasonable algorithm != reasonable human?
- How to eliminate differential treatment of victims injured by a human or by a thinking algorithm?
Under the principle of horizontal equity, like cases should be treated alike.
For example, pedestrians hit by vehicles should be treated similarly by the legal system, regardless of the identity of their injurer.
In reality, a pedestrian who gets hit by a car has a chance to recover quickly and easily — sue the driver, and be awarded compensation if the driver has acted unreasonably.
A pedestrian who gets hit by a driverless car controlled by algorithms will most likely have to go through a lengthy, costly, and uncertain procedure — sue the owners and/or developers, and be awarded compensation if the algorithm and/or its owners and/or its developers have acted unreasonably. In other words, recovery will not be an easy task.
Window onto Future
- Why translate law into code?
Some argue that if code instructions are a complete and correct representation of the law, then the rule of law is ideally fulfilled (at least, in a formal sense).
Also, the automated decision-making that code enables can potentially contribute to a more correct application of the law than manual case processing does.
However, there is no such thing as error-free code/software/computer program.
Misrepresentation of the law can never be excluded.
- How to translate law into code?
One can start by specifying and expressing the legal algorithm in natural language and end up expressing it in a programming language.
Before doing so, check whether the particular legal rule can in principle be translated into code and whether such a translation pays off.
As of now, legislators formulate legal rules as if a human were in fact in charge of interpreting, implementing, and enforcing the law.
To progress with automation, legislation should fit the actual needs of automated decision processes.
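For the sake of illustration, here is a hypothetical (deliberately clean) rule making that journey from natural language to code; real statutory rules are rarely this tidy:

```python
# Natural-language rule (hypothetical): 'A person may vote if the person
# is a citizen and is at least 18 years old on election day.'
#
# The same rule expressed in a programming language:
def may_vote(is_citizen: bool, age_on_election_day: int) -> bool:
    """A toy translation of the hypothetical voting rule into code."""
    return is_citizen and age_on_election_day >= 18

print(may_vote(True, 18))   # True
print(may_vote(False, 44))  # False (not a citizen)
```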
- Is there law after code?
- Do algorithms lead to a mathematical turn of law?
First, any translation of legal rules into code essentially fixes one interpretation of the law and curtails any further meaningful debate.
What ends up being enforced by algorithms is not the law in the books, but the code intended to implement it.
The rules the algorithms embed and apply — a hybrid of legal rules and their translation into code — may differ substantially from the law in the books.
Second, legal rules translated into code become in part replaced by patterns and correlation-constructing indicators.
The law ends up being transformed into a normative correlation of facts, and substituted by data-driven rules.
Both aspects potentially constitute a threat to the rule of law.
For example, people may be —
- subject to unknown, unpublished, or even unlawful prescriptions
- subject to different rules depending on the dataset or machine-learning techniques, despite the applicable law being the same, and
- deprived of due process as their procedural rights may be affected by inequality of arms between people and the algorithms’ operators or by increasing or shifting the burden of proof.
- Do algorithms affect the lifecycle of law?
The traditional lifecycle — law-making, adjudication/administration, and enforcement — does not, so far, include algorithms.
An algorithmic context apparently requires an algorithmic rule of law.
- Should the code qualify as a source of law?
Check whether the code —
- is acknowledged by state agencies
- is intended to express a legal rule
- represents the practice of applying the law, and
- has long been in use, etc.
- Whom to employ — a human lawyer or a legal algorithm?
The answer may depend on costs, expected returns, and profits.
For example, a human lawyer costs $300,000, while a virtual attorney (e.g., a tool that conducts legal research powered by AI) costs $400,000.
Adding a human lawyer will yield annual returns of $400,000, while adding a virtual attorney will yield annual returns of $450,000.
In this case, a human attorney will be more cost-effective (yielding a profit of $100,000 versus a virtual attorney’s $50,000).
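The comparison reduces to simple profit arithmetic, spelled out below with the figures from the example above:

```python
# Profit = annual return - cost, using the figures from the example above.
lawyer_cost, lawyer_return = 300_000, 400_000
virtual_cost, virtual_return = 400_000, 450_000

print(f"human lawyer profit:     ${lawyer_return - lawyer_cost:,}")    # $100,000
print(f"virtual attorney profit: ${virtual_return - virtual_cost:,}")  # $50,000
```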