A year ago, none the wiser about what 2020 would bring, I reflected on the pivotal moment the AI community was in. The previous year had seen a series of high-profile automated failures, like self-driving-car crashes and discriminatory recruiting tools. In response, the field was talking about AI ethics more than ever before. But talk, I said, was not enough. We needed to take tangible action. Two months later, the coronavirus shut down the world.
In our new socially distanced, remote-everything reality, these conversations about algorithmic harms suddenly came to a head. Systems that had been at the fringe, like face-scanning algorithms and workplace surveillance tools, went mainstream. Others, like tools to monitor and evaluate students, were spun up in real time. After the UK government's spectacular failure to replace in-person exams with an algorithm for university admissions, hundreds of students gathered in London to chant, “F**k the algorithm.”
At the same time, there was indeed more action. In one major victory, Amazon, Microsoft, and IBM banned or suspended their sale of face recognition to law enforcement after the killing of George Floyd spurred global protests against police brutality. It was the culmination of two years of fighting by researchers and civil rights activists to demonstrate that the companies’ technologies were ineffective and discriminatory.
So here we are at the start of 2021, with more public and regulatory attention on AI’s influence than ever before. My New Year’s resolution: Let’s make it count. Here are five hopes that I have for AI in the coming year.
The tech giants have disproportionate control over the direction of AI research. This has shifted the direction of the field as a whole toward increasingly big data and big models, with several consequences. It blows up the climate impact of AI advancements, locks out resource-constrained labs from participating in the field, and leads to lazier scientific inquiry by ignoring the range of other possible approaches.
But much of corporate influence comes down to money and the lack of alternative funding. My hope is we’ll see more governments step into this void to provide non-defense-related funding options for researchers. It won’t be a perfect solution, but it’ll be a start. Governments are beholden to the public, not the bottom line.
The overwhelming attention on bigger models has overshadowed one of the central goals of AI research: to create intelligent machines that don’t just pattern-match but actually understand meaning. While corporate influence is a major contributor to this trend, there are other culprits as well. Research conferences and peer-reviewed publications place a heavy emphasis on achieving “state of the art” results. But the state of the art is often poorly measured by tests that can be beaten with more data and larger models.
It’s not that large-scale models could never reach a common-sense understanding. That’s still an open question. But there are other avenues of research deserving greater investment. Some experts have placed their bets on neuro-symbolic AI, which combines deep learning with symbolic knowledge systems. Others are experimenting with more probabilistic techniques that use far less data, inspired by a human child’s ability to learn from very few examples.
In 2021, I hope the field will realign its incentives to prioritize comprehension over prediction. Not only could this lead to more technically robust systems, but the improvements would have major social implications as well. The susceptibility of current deep-learning systems to being fooled, for example, undermines the safety of self-driving cars and poses dangerous possibilities for autonomous weapons. The inability of systems to distinguish between correlation and causation is also at the root of algorithmic discrimination.
If algorithms codify the values and perspectives of their creators, a broad cross-section of humanity should be present at the table when they are developed. At the field's major research gatherings last year, I could feel the tenor of the proceedings tangibly shift. There were more talks than ever grappling with AI’s influence on society.
Diversity in numbers is meaningless if individuals aren’t empowered to bring their lived experience into their work. I’m optimistic, though, that the tide is turning. I hope this momentum leads to long-lasting, systemic change.
One of the most exciting trends from last year was the emergence of participatory machine learning. It’s a provocation to reinvent the process of AI development to include those who ultimately become subject to the algorithms.
These ideas have taken shape in governance procedures for soliciting community feedback; in new model-auditing methods for informing and engaging the public; and in proposed redesigns of AI systems to give users more control of their settings.
My hope for 2021 is to see more of these ideas explored and adopted in earnest. If companies follow through with allowing their external oversight boards to make binding changes to their platforms’ content moderation policies, that governance structure could become a feedback mechanism worthy of emulation.
Thus far, grassroots efforts have led the movement to mitigate algorithmic harms and hold tech giants accountable. But it will be up to national and international regulators to set up more permanent guardrails. The good news is that lawmakers around the world have been watching and are in the midst of drafting legislation. Bills addressing facial recognition, AI bias, and deepfakes have already been introduced. Several lawmakers have also sent a letter to Google expressing their intent to continue pursuing regulation of these issues.
So my last hope for 2021 is that we see some of these regulations and bills pass. It’s time we codified what we’ve learned over the past few years and moved away from the fiction of self-regulation.