T.S. Eliot wrote disparagingly of "dreaming of systems so perfect that no one will need to be good."
To those who come to AI seeking systems that do not discriminate: abandon hope here. As seismic engineers accept that a large enough earthquake will make any building fall down, we accept that all systems exhibit bias—for better and for worse. Our goal is to appreciate that bias’s social meaning and bend it to within parameters we find acceptable—legal boundaries, ethical boundaries. If we cannot do that fitting—or, as is often the case, cannot find or agree on comprehensive and appropriate constraints—we must lay down our tools. A seismic engineer limits the risk from likely earthquakes and then decides to allow the building to proceed. Our job is similar. We seek likely sources of risk until we cannot think how to find more; we catalog and assess what risk we cannot mitigate, and we add our signature, or refuse to. That signature is our consent.
I recently read “We're Missing a Moral Framework of Justice in AI” by Matthew Le Bui and Safiya Umoja Noble. It's a great piece that grapples with the overall dysfunctional state of professional practice in AI ethics. In it, Le Bui and Noble give a depressingly narrow (though realistic) vision of what AI ethics too often means in practice: to “effect and propagate systems that are neutral and objective and do not render any specific groups as advantaged over others.” As Le Bui and Noble (and readers of this blog) well know, no system can or will ever be neutral or objective by any reasonable definition of either word, nor can we expect systems to show no favor to any group when it is the desire to do this favoring that calls AI into being in the first place.
In my view, the goal of a practitioner of AI justice is to seek evidence that the system may systematically disadvantage one or more historically marginalized groups, then do something about what they’ve found.
On the first part of that mandate, seeking evidence: In this practice lie (must lie) an “understanding of historical analogies,” “sociotechnical approaches... to understanding the implications of AI,” and their “long-term risks and harms” (Le Bui & Noble, 2020, p. 167). (This is why we at the Center for Long-Term Cybersecurity so insist on the long-term in our name, no matter how often someone cracks that “it seems what we really need is more short-term cybersecurity...”.)
On the second part of that mandate, doing something: The challenge is to make good on the promise of the radical thinkers on whose thought much work on AI fairness has been built, rather than simply use those thinkers’ tools for colonial ends—“to end up with little more than a set of reactionary technical solutions that ultimately fail to displace the underlying logic that produces unjust hierarchies in the first place” (Hoffmann, 2019, p. 911). As Alkhatib identifies, meaningfully liberatory work is perfectly impossible when the premise of its practice is to “look for ways to argue for the continued existence” of any given AI system. Any work only matters insofar as a practitioner can deny their consent, make that denial bindingly meaningful and, if need be, public, legible to the workings of the institutions of a robust civil society. The process of making those denials count is social, not technical, work. It will be won in the way political matters always are: by the slow and unrelenting demands of people who must be free.
Still, here, I voice my defense of technical practice. Only through technical practice can we know AI on its own terms, however partial that knowledge must be. Interpreting the meaning of that technical work—having not just the skill, but also the humility to know one's partial perspective and seek meaningful input from affected groups—is also our work. As is denying one's consent when no reasonable assurance can be found.
The technical plays its part. It is an actor in an ensemble piece. The question I ask, and the one Le Bui and Noble gesture toward, is: who will train the people who do this work?
I began working on this question in 2019 when I developed the MLFailures bootcamp along with my students Inderpal Kaur, Sam Greenberg, and Jasmine Zhang. Our observation was this: even well-intentioned engineers never learn in school how to effect just AI systems. Students—even Berkeley graduates—learn about AI justice, but not (for example) how to tell whether a particular algorithm exhibits bias against a particular group, let alone what to do to make that bias less harmful.
To help address this problem, we developed MLFailures, a series of labs and lectures that teach students how to identify and address bias in supervised learning algorithms. All of these labs are freely available under a Creative Commons (CC BY-NC-SA) license. Teachers worldwide are welcome to use and modify them.
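To give a flavor of the kind of check the labs teach, here is a minimal sketch (not taken from the MLFailures materials themselves) that compares a binary classifier's false negative rate across two demographic groups. The group labels, the toy data, and the 0.1 disparity threshold are all illustrative assumptions.

```python
import numpy as np

def group_false_negative_rates(y_true, y_pred, groups):
    """Fraction of true positives the model misses, computed per group."""
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.sum() == 0:
            continue
        rates[g] = float((y_pred[positives] == 0).mean())
    return rates

# Toy stand-in for a model's predictions on a held-out evaluation set.
rng = np.random.default_rng(0)
groups = rng.choice(["group_a", "group_b"], size=1000)
y_true = rng.integers(0, 2, size=1000)
# Simulate a model that misses positives more often for group_b.
miss = rng.random(1000) < np.where(groups == "group_b", 0.4, 0.1)
y_pred = np.where((y_true == 1) & miss, 0, y_true)

rates = group_false_negative_rates(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"False negative rate gap: {gap:.2f}")
if gap > 0.10:  # illustrative threshold; what counts as acceptable is a policy question
    print("Model misses positives at substantially different rates across groups.")
```

False negative rate is only one lens; the same pattern applies to false positive rates, selection rates, or calibration, and which disparity matters most depends on who bears the cost of each kind of error.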
Over time, people reached out. Government and business, U.S. and international. People wanted to learn about AI justice as it mattered to them. The issue we described, anti-Black bias in a privatized healthcare system, matters in the U.S. But New Zealanders care more about Maori issues. Housing authorities care about housing issues. The list goes on. The appetite for practical experience with AI justice in customized, realistic contexts was clear.
From that need, we founded Daylight AI. We help engineers, policymakers, designers, and stakeholders handle real issues in AI safety and justice. We focus on building internal capacity, teaching the tools and strategies needed to identify, interrogate, and ameliorate algorithmic bias. We deliver training, education, and advising to for-profit, non-profit, academic, and government organizations.
You’re a housing authority, and you need to know what AI bias looks like in housing issues. You’re developing a medical imaging tool, and you need to understand if your tool disadvantages non-white patients. We help you understand how to ask these questions in safe, controlled contexts.
Flight simulators are to airline safety as we are to AI safety. We help people experience and work through realistic AI safety risks in controlled, safe environments. That training equips them to handle real situations as they arise.