Public Philosophy

 

Essay

Stop Building Bad AI

Boston Review
Redesigning AI: Work, Democracy, and Justice in the Age of Automation

Guest editor: Daron Acemoglu


Contributors: Daron Acemoglu, Rediet Abebe, Aaron Benanav, Erik Brynjolfsson, Kate Crawford, Andrea Dehlendorf, Ryan Gerety, Anna Romina Guevarra, William S. Isaac, Maximilian Kasy, Molly Kinder, Nichola Lowe, Shakir Mohamed, Lama Nachman, Marie-Therese Png, Rob Reich, Daniel Susskind, Kenneth Taylor, Rachel Thomas, Annette Zimmermann.

May 25, 2021. In my essay “Stop Building Bad AI”, I address the question of whether there are some types of AI that should never be built in the first place.

The ‘Non-Deployment Argument’ has recently been the subject of significant controversy: non-deployment skeptics fear that it will stifle innovation, and argue that the continued deployment and incremental optimization of AI tools will ultimately benefit everyone in society.

I argue against the view that we should always try to build and optimize AI tools: making things better isn’t always good enough. In some cases, there are overriding ethical and political reasons why we ought not to continue to build, deploy, and optimize specific AI tools.

Instead, we should critically interrogate the value and purpose of using AI in a given domain in the first place.

 

Op-Ed

The A-Level Results Injustice Shows Why Algorithms Are Never Neutral

New Statesman

August 14, 2020. In this op-ed, I comment on the role of uncertainty in statistical modelling, social justice, and the value of giving people the benefit of the doubt.

The context is the Ofqual controversy in the UK, which arose because A-Level exams had been replaced by a crude statistical model determining A-Level results based (amongst other things) on the historical performance of a student’s school.

Concerns had been raised that this model could downgrade 40% of students and exacerbate social inequality.

I argue that “statistical models often seem rather opaque to us. Therefore, it’s unsurprising that many of us view them as something that risks increasing uncertainty. But actually, and counterintuitively, the opposite is the case. When we abstract and generalise, we artificially downplay uncertainty.

But on Ofqual’s model, the space for uncertainty, the space for giving someone the benefit of the doubt, does not shrink in the same way for everyone.

We should carefully consider who merits the benefit of the doubt – who typically gets given more leeway to prove their potential due to their proximity to prestige.

We should treat this as an occasion to ask: when can the act of withholding judgement, and making space for uncertainty, ultimately promote social justice? Who is typically afforded, or denied, the space to impress and surprise? Who has the most to lose when the space of uncertainty shrinks?”

 

Essay Symposium
Guest Editor

Philosophers on GPT-3
Daily Nous

July 30, 2020. Nine philosophers explore various issues raised by OpenAI’s newly released 175-billion-parameter language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann.

The contributors (Amanda Askell, David Chalmers, Justin Khoo, Carlos Montemayor, C. Thi Nguyen, Regina Rini, Henry Shevlin, Shannon Vallor, and Annette Zimmermann) comment on the following questions: how does GPT-3 actually work, and what distinguishes it from other NLP systems? Can AI be truly conscious—and will machines ever be able to ‘understand’? Does the ability to generate ‘speech’ imply communicative ability? Can AI be creative? How does technology like GPT-3 interact with the social world, in all its messy, unjust complexity? How might AI and machine learning transform the distribution of power in society, our political discourse, our personal relationships, and our aesthetic experiences? What role do language and linguistic categories play for machine ‘intelligence’?

 

Video

Algorithmic Fairness and Decision Landscapes

May 25, 2020. This is a video recording of a talk that I gave at the Algorithmic Ethics workshop at the University of Rochester.

There is ample evidence that algorithmic decision-making in many different domains of deployment leads to outcomes that reflect and amplify social inequalities, such as structures of racial and gender inequality. It is right to ask how we can improve the decision quality of a given algorithmic model in light of that evidence, but that is not the only question worth asking, and often not the most important one. If what we care about is fairness, we have to move beyond an approach that focuses exclusively on the decision quality of algorithmic models. In addition to evaluating decision quality for each algorithmic model, we ought to critically scrutinize the decision landscape. Doing so requires investigating not only which alternative decision outcomes are available, but also which alternative decision problems we could, and should, be solving with the help of algorithmic models. This is an underexplored approach to algorithmic fairness, as it requires thinking beyond the internal optimization of a given model, and instead taking into account interactions between models and the model-external social world.

 
 

Essay

If You Can Do Things with Words,
You Can Do Things with Algorithms
Daily Nous

July 30, 2020. Part of “Philosophers on GPT-3”. In September 1988, researchers at MIT published a student guide titled “How to Do Research at the MIT AI Lab”, arguing that “[l]inguistics is vital if you are going to do natural language work. […] Check out George Lakoff’s recent book Women, Fire, and Dangerous Things.” Indeed, social meaning and linguistic context matter a great deal for AI design—we cannot simply assume that design choices underpinning technology are normatively neutral. We must take a closer look at the linguistic categories underpinning AI design. If we can politically critique and contest social practices, we can critique and contest language use. Here, our aim should be to engineer conceptual categories that mitigate conditions of injustice rather than entrenching them further. We need to deliberate and argue about which social practices and structures—including linguistic ones—are morally and politically valuable before we automate and thereby accelerate them.

 

Essay

Technology Can’t Fix Algorithmic Injustice
Boston Review

This essay won the Hastings Center’s 2020 David Roscoe Award for an Early-Career Essay on Science, Ethics, and Society.

January 9, 2020. Co-authored with Elena Di Rosa and Sonny “Hochan” Kim. We argue that “we must resist the apocalypse-saturated discourse on AI that encourages a mentality of learned helplessness. To take full responsibility for how technology shapes our lives, we will have to make the deployment of AI democratically contestable by putting it on our democratic agendas. Citizens must come to view issues surrounding AI as a collective problem for all of us rather than a technical problem just for them.”

Ultimately, “the data we have […] is neither the data we need nor the data we deserve—and there may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them. Developers cannot just ask, ‘What do I need to do to fix my algorithm?’ They must rather ask: ‘How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?’”

Our article is currently being featured on a number of philosophy syllabi across the world. If you are using our text in your teaching, please let us know!


 
 

Video

COVID-19 Tracing Apps, Surveillance,
and Democratic Legitimacy
University of York (Youtube)

July 9, 2020. Part of the University of York’s video series “Philosophy in a Time of Coronavirus”. Featuring quotes from Ruha Benjamin’s book Race After Technology: Abolitionist Tools for the New Jim Code (Polity, 2019) as well as a 1932 issue of the MIT Technology Review.

Topics: how do COVID-19 tracing apps work—and what are the moral and political implications of using this technology? Might they infringe on our rights and enable a form of surveillance, and does it matter whether we have democratic control over them? How can the philosophical debate on the ethics of risk and uncertainty help us evaluate these technological tools—which persons and groups are most affected by risks associated with tracing apps, and how can an equal distribution of risk have unjust consequences in an unequal world? Who is responsible for ensuring that tracing apps are not susceptible to ‘mission creep’—and what kinds of questions should democratic constituencies be asking before deciding to implement a given tracing app?

 

Blog Post

AI Ethics: Seven Traps
Freedom to Tinker

March 25, 2019. Co-authored with Bendert Zevenbergen. Freedom to Tinker is a blog hosted by the Center for Information Technology Policy at Princeton University. The pursuit of AI ethics is subject to a range of possible pitfalls, which has recently led to a worrying trend of industry practitioners and policy-makers dismissing ethical reasoning about AI as futile: ‘is ethical AI even possible?’ Much of the public debate on the ethical dimensions of machine learning systems does not actively include ethicists, or experts in relevant adjacent disciplines such as political and legal philosophy. As a result, a number of inaccurate assumptions about the nature of ethics, and about its usefulness for evaluating the larger social impact of AI, have permeated that debate.

We outline seven ‘AI ethics traps’: the reductionism trap, the simplicity trap, the relativism trap, the value alignment trap, the dichotomy trap, the myopia trap, and the rule of law trap.

In doing so, we hope to provide a resource for readers who want to navigate the public debate on the ethics of AI in an informed and nuanced way, and who want to think critically and constructively about ethical considerations in science and technology more broadly.

Policy Work

 

Expert Roundtable

German Aerospace Center & Federal Ministry for Economic Affairs and Energy

May 27, 2021. An expert consultation on ethical issues associated with contemporary natural language processing (NLP) tools, as well as other philosophical problems surrounding language and AI. Hosted by the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt) and organized on behalf of the Federal Ministry for Economic Affairs and Energy (Bundesministerium für Wirtschaft und Energie).


Expert Roundtable

Australian Human Rights Commission
& Australian National University

May 28, 2020. I was invited to speak at an expert roundtable with Australian Human Rights Commissioner Edward Santow, commenting on ethical and political themes in the Commission’s Discussion Paper “Human Rights and Technology”, which was the result of a public consultation process.

The event was organised by the Australian National University’s Humanising Machine Intelligence Project.


Expert Consultation

UNESCO

August 8, 2020. I was invited to give a talk on my research about AI ethics, justice, and power at an event organized by UNESCO’s Bangkok Office, as part of UNESCO’s global series of youth consultations on the draft Recommendation on the Ethics of Artificial Intelligence.

 

Expert Consultation

OECD

June 22, 2020. I gave expert guidance to a team at the OECD about a project on AI and the future of work, including themes like algorithmic hiring discrimination, corporate surveillance and privacy, and workers’ rights in an age of automation.


Written Evidence Submission

UK Public Bill Committee: Voyeurism (Offences) (No. 2) Bill (‘Upskirting’)

July 10, 2018. Co-authored with Alice Schneider.

Abstract: We welcome the addition of the proposed section 67A to the Sexual Offences Act 2003 as an effort to tackle the practice of ‘upskirting’ in a comprehensive, conceptually clear, and victim-centered way, instead of relying on the option of prosecuting upskirting perpetrators under the more general offence of outraging public decency. However, we argue that the current draft of 67A relies on an overly restrictive picture of the relevant purposes of upskirting. In addition, we draw the Committee’s attention to upskirting-adjacent practices of image-based online sexual harassment currently not covered by 67A. Lastly, we offer a number of critical feminist reflections on the Bill’s approach of defining particular areas of persons’ bodies in explicitly sexualised terms, which fails to take into account important cultural and religious differences and might thus constitute an obstacle to the adequate legal protection of minorities.
Abstract: We welcome the addition of the proposed section 67A to the Sexual Offences Act 2003 in an effort to tackle the practice of ‘upskirting’ in a comprehensive, conceptually clear, and victim-centered way, instead of relying on the option of prosecuting upskirting perpetrators under the more general offence of outraging public decency. However, we argue that the current draft of 67A relies on an overly restrictive picture of the relevant purposes of upskirting. In addition, we draw the Committee’s attention to upskirting-adjacent practices of image-based online sexual harassment currently not covered by 67A. Lastly, we provide a number of critical feminist reflections on defining particular areas of persons’ bodies in an explicitly sexualised way, which fails to take into account important cultural and religious differences, and which might thus constitute an obstacle to the adequate legal protection of minorities.