Media Coverage & Interviews


Podcast

Policy Punchline, Princeton University

June 22, 2020. What is algorithmic bias? Is all bias bad? Where do we see algorithms exacerbating structural injustices in society? What are the dangers of private companies providing AI services to public institutions—and should we ban some technologies, such as facial recognition tools used in law enforcement? I explore these and other questions in an in-depth interview about my research on Princeton University’s Policy Punchline podcast.


Interview

Blog of the American Philosophical Association

March 27, 2020. In this interview, I talk about Ada Lovelace, poetry, political wrongdoing, 'sexy' philosophical problems, and my current research interests in political philosophy and the philosophy of AI more broadly. As I argue here, “I think that it is important not to adopt too narrow of a view about what philosophy is supposed to be like, and which topics count as distinctively philosophical topics. As philosophers, we are often (understandably) consumed by our search for big, ‘sexy’, dazzlingly complex problems. But it is important to remember that we are already surrounded by them. The Small, the Clearly Wrongful, the Ostensibly Boring warrants our philosophical curiosity, our moral concern, and our political action.”


Interview

“The Quest to Make AI Less Prejudiced”, Quartz

March 9, 2020. I was featured in an article on algorithmic injustice at Quartz (qz.com). I argue that not all AI is biased in the same way. This has important moral and political consequences: deciding whether we should deploy AI in a given domain or not requires nuanced, case-specific critical scrutiny. I also emphasize the importance of democratic control over decisions to deploy AI, and of developing procedures which allow citizens to hold powerful corporations and governments accountable for any unjust outcomes of AI deployment. I was also honored to be included on Quartz’s list of leading AI bias researchers and experts to follow in 2020.


Podcast

The Pulse, NPR/WHYY

February 21, 2020. I was featured on an episode of The Pulse, a weekly science, health, and innovation podcast produced by NPR member station WHYY-FM, Philadelphia's public radio station. Episode: “Deciding What's Fair”. Segment: “Can Algorithms Help Judges Make Fair Decisions?”.
Algorithms—like humans—will make mistakes, not all of which we can foresee when we design technology; and algorithmic models can interact with the social world in complex ways over time. Therefore, I argue that we are never completely ‘done’ with AI ethics: instead of checking whether an algorithm meets certain fairness criteria once at the design stage, ethical thinking about algorithms has to be an ongoing process of deliberation, which continues after we deploy AI tools in the real world. Aside from my own comments, the episode features work by Aaron Roth & Michael Kearns (UPenn) and Megan Stevenson (GMU).


Podcast

The Verdict: Law & Society, King’s College London

October 4, 2019. Audio recording of the Law & Justice Forum "AI and the Criminal Law," King's College London.
I spoke on a panel with Roger Brownsword and Sylvie Delacroix, chaired by John Tasioulas; my comments begin at 50:00 and end at 1:12:00. Topics include my research on how wrongful treatment can compound over time in iterative decisions, why choices about technological design are choices of political significance, and what challenges arise for determining appropriate sentencing constraints when we make predictive assessments in a criminal justice context.


Radio interview

WPRB Princeton 103.3 FM

Aired April 9, 2019, on These Vibes Are Too Cosmic, a science and music show on WPRB Princeton 103.3 FM. The interview starts at 49:00. In this interview, I talk about the democratic implications of AI ethics, and more specifically, algorithmic injustice in the criminal justice system and in law enforcement. I also address some common misconceptions about what AI ethics is, and I explain how to do AI ethics well.