DataKind UK 19/01/2020
All our 2019 DataKind UK Ethics Book Club materials, in one place - for convenience, so people don't have to go to each topic separately.
- Weapons of Math Destruction - Cathy O'Neil, 2016. We kinda had to start with this, right? One of the books that got people talking about ethical issues with AI, data science etc. NOTE: This is a book that costs money! If you want to buy it that's great, but if anyone has a copy they can lend out then post here. Also check local libraries as a good resource!
- The controversial tech used to predict problems before they happen - Manthorpe, 2019. A report from Sky News (Warning! auto-playing video). This is a news story based on a longer report by the Data Justice Lab at Cardiff University on algorithm use in UK government. Read the Sky story, watch the news report, go dig out the 144-page pdf of the full report - it's all good :)
- Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making - Veale et al., 2018. Journal article based on interviews with data scientists and others involved in creating and implementing algorithmic tools in the UK public sector - a chance to dig into what goes on behind the scenes!
This session we will be discussing Facial Recognition - the good, the bad and the ugly! No need to read everything - pick a few! If you’ve just got time for one, pick the one in yellow.
Facial recognition is bad?
- Facial recognition is the plutonium of AI, by Luke Stark. Published by the Association for Computing Machinery, it argues that facial recognition has few legitimate uses [article]
Facial recognition is good?
- BBC article on pigs - Facial recognition tool ‘could help boost pigs’ wellbeing
- Collateral article on art - Art director Lim Si Ping has created Facebook, the first digital book that, based on facial recognition, tells the perfect story for you
- China uses facial recognition technology to cut down on toilet paper theft [article]
Should we get ugly? Activism
For a general background, have a browse of the Algorithmic Justice League website. The recent resistance to Amazon's use of facial recognition is a case in point. Further reading below:
- Joy Buolamwini’s response to Amazon (Medium post): Response: Racial and Gender bias in Amazon Rekognition — Commercial AI System for Analyzing Faces
- Letter from ‘Concerned researchers’ (Medium post): On Recent Research Auditing Commercial Facial Analysis Technology
- AI Now letter re: dangers of using facial recognition for housing
- Write up of the letter (The Verge article)
- AI researchers tell Amazon to stop selling ‘flawed’ facial recognition to the police - write up of the researchers’ letter (The Verge article)
Reports
- Oxfam report on Biometrics in the Humanitarian Sector
- Short report on police use of facial recognition (UK Govt, here)
- US Government's Ethical Framework for Facial Recognition
- Safe Face Pledge from the Algorithmic Justice League: https://www.safefacepledge.org/pledge
Articles
- Article: Silicon Valley Thinks Everyone Feels the Same Six Emotions
- FT article: Taser stun gun maker files facial recognition patents
- Non-consensual use of data for facial recognition training (Slate, here)
- The Government Is Using the Most Vulnerable People to Test Facial Recognition Software
- New York Times - One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority (here)
How do we define fairness?
- 21 Fairness definitions and their politics - Arvind Narayanan, 2018 [video; 1hr]
- DataKind UK’s Giselle Cory wrote a blog summarising and reflecting on Narayanan’s work, launched ahead of the book club.
Background
- Machine Bias - the ProPublica COMPAS story that is the key reference point in discussions of algorithmic bias and unfair outcomes [article]
- Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse - Anna Lauren Hoffmann, 2019 [academic article]
Hands on
- IBM AI Fairness 360 tool [blog + links to interactive tutorials]
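If you'd like to try the toolkit before the session, the sketch below shows roughly what a minimal AI Fairness 360 workflow looks like: measure a group fairness metric, apply one of the pre-processing mitigations, measure again. It assumes the aif360 package is installed and that the Adult census data files its loader expects have been downloaded; class and method names are as we understand the toolkit's documented API, so treat this as a starting point rather than a recipe.

```python
# Minimal AI Fairness 360 sketch: measure a group fairness metric, apply the
# Reweighing pre-processing mitigation, and measure again.
# Assumes `pip install aif360` and that the raw Adult census files the
# AdultDataset loader expects are already in place.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

privileged = [{"sex": 1}]      # in AdultDataset, 1 is the privileged value
unprivileged = [{"sex": 0}]

data = AdultDataset()

metric = BinaryLabelDatasetMetric(
    data, privileged_groups=privileged, unprivileged_groups=unprivileged)
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())

# Reweighing adjusts instance weights so favourable outcomes are balanced
# across groups before any model is trained on the data.
rw = Reweighing(privileged_groups=privileged, unprivileged_groups=unprivileged)
data_rw = rw.fit_transform(data)
metric_rw = BinaryLabelDatasetMetric(
    data_rw, privileged_groups=privileged, unprivileged_groups=unprivileged)
print("After reweighing:", metric_rw.statistical_parity_difference())
```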
For those who want more….
A primer
- And for those who would like it, a primer on self-driving cars: Self-driving car technology explained to non experts (Medium)
Impact on inequality
- Academic paper: Predictive Inequity in Object Detection - detection systems performing unequally across skin tones (a toy sketch of this kind of per-group comparison follows this list).
- In this work, we investigate whether state-of-the-art object detection systems have equitable predictive performance on pedestrians with different skin tones. This work is motivated by many recent examples of ML and vision systems displaying higher error rates for certain demographic groups than others. We annotate an existing large scale dataset which contains pedestrians, BDD100K, with Fitzpatrick skin tones in ranges [1-3] or [4-6]. We then provide an in-depth comparative analysis of performance between these two skin tone groupings, finding that neither time of day nor occlusion explain this behavior, suggesting this disparity is not merely the result of pedestrians in the 4-6 range appearing in more difficult scenes for detection. We investigate to what extent time of day, occlusion, and reweighting the supervised loss during training affect this predictive bias.
- See also news article on this paper, A new study finds a potential risk with self-driving cars: failure to detect dark-skinned pedestrians (Vox, March 2019)
- Research paper: Self-Driving Cars: The Impact on People with Disabilities (Ruderman Foundation, January 2017)
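To make the "comparative analysis" in the Predictive Inequity paper above a bit more concrete, here is a toy sketch (records, field names and numbers all invented) of the kind of per-group comparison involved: group the annotated pedestrians by Fitzpatrick skin-tone range and compare how often each group is missed by the detector.

```python
# Toy sketch of a per-group miss-rate comparison (all records invented).
# "LS" stands in for Fitzpatrick types 1-3, "DS" for types 4-6, following
# the groupings used in the Predictive Inequity paper.
records = [
    {"group": "LS", "detected": True},
    {"group": "LS", "detected": True},
    {"group": "LS", "detected": True},
    {"group": "LS", "detected": False},
    {"group": "DS", "detected": True},
    {"group": "DS", "detected": True},
    {"group": "DS", "detected": False},
    {"group": "DS", "detected": False},
]

def miss_rate(recs, group):
    in_group = [r for r in recs if r["group"] == group]
    return sum(not r["detected"] for r in in_group) / len(in_group)

for g in ("LS", "DS"):
    print(f"{g}: miss rate {miss_rate(records, g):.2f}")
# With these invented numbers the DS group is missed twice as often (0.50 vs
# 0.25); the paper then asks whether time of day or occlusion explains the gap.
```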
Questions of morality….
- Interactive scenarios: The Moral Machine (MIT)
- Academic paper: The Moral Machine Experiment (Nature, November 2018) - this article lays some interesting foundations on the absence of universal moral rules (differing by country).
- See also the review of the paper, providing some more context: Self-driving car dilemmas reveal that moral choices are not universal (Nature, October 2018)
- News article: Will your driverless car be willing to kill you to save the lives of others? (Guardian, June 2016) - people like the utilitarian view in theory, but don’t want to buy a car programmed that way
Industry responses
- News article: 11 companies propose guiding principles for self-driving vehicles (Venture Beat, July 2019)
- Comment piece: Bob Lutz: Kiss the good times goodbye (Automotive News, November 2017) - Bob Lutz (ex-head of General Motors) gives a summary of where he thinks things might be heading
Technical challenges and solutions
- Mobileye CEO Amnon Shashua's recent lecture at MIT on the challenges of reaching full autonomy (video)
- The challenge of adversarial attacks (article)
- Mobileye’s 2018 paper laying out a reasonable model of self-driving behavior
Book: Invisible Women: Data Bias in a World Designed for Men by Caroline Criado Pérez
News article: “The deadly truth about a world built for men – from stab vests to car crashes” by Caroline Criado Pérez
Crash-test dummies based on the ‘average’ male are just one example of design that forgets about women – and puts lives at risk
Journal article: The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition by Os Keyes
Automatic Gender Recognition (AGR) is a subfield of facial recognition that aims to algorithmically identify the gender of individuals from photographs or videos. In wider society, the technology has proposed applications in physical access control, data analytics and advertising. Within academia, it is already used in the field of Human-Computer Interaction (HCI) to analyse social media usage…I show that AGR consistently operationalises gender in a trans-exclusive way, and consequently carries disproportionate risk for trans people subject to it.
Report: I'd blush if I could: closing gender divides in digital skills through education by UNESCO - specifically we're looking at Think Piece 2, The Rise of Gendered AI and its Troubling Repercussions, pages 85-146
News article: “Digital assistants like Siri and Alexa entrench gender biases, says UN” by Kevin Rawlinson
Assigning female genders to digital assistants such as Apple's Siri and Amazon's Alexa is helping entrench harmful gender biases, according to a UN agency.
- Gender and AI in the workplace [link]
Our next data science ethics book club is on AI and bad credit: the impact of automation on financial inclusion. You are welcome to pick from this reading list, depending on your interest and the time you have:
- Blog: ‘We Didn't Explain the Black Box – We Replaced it with an Interpretable Model’, about the FICO explainable credit score competition winner here (a short sketch of what an interpretable scoring model can look like follows this list)
- Academic article on fairness in credit risk evaluation, ‘Context-conscious fairness in using machine learning to make decisions’ here
- Government paper: the Centre for Data Ethics and Innovation looks at AI and Personal Insurance here
- News article on how companies can use data with low levels of regulation, ‘The new lending game, post-demonetisation’ here
- Academic article on discrimination in consumer lending here
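For a feel of what "replacing the black box with an interpretable model" can mean in a credit setting, here is a hedged sketch - features, data and coefficients are all made up, and this is not the FICO competition winner's actual model - of a small logistic regression whose coefficients can be read off directly, scorecard-style.

```python
# Sketch of an interpretable credit-risk model (illustrative only; not the
# FICO challenge winner's model). Features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 5, n),        # past delinquencies
    rng.uniform(0.0, 1.0, n),     # credit utilisation ratio
    rng.integers(1, 30, n),       # years of credit history
])
# Invented "true" relationship, used only to generate example default labels.
logit = 0.8 * X[:, 0] + 2.0 * X[:, 1] - 0.1 * X[:, 2] - 1.0
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(["delinquencies", "utilisation", "history_years"],
                      model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")
# Each coefficient says how much that feature shifts the default score, which
# is exactly the transparency the 'interpretable model' argument relies on.
```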
Our next data science ethics book club is on AI and Race. You are welcome to pick from this reading list, depending on your interest and the time you have:
Main read
- Book: Ruha Benjamin, Race After Technology
Journal article
- Sebastian Benthall & Bruce D. Haynes, Racial Categories in Machine Learning: https://arxiv.org/pdf/1811.11668.pdf
Quick reads
- Jessie Daniels, Mutale Nkonde, Darakhshan Mir, Advancing Racial Literacy in Tech: Why ethics, diversity in hiring and implicit bias trainings aren't enough. https://datasociety.net/wp-content/uploads/2019/05/Racial_Literacy_Tech_Final_0522.pdf
- Karen Hao, This is how AI bias really happens—and why it's so hard to fix. MIT Technology Review. https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/
Short watch
- Poetry: AI, Ain't I A Woman?, performed by Joy Buolamwini. https://www.notflawless.ai/
- Talk: Safiya Noble, author of Algorithms of Oppression, provides a short discussion of her book. https://youtu.be/6KLTpoTpkXo
- Comedy sketch: Full Frontal with Samantha Bee - correspondent Sasheer Zamata discusses bias. https://youtu.be/AxpWvMrPqVs