AP®︎/College Computer Science Principles
Facial recognition services use machine learning algorithms to scan a face and detect a person's gender, race, emotions, or even identity.
Here's an example output from a facial recognition service:
Unfortunately, facial recognition algorithms vary in their performance across different face types. MIT researcher Joy Buolamwini discovered that she had to wear a white mask to get a facial recognition service to see her face at all.
Buolamwini teamed up with researcher Timnit Gebru to test the accuracy of popular facial recognition services from big companies (including Microsoft and IBM). They fed a diverse set of faces into each service and discovered a wide range of accuracy in gender classification. All the services performed better on male faces than on female faces, and all of them performed worst on darker-skinned female faces.
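The key idea behind studies like Buolamwini and Gebru's is to compute accuracy separately for each demographic subgroup instead of reporting one overall number. Here's a minimal sketch of that idea; the group names and prediction data below are invented for illustration, not the study's actual data.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each subgroup.

    records: list of (group, true_label, predicted_label) tuples.
    Returns a dict mapping each group to its accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions -- made-up data for illustration only.
records = [
    ("lighter male", "male", "male"),
    ("lighter male", "male", "male"),
    ("darker female", "female", "male"),    # misclassified
    ("darker female", "female", "female"),
]
print(accuracy_by_group(records))
```

A single overall accuracy here would be 3/4, which hides that accuracy for the "darker female" group is only 1/2. That is why disaggregated evaluation is necessary to detect this kind of bias.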
Another study, from the National Institute of Standards and Technology (NIST), tried out 189 facial recognition algorithms on 18.27 million images and measured how often each algorithm wrongly declared that two photos of different people showed the same person. They found these false positives were up to 100 times more likely for East Asian and African American faces than for white faces.
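The metric behind that finding, the false match rate, can be sketched in a few lines: out of all pairs of photos showing *different* people, count how often the algorithm wrongly declares a match, and compare that rate across groups. The trial data below is invented for illustration, not NIST's data.

```python
def false_match_rate(pairs):
    """pairs: list of (same_person, declared_match) booleans, one per trial.

    The false match rate is the fraction of different-person pairs that
    the algorithm wrongly declared to be the same person.
    """
    different = [(same, match) for same, match in pairs if not same]
    false_matches = sum(1 for _, match in different if match)
    return false_matches / len(different)

# Hypothetical trials for two demographic groups -- invented numbers.
group_a = [(False, True)] * 1 + [(False, False)] * 99    # 1 false match in 100
group_b = [(False, True)] * 10 + [(False, False)] * 90   # 10 false matches in 100

# The ratio of the two rates is the kind of disparity the study reported.
print(false_match_rate(group_b) / false_match_rate(group_a))
```

A disparity like this matters even when both rates look small in absolute terms, because the errors fall much more heavily on one group.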
Inaccuracy and injustice
The accuracy of these algorithms is now a matter of criminal justice, since law enforcement agencies have started using facial recognition to identify suspects. If the recognition algorithms are biased, then the resulting law enforcement decisions can be biased too, potentially leading to false arrests and unnecessary encounters with police.
In January 2020, Detroit police used facial recognition technology on surveillance footage of a theft to falsely arrest a Black man. Robert Williams was arrested on his front lawn while his young children watched, shown surveillance photos of the man who was supposedly him, and detained for 30 hours. Williams said this about the surveillance photos: "When I look at the picture of the guy, I just see a big Black guy. I don't see a resemblance. I don't think he looks like me at all." He was finally cleared of the charges at a hearing when a prosecutor determined there was insufficient evidence.
Movements against facial recognition
In January 2020, more than 40 organizations wrote a letter to the US government requesting a moratorium on facial recognition systems: a suspension of their use until the technology can be thoroughly reviewed.
The federal government has yet to respond, but several cities and states have enacted moratoriums at the local level.
In June of 2020, IBM announced it would no longer offer a facial recognition service: "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency."
🤔 Are there any situations in which it is okay to use facial recognition algorithms that are biased? If you were developing a facial recognition service using machine learning, how would you acquire a diverse set of training data?
Want to join the conversation?
- I want to build a simple facial recognition project that detects and identifies faces. What software do I need to do this? Do you have any references as well, like books or websites? Please help. (3 votes)
- It depends on whether you want to build all the algorithms from scratch or not. Building an algorithm from scratch takes a lot of time, especially for difficult problems such as facial recognition. An example of a library you could use for facial recognition is TensorFlow. (6 votes)
- "Are there any situations in which it is okay to use facial recognition algorithms that are biased?" I don't think so? Or is there something I'm just not thinking of here?(2 votes)
- No matter how much you try, an AI will be somewhat (even microscopically) biased and this is due to the data that it is trained on. No data is perfectly unbiased. So in actuality, any facial recognition will be biased, but a good one will only be biased by 0.001% or less.(2 votes)