How does AI bias happen?
Understanding AI bias: Lesson plan
Artificial intelligence is trained on real-world data that people provide, and if that data contains biases or is incomplete, the AI can end up biased, too. In this lesson, students will think critically about the training data that informs what AI tools can do, and consider possible ways to reduce AI bias.
Objectives
- Define AI bias.
- Understand how AI bias happens.
- Reflect on ways to reduce AI bias.
Key vocabulary
- AI bias – when an AI tool makes a decision that is wrong or problematic because it learned from training data that didn't treat all people, places, and things accurately
- training data – the information given to an AI to help it learn how to do specific tasks
- testing data – the information used to check whether the AI that was created is reliable and accurate
What you'll need
Before the lesson
We encourage teaching the following lessons first to build a foundational understanding of how AI works:
Step by step
- Say: When computer scientists create AI, they use two different types of data: training data and testing data (Slide 4).
Training data is the information given to an AI to help it learn how to do specific tasks (Slide 5). Testing data is the information used to check whether the AI that was created is reliable and accurate (Slide 6).
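For teachers comfortable with a little code, the split between training data and testing data can be sketched in a few lines of Python. The fruit examples here are hypothetical stand-ins, not taken from the slides:

```python
# Hypothetical labeled examples: each pair is (color, fruit name).
labeled_fruits = [
    ("red", "apple"), ("yellow", "banana"), ("orange", "orange"),
    ("red", "apple"), ("yellow", "banana"), ("orange", "orange"),
]

# Training data teaches the AI; testing data is held back so we can
# check whether what the AI learned is reliable and accurate.
training_data = labeled_fruits[:4]
testing_data = labeled_fruits[4:]

print(len(training_data), len(testing_data))  # 4 2
```

The key idea for students: the AI never sees the testing data while it learns, which is what makes it a fair check.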
- Say: Imagine we are computer scientists and we're in the process of creating an AI tool. The purpose of the tool we're building is to identify different types of fruits. We have some training data to help us get started (Slide 7).
- Ask: Based on these examples of training data, what types of fruit might our AI be able to identify? (Slide 8)
- Show Slide 9 and explain that the images here show examples of the testing data used to check if the AI is working properly. The labels under each image are what the AI thinks each fruit is called.
Ask: Do you notice any mistakes? Why do you think the AI is making these mistakes? (Slide 10)
- Explain that the mistakes the AI made are an example of AI bias, which is when an AI tool makes a decision that is wrong or problematic because it learned from training data that didn't treat all people, places, and things accurately (Slide 11).
Show Slide 12 and say: In the training data, apples were the only example of a red fruit. The testing data shows that the AI learned to identify anything red as an apple. In other words, the AI we created has a bias toward thinking that every red fruit is an apple.
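For teachers who want to show the mechanics behind this step, the color-only fruit detector can be sketched in Python. The data and the one-rule-per-color "learning" are hypothetical simplifications, not part of the slides:

```python
# Hypothetical fruit detector: it "learns" one rule per color it saw
# during training. Apples are the only red fruit in this training data.
training_data = [
    ("red", "apple"),
    ("yellow", "banana"),
    ("orange", "orange"),
]

# "Training": remember which fruit label went with each color.
color_to_fruit = {color: fruit for color, fruit in training_data}

def classify(color):
    return color_to_fruit.get(color, "unknown")

# "Testing": a red strawberry exposes the bias the detector learned.
testing_data = [("red", "strawberry"), ("yellow", "banana")]
for color, true_fruit in testing_data:
    print(f"actual fruit: {true_fruit}, AI says: {classify(color)}")
# actual fruit: strawberry, AI says: apple
# actual fruit: banana, AI says: banana
```

Because color is the only feature the detector sees, every red fruit gets labeled an apple, which mirrors the mistake students observe in the testing data on Slide 9.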
- Ask: What are some ways we could reduce the AI bias of this fruit detector? (Slide 13)
Invite students to share out, and then review the suggestions on Slide 14.
- Say: While it's almost impossible to completely eliminate AI bias from a tool, we can do our best to reduce it by curating a set of training data that is as diverse and complete as possible (Slide 15).
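If you want a code follow-up to this point, one way to make "diverse and complete training data" concrete is to include a red fruit that isn't an apple and add a second feature (shape) so the two can be told apart. This is a hypothetical sketch, not part of the slides:

```python
# Hypothetical sketch: the training data now has two features
# (color and shape) and a red example that is not an apple.
training_data = [
    (("red", "round"), "apple"),
    (("red", "heart-shaped"), "strawberry"),
    (("yellow", "curved"), "banana"),
]

def classify(color, shape):
    # Return the label of the training example matching both features.
    for (c, s), fruit in training_data:
        if (c, s) == (color, shape):
            return fruit
    return "unknown"

print(classify("red", "heart-shaped"))  # strawberry
print(classify("red", "round"))         # apple
```

With richer, more varied training examples, "red" alone no longer forces the answer "apple", which is exactly the reduction in bias the lesson describes.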
- If time permits, read Slide 16 and have students work independently to come up with a list of image descriptors. Then have them pair up to compare their lists and add any descriptors they missed.
Review the descriptors on Slide 17 and continue to add to the list based on any other ideas the students have.
- Say: Remember that behind every AI tool are humans making decisions on what training data the tool will use. Understanding how AI bias occurs can help us think critically about its potential impacts (Slide 18).