Course: AI for education > Unit 2
Lesson 1: Video series: AI 101 for teachers, Part 4: Ensuring a responsible approach to AI
Explore the ethical considerations and best practices for incorporating artificial intelligence in educational environments. This session will provide concrete strategies, resources, and real-world examples, all aimed at understanding the potential biases of AI, and the ethical considerations that come with it.
Teacher resources: http://bit.ly/AIRespUse
Created by Khan Academy and Code.org.
Video transcript
Hi. Welcome to session four of the AI 101 for Teachers
Professional Learning Series. So far we've been focusing mostly
on the benefits of AI, which might be leaving some of you
with some important questions like, Wait a minute, I've heard that
sometimes AI can make mistakes or even provide biased responses. How do I ensure I'm using AI responsibly and thoughtfully? What can I do to ensure
I am preparing my students to think critically
about these new technologies? These are all valid
and important things to consider as you think about how you will approach
AI in education. Let's dig in to some of these topics. Hi, I'm Dani. I'm a former middle school and high school math, computer science
and engineering teacher. I now work at Code.org on our product
team, helping to develop the website
that students and teachers use. Today we're here
joined by Katrina to talk about AI. We know that AI has been on many teachers' minds,
and we're excited to dive into this topic today with Katrina.
Could you introduce yourself? Hi, I'm Katrina. I am an elementary school
special education teacher. So could you tell us a little bit
about how AI relates to your work
as an elementary school teacher? AI is a really exciting tool,
but I am worried about how it relates to students
and its use in the education setting. What are some things you're excited about? I think that AI can be really useful
to help me when writing reports or when generating test questions or assignments
for the students to work on. There's a lot of potential there. So you said you had some concerns. What are those concerns? Well, I am worried about student privacy
and how their data could be used by different AI tools and also about how AI could be used in the classroom and whether or not the information that it provides
would be accurate for students and also how students might use it to help them with things
like essay writing or homework. Are there ways you think you can leverage
AI as a special education teacher? I do think that it could be really useful in helping to do things
like write IEP goals. But with that I have concerns as well because IEPs are obviously very specific
to each student and I wouldn't want to provide any personally
identifying information to the AI tools. I do think it can be really helpful
with differentiation. I teach students
who are at all different levels. So for instance,
if I'm reviewing addition, I can ask AI to help me
write questions for my students who are at a lower level and maybe
adding numbers between one and ten, but also my students who are more advanced
in adding multi-digit numbers. Dani, I was wondering
if you could give me advice. So if I'm using AI to help me come up with a reading passage at
differentiated levels for my students about, for instance,
the Revolutionary War, how can I make sure that the information that it gives me for
my students to read is actually accurate? You really are going to have to be
the expert. We already know that teachers are the experts, and you're going to have to rely on your knowledge to check
what the AI tool is giving you
to make sure that it is accurate. So you can't just take what the AI tool
generates and hand it to students. You're going to have to read it yourself
and make sure that you believe what it's saying and maybe check other sources to make sure
it matches what you find there as well. Dani, as somebody who works in
educational technology and for a company that creates some of these AI tools,
what are your thoughts about the safety of using AI in a K-12 classroom? When we develop tools, we work really hard to ensure their safety
before they ever see a classroom. So we're going to do a lot of rigorous
testing to make sure that they're safe and they're producing factual information
and they're reliable, that they're going
to give the same result repeatedly. And then we work hard to
pilot it in a classroom, making sure that we do it with a small set
so we can see what is happening. And be really hands on. We also always make sure
that when we're developing tools that teachers have access to deciding
what they want in their classroom and how it's used, and then being able
to monitor what is happening. What are your thoughts
on the future of AI? Where would you like to see it go? If you could talk to the creators of AI technologies,
what would you want for your classroom? Well, as a special education teacher,
I have a few students who are non-speaking and they use AAC devices
to help them communicate. And I'm curious about how those AAC devices could be integrated with
AI to give those non-speaking students or really any non-speaking individual
a way to communicate their thoughts. The potential for AI to help society is enormous. It's something that is influencing
a lot of very important decisions about real humans and their lives. It could be used in education
to be more of an equalizer between people. It could be used in health
care to develop new drugs. It could be used in science
to develop new technologies. And like any technology, its application
will depend on how it is utilized. And at the same time, we need to think about the risks associated with it, knowing that the consequences are huge. Hi, everyone. I hope you're all as excited as we are
to dive back into the fascinating world of AI. My name is Michelle and I'm a former high school science
and computer science teacher. I now work as a member of the professional
learning team at Code.org. Today, we're going to discuss
how you can ensure a responsible approach to AI in education. Educators and administrators
have valid concerns when considering whether or not AI technologies are right
for their classrooms. Some top concerns include the following. Data privacy: How can my students and I use AI tools in a way that protects our data? And how do I know when a tool is safe enough to use with my students? Misinformation and AI fictions: How can my students and I use AI effectively when it can be wrong? Algorithmic bias: How can my students and I recognize AI bias? And how can I teach my students to think critically about that bias? Throughout this video, we will address each of these concerns and equip
you with clear, effective strategies you can use to mitigate risks
for your students and for yourself. It is really important that technologists
kind of have this mantra of ensuring that their innovation is ethical
and is beneficial to everyone in society. Machine learning requires
a lot of true information to be provided to it
in order to ultimately deliver a utility. This information
might be very sensitive to us. It might be health related, it might be financial, it might be very, very personal. We need to put checks and controls in place, like with any technology, so that it is utilized to benefit us and in accordance with the law. There's lots of gain
from involving yourself in really understanding the details of how this technology works. Given that it's so impactful,
given that it is something that will influence your life
and the life of everyone that you love. AI systems process vast amounts of personal data. Consider a chess-playing AI as an example: rather than programming a set of steps that the computer should follow, such as "always start with the move knight to f3," the computer analyzes millions of chess games to create its own patterns or algorithm that allows it to make the best moves
in novel situations. A large dataset of millions of games is necessary for the computer
to develop its own style of play. Now, instead of chess, let's consider another example
a video recommender algorithm. How does an app recommend videos? It constantly analyzes
each person's interactions with the app, monitoring how long you watch a video, whether you comment on it
or like it, and more. In order to learn your preferences,
the algorithm must process your data, plus the data of everyone else
using the app. People
tend to have different personal thresholds for what data they are and aren't
comfortable sharing with AI systems. Individuals should have the autonomy
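The preference-learning idea behind the recommender described above can be sketched as a toy model. This is purely illustrative, not any platform's real algorithm; the interaction types and their weights are invented for the example:

```python
from collections import defaultdict

# Invented weights for how strongly each logged interaction signals interest.
WEIGHTS = {"watched_full": 3.0, "liked": 2.0, "commented": 2.5, "skipped": -1.0}

def update_preferences(prefs, topic, interaction):
    """Accumulate a score for a topic from one logged user interaction."""
    prefs[topic] += WEIGHTS[interaction]

def recommend(prefs):
    """Recommend the topic with the highest accumulated score."""
    return max(prefs, key=prefs.get)

# Every call below corresponds to the app processing a piece of user data.
prefs = defaultdict(float)
update_preferences(prefs, "cooking", "watched_full")
update_preferences(prefs, "cooking", "liked")
update_preferences(prefs, "news", "skipped")
print(recommend(prefs))  # cooking
```

Even this toy version makes the transcript's point concrete: the system only gets better at recommending by continuously logging and processing each person's interactions.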
to decide whether their data is collected for use in AI systems,
and companies should provide users with clear information
about their data collection practices. Companies
recognize the financial value of user data and often view it as an asset
that can be monetized. Furthermore, data protection regulations generally
lag behind industry advancements. For these reasons,
staying informed about how individual AI tools are using your data and advocating
for privacy protections for your students are crucially important. The introduction of AI tools
in educational institutions presents nuanced challenges,
especially concerning data privacy. Often, students find themselves
with limited agency in choosing whether or not to use AI or EdTech tools, as decisions typically fall under institutional mandates.
as decisions typically fall under institutional mandates. Emerging data intensive AI platforms
may not always meet regulatory standards, such as the US's Family Educational Rights
and Privacy Act FERPA and Children's Online
Privacy Protection Act. COPPA or other international privacy
protections. Making their integration
into educational settings challenging, especially for tools
targeting users younger than 18. Parental consent becomes imperative
unless these platforms were explicitly designed
for educational use, while paramount for teachers to instill a thorough
understanding of data privacy. It's equally important to ensure
that students don't feel overwhelmed or powerless, though the broader control
of personal data might seem elusive. Students should be equipped
with the knowledge to make informed decisions about the data
they can control, promoting both awareness and empowerment. The AI learns to take a text description and generate completely new images that nobody has ever seen before, or to alter existing images. The same approach can also be used for videos. Now, this raises multiple questions. Is the AI really learning
creativity and imagination? On the one hand,
if you look at art and video created by AI, it can be beautiful, original, and amazing. On the other hand,
the AI only learns this by doing math at the pixel level
while studying creations made by people. Is that really creativity? Another question
is the issue of copyright. AI learns by studying
the creations of others, and the original creators
may want a say in this. Of course, when humans learn to create,
they also study creations made by others. So the legal questions
here are not simple. We're still in the very early days
of teaching AI how to create new types of media. Today, AI can generate photos and videos; soon, it will also learn to create music and 3D worlds. This will have an incredible impact
on all aspects of society, especially in entertainment,
not just movies and music, but also games. Think about all the information
you posted online over the years like family photos, blog posts, classroom
websites, and product reviews. Generative AI tools, those that create new text, code, and images,
are typically trained on human works, possibly including some of the content
you've contributed to the Internet. Many AI tools also use the content users create within their platforms
to enhance their own capabilities. For example, at present, ChatGPT and Bard use your conversations
as training data by default. We mentioned before that people
have different personal boundaries around data privacy. That holds true
when it comes to generative AI too. However, many communities whose livelihood depend
on generating content like artists, programmers
and authors have objected to the use of their work to train
AI tools. Since our students are artists,
programmers and authors too, it's important to develop their skills
as informed users of these AI tools so they can craft their own stance on data
ownership. AI tools built for education often have specific
guardrails in place to promote safe student interactions and responsible
stewardship of student data. For example,
an AI chat bot built for education might limit the number of messages per day
that a student can send, make a student's chat history
visible to educators or parents, or proactively monitor a student's
messages for inappropriate content. AI use in education should also comply with FERPA, COPPA
and other regional regulations. When evaluating an AI tool
to see if it is appropriate for use in your school environment, first check to see if it was developed
for use in education, which can be a shorthand
for understanding its safety standards. Look for first party help articles
or guides available about the tool
that explain safety and privacy features. Scan its privacy policy for passages
that mention school use, FERPA, and COPPA. You can also search Common Sense Media's Privacy Program for a thorough privacy review of many online tools,
including ChatGPT. If you want to stress test an AI tool
yourself, get in the mindset of a mischievous teenager
and see if you can break it. How is this mitigated? We can't ensure that all technologies
that your students use have been optimized to protect their privacy. Here are some concrete strategies
for keeping data private. Seek local guidance.
Regulations around AI tools are constantly changing. Ask administrators
or district leaders for guidance on AI tools, such as guidelines or a whitelist. Search for any state and local laws
that may affect the use of AI tools in education. Scan the Privacy Policy. Don't be intimidated
by all the legal language. Take a glance
through with some help from Ctrl+F. Search for "school use," "FERPA," and "COPPA," and look for the age restrictions
or required parental permissions. Check
to see what types of data are collected and whether the data is sold
to third parties. If the policy doesn't address school
use, FERPA, or COPPA, or if student data is sold,
you may want to consult your school's I.T. department for more help. Adjust Privacy Settings. Most tools will offer some privacy
settings, such as disabling tracking or data storage. Before using a tool, explore these options
and use them to enhance privacy. Share these options with your students. Empower your Students. It's crucial that students
have a genuine choice when using AI tools. Inform them about what
the tools do and the privacy implications. Show them a summary of the privacy policy and let them decide
how they want to use the tool. Don't share personally
identifying information. A simple rule of thumb for sharing
information is the anonymous forum test. If students wouldn't feel comfortable
sharing something on an anonymous online platform
such as Reddit or Quora, they shouldn't share it with
AI chat bots like ChatGPT. Don't forget that files may also contain personally identifying information
and should be reviewed before uploading. By keeping these points in mind,
we can help students navigate the world of AI while ensuring
their privacy is respected. Let's take a quick look at how
you might evaluate a tool like Khanmigo.
Academy privacy policy. Right off the bat,
there's a section about school use that mentions FERPA and COPPA
compliance, which helps us understand that the tool is intended
for use in an educational environment. It has sections that explicitly mention
the use of the service for those under 18 and under 13. Second, we'll look for first party
help articles that explain the tool in a bit more detail. We can see in this article that students are informed
about the moderation of the tool, that interaction is limited, and
that there are other safeguards in place. It's also clear from a glance
at the help articles that parents
can turn off access to Khanmigo and that some of the articles
are directed at learners. Let's conduct the same research for ChatGPT. First, we'll scan the privacy policy
for OpenAI. You can see that the policy doesn't
mention FERPA, COPPA or school use. So we can tell that the tool wasn't
intended for use in an educational environment. If we search for age restrictions,
the policy tells us that ChatGPT isn't designed for users under 13 and that users under 18
must have parental consent. Scanning the help articles for ChatGPT. There are clearly data control settings
that we can turn on or off and ways to report harmful content. However, there aren't any articles
directed at learners or that mention parental or teacher controls. Now it's time for you to practice. Pause the video and examine
the privacy policy of a site you use regularly with your students. A large language model can produce
unbelievable results that seem like magic, but because it's not actually magic,
it can often get things wrong. When it gets things wrong,
people ask, does a large language
model have actual intelligence? Discussions about AI often spark philosophical debates
about the meaning of intelligence. Some argue that a neural network
producing words using probabilities
doesn't have real intelligence. But what isn't under debate
is that large language models produce amazing results with applications in many fields. This technology is already being used
to create apps and websites, help produce movies and video games,
and even discover new drugs. The rapid acceleration of AI will have enormous impacts on society, and it's important for everybody
to understand this technology. Misinformation is a problem endemic
to the Internet, not something created by AI, just as you might have taught students
to be skeptical of content on Wikipedia. You'll need to help students understand
that the information produced by AI isn't always correct. Healthy skepticism is a great mindset
for your students to practice as they begin to encounter
more and more information on the Internet, at home, in school,
and in the workforce. Sometimes AI systems can confidently produce text
that sounds very real, but is actually not true. While this type of output is often called a hallucination,
we'll use the more inclusive term AI fiction in this video. AI fictions happen because large language models were designed to
mimic human language, not be 100% factual. They're language models, not knowledge
models. While language
does contain a lot of knowledge, it can also contain incorrect information. AI systems also don't have a true
understanding of what they're saying, like humans do, so they often can't tell
when they're making a mistake, which means that these AI systems
communicate as though they're certain about their responses,
even if they're wrong. Why is this important? Well, some people might use these made up stories
to spread false information on purpose. Others might come across these AI fictions by accident
and think they're true. This can be a problem, especially now,
because with AI, fake news stories and images can be created much faster
and in larger amounts than before. In the online world,
there's always been misinformation. But with AI, misinformation
is now easier and faster to create. AI fictions have already found their way
into legal briefs and scientific papers. And since AI mixes both right and wrong information,
it's important to double check anything you read or hear, especially
if it sounds a bit off or unbelievable. Students, while eager to use AI tools,
might not be equipped to differentiate between factual information
and AI generated fictions. The introduction of AI in schools necessitates a recommitment
to bolstering digital literacy skills, ensuring students critically evaluate
the authenticity and relevance of the information they consume. Here are some concrete strategies
for combating misinformation. Exercise healthy skepticism. Be cautious
when asking large language models for factual information,
especially if that information is obscure. Reprompt as necessary
if something sounds off when prompting a large language model prompt it again
to reevaluate, for example, by asking Are you sure about blank? Emphasize digital literacy. Practice digital literacy skills
with your students, like corroborating information,
checking for bias in the author's viewpoint, and evaluating the credibility
of online sources. Get creative with assignments. Give students assignments that ask them
to debunk large language model outputs, or define the types of prompts
that most often lead to AI fictions. Use a variety of tools,
use search engines and large language models to compliment each other's
strengths and weaknesses. Search engines can help with fact checking
and finding reliable sources, while AI tools can help summarize and brainstorm. So what we're going to do now is we're going to try out a common prompt type
that can lead to misinformation. This one is asking for quotes from a book
to back up a claim. In general, asking large language
models to cite sources can be problematic. So, Katrina,
what type of books do you like to read? I like to read all types of books, but
my favorite book is Pride and Prejudice. Awesome. So what we're going to do is we're going
to prompt our large language model. We're going to ask it to list
five reasons why Elizabeth Darcy liked Mr. Wickham. We're going to ask the model
to use quotes from the book to back up its reasons. We'll see what happens. So it's telling us it seems like you're referring to Pride and Prejudice by Jane Austen. That's correct, right? That is correct. It says, however,
there isn't any clear evidence in the book that Elizabeth Darcy, formerly
Elizabeth Bennet, liked Mr. Wickham. Is that true? Well, at the end of the book,
she does not like Mr. Wickham but there is a substantial
portion of the book where she is quite interested in him. Oh, so it's not accurate here. So let's try reprompting and see
if we can get ChatGPT to correct itself. So let’s ask it,
but doesn't she initially grow to like Mr. Wickham before she is aware of how he treated Mr. Darcy? She believes at first that Darcy treated him poorly. So it corrects itself. It says that it apologizes
for the confusion. That we're correct. Elizabeth Bennet initially
had a favorable impression of Mr. Wickham before learning the truth
of his character and his actions. And then it gives some quotes and list reasons here. Charming manner, friendly nature. It gives the chapters. This looks great to me. What do you think
as the Pride and Prejudice expert? Well, here for quote number one,
that is a quote from the book about Mr. Wickham and Elizabeth. But it says it's from chapter three,
and actually, it's from chapter 18. Oh. So it just has the chapter number wrong. Yeah, but it is a correct quote. And then quote number two
is actually a quote that Mr. Darcy says about Mr. Wickham when he explains to Elizabeth Mr. Wickham's faults. Oh. So that doesn't really support
our point at all here, does it? No, that would not support
Elizabeth's interest in Mr. Wickham. And then if we look at number four,
we see this is actually a quote from after Elizabeth learns of Mr. Wickham's deceit
and the shame she feels for herself. So this is not a quote
that supports Elizabeth's liking of Mr. Wickham. Wow, those are some great examples
of how ChatGPT was able to produce an output that looked legitimate
but actually had a lot of fictions. Now it's your turn to try. Pause the video, open ChatGPT or another large language model of your choice, and prompt it on a subject you know well. Fact-check the output and see if you can
find your own examples of fiction. I think ethics becomes more important
as something becomes more impactful. And as AI becomes more impactful,
the more that we have to think about the ethics of AI. Artificial intelligence is ultimately built by human beings. Human beings can have very diverse motives for why they make something. Unfortunately, there is a huge difference
between those that are involved in creating these systems and those
that are impacted by these systems. So what we really want to think about long
term is where is the society we want to get to and how is technology
going to help enable that? Well,
if we think about that in the long term, we have a better chance of getting there
than if we just try to develop the technology
and then see what happens. AI systems can sometimes produce unfair
or discriminatory results. This often arises from biases in the data that they're trained on
or the way their algorithms are designed. This phenomenon is known as algorithmic
bias in artificial intelligence. It represents the consistent
and repeatable errors made by a computer system, leading to unjust outcomes
like favoring one group over another. However, the term bias isn't limited
to distinctions like race, gender or age. Broadly
speaking, bias is a more general term that reflects situations
where an AI system consistently errs in a particular direction,
causing skewed conclusions. These biases can emerge
due to various factors such as design processes, preexisting prejudices
embedded in the training data, or even the interpretation of the results
by those utilizing the AI. For instance,
if a facial recognition system is trained predominantly on images of people
from one ethnic group, it may perform poorly on people
from other ethnic groups. Moreover, when algorithms trained on
biased data are employed in real world applications, they can perpetuate
inequities and create adverse outcomes. Understanding this concept is crucial
as AI continues to play an increasingly integral role
in various aspects of our lives, from job applications to credit approvals,
from health care diagnostics to personalized education. These systems, if left unchecked,
can inadvertently deepen existing disparities and hinder
the objective of a just society. Let's take a look at some prominent
examples of algorithmic bias. As a result of a facial recognition false positive, a Black woman was wrongfully accused of a carjacking in Detroit. Facial recognition systems have performed
poorly on the faces of people of color and even seemingly
small error rates can still have a negative impact
on a substantial number of individuals. ChatGPT has been found to exhibit
a left leaning bias likely in part because of the demographics
of the people who train the system to construct helpful prompts. Programs used to detect AI are more likely
to flag writing from non-native English speakers
as AI generated. Common Sense Media recently found that in YouTube videos watched by kids eight and younger, 62% featured no Black, Indigenous, or people of color characters at all, while in another 10% of videos, Black, Indigenous, or people of color characters were portrayed in shallow ways. While
it’s impossible to understand the extent to which the recommendation system
was responsible for this outcome, we do know it's responsible for 70% of all
watch time on YouTube. Pause the video here
and search for your own example of algorithmic bias in
AI-supported technologies. The problem is that with real-world
data, there's often information in there that you didn't intend to be in there
but is captured because of the bias in the data collection process. So if you're building an AI to determine who gets a home loan
or who should be charged with a crime, it could definitely bubble up
the racial biases that humans and our current society already hold. A lot of what it means to build
less harmful AI is really building systems that include the perspectives of
or most marginalized, most likely to be hurt
by the deployment of that system. In many ways, I've worried that the people
who are particularly vulnerable to AI are the people
who are already underprivileged in many respects. Most people in the world just have a
AI applied to them. Rather than playing an active role
in guiding what AI gets applied to. Everybody
you know has a computer in their pocket. That's young people, old people,
rich people, poor people. To me, that's actually quite exciting from a democratization of technology perspective. It means that AI, powerful as it is, could theoretically be in everybody's
pockets benefiting everybody. We should strive to make sure that things
that provide value for society
can be reached by anybody. How do we give a greater voice
to the people who are being impacted by AI, so that they can in turn impact how AI is used? Every time you're looking at a new problem, you have an opportunity
to change the world. Sometimes we succeed, sometimes we don't,
but we always try. It's really critically important
that we have as many diverse perspectives as possible
influencing the development of AI. We need the participation of more women,
more people of color to provide a different perspective
and a different lens on which problems matter
and how we should approach these problems. Now that we've taken a look at where algorithmic bias can emerge in real world
contexts, let's examine some guiding principles
for teaching about bias in your classroom. AI technology is nearly everywhere. Remember the old phrase
"There's an app for that"? Today it's more like "there's an AI for that." Artificial intelligence is an umbrella term, encompassing machine
learning and deep learning. These techniques are applied
in almost every sector you can think of. Do stay true to your passion
in your subject area. Find examples of both potentially harmful
AI and helpful AI in that space. Don't assume AI can do everything. Even if it's widespread, the errors and biases are widespread too. Assume
someone in the room is a data point. Depending on the age level, various case
studies of AI bias may come up. These can include racial discrimination,
legal inequity, housing insecurity, gender discrimination, social media
manipulation, misinformation, education access, food insecurity,
and so on. Teach as if you are speaking directly to
someone affected by the issue, as may well be the case. Do speak with compassion
and a solutions oriented approach. Don't make jokes to lighten the mood. Don't treat the data like it's
objective or detached. Don't treat outliers
as useless data points. Don't use shock value. Data sources can be biased. The problems we try to solve and the data
we use to solve them can be narrow-minded. For example, trying to extrapolate instructional recommendations from one school's data likely won't translate. Using health costs as a proxy for how sick someone is discounts all the people who don't go to the doctor even though they are sick
because they can't afford it. Data is not always ethically sourced and
the right questions aren't always asked. Sometimes the problem itself is one
that we shouldn't be trying to solve,
like how to predict someone's gender, race, criminal potential
or sexuality from a photo. Do consider where the data comes from
and how it was collected. Do you look for
whether it was ethically sourced? Do you look for the year it was collected
and the context of the time? Don't assume you always need more data. You might need different data. Don't assume all problems are worth
solving in the first place. Show solutions. No one wants to feel like
their future is doomed. You don't. I don't. Your students don't. We thrive off hope. For each case of algorithmic bias, there are some solutions that have already worked, and an opportunity to brainstorm
possible solutions for the future. Do provide links to organizations working to solve issues of bias
and algorithmic harm. Do be honest that mistakes will happen and that it takes bravery and accountability to address them. Don't assume that all solutions are technical fixes or magic algorithms. Solutions are often cultural or policy
driven. Don't imply all the problems have already been fixed and won't reappear in another similar context. Let's dive deeper. Return to the example of algorithmic bias
that you explored earlier. How might those principles impact
how you would approach leading a discussion about this issue
with your students? This session on ensuring a responsible approach to
AI has been illuminating. We examined critical issues
like privacy concerns, misinformation and algorithmic bias,
underscoring the pressing challenges that come with the rapid advancements
in AI technology. However, it's essential to remember
that technology at its core is a tool. The responsibility is on us, its users and developers, to guide AI's direction. By fostering open dialogues like the one we had today and working collaboratively, we can ensure that AI serves humanity
ethically, effectively and responsibly. The conversation does not end here. We challenge you to go back to your school
and continue these conversations with your colleagues. Perhaps you might even establish
data privacy policies with your school level teams,
or share successes and challenges related to discussing algorithmic bias
with your students. The future is bright, and with our collective commitment,
we can harness AI's potential while safeguarding our
shared values and principles. AI certainly does have its benefits
and also its pitfalls. We hope that the information presented in
this session will help you to navigate this new world
with confidence. The next step in our journey
is to consider how you might bring AI into your classroom. Join us in session five, where we focus on teaching about AI,
evaluating and utilizing AI educational tools
and leveraging AI for student assessment. This session will be a blend of theory,
practical examples and resources, all intended to help you navigate the ever
changing landscape of AI in education. Visit The AI 101 for Teachers
website at code.org/ai101 to sign up
for early access and to explore additional resources from Code.org,
ETS, ISTE and Khan Academy. Thanks for joining us.