
New tools for a new age: FAQ

Answers to some of the most urgent questions students, parents, teachers, and administrators have about the use of AI in education

Generative AI: a new tool for a new age

Just as the arrival of calculators made educators ponder the future of math education, the appearance of chatbots driven by large language models has prompted some tough questions.
In this article, we’ll try to answer some of the most urgent questions we’re being asked by students, teachers, parents, and school administrators.

FAQ: Students

How can I use this tool in a way that helps me learn?
There are many different ways AI tools can help you learn. Here are a few examples:
  • Get unstuck on a math problem: If you don’t know where to start, what the next step is, or just want to check that you’re thinking about it the right way, you can ask the AI for help.
  • Find out how or why something works the way it does: “Why is the sky blue?” "How does a radio work?" You may not be able to explain these yourself, but an AI tool can help you find the information and draw you into a fascinating conversation.
  • Ask it to simplify a confusing concept, and then quiz you on it: With a little practice at prompting (for example, “Explain photosynthesis in simple terms, then ask me three quiz questions about it”), you can start using an AI-powered chatbot to prepare for a test.
  • Ask it to brainstorm activities to help you learn: LLMs can provide great ideas for making learning fun and relevant—just ask.
How do I know if what the tool produces is right or wrong?
This is a very important question because AI-powered tools can be wrong. They sometimes make up facts—AI developers call these falsehoods “hallucinations.” They sometimes give wrong math answers and might even tell you your answer is right when it isn’t. How should you deal with this?
  • Never trust that the answer the tool gives you is 100% accurate: Don’t rely on it to give you answers when it is really important for you to be right.
  • Trust your instincts: If you’re saying to yourself, “that doesn’t seem right,” it may not be! It is easy to convince yourself the AI must be right and you must be wrong. Instead, investigate further.
  • Consult other sources: It’s usually a good idea when learning new things to do this anyway, but it’s especially important when learning with AI-based tools. Think like a fact checker. What are the most essential facts to verify? Where else could you verify information?
  • Remember the situations where the tools are more likely to hallucinate: They won’t know about current events, because models take time to train and are not always up-to-date. As language models, they aren’t always experts at math. And they can’t reliably link to other web sources unless they are connected to live web data.

FAQ: Teachers

Are there ways this can make my job easier?
What are the planning tasks that take you the longest? Could an AI-powered tool that is really good at summarizing and writing help you?
Here are a few ideas to get your creative juices flowing:
  • Generate ideas for lesson hooks: Need an engaging introductory question, or a quick activity to grab attention and spark your students’ curiosity? Ask the bot for some ideas.
  • Compile ideas for hands-on classroom activities: AI-powered tools are incredible idea generators. You won’t like all of their ideas, but it can be fun to pick and choose the ones that are the best match for your classroom, and improve them with your imagination, insight, and expertise.
  • Generate feedback and suggestions for improvement on drafts of student writing: This use case might sound controversial, but it doesn’t have to be. Any feedback the tool offers is yours to consider. You can keep it to yourself, or review and revise it before sending it to the student.
Is there something about interacting with these models that is likely to help my students learn more?
We don’t know yet, but we have some hypotheses that we're testing based on what we already know about how people learn.
Perhaps the biggest potential learning benefit of this technology is the personalized, differentiated, tutor-like interaction it can generate. With so many different levels of learning in your classroom, you know how difficult it is to reach all students at their level. There are too many students and not enough of you.
Here are some ways AI-powered tools might be able to help:
  • Get your students unstuck: You simply can’t be next to every student as they work through problems. This type of technology can help coach students through a question.
  • Enable deeper conversations: You likely have engaging, deep discussions in your classroom, but does every student participate? With AI tools, each student can engage with deeper questions (How? Why? What if?), respond, and be encouraged and validated.
  • Help your students see relevance, which we know increases motivation: How many times have you been asked, “When will we need to know this?” AI tools can help your students make those personalized connections.
  • Improve student writing by providing feedback on drafts: What will AI’s role be in the future of writing instruction? What writing skills do students really need in an AI age? These are provocative open questions—we don’t know the answers, but there is plenty of room for experimentation.
What are the implications of these models for academic honesty?
This is definitely a hot topic for academic communities, and each community is developing answers that make the most sense for them. Given that a core principle of academic honesty is to present proper authorship of work, there are a number of paths we are seeing emerge:
  • You can forbid all use of these models: This might sound straightforward, but then you have to figure out how to detect when AI is being used. That will become exceedingly difficult as the models advance; our view is that detection technologies won't be able to keep up.
  • You can allow use of the models but require students to acknowledge their use: In addition, you could ask students to document each prompt they crafted and/or describe how they used the AI tool, then reflect on that experience.
  • You can design assignments that are difficult or impossible to use these tools with: For example, assignments that rely heavily on in-class discussions or activities the model would have no knowledge of.
  • You can have students complete assignments in class, where you can monitor which tools are used.

FAQ: Administrators

How can I understand what these models do?
The best way to understand what large language models do is probably to start by interacting with one, through interfaces like OpenAI’s ChatGPT or Microsoft’s Bing. After you get a better sense of “the What”, you can learn more in this course about how they might apply to education in your community. You can also ask the AI tool itself to explain what it is and how it works.
What are the implications of these models for academic honesty? How should I think about setting policies for the use of these models?
Here’s some great advice from our friends at Montclair State University’s Office for Faculty Excellence:
“Most students who engage in academic dishonesty are doing so impulsively or without reflection. Anticipate this human behavior and engage students in an open discussion about academic dishonesty.”
For a quick update to your anti-plagiarism policies, consider adding a clarifying statement such as:
“Use of an AI text generator when an assignment does not explicitly call or allow for it without proper attribution or authorization is plagiarism.”
We encourage you to engage your numerous stakeholders (parents, teachers, students, school committee) in an open conversation about their questions, concerns, and approaches.

FAQ: Parents

How can I understand what these models do?
This course will give you a pretty good overview, and you can read about large language models elsewhere, but it may remain fairly abstract until you actually interact with one yourself. Models like OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing are being guided by different commands (“prompts”) behind the scenes, so we recommend trying a couple of them out before reading more about how they work.
How can I control or monitor how my child interacts with AI-powered tools?
It is important to know that LLM-based chatbots like ChatGPT are publicly available. How you monitor your child’s use of them will depend on how you monitor your child’s use of the internet and technology generally, and we encourage you to read beyond this course for advice on age-appropriate ways to guide responsible technology use for your child.
Work with other parents, your children's teachers, or school administrators to develop a strategy and communicate it to the school community. We know how hard it can be to keep up, but you're not alone!
How can I guide my child to use these models responsibly?
Think about responsible use of these models the same way you think about your child’s use of the internet and other technologies. Teach them the WHYs and the HOWs of what responsible use looks like. Guidelines might include:
  • Avoid conversations with it that might be categorized as hate, violence, or sexual in nature
  • Acknowledge using it to help with drafts or feedback on schoolwork
  • Ask your teachers what their policies are for using it, and then comply with those policies
You can also start to help your child learn how the tools can serve as helpful collaborators. To get started, you might try using it to write a story together, or ask it to answer questions your child has about the world and how it works, and then discuss the answers.

Want to join the conversation?

  • Owen
    I have been following how to bypass the regulations set on AI chatbots, getting them to say things that they were designed not to say, things that can be dangerous, bigoted, or creepy. Do you feel it is responsible for students and teachers to use these resources without knowing how it can go wrong?
    (14 votes)
    • Dave Travis (Khan Academy)
      From the author: Hi Owen! You bring up a great point. Khan Academy and OpenAI are doing our best to let students know all the ways in which AI-based chatbots can get things wrong, and the kinds of things they are good at, and not so good at. Headline-grabbing "jailbreaks" of AI tools were sensational in December '22 and Jan '23, but we do wonder if they will continue to hold as much public interest. Any tool can be misused, and LLM-based generative AI is no different.

      While we hope our community will learn more about these tools, and focus on the positive use cases of AI, we're committed to setting up guardrails so that any AI tool that Khan Academy produces will be safe and trustworthy.

      We hope that our community will continue to hold us accountable.
      (27 votes)
  • jhill
    Can we create an LLM for education that only allows certain capabilities, or certain prompts? (i.e., "Help me research polar bears" as opposed to "Write a report about polar bears.")
    (5 votes)
    • Evan Lewis
      This is something that Khan Academy has been exploring with Khanmigo's student vs teacher mode. Creating an entirely new LLM for education would be very difficult, but this same effect can be achieved through creative prompting.

      When a user is in student mode, we prompt Khanmigo behind the scenes to ensure it does not simply provide students with an answer, but rather asks the student to explain their thought process and walks them through the problem step-by-step.
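
      To make that concrete, here is a minimal sketch (in Python, using the OpenAI chat API) of how tutor-like behavior can be layered onto a general-purpose model through a system prompt. The prompt wording and model name below are illustrative assumptions, not Khanmigo's actual configuration.

      ```python
      # Illustrative sketch of a "student mode" system prompt.
      # The prompt text and model name are assumptions for illustration only.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      TUTOR_PROMPT = (
          "You are a patient tutor. Never give the student the final answer. "
          "First ask the student to explain their thinking, then guide them "
          "through the problem step-by-step with hints and questions."
      )

      def tutor_reply(student_message: str) -> str:
          """Send the student's message with the tutoring rules prepended."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # any chat-capable model would do
              messages=[
                  {"role": "system", "content": TUTOR_PROMPT},
                  {"role": "user", "content": student_message},
              ],
          )
          return response.choices[0].message.content

      print(tutor_reply("What is 3/4 + 1/6? Just give me the answer."))
      ```

      The key design point is that the same underlying model behaves very differently depending on the hidden instructions sent along with each student message.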
      (7 votes)
  • codeMaster - #Opes FTW
    Why have I seen a lot of new AI chatbots on different websites being used for things lately? It seems like they are all just popping up now.
    (4 votes)
    • Overcomer
      Truly, it is because of ChatGPT. Once ChatGPT started interacting with the public, the media took ahold of it and ran. From then, other companies decided, "Hey, it might be a good idea if we got on this trend, else we be left in the dust!" From there, companies started racing to see who could integrate the AI into their products faster. We don't know where it will go from here, but we can only hope it goes in a good direction.
      (7 votes)
  • georgie2009
    When is a program considered AI?
    For example, what if a program used a data set of hours spent reading weekly and corresponding scores on a language arts final? The average number of hours spent reading is input, and the program comes up with an estimate for the score on the final (by training on the data and then calculating the percent error). Would this be an AI program?
    (3 votes)
    • HSstudent16
      The term "Artificial Intelligence" is very broad, at least to programmers.

      Some people would say no, since the program does not mimic human activities. Others would say yes, since the program makes an informed decision based on input.
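
      For what it's worth, the program you describe is essentially a one-variable linear regression, a classic technique often grouped under machine learning. Here is a minimal sketch in Python; the data points are made up for illustration:

      ```python
      # Fit a line to (hours read per week, final score) pairs, then
      # estimate a score for a new input. Data is made up for illustration.
      hours = [2.0, 4.0, 5.0, 7.0, 9.0]
      scores = [61.0, 70.0, 74.0, 83.0, 90.0]

      n = len(hours)
      mean_h = sum(hours) / n
      mean_s = sum(scores) / n

      # Least-squares slope and intercept
      num = sum((h - mean_h) * (s - mean_s) for h, s in zip(hours, scores))
      den = sum((h - mean_h) ** 2 for h in hours)
      slope = num / den
      intercept = mean_s - slope * mean_h

      def predict(h: float) -> float:
          """Estimate a final score from weekly reading hours."""
          return intercept + slope * h

      # "Percent error" on the training data, as the question describes
      errors = [abs(predict(h) - s) / s * 100 for h, s in zip(hours, scores)]
      print(f"predicted score for 6 h/week: {predict(6.0):.1f}")
      print(f"mean percent error: {sum(errors) / len(errors):.1f}%")
      ```

      Whether you call this "AI" or just "statistics" is exactly the definitional question you raise.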
      (5 votes)
  • 25PetersonG
    How can I make AGI?
    (2 votes)
  • nazm6593
    How many dimensions are in our universe?
    (3 votes)
  • lianmos1
    Can we trust AI in the future or will it malfunction and pose a threat to humanity? It's a thought-provoking question that many of us have been pondering. I would love to hear different perspectives on this topic. If you're reading this, feel free to share your predictions on whether we should be worried about AI turning against us or not.
    (2 votes)
  • hannah.foister1
    I thought AI was kinda dangerous and we weren't supposed to interact with it. Are you saying we're allowed/supposed to use it for school? Or just learn about it?
    (2 votes)
  • florence.i.chukwurah
    Since AI can give you wrong answers, why do they use it in schools and in the medical field? Parents and teachers need to know how to control how their kids use the material.
    (1 vote)
    • miranda austin
      As a teacher, I can tell you chatbots are a great asset. The fact that they may lie forces students to investigate further, and the more they investigate, the more they learn and the more interested and invested they become in the topic. I also encourage students to collaborate with the AI, going back to the chatbot to revise, rewrite, and take the next steps.
      (3 votes)
  • Nipun
    Will AI take over manual education? We are using AI in every field; does this mean that in the future there will be no teachers, only chatbots?
    (2 votes)