
A new chapter in education

How might AI be used as a tool to improve educational outcomes?

How do students learn?

We know a lot about how students learn. Students learn more when they:
  • are actively engaged with the material to be learned
  • get immediate feedback on their responses to new material being learned
  • work on material that is just at the edge of what they can do when provided with a little support
  • see value in what they are learning
All of these things are hard to manage consistently in classrooms with 25-30 students and one teacher. The teacher simply can't provide each student with the individual attention they need.
Since personal computers became widely available, educational technology has tried to fill this gap. But these solutions have struggled to act as true tutors, in part because they could not easily consume and produce open-ended responses in natural language.
In other words, it's difficult for teachers—and it has been difficult for computers—to provide consistently rich, personalized, insightful, actionable feedback to student writing on a large scale.
That is changing.

What will large language models mean for student learning?

Large language models have arrived—and they offer the potential to change how learners interact with educational technology.
We believe that activities in which students interact with LLMs as collaborators and thought partners could hold the key to a new generation of engaging and efficacious online learning experiences!
We know learning outcomes improve through 1) active engagement, 2) immediate feedback, 3) working at a personal learning edge, and 4) perceived value. LLMs can promote each of these. For example, they can:
  • Leading students through the steps of the writing process while offering rubric-based feedback on drafts
  • Encouraging students to check their understanding of procedural and application-based skills and tasks
  • Engaging with individual students on deeper learning questions like why, what if, and how
  • Helping students link what they are learning to their goals, their lives, and the things they are interested in
These aren’t wishful thinking or “maybe someday” ideas; they are things we can ask these models to do now. However, we need to understand how the models act in the real world, with real students, and we'll need rigorous evidence that they can improve engagement and learning outcomes.
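As a concrete illustration of the first activity above (rubric-based feedback on a draft), here is a minimal sketch. The rubric, the `build_feedback_prompt` helper, and the `complete()` placeholder are all hypothetical; `complete()` stands in for a call to whatever chat-model API an implementation would actually use.

```python
# Sketch: rubric-based feedback on a student draft.
# complete() is a placeholder for a real chat-model API call.

RUBRIC = [
    "Clear thesis statement",
    "Evidence supports each claim",
    "Logical paragraph order",
]

def build_feedback_prompt(draft: str, rubric: list[str]) -> str:
    """Assemble a tutoring prompt asking for one note per rubric criterion."""
    criteria = "\n".join(f"- {c}" for c in rubric)
    return (
        "You are a writing tutor. Give encouraging, specific feedback "
        "on the draft below, one note per rubric criterion. "
        "Ask a guiding question rather than rewriting for the student.\n\n"
        f"Rubric:\n{criteria}\n\nDraft:\n{draft}"
    )

def complete(prompt: str) -> str:
    # Placeholder: swap in a call to your LLM provider of choice.
    return f"(model feedback for a {len(prompt)}-character prompt)"

feedback = complete(build_feedback_prompt("My essay about rivers...", RUBRIC))
```

The point of the sketch is the shape of the activity, not the wording: the rubric stays fixed and visible to the model, the draft changes per student, and the system prompt asks for guiding questions rather than rewrites, which keeps the student doing the thinking.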
We should be asking these questions of any educational technology—and we can also help find the answers together.
Onward!

Want to join the conversation?

  • ꒰ °P4STELB0MB 。꒱ ⊹˚.⋆
    I'm interested in learning if students, when faced with educational challenges, might rely on AI instead of their critical thinking. Therefore, AI might decrease the ability for a student to learn from their experiences. I hope to better understand Khan Academy's stance on these issues and mean no disrespect.

    Doesn't Khan Academy promote learning from mistakes, given that its motto is "You can learn anything"? Therefore, if a student allows AI to think for them, does AI hinder creativity in education? Does it disallow productivity to learn and grow from mistakes? Again, I mean no disrespect to the team at Khan Academy, instead I seek clarity on this subject.
    (14 votes)
    • Dave Travis (Khan Academy)
      From the author: Hi pastelbomb! This is a great question, and one I think all teachers and students should be asking themselves. At Khan Academy, we're working hard to create a learning experience in which any AI tool we develop promotes critical thinking and inquiry, and serves as a personal tutor who encourages students to think more deeply and broadly, not less. That includes letting students make mistakes and helping them learn from them.
      (9 votes)
  • Brennan Caverhill
    This is phenomenal. I am a grade 6/7 teacher with 30 students in my class. Khan Academy is my main math tool. When will this be available in Canada?
    (9 votes)
  • Docedward
    Can you explain where in the structure of AI Large Language Models sit? We have Artificial Intelligence, and Machine Learning is one way of developing AI. We have Neural Networks, which are one type of model for machine learning. Where does an LLM sit? And how is that distinct from Generative AI?
    (4 votes)
    • Lester
      Let's break this down like we're building a tower of toy blocks:

      Artificial Intelligence (AI): The big base block. This is the general field that aims to create machines that can perform tasks requiring human-like intelligence, like understanding language, recognizing patterns, and making decisions.

      Machine Learning (ML): A smaller block on top of the AI block. This is a subset of AI where we teach machines to learn from data, so they can make decisions or predictions on their own.

      Neural Networks: An even smaller block on top of the ML block. These are specific algorithms in machine learning that are inspired by the structure of the human brain. They are good for complex tasks like image and speech recognition.

      Large Language Models (LLMs): A tiny block sitting on the Neural Network block. These are specific types of neural networks designed to understand and generate human language. Models like GPT-4 (the one you're talking to) fall into this category.

      So, if you look at the tower, Large Language Models sit at the top but are supported by all the blocks beneath them: they are a specialized type of neural network, which is a category of machine learning algorithms, which themselves are a part of the broader field of artificial intelligence.

      Generative AI is a type of AI that can generate data that resembles some input data. So, Large Language Models are a specific type of Generative AI focused on text. They're like a little corner in the bigger playground that is Generative AI.
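      The nesting described above can be written down as data. This is a toy sketch of the block tower, not an official taxonomy; the `HIERARCHY` list and `is_within` helper are illustrative names.

```python
# Toy model of the "tower of blocks": each field in the list
# is a subset (a smaller block) of the field before it.
HIERARCHY = [
    "Artificial Intelligence",
    "Machine Learning",
    "Neural Networks",
    "Large Language Models",
]

def is_within(narrow: str, broad: str) -> bool:
    """True if `narrow` sits at or below `broad` in the tower."""
    return HIERARCHY.index(narrow) >= HIERARCHY.index(broad)

is_within("Large Language Models", "Artificial Intelligence")  # True
is_within("Neural Networks", "Machine Learning")               # True
```

      Generative AI does not fit into this single stack; it cuts across it as a capability (generating data that resembles the training data), which is why LLMs are described separately as the text-focused corner of Generative AI.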

      Hope this helps!

      Source: Been working as a data scientist for a few years now and have taught AI at Northwestern University.
      (6 votes)
  • florence.i.chukwurah
    When students interact with LLMs as collaborators, a lot can be achieved: they gain online tutors that can condense large amounts of text into a more concise format, quickly generate summaries of long documents, and answer their questions.
    (6 votes)
  • akaminsky
    What do we do about AI and plagiarism?
    (3 votes)
  • 7jrs7
    How is it guaranteed (or approximately so) that a chat box application of AI or even an educational platform that uses AI is not engineered with a bias, such as that exhibited just recently by Twitter et al. in regard to Progressive ideas and content, with more traditional or conservative ideas shadow banned, etc.?
    (2 votes)
    • Dave Travis (Khan Academy)
      Hi James! I can say that OpenAI and other developers of LLMs are aware of the issue of potential bias in the training data, and before releasing any new model they generally perform extensive testing and fine-tuning to mitigate potential harm. But you're right: an LLM is only as free from bias as its training data. Mindful, responsible developers (e.g., OpenAI) therefore train their LLMs on a data diet that is as balanced as possible.
      (3 votes)
  • jessica gregg
    What do you mean by "personal learning edge"?
    (3 votes)
  • w.b.chance
    The evolution of language has been human-directed, matching what we could grasp at the time (e.g., heliocentric verbiage) and what we experience in our world (the Inuit have many words for snow). In essence, we speak what we need to grasp or are capable of perceiving. Language has been a force of dominance and culture; the Greeks, for example, grew powerful as they implemented vowels. Are we not in danger of the language of technological AI taking over, with language no longer being a human construct? Are we not in danger of developing a language beyond our grasp?
    (2 votes)
    • ottem_eric
      If AI starts spouting about things not interesting, germane, or comprehensible to Eskimos or anyone else... in a language or vernacular that is not accessible to the human ear or mind, we probably won't notice. Like a preschooler (or most undergraduates) in a symposium on metaphysics... like my wife if she turns on the TV to the sports channel I was watching yesterday... we'll just change the channel. If AI takes over every channel, I guess we can all just go outside and start renaming the flora and fauna whatever we like.
      (2 votes)
  • gajewskib
    Very interested in teaching this. I have 7th grade through 12th. Any idea when this would be available?
    (2 votes)
  • Blake O’Lavin
    Can AI lay out the steps of a process that leads to a goal or procedure?
    (2 votes)