
Video transcript

When you jump over a puddle, climb a tree, or play a game of basketball, you're doing things that no robot can currently do. You're outperforming the work of thousands of scientists, companies, and even governments. Now, for you and me, moving around's really easy, but for a robot to control its motion, it's much harder. If we could somehow copy our motor control in robots, then we could save lives by replacing humans on hazardous jobs like firefighting, performing a spacewalk, or even cleaning up the meltdown at the Fukushima nuclear reactor.

If I want to grab this drill, I just pick it up - there's no thinking involved. But why is that? It's because my subconscious mind is recruiting billions of brain cells to do all the complex calculations involved for me. When I decide to pick up the drill, I first rely on 30-50% of my brain to subconsciously process the signals from my eyes. Now, in the front part of my brain, I consciously whip up a plan to reach my arm out towards the drill, touch the drill, grasp it, and then bring the drill back to my body. My brain subconsciously transforms these vague commands, first to basic motor plans in my premotor cortex, then to more precise muscular instructions in my motor cortex. Finally, my cerebellum perfectly coordinates activation patterns that get passed to the brainstem and sent out to all of my muscles. Once I take my first step, my brain is constantly comparing the plan with the real-time situation and adapting to what I'm sensing, even if I'm not fully aware of it.

Now you can start to see why it's so hard to get robots to do these complex tasks. It's not as easy as just controlling a motor - you have to plan and perfectly adjust all of the motors together. With robots that have wheels or treads, we don't have to worry about balance, since we always have a stable base. But if one of these robots comes across stairs or a ladder, its mission can come to an early end.
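That "constantly comparing the plan with the real-time situation" idea is the core of feedback control. Here is a minimal sketch in Python - not Helios' actual controller, just a toy proportional-feedback loop with hypothetical numbers - showing how repeatedly measuring the gap between the plan and reality, then correcting a fraction of it, steers a system toward its target:

```python
# A toy sense-compare-adapt loop (proportional feedback), not a real
# robot controller. All values here are hypothetical illustrations.

def feedback_step(target, current, gain=0.5):
    """One cycle: sense the error between plan and reality, correct a bit."""
    error = target - current          # compare the plan with the real situation
    return current + gain * error     # adapt: move partway toward the target

def run_controller(target, start, steps=20, gain=0.5):
    """Repeat the sense/adapt cycle; the state settles near the target."""
    state = start
    for _ in range(steps):
        state = feedback_step(target, state, gain)
    return state

if __name__ == "__main__":
    # Hypothetical joint angle in degrees: planned 90, currently 0.
    print(round(run_controller(90.0, 0.0), 3))  # ends up very close to 90.0
```

A real biped runs loops like this hundreds of times per second, per joint, while also handling balance and sensor noise - which is why "just controlling a motor" barely scratches the surface.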
That's why we're working on bipedal robots - robots that have two legs. Though they're harder to control, they're more adaptable and better suited to work in environments that were made for humans. I'm here with a bipedal Atlas robot, made by Boston Dynamics, that the MIT team is using to compete in the 2013 DARPA Robotics Challenge. You can call him Helios.

Now, MIT is working really hard to develop Helios' brain, which is really a series of algorithms - steps Helios can follow to turn his sensor data into control of each of his 36 joints and motors. Without this brain, these algorithms, in place, Helios is really just a very expensive, 330-pound pile of metal. The MIT team is working hard on algorithms that improve Helios' ability to walk on difficult terrain. Currently, a human operator must send commands to Helios wirelessly from a computer. The operator looks at the sensor readouts from Helios and decides what basic actions the robot should take. This is sort of like how my frontal lobe took the visual input from the back of my brain and made basic plans for my motor system to carry out. Helios' operator makes these decisions for him, but once the commands are sent to Helios, it's up to the robot to execute them himself. So Helios' brain is kind of a hybrid - it's part human and part computer.

But why do we even need our human operator? Well, that's because, as difficult as motor control is, motor planning is even harder. It takes a lot of knowledge about your environment to translate your goals into plans. To us humans, it's really obvious how to pick up this drill, but that's because we already know so much about it - we know that it's solid, we know about how much it weighs, that it's being held up by the table, and we know to pick it up from the handle. But programming all of this information - and so much more - into Helios for every conceivable object would be impossible.
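The hybrid "part human, part computer" split can be sketched as a tiny program - every name and number below is hypothetical, not MIT's actual interface - where the human side only names a high-level action, and the robot side expands it into low-level joint targets on its own:

```python
# A toy sketch of shared autonomy: the operator picks an action,
# the robot translates it into joint commands. All names and angles
# are made up for illustration.

# Hypothetical lookup: high-level action -> sequence of (joint, angle) targets.
ACTION_PLANS = {
    "step_forward": [("left_hip", 20), ("left_knee", 40), ("left_ankle", -10)],
    "reach_out":    [("shoulder", 70), ("elbow", 30), ("wrist", 0)],
}

def operator_command(action):
    """The human side: just choose and send an action name."""
    if action not in ACTION_PLANS:
        raise ValueError(f"unknown action: {action}")
    return action

def robot_execute(action):
    """The robot side: expand the command into per-joint targets."""
    return [f"set {joint} to {angle} deg" for joint, angle in ACTION_PLANS[action]]

if __name__ == "__main__":
    cmd = operator_command("step_forward")
    for line in robot_execute(cmd):
        print(line)
```

The point of the design is the division of labor: the hard part that needs world knowledge (choosing the action) stays with the human, while the part that's pure motor control (executing it) runs on the robot.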
However, some labs are working on other robots that can learn this information slowly from their environments, like we do when we're infants or toddlers. For Helios to perform a task completely on his own, his creators have to answer one more question - how do you adapt to an ever-changing environment? If I go to pick up this drill, but someone takes it before I can get there, I can run after them and take it right back. But if I were to take this drill from Helios, he wouldn't know what to do without his human operator. It takes 30-50% of our brainpower just to keep up with all the sensory information from our eyes, so you can imagine how challenging it would be to program a robot to continuously pay attention to his surroundings and adapt to them.

Fully autonomous robots are on their way, but partially autonomous robots, like Helios, can still be very helpful while we're waiting. Maybe one day soon, firefighters will be controlling robots that can carry people from burning buildings, and astronauts might even do their spacewalks remotely, controlling a robot like Helios from Earth.