Neural Networks: How Do Robots Teach Themselves?

Hi everyone! Welcome to the new Seeker Elements set. We’re still going to be covering all of
the mind-bogglingly awesome discoveries in the science and tech world, we’ve just got
a fun new way to do it. And what better way to kick things off than
with an update from the field of machine learning and robotics. If you’ve ever seen a baby kick and squirm
involuntarily or watched a small child clumsily try to do a complex physical task, you know
that it takes humans years to develop fine motor skills. So it makes sense that intuitive, fluid motions
are hard to get robots to do on their own…up until now. And get this–the robots are teaching themselves.

A new robotic demonstration from a company
called OpenAI uses something called machine learning, specifically a neural network, to
allow this robotic hand to perform a complicated series of independent object manipulations. That means the motions you’re seeing here? The robot is doing that by itself, without
any input from or control by a human, and without any DIRECT programming to perform
each action.

But first–what is machine learning? Machine learning is a subset of artificial
intelligence. It’s getting computers to perform tasks
without being explicitly programmed to do them. Take one of the most advanced robots we have
today, a robot that helps us perform surgeries. This is a traditionally programmed robot
that has to be explicitly told what to do every time. The programmer has to write: “if this happens,
the machine will do that”, for every step of that robot’s action. For tasks where that would be prohibitively
time-intensive, machine learning algorithms can be used instead. These are algorithms that you can expose to
vast quantities of data, from which they can ‘learn’ certain criteria and identify
patterns.

So how is this applied in something like the
robotic hand from OpenAI? In this situation, the main data sets are all
the different positions of the hand and the block. But the combination of all of these possibilities
gives us way too many options for the robot to practice in real life so that it can ‘learn’
each one. So instead, the researchers used a massive
amount of computing power to simulate training the hand –they designed a virtual space in
which the robot could experience myriad hand and block positions at an accelerated pace
inside a computer model. The team estimates they exposed the robotic
hand to about 100 years of trial-and-error experience in just 50 hours of simulation.
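That kind of accelerated, simulated trial and error can be sketched in a few lines of Python. This is a toy stand-in, not OpenAI’s actual setup: here the ‘robot’ just has to learn how hard to twist a block to reach a made-up target orientation, and the simulator’s ‘physics’ (the 1.8 factor) and target angle are invented for illustration.

```python
import random

random.seed(0)  # reproducible practice runs

TARGET_ANGLE = 90.0  # hypothetical goal: block rotated 90 degrees

def simulate_twist(strength):
    """Stand-in 'physics': the block turns in proportion to twist strength."""
    return strength * 1.8  # hidden dynamics the learner must discover

def error(strength):
    """How far a given twist leaves the block from the target."""
    return abs(simulate_twist(strength) - TARGET_ANGLE)

# Trial and error at simulated speed: try thousands of small variations
# and keep whichever twist gets closer to the goal.
best = random.uniform(0.0, 100.0)
for trial in range(10_000):
    candidate = best + random.uniform(-5.0, 5.0)
    if error(candidate) < error(best):
        best = candidate

print(round(best, 1))  # should land near 50.0, since 50 * 1.8 == 90
```

Ten thousand ‘practice’ episodes run in a fraction of a second here–which is the same reason the real hand could bank a century of experience in about 50 hours of simulation.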
In addition to letting the hand ‘practice’, the researchers also randomized some aspects
of the simulation, variables like the size and slipperiness of the block and even the
force of gravity. While the simulation couldn’t reproduce
everything the robot would encounter when handling the cube in the ‘real world’,
these variables made it more likely that the simulation practice would be useful in real-world
conditions.

Because of all the variables involved, the
research team used a kind of machine learning algorithm with a ‘memory’–based loosely
on the way human memory works–making this particular algorithm a neural network: a kind
of machine learning loosely modeled on the human brain and its logic structures.

They then transferred all of this ‘learned’
information to the real-life robotic hand, which is equipped with a set of cameras that
can estimate the object’s position and orientation. The end result? Simply ask the hand to manipulate the object
in a certain way–say, to reorient the block with its purple side up–and it’ll do it. You ask for an outcome, and the robot can
provide that with no further input from you because it taught itself the series of motions
it needed to get there. As you can see, this robot developed motions
you may recognize from your own hands, just by learning which motions were most efficient
and effective at moving the block without dropping it, using only input from visual stimuli
and the hand’s joint sensors.

Other experts in the field of robotics and
machine learning state that while this example is exciting and comparatively elegant, it’s
not necessarily new. It’s also still quite limited to an object
of convenient size in a hand that’s facing up, so it’s not dealing with as many challenges
as a machine-learning robot being asked to complete a task that would be useful in, say,
an assembly line.

So while robotics is still catching up to
human capabilities, it is making strides. This work from OpenAI shows us that developing
machine learning algorithms and neural networks could help us make more precise and dexterous
robots that not only help us with the things we don’t want to do or can’t do–but teach
themselves how to do it.

For more on AI, subscribe to Seeker, and check
out this video here about how AI is being used in the real world to monitor your data.

And fun fact, some machine learning algorithms
are also teaching themselves as they go. As they complete the task they taught themselves
to do, they’re learning from their mistakes and evolving their own algorithm to make it
more accurate–with some small guidance from human programmers. Thanks for watching Seeker.
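That ‘learning from mistakes’ loop from the fun fact can also be sketched in miniature. The numbers below are made up for illustration–this is the general shape of the idea, not any real system’s code. A tiny model guesses, measures how wrong it was, and nudges itself in the opposite direction:

```python
# Made-up data following the hidden rule y = 2x + 3, which the
# model has to discover for itself.
samples = [(1.0, 5.0), (2.0, 7.0), (3.0, 9.0)]

w, b = 0.0, 0.0   # the model's adjustable knobs -- its 'algorithm'
lr = 0.05         # how strongly each mistake corrects the model

for epoch in range(2000):
    for x, y in samples:
        mistake = (w * x + b) - y   # how wrong was this guess?
        w -= lr * mistake * x       # nudge the knobs to shrink the error
        b -= lr * mistake

print(round(w, 2), round(b, 2))  # should recover values near 2 and 3
```

Every pass through the data is one round of learning from mistakes: the bigger the error, the bigger the correction. Scaled up enormously, that same basic loop is how a neural network refines itself.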

