Stanford's 'autonomous' helicopters teach themselves to fly

Sep 01, 2008
Computer Science Professor Andrew Ng (center) and his graduate students Pieter Abbeel (left) and Adam Coates have developed an artificial intelligence system that enables these helicopters to perform difficult aerobatic stunts on their own. The “autonomous” helicopters teach themselves to fly by watching the maneuvers of a radio control helicopter flown by a human pilot.

Stanford computer scientists have developed an artificial intelligence system that enables robotic helicopters to teach themselves to fly difficult stunts by watching other helicopters perform the same maneuvers. The result is an autonomous helicopter that can perform a complete airshow of complex tricks on its own.

The stunts are "by far the most difficult aerobatic maneuvers flown by any computer controlled helicopter," said Andrew Ng, the professor directing the research of graduate students Pieter Abbeel, Adam Coates, Timothy Hunter and Morgan Quigley.

The dazzling airshow is an important demonstration of "apprenticeship learning," in which robots learn by observing an expert, rather than by having software engineers peck away at their keyboards in an attempt to write instructions from scratch.

Stanford's artificial intelligence system learned how to fly by "watching" the four-foot-long helicopters flown by expert radio control pilot Garett Oku. "Garett can pick up any helicopter, even ones he's never seen, and go fly amazing aerobatics. So the question for us is always, why can't computers do things like this?" Coates said.

Computers can, it turns out. On a recent morning in an empty field at the edge of campus, Abbeel and Coates sent up one of their helicopters to demonstrate autonomous flight. The aircraft, brightly painted Stanford red, is an off-the-shelf radio control helicopter, with instrumentation added by the researchers.

For five minutes, the chopper, on its own, ran through a dizzying series of stunts beyond the capabilities of a full-scale piloted helicopter and other autonomous remote control helicopters. The artificial-intelligence helicopter performed a smorgasbord of difficult maneuvers: traveling flips, rolls, loops with pirouettes, stall-turns with pirouettes, a knife-edge, an Immelmann, a slapper, an inverted tail slide and a hurricane, described as a "fast backward funnel."

The pièce de résistance may have been the "tic toc," in which the helicopter, while pointed straight up, hovers with a side-to-side motion as if it were the pendulum of an upside down clock.

"I think the range of maneuvers they can do is by far the largest" in the autonomous helicopter field, said Eric Feron, a Georgia Tech aeronautics and astronautics professor who worked on autonomous helicopters while at MIT. "But what's more impressive is the technology that underlies this work. In a way, the machine teaches itself how to do this by watching an expert pilot fly. This is amazing."

Writing software for robotic helicopters is a daunting task, in part because the craft itself, unlike an airplane, is inherently unstable. "The helicopter doesn't want to fly. It always wants to just tip over and crash," said Oku, the pilot.

To scientists, a helicopter in flight is an "unstable system" that comes unglued without constant input. Abbeel compares flying a helicopter to balancing a long pole in the palm of your hand: "If you don't provide feedback, it will crash."
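
To make the pole-balancing analogy concrete, here is a minimal sketch of feedback stabilizing a toy unstable system. The dynamics, gain and disturbance numbers are invented purely for illustration; this is not the controller used on the Stanford helicopters.

```python
# Toy illustration of why an unstable system needs feedback.
# Model: x[t+1] = (1 + a) * x[t] + u[t] + gust, with a > 0, so any small
# error grows on its own unless the control input u pushes back against it.
import random

def simulate(feedback_gain, steps=50, a=0.2, seed=0):
    random.seed(seed)
    x = 0.1                                  # small initial tilt (arbitrary units)
    for _ in range(steps):
        gust = random.uniform(-0.02, 0.02)   # small random disturbance
        u = -feedback_gain * x               # proportional correction
        x = (1 + a) * x + u + gust
    return x

print("no feedback, final error:  ", simulate(feedback_gain=0.0))   # blows up
print("with feedback, final error:", simulate(feedback_gain=0.3))   # stays small
```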

Early in their research, Abbeel and Coates attempted to write computer code that would specify the commands for the desired trajectory of a helicopter flying a specific maneuver. While this hand-coded approach succeeded with novice-level flips and rolls, it flopped with the complex tic toc.

It might seem that an autonomous helicopter could fly stunts by simply replaying the exact finger movements of an expert pilot using the joysticks on the helicopter's remote controller. That approach, however, is doomed to failure because of uncontrollable variables such as gusting winds.
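
A toy simulation of why blind replay fails, using similar invented dynamics to the sketch above: feeding the identical recorded input sequence into an unstable system under slightly different gusts ends up somewhere different every run, because nothing corrects the accumulating error.

```python
# Why replaying recorded stick inputs fails: with no feedback, small random
# gusts get amplified by the unstable dynamics, so the same input sequence
# (here, a made-up "hold position" recording of all-zero commands) ends up
# somewhere different, and far from the intended spot, on every run.
import random

recorded_inputs = [0.0] * 60   # the pilot's recording for hovering in place

def replay(seed, a=0.1):
    random.seed(seed)
    x = 0.0
    for u in recorded_inputs:
        gust = random.uniform(-0.02, 0.02)
        x = (1 + a) * x + u + gust     # open loop: u ignores the actual state
    return x

print([round(replay(seed), 2) for seed in range(5)])  # five runs, five different end positions
```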

When the Stanford researchers decided their autonomous helicopter should be capable of flying airshow stunts, they realized that even defining their goal was difficult. What is the formal specification for "flying well"? The answer, it turned out, was that "flying well" is whatever an expert radio control pilot does at an airshow.

So the researchers had Oku and other pilots fly entire airshow routines while every movement of the helicopter was recorded. As Oku repeated a maneuver several times, the trajectory of the helicopter inevitably varied slightly with each flight. But the learning algorithms created by Ng's team were able to discern the ideal trajectory the pilot was seeking. Thus the autonomous helicopter learned to fly the routine better—and more consistently—than Oku himself.
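
As a rough illustration of that idea (and only that; the team's actual learning algorithm also has to align the demonstrations in time and model the helicopter's dynamics), the sketch below simply averages several noisy, already time-aligned demonstrations to recover an intended trajectory.

```python
# Toy version of "discerning the ideal trajectory": given several noisy,
# time-aligned demonstrations of the same maneuver, average them pointwise.
# The averaged estimate is typically closer to the intended path than any
# single demonstration is.
import random

def make_demo(intended, noise=0.1, seed=0):
    """Simulate one pilot demonstration: intended trajectory plus jitter."""
    random.seed(seed)
    return [p + random.gauss(0.0, noise) for p in intended]

def estimate_intended(demos):
    """Pointwise mean across demonstrations of equal length."""
    return [sum(points) / len(points) for points in zip(*demos)]

intended = [i * 0.1 for i in range(20)]            # made-up reference path
demos = [make_demo(intended, seed=s) for s in range(5)]
estimate = estimate_intended(demos)

def max_error(traj):
    return max(abs(a - b) for a, b in zip(traj, intended))

print("worst single-demo error:", round(max(max_error(d) for d in demos), 3))
print("averaged-estimate error:", round(max_error(estimate), 3))
```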

During a flight, some of the necessary instrumentation is mounted on the helicopter, some on the ground. Together, they continuously monitor the position, direction, orientation, velocity, acceleration and spin of the helicopter in several dimensions. A ground-based computer crunches the data, makes quick calculations and beams new flight directions to the helicopter via radio 20 times per second.
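
The paragraph above describes a classic fixed-rate control loop. The sketch below shows one common way to structure such a loop at 20 Hz; `read_state`, `compute_command` and `send_command` are hypothetical placeholders, not functions from the Stanford system.

```python
# Sketch of a fixed-rate control loop like the 20 Hz one described above.
import time

RATE_HZ = 20
PERIOD = 1.0 / RATE_HZ   # 50 ms per cycle

def control_loop(read_state, compute_command, send_command, cycles=200):
    next_deadline = time.monotonic()
    for _ in range(cycles):
        state = read_state()                 # position, attitude, velocities...
        command = compute_command(state)     # new flight directions
        send_command(command)                # beamed to the helicopter by radio
        next_deadline += PERIOD
        sleep_for = next_deadline - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)            # hold the loop at 20 Hz

# Example run with dummy callbacks (about one second of "flight"):
control_loop(lambda: 0.0, lambda s: 0.0, lambda c: None, cycles=20)
```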

The helicopter carries accelerometers, gyroscopes and magnetometers, the latter of which use the Earth's magnetic field to figure out which way the helicopter is pointed. The exact location of the craft is tracked either by a GPS receiver on the helicopter or by cameras on the ground. (With a larger helicopter, the entire navigation package could be airborne.)
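
The article does not say how these sensor readings are combined into a single state estimate; one standard textbook approach for a single attitude angle is a complementary filter, sketched below purely as an example of sensor fusion.

```python
# A complementary filter for one attitude angle: the gyro rate is accurate
# over short intervals but drifts, while the accelerometer's gravity-based
# angle is noisy but drift-free, so blend the two.
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate with the accelerometer-derived angle."""
    gyro_estimate = angle + gyro_rate * dt                     # short-term: integrate the gyro
    return alpha * gyro_estimate + (1 - alpha) * accel_angle   # long-term: trust the accelerometer

# Example: starting from 0 rad, a 0.1 rad/s roll rate over a 20 ms step,
# with the accelerometer reporting 0.01 rad.
angle = complementary_filter(0.0, 0.1, 0.01, dt=0.02)
print(round(angle, 5))   # ~0.00216 rad
```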

There is interest in using autonomous helicopters to search for land mines in war-torn areas or to map out the hot spots of California wildfires in real time, allowing firefighters to quickly move toward or away from them. Firefighters now must often act on information that is several hours old, Abbeel said.

"In order for us to trust helicopters in these sort of mission-critical applications, it's important that we have very robust, very reliable helicopter controllers that can fly maybe as well as the best human pilots in the world can," Ng said. Stanford's autonomous helicopters have taken a large step in that direction, he said.

Video: Stanford's robotic helicopter performs stunts

Provided by Stanford University

User comments: 14


ancible
3 / 5 (2) Sep 01, 2008
An impressive feat. In the lull between massive computing power/super efficient algorithms and now, these types of projects should provide real world cross-training of narrow AI and human experience. The lessons from these collaborations will surely be priceless.
Arikin
not rated yet Sep 01, 2008
Is the recorded flight done from a stationary camera on the ground or from the apprentice helicopter?

If done from the helicopter's point of view, this would be a wonderful addition to any ground-based robot as well.
Eco_R1
3 / 5 (2) Sep 02, 2008
put this tech. into a F-14 and let it learn how to defeat an enemy in a dog fight, then you have some serious hardware!!!
visual
not rated yet Sep 02, 2008
http://heli.stanford.edu/
the article is incomplete without this link...
why publish it like that?
DGBEACH
4 / 5 (1) Sep 02, 2008
put this tech. into a F-14 and let it learn how to defeat an enemy in a dog fight, then you have some serious hardware!!!


...why not into a car? It could serve as a driving aid for impaired drivers and maybe reduce the number of people killed by drunk drivers!
RigorMan
not rated yet Sep 02, 2008
I do accept that this is an important technology, but the article has a biased view.
Situation A: a machine follows a certain set of pre-written instructions (a code)
Situation B: a set of transducers transforms the movements of an object into coordinates that are written as a dataset, transferred to another machine, and read by code that is executed by the machine itself.

The machine in B is not better than the machine in A, and IT'S NOT LEARNING anything, it is just executing the instructions almost simultaneously with the writing machine (the man-run helicopter).

Thinking that they learn is absurd, they have just been well-instructed in a very short time period!

D666
1 / 5 (1) Sep 02, 2008
put this tech. into a F-14 and let it learn how to defeat an enemy in a dog fight, then you have some serious hardware!!!


They could make a movie out of this! Oh, wait...
superhuman
2 / 5 (1) Sep 02, 2008
This is not a true AI, it's simply following prerecorded flight routines with added corrections for wind and other variable conditions which might differ each time.

I think some of you misunderstood the 'watching' part, there was no camera, the computer 'watched' positions of remote controls (recorded as a set of digital values) as the human pilot was flying.
DMO
1 / 5 (1) Sep 02, 2008
Did no one else see the Terminator?
ancible
not rated yet Sep 02, 2008
superhuman: "This is not a true AI, its simply following a prerecorded flight routines with the added corrections for wind and other variable conditions which might differ each time."

That isn't quite what the article said: "But the learning algorithms created by Ng's team were able to discern the ideal trajectory the pilot was seeking. Thus the autonomous helicopter learned to fly the routine better—and more consistently—than Oku himself."

I assume also when you said "true AI" you meant an AGI (artificial general intelligence). But I would say that these types of systems more closely resemble autonomous systems/regions within our brain, such as our sense of balance or our ability to create sentences on the fly. We certainly don't calculate these things consciously, indeed, such actions largely arise to the conscious mind fully formed. This seems similar (I completely agree there is no awareness, btw), but I think this completely fits the term AI, though on an extremely low scale (perhaps on a level a little below a fly?).

ancible
3 / 5 (1) Sep 02, 2008
RigorMan "Thinking that they learn is absurd, they have just been well-instructed in a very short time period!"

If you mean learn and are self-conscious of the learning, I agree with you completely. However, it seems to me that the term "learn" (used in the sense of gaining knowledge previously withheld) fits the outcome the algorithms produced.

Again I refer to this piece of the article:
"But the learning algorithms created by Ng's team were able to discern the ideal trajectory the pilot was seeking. Thus the autonomous helicopter learned to fly the routine better%u2014and more consistently%u2014than Oku himself."

Going by that quote, it appears the algorithms were able to produce unique (through optimization) patterns of movement. A difference in degree, however, not in kind.

superhuman
not rated yet Sep 03, 2008
"But the learning algorithms created by Ng's team were able to discern the ideal trajectory the pilot was seeking. Thus the autonomous helicopter learned to fly the routine better and more consistently than Oku himself."


It's just a simple optimization which is not even done in real time. Basically if you consider it an AI then any algorithm calculating a least-squares fit to some dataset would also qualify as AI. (http://en.wikiped...squares)

I didn't mean general AI, I meant a regular one as in http://en.wikiped...lligence
Major AI textbooks define artificial intelligence as "the study and design of intelligent agents,"[1] where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[2] John McCarthy, who coined the term in 1956,[3] defines it as "the science and engineering of making intelligent machines."[4]


This definition is a bit lacking in my mind, as a simple washing machine (which perceives input water temperature and its progress through the washing program and makes decisions on this data so as to maximize its chances of success - accomplishing the washing program specifications) or any other piece of electronics qualifies, unless we consider that the actions have a certain level of sophistication and autonomy.

In my mind the helicopter would qualify as a true AI if you could tell it where to fly and it would pick the best route on its own, flying between buildings avoiding trees, electric cables and other obstacles.
It would also qualify if it made some other autonomous decisions based on its environment which went beyond calculating simple corrections for wind and such factors to the prerecorded flightpath. The main problem I have with it being called AI is that the flight is PRERECORDED and simply played back.

It would qualify as an AI if it could run the routine in a new, challenging, never before encountered environment - a sort of 3D maze for example. If it were able to decide in real time whether there is enough room for a given trick and if not where to fly to have enough room it would qualify as a true AI (the maze should be restricted vertically so simply going higher wouldn't suffice as that would make it trivial).

I think such an AI is the ultimate goal of the team described in this article and others in the field, they simply aren't there yet but its just a matter of time.

A "general AI" (as defined on the wiki page) is a completely different matter.
ancible
not rated yet Sep 03, 2008
Your point is clearer now.
If 'true AI' doesn't mean AGI (which you said isn't what you meant) but instead a level of real-world problem solving skill, we still only differ in level of complexity. I still consider it 'true' AI because it is 'a system that perceives its environment and takes actions which maximize its chances of success.' (from the wiki article) It is merely a very basic one.
Ah well, chalk it up to different interpretations.
helicopters
not rated yet Oct 09, 2008
The other problem is to not hit other flying objects. Does anyone remember the 2 news helicopters that crashed over Phoenix, AZ watching a car chase? These helicopters also have to be prepared for the unexpected, such as the best location to crash when you don't have a choice. I am impressed with how far they have come.

http://www.heline...ion.com/