Google: Driverless cars are mastering city streets (Update)

Apr 28, 2014 by Justin Pritchard
In this Sept. 25, 2012, file photo, Google co-founder Sergey Brin gestures after riding in a driverless car with officials to a bill signing for driverless cars at Google headquarters in Mountain View, Calif. Google engineers say they have turned a corner in their pursuit of creating a car that can drive itself. Test cars have been able to navigate freeways comfortably for a few years. On Monday, April 28, 2014, Google said the cars can now negotiate thousands of urban situations that would have stumped them a year or two ago. (AP Photo/Eric Risberg, File)

Google says that cars it is programming to drive themselves have started to master the navigation of city streets and the challenges they bring, from jaywalkers to weaving bicyclists—a critical milestone for any commercially available self-driving car technology.

Despite the progress over the past year, the cars have plenty of learning to do before 2017, when the Silicon Valley tech giant hopes to get "autonomous driving" technology to the public.

None of the traditional automakers has been so bullish. Instead, they have rolled out features incrementally, including technology that brakes and accelerates in stop-and-go traffic, or keeps cars in their lanes.

"I think the Google technology is great stuff. But I just don't see a quick pathway to the market," said David Alexander, a senior analyst with Navigant Research who specializes in autonomous vehicles.

His projection is that self-driving cars will not be commercially available until 2025.

Google Inc.'s self-driving cars already can navigate freeways comfortably, albeit with a driver ready to take control. In a new blog post, the project's leader said test cars now can handle thousands of urban situations that would have stumped them a year or two ago.

"We're growing more optimistic that we're heading toward an achievable goal—a vehicle that operates fully without human intervention," project director Chris Urmson wrote. The benefits would include fewer accidents, since in principle machines can drive more safely than people.

Urmson's post was the first official update since 2012 on a project that is part of the company's secretive Google X lab.

In initial iterations, human drivers would be expected to take control if the computer fails. The promise is that, eventually, there would be no need for a driver. Passengers could read, daydream, even sleep—or work—while the car drives.

That day is still years away, cautioned Navigant's Alexander.

He noted that Google's retrofitted Lexus RX450H SUVs have a small tower on the roof that uses lasers to map the surrounding area. Automakers want to hide that technology in a car's existing shape, he said. And even once cars are better than humans at driving, it will still take several years to get the technology from development to large-scale production.

Google has not said how it plans to market the technology. Options include collaborating with major carmakers or giving away the software, as the company did with its Android operating system.

While Google has the balance sheet to invest in making cars itself, that is unlikely, and certainly not in the 2017 time frame that Google co-founder Sergey Brin has laid out.

Urmson said in an interview that 2017 is "a pretty great time frame" for people living near Google's San Francisco Bay Area headquarters to expect to have access to the technology, but in what form remains to be seen.

Brin may be his boss, and Urmson wants to keep the boss happy, but he said safety will come first.

He added that he has another milestone in mind: his 10-year-old son will be able to get behind the wheel in about five years, and, knowing how teens drive, he'd like to see the technology available by 2019.

For now, Google is focused on the common, predictable challenges of city driving.

To deal with cyclists, engineers have taught the software to predict likely behavior based on thousands of real-life encounters, according to Google spokeswoman Courtney Hohne. The software plots the car's path accordingly—then reacts if something unexpected happens.
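
Google has not published details of that software, but the behavior Hohne describes (predict a likely trajectory, plan around it, replan when the observation diverges) can be sketched as a simple loop. The Python below is a minimal, hypothetical illustration; every name, threshold and the toy "planner" is invented here, not taken from Google:

    import math

    def predict(pos, vel, horizon_s=2.0):
        # Constant-velocity guess; a production system would use a
        # behavior model learned from thousands of recorded encounters.
        return (pos[0] + vel[0] * horizon_s, pos[1] + vel[1] * horizon_s)

    def plan_around(obstacle, lane_center=0.0, clearance_m=1.5):
        # Toy planner: shift laterally to keep clearance from the
        # predicted obstacle position.
        return lane_center + (clearance_m if obstacle[1] <= lane_center
                              else -clearance_m)

    def update(cyclist_pos, cyclist_vel, expected, path, replan_m=0.5):
        # React only when the cyclist strays from the prediction.
        if math.hypot(cyclist_pos[0] - expected[0],
                      cyclist_pos[1] - expected[1]) > replan_m:
            expected = predict(cyclist_pos, cyclist_vel)
            path = plan_around(expected)
        return expected, path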

Before recent breakthroughs, Google had contemplated mapping all the world's stop signs. Now the technology can read stop signs, including those held in the hands of school crossing guards, Hohne said.

While the car knows to stop, just when to start again is still a challenge, partly because the cars are programmed to drive defensively. At a four-way stop, Google's cars have been known to wait in place as other cars edge out into the intersection.
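
As a toy illustration of why a strictly defensive rule can stall, compare a policy that proceeds only when every other car is stationary with one that, after a patience timeout, creeps forward to signal intent the way human drivers do. The logic below is hypothetical, not Google's:

    import time

    def defensive_policy(others_moving):
        # Purely defensive: proceed only when no other car is moving.
        # If others keep edging out, this never fires and the car sits.
        return "proceed" if not others_moving else "wait"

    def policy_with_creep(others_moving, arrived_at_s, patience_s=4.0):
        # Hypothetical alternative: after waiting `patience_s` seconds,
        # inch forward to signal intent, then re-evaluate.
        if not others_moving:
            return "proceed"
        if time.monotonic() - arrived_at_s > patience_s:
            return "creep"
        return "wait"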

The cars still need human help with other problems, among them: understanding the gestures drivers give one another to signal that it's OK to merge or change lanes; turning right on red; and driving in rain or fog, which requires more sophisticated sensors.

To date, Google's cars have gone about 700,000 miles (1.1 million kilometers) in self-driving mode, the company said. Hohne said more than 10,000 miles (16,000 kilometers) have been on city streets; Urmson said it would not be easy to calculate the total with greater specificity.

This Wednesday, April 23, 2014 photo provided by Google shows the Google driverless car navigating along a street in Mountain View, Calif. The director of Google's self-driving car project wrote in a blog post Monday, April 28, that development of the technology has entered a new stage: trying to master driving on city streets. Many times more complex than freeways, which the cars can now reliably navigate, city streets represent a huge challenge. (AP Photo/Google)

Five things to know about Google's self-driving cars

The director of Google's self-driving car project wrote in a blog post Monday that development of the technology has entered a new stage: trying to master driving on city streets. Many times more complex than freeways, which the cars can now reliably navigate, city streets represent a huge challenge.

Here are five things to know about the cars, and their future.

MEAN STREETS

Google says its cars have now driven about 700,000 accident-free miles on freeways in "autonomous mode"—with the car in control, though a safety driver sits behind the wheel. That's the equivalent of about 120 San Francisco-to-Manhattan-to-San Francisco road trips.
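
That figure checks out under an assumed one-way driving distance of roughly 2,900 miles between San Francisco and Manhattan; the article does not state the route, so the distance here is an assumption:

    # Back-of-the-envelope check of the "about 120 round trips" figure.
    total_miles = 700_000
    one_way_miles = 2_900          # assumed SF-to-Manhattan distance
    print(round(total_miles / (2 * one_way_miles)))   # -> 121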

With that success, Google has been focusing on city driving for about the past year. Freeways are relatively simple for the cars—no blind corners, no cyclists and no pedestrians. City streets have all that and more, including intersections and complex interactions with other drivers, such as who goes first at a four-way stop sign.

___

TO-DO LIST

Google says that in the past year, the Lexus RX450H SUVs it has retrofitted with lasers, radar and cameras rapidly learned how to handle thousands of urban driving situations. The robot's vision can now "read" stop signs (rather than rely on a map to plot them out) and differentiate between hundreds of objects in real time. It also can negotiate construction zones much more reliably.
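
The shift from pre-mapped stop signs to signs read live suggests a simple fusion rule: treat either source as sufficient reason to stop. A hypothetical sketch (the labels are invented, and the real perception stack is surely richer):

    def should_stop(map_says_stop, detected_labels):
        # Either the prior map or live vision is enough to stop for.
        # "Reading" signs, including a crossing guard's handheld one,
        # is what Google says is new.
        return map_says_stop or bool(
            {"stop_sign", "handheld_stop_sign"} & set(detected_labels))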

But the technology is far from perfect. Improvements are needed in merging and lane changes, turning right on red and handling bad weather.

___

COMING TO A NEIGHBORHOOD NEAR YOU?

Not in the near future—unless you live in Mountain View, California, where Google is located. So far, the tech giant has focused its street driving on its hometown, parts of which it has mapped in tremendous detail. The mapping helps the car's computer make sense of its environment and focus on the moving parts—other cars, cyclists and pedestrians.
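
The point of all that mapping is that a detailed prior lets the software subtract the static world and spend its attention on whatever moves. A minimal sketch of that filtering step, with an invented point-set stand-in for Google's far richer map data:

    import math

    def moving_obstacles(detections, static_points, tolerance_m=0.3):
        # Keep only detections that no mapped static point explains.
        def explained(d):
            return any(math.hypot(d[0] - s[0], d[1] - s[1]) <= tolerance_m
                       for s in static_points)
        return [d for d in detections if not explained(d)]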

Just four states—California, Nevada, Florida and Michigan—and Washington, D.C., have formally opened public streets to testing of driverless cars, though testing is probably legal nearly everywhere, because it is not expressly banned.

___

THE FUTURE IS (NOT QUITE) HERE

In 2012, Google co-founder Sergey Brin predicted that the public would be able to get ahold of the technology within five years. Google isn't revising that date. Initially, drivers would be expected to take control if the computer fails. Eventually, the vision goes, there would be no need for a person in the driver's seat—or at least not a driver who has to watch the road.

___

GOOGLE, THE CARMAKER?

While Google has enough money to invest in making cars, that prospect is remote. More likely options include collaborating with major carmakers or giving away the software, as Google did with its Android operating system.

Meanwhile, traditional automakers are developing driverless cars of their own. Renault-Nissan's CEO said he hopes to deliver a model to the public by 2020.


User comments (9)

Eikka
Apr 29, 2014
To deal with cyclists, engineers have taught the software to predict likely behavior based on thousands of encounters...


Too bad they're still relying on big data correlation instead of cognition.

The AI research of today is a big magic show with smoke and mirrors to hide how essentially dumb the machines are, because they operate in ways that are recognizably not intelligent. At the moment, if we want a computer to perceive an umbrella, we have to train the machine with millions and millions of pictures of different umbrellas in different backgrounds and different lightings, open, closed, broken, upside down... it's a method of exhaustion that is inefficient and decidedly unintelligent.

Meanwhile a human will see a person holding a large rhubarb leaf by its stem over their heads and conclude "that's an umbrella" because we understand what an umbrella is instead of merely what one looks like.

That's why they have such an uphill battle to get the car to drive reliably.
Eikka
Apr 29, 2014
So in that sense, if a clown on a unicycle approaches the Google car, what will it do? Will it even understand that it is in fact a cyclist, or that while it is a cyclist, it's not going to behave in a way similar to bicyclists? Or what about people on tricycles, quadcycles, recumbent bikes, tandem bikes?

Can Google train the car for everything by teaching it in detail every single thing that it might encounter on the roads, from cats to garbage bags flying in the air that for a brief moment may appear like cats?

nowhere
Apr 29, 2014
Meanwhile a human will see a person holding a large rhubarb leaf by its stem over their heads and conclude "that's an umbrella" because we understand what an umbrella is instead of merely what one looks like.

How many years training, from birth, does it take before a human makes this recognition? Additionally the human training can't simply be copied to the next version.
Eikka
Apr 29, 2014
How many years training, from birth, does it take before a human makes this recognition? Additionally the human training can't simply be copied to the next version.


That's a red herring. Humans train to understand the object for its purpose and use, whereas the machine trains to simply recognize the picture without understanding of the object.

The machine goes from the particular to the general, whereas people go from the general to the particular. That's why people can more effectively recognize the object even when they're not trained to see that specific one.

For example, a person needs to see another's face a handful of times to recognize the person again. A machine needs to be trained with thousands of different pictures of the same face to achieve the same accuracy.
Eikka
Apr 29, 2014
To see where the problem is, consider what is an umbrella.

If you start by defining its shape and size and color et cetera, you get false results because there are many similar things that are not umbrellas. The more specific you make your description to deal with these false positives, the more different kinds of umbrellas you exclude, which leads to false negatives, so you have to memorize multiple different "also umbrella" definitions, which is taxing to your memory and still prone to error.

But if you start by defining an umbrella as an object that is suspended above a person to keep them from getting wet in the rain, you can more efficiently separate objects that might be umbrellas from what aren't. Then you can go into the specifics, like whether it's a rhubarb leaf or an actual umbrella.

But that requires a priori understanding of what things are supposed to do, instead of simply observing that "this blob of pixels is associated with this blob of pixels".
antialias_physorg
Apr 29, 2014
In initial iterations, human drivers would be expected to take control if the computer fails.

That's sort of a problem if the system is very good. The driver will expect the system to cope and will take control too late. Especially in split-second decision situations, you now have two decisions to make: is the software reacting appropriately? And how do you avoid the situation once you've decided that it isn't?

Cruise control users have had this problem when leaving highways: the automatic system accelerates back to its set speed, which may still be the highway speed - way too fast for the curve at the end of the off-ramp.

How many years training, from birth, does it take before a human makes this recognition?

That's less of a problem, because once a machine has learned how to do this (however long it took), the behavior can be copied to other machines. That's the idea behind the "web of robots".
nowhere
Apr 29, 2014
That's a red herring.

No it's not. You were comparing an untrained machine to a trained human. Rather, compare both at square one.

Humans train to understand the object for its purpose and use, whereas the machine trains to simply recognize the picture without understanding of the object.

Humans have better hardware and a larger sample size when training, so it is to be expected.

For example, a person needs to see another's face a handful of times to recognize the person again.

This being after the person is of sufficient age and therefore has had a lot more recognition training across a wider range.

A machine needs to be trained with thousands of different pictures of the same face to achieve the same accuracy.

Really? I was under the impression that a modern machine using an accurate 3d model rather than a 2d picture could very easily match or even exceed the accuracy of a human.
dbsi
Apr 30, 2014
About the capabilities and reliability of the current system:

Think of a disaster striking a mega-city like New York or Shanghai: 1.24 million fatalities and tens of millions gravely wounded and disabled. That is the toll of the current system, year after year, and it is rising!

I don't trust the current system and - even knowing it is not perfect - welcome technology taking over.

After all, I travel in planes and high-speed trains knowing about possible disaster, despite very likely being unable to do anything if one were to strike.

Pressing ahead with developing this technology will save a lot of lives.
Eikka
May 01, 2014
No it's not. You were comparing an untrained machine to a trained human. Rather compare both at square one.


No I'm not.

I'm comparing a trained robot to a trained human, and pointing out that there's a fundamental difference in what they're actually trained to do.

You can train the robot for years and years, and it's still just comparing pixel blobs to pixel blobs without any understanding of what they mean. It's dealing with the problem by brute force rather than intelligence, which is why it's having a hard time approaching even insect-level object recognition with a reasonable amount of hardware and energy.

Really? I was under the impression that a modern machine using an accurate 3d model rather than a 2d picture could very easily match or even exceed the accuracy of a human.


An accurate 3D model is apples to oranges, because it's worth a million pictures, whereas people have to make do with inaccurate and ever-shifting memory representations of the subject.