Bits: Robot Cars Can’t Count on Us in an Emergency

We humans are easily distracted by our games, phones and mates. And automotive engineers, computer interaction designers and, yes, lawyers, wonder if the self-driving cars they are working on will ever really be able to count on us in an emergency.

Engineers say they believe that cars will be intelligent enough to do all the driving, somewhere between five years and a decade from now, depending on whom you ask. But until then, what passes for autonomous driving will be a delicate ballet between human and machine: Humans may be required to take the wheel at a moment’s notice when the computer can’t decide what to do.

To outline a development path to complete autonomy, the automotive industry has established six levels of driving automation, ranging from fully manual driving — Level 0 — up through complete autonomy, Level 5. In the middle, Level 3 is an approach in which the artificial intelligence driving the car may ask a human to take over in an emergency.

But many automotive technologists are skeptical that the so-called handoff from machine to human can be counted on, because of the challenge of quickly bringing a distracted human back into control of a rapidly moving vehicle.

“Do you really want last-minute handoffs?” said Stefan Heck, chief executive of Nauto, a start-up based in Palo Alto, Calif., that has developed a system that simultaneously observes both the driver and the outside environment and provides alerts and safety information. “There is a really good debate going on over whether it will be possible to solve the handoff problem.”

Nauto’s data shows that a “driver distraction event” occurs, on average, every four miles. Mr. Heck said there was evidence that the inattention of human drivers was a factor in half of the approximately 40,000 traffic fatalities in the United States last year.

Last month, a group of scientists at Stanford University presented research showing that most drivers required more than five seconds to regain control of a car when — while playing a game on a smartphone — they were abruptly required to return their attention to driving.

Another group of Stanford researchers published research in the journal Science Robotics in December that highlighted a more subtle problem. Taking back control of a car is a very different experience at a high speed than at a low one, and adapting to the feel of the steering took a significant amount of time even when the test subjects were prepared for the handoff.

“There is a motor-learning process if I haven’t been controlling the vehicle and I have to take control,” said J. Christian Gerdes, a Stanford University mechanical engineering professor who was one of the authors of the study.

The handoff challenge is compounded by what automotive engineers call “over-trust.”

Over-trust is what Google observed when it saw its own engineers not paying attention during commutes in prototype self-driving cars. Driver inattention was also implicated in a recent National Highway Traffic Safety Administration investigation that absolved Tesla of blame in a 2016 Florida accident in which a Model S sedan drove under a tractor-trailer rig, killing the driver.

[Photo: A simulator at the Toyota Research Institute, where the company is working on technologies to help human drivers remain vigilant while overseeing an autonomous driving system for long stretches of time. Credit: Christie Hemm Klok for The New York Times]

Solving the over-trust problem is key to Level 3 autonomous vehicles, in which the computer hands control back to humans.

The first commercial vehicle to offer Level 3 autonomy is expected to be released next month by Audi. A version of its luxury A8 model will be able to drive in stop-and-go freeway traffic up to 37 miles an hour while allowing drivers to pursue other tasks. The vehicle reportedly will notify drivers in emergencies, giving them eight to 10 seconds to intervene.

Despite these limited advances, many automotive technologists remain uncertain about whether technology will ever be able to operate smoothly with a human driver who may be reading email or playing World of Warcraft.

“I believe that Level 3 autonomous driving is unsolvable,” said John Leonard, a mechanical engineering professor at the Massachusetts Institute of Technology who has collected detailed examples of driving situations that are currently impossible for state-of-the-art autonomous driving systems. “The notion that a human can be a reliable backup is a fallacy.”

Yet despite widespread skepticism, the automotive industry is spending heavily on artificial intelligence technologies designed to make cars safer before they are fully autonomous. The idea is that components of self-driving technology — warning lights, automatic emergency braking — can help humans be safer drivers.

Gill Pratt, a roboticist who heads an ambitious Toyota research effort in Silicon Valley; Ann Arbor, Mich.; and Cambridge, Mass., said he did not see the automation ratings — zero through five — as a straight line of technical progress.

Instead, he said, he saw the ratings as different ways of addressing the same car-safety question, regardless of who or what is in control.

Unlike many in the industry who say that advances in machine learning will soon make self-driving cars safer than those driven by humans, Mr. Pratt has pushed for less futuristic “guardian” technologies that could be added to a car the same way that anti-lock brakes, stability control, blind-spot warning lights and other features have become common.

One possible new feature being designed by the Toyota Research Institute is adding the ability not just to stop when a pedestrian is detected, but also to swerve to avoid an accident, he said.

Toyota is also working on technologies that will assist human drivers in remaining vigilant when they are required to oversee an autonomous driving system for long stretches of time. There is already a rich literature that explores the challenges of keeping airplane pilots vigilant; Toyota researchers say they will be able to develop techniques to maintain human driver attention.

Mr. Pratt said Toyota had not given up on the challenge of Level 3 driving. But to make a safe Level 3 car, he said, it may be necessary to develop technologies that see risks as much as 15 seconds in the future.

Still, over-trust will be a tough challenge to overcome. “Imagine if the autopilot disengages once in 10,000 miles,” he said. “You will be very tempted to over-trust the system. Then when it does mess up, you will be unprepared.”

And if all those issues do get resolved, there is one more question: Will people really use self-driving cars?

Last September, researchers at the University of Michigan Transportation Research Institute published results of a survey suggesting that 62 percent of Americans were unlikely to gain any productivity from riding in self-driving cars.

The researchers found that 23 percent of Americans would refuse to drive in autonomous cars and 36 percent would be so nervous that they would not take their eyes off the road. An additional 3 percent said they would be too motion-sick to take advantage of the cars.

“Also of importance is the fact that current trips in light-duty vehicles average only about 19 minutes, a rather short duration for sustained productive activity or invigorating sleep,” the researchers concluded.
