Chris Urmson, the head of Google’s robot car project, made some impressive claims about the company’s autonomous vehicles at a robotics conference in Silicon Valley on Friday. He told attendees the cars are safer than those driven by humans, based on thousands of hours of test driving in California and Nevada.
“We’re spending less time in near-collision states,” he said. “Our car is driving more smoothly and more safely than our trained professional drivers.”
The professionally trained drivers used as a control during Google’s experiments were found to speed up and slow down more abruptly than their autonomous counterparts. Robots also tended to maintain safer distances from the vehicles ahead of them, according to the data.
The findings jibe with other similar research, such as a study last year that suggested robot cars could triple highway capacity. If all cars were autonomous, more of them could fit on the roads – and at higher speeds. It might be the sort of bumper-to-bumper traffic many drivers are used to during heavy commute times, but it would be moving much faster.
With several auto makers working on their own autonomous vehicles, it’s only a matter of time before the technology becomes affordable. The bigger obstacle to people actually driving them, however, is legal. More specifically, the big question is quickly becoming: who is liable when a robot car inevitably hurts or kills someone?
Silicon Valley lawyer Stephen Wu earlier this year outlined a number of strategies that such manufacturers can and should adopt, the main thrust being that they need to be proactive. Robot car makers need to strictly monitor their supply chains and keep track of where all their parts and components are coming from, and they shouldn’t rely solely on closed systems to prevent hacking, since that won’t work. He also suggests companies adjust their behaviour to avoid provoking angry jurors, who tend to get upset when it becomes apparent that corners were cut or that human life was weighed against profits.
In his speech, Urmson signaled the route of defence that Google is likely considering. The company’s autonomous cars gather a boatload of data while driving, which means it will be significantly easier to figure out who or what caused an accident after the fact. Robot cars will effectively come armed with their own black boxes.
He mentioned one incident where a Google car was rear-ended by a human driver. The data showed the robot car halted smoothly and that the other driver was at fault.
“We don’t have to rely on eyewitnesses that can’t… be trusted as to what happened – we actually have the data,” he said. “The guy around us wasn’t paying enough attention. The data will set you free.”
On the one hand, what Urmson is saying is true. In a robot-car-filled future, the number of accidents should be greatly reduced, with the remaining few hopefully the result of inattentive human drivers. At the same time, there is something a bit disturbing about treating data as omniscient, because it can be manipulated and it doesn’t always show the human side of things.
The best personal example I can think of is when I got a ticket for running a red light a few years ago. I was caught in the act by an automated camera and the system eventually mailed me the ticket. I would have paid the fine with no questions asked, but the demerit points and the inevitable insurance premium hike that would follow were problematic.
While the data certainly proved my guilt, it didn’t take into account other potentially mitigating circumstances. For one thing, it was early Sunday morning and there were no other cars at the intersection, so there was no risk of an accident happening. Secondly, I didn’t make a conscious call to run the light, but rather a poor last-second decision that I immediately regretted.
I decided to argue the ticket when it arrived because I thought a human police officer, if he or she had pulled me over, might have taken those factors into account and let me off. The judge in fact did, and cancelled my ticket. Sure, what I did was wrong and against the law, but it was a case of no harm, no foul – something a cold, calculating machine had no way of determining.
The moral of the story may very well be that robot cars and the data they produce are fine, but we should probably hang on to those human judges for a little while longer.