I covered this earlier on Herman’s site, but I want to follow up today with a bit more about the technology – and the obvious problems with it – that led a driverless Uber car in Arizona to run over and kill a woman walking her bicycle.
Here’s how Scientific American puts it in a story that acknowledges the technology has some obvious “blind spots,” if you’ll pardon the phrase:
Herzberg’s death is the first reported incident of a pedestrian killed by a self-driving car, and raises questions about whether such vehicles are ready to operate autonomously on public roads. The vehicle’s cameras and other sensors apparently did not detect the victim and made no attempt to brake or otherwise avoid her. An Uber employee was in the Volvo XC90 SUV acting as a safety operator but told police he did not have time to react to avoid hitting Herzberg.
Self-driving cars rely on a combination of sensors and data systems to navigate and avoid obstacles. The vehicles typically include some combination of global positioning systems (GPS), light detection and ranging (LiDAR) sensors, radar, cameras and other equipment to help detect lane markings, bicycles, other vehicles and pedestrians. Each of these systems has particular strengths and weaknesses. “One of the things that we have noticed about accidents involving self-driving cars is that they seem strange from a human perspective; for example, the vehicles do not hit the brakes prior to the collision, which is something most human drivers do,” says Bart Selman, a computer science professor at Cornell University and director of the Intelligent Information Systems Institute. “That’s because the vehicles make decisions based on what their sensors detect. If its sensors don’t detect anything, the vehicle won’t react at all.” That was evident following a fatal accident in May 2016, when a Tesla Model S using its driver-assist Autopilot technology failed to brake to avoid hitting a tractor trailer that was making a left turn across its lane, killing the Tesla’s driver.
Given the time the Uber accident occurred—10 P.M. local time—it is possible the vehicle’s cameras did not see the pedestrian, but its LiDAR and radar should not have been affected by the darkness, says Ragunathan Rajkumar, a professor of electrical and computer engineering in Carnegie Mellon University’s CyLab Security and Privacy Institute. “Self-driving vehicles are trained to identify crosswalks and yield to a person crossing a road,” Rajkumar says. (He has helped lead Carnegie Mellon’s efforts to develop autonomous vehicles, including the “Boss” SUV that won the 2007 DARPA Urban Challenge.) “Even in a jaywalking scenario [such as this] the vehicle is still always looking for obstacles in its path,” so its failure to see the pedestrian is puzzling, he notes.
What I find puzzling is that they find this puzzling. Autonomous car technology is trying to make machines behave in the same way humans would. The idea is that you can teach these sensors how to recognize things like crosswalks, pedestrians or other objects, and instantly send a command to the car to react to the presence of these objects exactly as a person would – or should, if he were paying attention and fully in possession of his faculties.
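To make concrete the point Selman makes above – that the car only reacts to what its sensors actually detect – here is a minimal sketch of a detect-then-react loop. All names and numbers here are hypothetical illustrations, not Uber’s actual software; a real autonomous driving stack is vastly more complex.

```python
# Hypothetical sketch: the car brakes only for obstacles its sensors report.
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float         # meters ahead of the vehicle
    closing_speed_mps: float  # how fast the gap is shrinking, m/s

def brake_command(detections: list[Obstacle], buffer_s: float = 2.0) -> bool:
    """Return True if any detected obstacle will be reached within buffer_s.

    The key point: if the sensors return an empty list (a missed detection),
    this function can never trigger the brakes, no matter what is actually
    in the road. The vehicle reacts only to what it detects.
    """
    for obs in detections:
        if obs.closing_speed_mps > 0 and \
           obs.distance_m / obs.closing_speed_mps < buffer_s:
            return True
    return False

# A pedestrian 15 m ahead, closing at 17 m/s (~38 mph): brake.
print(brake_command([Obstacle(15.0, 17.0)]))  # True
# The same pedestrian, but the sensors never reported her: no reaction.
print(brake_command([]))                      # False
```

This is what makes a missed detection so unforgiving: there is no fallback reflex, as there is with a startled human driver, because the decision logic never sees anything to react to.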
One of the biggest arguments of driverless car advocates is that the technology – because of its repeatable nature and reliance on data – should reliably do what it’s trained to do every time. It won’t drive drunk. It won’t text. It won’t get distracted by a billboard or a cute girl.
It’s data! It’s science! It won’t fail like idiot humans.
It sounds good in theory, but when it fails in practice, the humans who designed it are puzzled as to why. Let’s start with the very fact that it’s humans who are designing these cars and their control systems. They’re trying to use their own human capabilities to make technology that will operate in the same way God designed humans to operate.
Does anyone see a problem there? Human technology designers aren’t God. They operate as best they can with what they’ve come to understand – or what they think they understand – about human senses and how they are applied to the operation of a motor vehicle. There are all kinds of things about how people work that we don’t really understand, no matter how much we flatter ourselves to the contrary.
And the human body is a certain kind of mechanism. Human-designed technology is a different kind of mechanism. The driverless car designers are telling themselves they can make the technology work even better than the human, and yet they don’t know why the technology failed to react to a woman walking her bike in the street.
And again I ask: What is the reason we need this technology? Just because you can do something doesn’t mean anyone benefits if you do. Cars have always needed human drivers, and there are plenty of humans happy to do the driving. When did this become a problem in need of a solution?
We keep hearing that people like Uber, Google and GM are all over this because it’s going to revolutionize the way people get around in big cities. Why do we need this revolution? There are obviously problems with getting around big cities, but how is it going to improve things to have cars with no drivers replace cars with drivers?
Tech companies like Google have revolutionized a lot of things – much for the better, although on some points I’d say the jury’s still out. But changing something radically just because you can doesn’t mean the change was an improvement.
Human beings are perfectly capable of driving cars. Some don’t do it very well, but they can do it. So far it’s far from clear that autonomous car technology can do it at all, and it’s even further from clear that there is any need for it to do so.
All this looks like right now is a very dangerous and monumentally pointless initiative. This would be a good time to kill it – completely, forever – before it kills any more people.