Self-driving cars—coming soon to a road near you?
We were going to include this in the regular “Worth Noting” feature on interesting items on the web. Then it became clear we had too much in the way of personal opinion on the matter. So we pegged it as an editorial instead. Here’s our conclusion: much as it might be wonderful for a number of reasons, it seems wise to slow down the rush to self-driving vehicles and avoid the potential for catastrophic crashes.
It’s a topic much in the news these days—both pro and con. In theory, backed by some research studies on the technology, a computer-run car might be safer than a human-driven one. We’ll skip those studies, but trust us, they’re out there. We don’t dispute them, mind you.
So why might the computer-run car be safer? Here are just a couple of obvious points:
- The computer isn’t distracted by scenery, the radio, other people in the vehicle, phone calls, etc.
- The computer can react to perils faster than human senses and the human brain can, applying the brakes more quickly and effectively than we humans can. Have doubts? Consider anti-lock braking systems, which have been standard for some time now. It simply isn’t possible for the human brain and foot to pump the brakes as quickly as a computer system can, avoiding the locked brakes that can induce a skid. (A rough sketch of that pump-and-release idea follows this list.)
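For the curious, here is roughly that idea in code. It is a deliberately simplified sketch, not how any real ABS controller is implemented; the slip thresholds and names are invented for illustration.

```python
# Hypothetical sketch of the idea behind an anti-lock braking loop.
# Real ABS controllers run on dedicated hardware at much higher rates
# and use far more elaborate control logic; all names here are invented.

def abs_control_step(wheel_speed, vehicle_speed, brake_pressure):
    """Return an adjusted brake pressure (0.0 to 1.0) for one control cycle."""
    if vehicle_speed <= 0:
        return brake_pressure  # vehicle stopped; nothing to modulate

    # Slip ratio: 0 means the wheel rolls freely, 1 means it is fully locked.
    slip = (vehicle_speed - wheel_speed) / vehicle_speed

    if slip > 0.2:
        return brake_pressure * 0.7               # wheel locking up: release pressure
    elif slip < 0.1:
        return min(brake_pressure * 1.1, 1.0)     # grip regained: reapply pressure
    return brake_pressure                          # in the sweet spot: hold steady

# A controller would call something like this dozens of times per second,
# "pumping" the brakes far faster than any human foot could.
```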
On the other hand, computers don’t currently have, and won’t have without substantially more sophisticated artificial intelligence, the ability to recognize subtleties the way the human mind can. What happens if the computer module running the vehicle “crashes”, which could result in the car itself crashing? What about the potential for hackers taking remote control of your vehicle? At least two crime shows on TV have already made that a plot point in story lines over the past year. Can it happen? Probably. So that means hardening the computer systems in your car, and not just in self-driving cars but in any car with the various software systems that aid in driving, like those described below: the ones that are already here but don’t make a car driverless.
Many automakers, as well as technology companies, are eager to have us buy their increasingly automated vehicles. Much of that technology offers advantages even without the vehicle driving itself:

- Blind spot monitoring software that warns of a car near your rear quarter panel when you want to pass.
- Collision avoidance systems that first warn you to brake and then apply the brakes for you if you fail to do so when the system concludes a collision is imminent (sketched in code below).
- Dynamic cruise control that slows the vehicle when you approach a slower-moving vehicle from behind.
- Auto-dimming headlights that dim the high beams at the approach of another vehicle and revert to high beam once past it.
- Even auto-parking, alleviating a bane of urban driving for some, parallel parking; a feature one could assume is a precursor to a fully automated vehicle.

We have a hybrid vehicle with most of those helpful systems. Unfortunately, the blind spot monitoring sensors keep failing. That won’t kill us or cause a crash if we exercise ordinary driving care, but it suggests there’s room for improvement in these systems.
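To make that warn-then-brake sequence concrete, here is a toy “time to collision” check. It is only a sketch with invented thresholds and names; no automaker’s actual collision-avoidance logic is this simple.

```python
# Toy illustration of the warn-then-brake idea behind collision avoidance.
# The thresholds and structure are invented for illustration only.

WARN_SECONDS = 2.5   # warn the driver if impact is this close
BRAKE_SECONDS = 1.0  # brake automatically if the driver hasn't reacted

def collision_response(gap_m, closing_speed_mps):
    """Decide what to do given the gap to the car ahead and the closing speed."""
    if closing_speed_mps <= 0:
        return "no action"                 # not closing on the vehicle ahead

    time_to_collision = gap_m / closing_speed_mps
    if time_to_collision < BRAKE_SECONDS:
        return "apply brakes"              # driver hasn't reacted; system intervenes
    if time_to_collision < WARN_SECONDS:
        return "warn driver"               # audible/visual alert to brake
    return "no action"

print(collision_response(gap_m=30.0, closing_speed_mps=15.0))  # "warn driver"
```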
All of these systems, as you may suspect, rely on technology like lasers or sonar interpreted by computer modules in the vehicle. Another example of a system with limitations is dynamic cruise control: it won’t work as rain picks up, because its sensors can’t see through the precipitation. Do laser and sonar systems exist without such limitations? Probably, or the military would be at a handicap at sea or in the sky. But their systems might be financially out of reach for ordinary vehicles, at least for the moment.
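The sensible behavior when the sensor data degrades is to disengage and hand control back to the driver rather than guess. Here is a minimal sketch of that fallback, assuming a made-up confidence score coming from the sensor; the names and the threshold are our own invention.

```python
# Sketch of the fallback behavior described above: when the sensor can no
# longer see reliably (say, in heavy rain), the system gives up and hands
# control back to the driver instead of guessing. All names are invented.

def dynamic_cruise_update(sensor_confidence, lead_vehicle_speed, set_speed):
    """Pick a target speed, or return None to disengage and alert the driver."""
    MIN_CONFIDENCE = 0.6  # below this, readings are too noisy to act on

    if sensor_confidence < MIN_CONFIDENCE:
        return None  # disengage; the driver must take over

    if lead_vehicle_speed is not None and lead_vehicle_speed < set_speed:
        return lead_vehicle_speed  # match the slower car ahead
    return set_speed               # road is clear; hold the set speed
```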
State legislatures and motor vehicle regulatory officials are considering whether, where, and when to permit self-driving cars. Most would require that a fully attentive driver be ready and able to take over from the computer operating the vehicle. That’s just for ordinary operation. Knowing how often your home computer, or even the various systems within a vehicle, can malfunction, it seems best to keep in mind that the AI running your car might glitch or fail entirely, with potentially fatal results. I don’t know about you, but I’m not quite ready for that.
But now Congress is getting in on the act, as this article in The Verge describes:
The first federal legislation to regulate self-driving cars in the US was introduced on June 20th. [2017] These bills — there are 14 of them — would give the US National Highway Traffic Safety Administration (NHTSA) the power to increase the number of self-driving cars on public roads. And they would preempt the current patchwork of state laws regarding the enforcement of autonomous driving. Automakers and the big tech companies are in favor of the bills for two main reasons: they want to get their robot cars on the road faster than their competitors, and they would rather abide by one overarching set of federal laws than 50 individual state laws.
What’s the scary part? The article goes on to state what’s in at least some of the bills under consideration:
The package of bills includes a proposal to increase federal motor vehicle safety standard (FMVSS) exemption caps from 2,500 to 100,000 — which is a wonky way of saying that it would allow automakers and tech firms to test (and eventually deploy) autonomous vehicles without steering wheels, brake pedals, and other components designed with humans in mind and required by federal safety standards. Right now, these companies are testing cars that can at best be considered Level 3 autonomous, meaning they still require some human intervention. [emphasis added]
Contrast that with this story in Vox, detailing what happened with a Tesla driver.
“An important moment in the self-driving car debate came on May 7, 2016, when Joshua Brown lost his life after his Tesla vehicle crashed into a semi-truck trailer. Brown had engaged Tesla’s Autopilot feature, and the software didn’t detect the white side of the trailer against the daytime sky. The car slammed into the truck at full speed — 74 miles per hour — shearing off the top of the car and killing Brown.” But the NTSB found that it wasn’t the fault of Tesla. The car was NOT a completely self-driving car. In fact, the Autopilot system requires the driver to keep hands on the wheel most of the time, and failure to do so triggers a warning. Brown was warned seven times and wound up dead.
Vox has more to say on what Tesla and other manufacturers have done since the Brown incident to keep drivers from assuming they can simply relinquish control of their vehicle to a computer system:
Since his death, Tesla has established a stricter “three strikes and you’re out” rule: If the driver ignores three consecutive warnings, he gets locked out of Autopilot for the rest of that trip. If the driver still doesn’t grab the wheel, the car will assume the driver is incapacitated and come to a gradual stop with the hazard lights flashing.
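That policy is easy to picture as a small piece of bookkeeping. The sketch below is ours, not Tesla’s code; it only illustrates the escalation the quote describes.

```python
# Illustration of the escalation policy described in the quote above.
# This is not Tesla's code; it just shows the "three strikes" idea.

class HandsOnMonitor:
    def __init__(self, max_warnings=3):
        self.max_warnings = max_warnings
        self.ignored_warnings = 0

    def on_warning_ignored(self):
        """Called each time the driver ignores a hands-on-wheel warning."""
        self.ignored_warnings += 1
        if self.ignored_warnings >= self.max_warnings:
            return "lock out driver assistance for the rest of the trip"
        return "issue another warning"

    def on_driver_still_unresponsive(self):
        """If the driver never takes the wheel, assume incapacitation."""
        return "slow to a gradual stop with hazard lights flashing"
```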
Other car companies are working on similar technologies. [UPDATED July 2020] Audi, for example,
introduced a product called Traffic Jam Pilot that allows hands-free freeway driving up to 35 miles per hour, though it is only available in Europe (self-driving at full highway speeds is four to five years away, Audi says). During a recent test drive, Audi engineer Kaushik Raghu told me that Traffic Jam Pilot will include a “driver-availability monitoring system” that makes sure the driver isn’t sleeping or looking backward for an extended period of time.
Cadillac recently announced a freeway-driving technology called Super Cruise that does this. An infrared camera mounted in the steering wheel can tell if the driver is looking out at the road or down at a smartphone. If the driver’s eyes are off the road for too long, the car will start beeping until the driver gets his eyes back on the road.
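The eye tracking itself is the hard part; the escalation logic around it is simple to sketch. The timeout and names below are invented for illustration, not Cadillac’s actual parameters.

```python
# Rough sketch of the attention-monitoring idea behind a system like
# Super Cruise: a camera estimates where the driver is looking, and the
# car reacts if the eyes stay off the road too long. The threshold and
# names are invented for illustration.

EYES_OFF_LIMIT_S = 4.0  # hypothetical allowance before the car reacts

def check_attention(seconds_eyes_off_road):
    if seconds_eyes_off_road > EYES_OFF_LIMIT_S:
        return "beep until the driver looks back at the road"
    return "ok"
```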
So here’s the concern: if there’s room for computer or sensor error, as in the Tesla case, what more is down the pike if vehicles were completely automated? No doubt at the behest of manufacturers, Google, Waymo, and Uber, Congress apparently wants to step on the gas to make that automation happen. We say, step on the brakes. It’s risky enough on the roads with drunk and texting drivers. We don’t need driverless cars that might careen out of control when a computer fails.
We’d love to have a safe automated car when we’re 85 or 90 years old and no longer capable of safely driving ourselves. Thankfully, that’s a ways off; we can wait until we’re sure that the technology IS safe.
I think driverless cars are a disaster waiting to happen. We all know computers fail, and often. Not to mention that it’s just one more outlet for hackers to exploit, potentially as a means of terrorism.
Eventually, I hope, somebody will make more reliable computers and put in enough redundancy to make these cars safe. Probably best that I’m not on the road when I’m 90, though I hope I’m still around to BE on the road. 🙂