In the early 2010s, auto industry executives made wild predictions about the inevitable deployment of thousands of self-driving “robo-taxis” by the year 2020.
What happened?
The industry’s bullishness on autonomous vehicle (AV) technologies stemmed from increasing computer processing power (Moore’s Law) and a flood of venture capital dollars funneled into a host of AV startups. The technologies seemed ready and refined; all that remained was proof of concept that self-driving cars could operate safely alongside the rest of us.
Was that the truth?
Vehicle connectivity, advanced software, and “machine learning” would bring us to a new world of driverless vehicles promising near-zero collisions and personal mobility options for those who cannot drive. Seniors, children, and the disabled could safely find their way around town by themselves. So-called “last mile deliveries” could be made all day and all night long with no bathroom or meal breaks.
About That “Safety” Thing…
It turned out to be a difficult journey for autonomous vehicle development. In their sprint to be first to market with self-driving vehicles, automakers and tech companies cut a lot of corners and insisted that “millions of AV miles driven” without a human fatality was the same as taking proper precautions to keep the public safe from their driverless vehicles.
What happened during those “millions of miles driven” is largely unknown except for a relatively few instances where getting to the truth was unavoidable.
Waymo, Alphabet/Google’s AV unit, has been reported to federal regulators more than two dozen times for incidents in which its AVs crashed, drove directly into construction zones, or veered into oncoming traffic. This came after Waymo touted 100,000 AV taxi trips per week across San Francisco, Phoenix, and Los Angeles.
The entire AV industry has been promoting the same “AVs are safer than vehicles with human drivers” talking point as Waymo. That claim suggests AVs are superhuman when it comes to safety. But that is a dangerous lie.
There is ample evidence that self-driving vehicles, using today’s best sensor and computing technologies, are still not capable of delivering on the safety measures being promised. In San Francisco, local reports showed video of confused AVs driving into wet cement, running red lights, and wandering around dead-end streets. The list of menacing traffic violations is compounding daily, with very little being done to enforce safety protocols.
While most of the mishaps with AV taxis have not been life-threatening, they present a scenario that runs completely against the narrative that AVs are safer than human-driven vehicles. Less than two years ago, a driverless car operated by Cruise (a subsidiary of General Motors) hit a pedestrian and dragged the person twenty feet before parking on top of them.
Even with all of this evidence of the public dangers of AVs, California allowed these robo-taxis to expand service in August 2023, and San Francisco filed motions to halt the expansion. A week later, a Cruise AV collided with a fire truck. Then a woman was hit by another Cruise AV, and Cruise withheld video of the incident from the California DMV, which ordered the company to immediately cease all operations in the state.
Activists have tried to appeal to authorities but have had to resort to tactics such as placing orange traffic cones on the hoods of AVs, which immobilizes them until the cone is removed. That is how desperate people have become to protect themselves from nuisance AVs tooling around their city.
Why have state and federal regulators been so slow to clamp down on Cruise and Waymo for their safety shortcomings? Follow the money: political donations can easily soften regulators’ concerns over safety if those donations are significant enough. And apparently, they are.
Humans to the Rescue
Politics aside, adding to the mystery surrounding driverless vehicles is the human element. Is it possible that these “self-driving” vehicles don’t drive themselves?
While today’s robo-taxis have nobody behind the wheel, they still rely on the good sense and intuition of actual humans to get them out of trouble. And those humans could be hundreds of miles away.
Amazon bought Zoox, a Foster City, CA, company with its own purpose-built autonomous vehicle and a command center that monitors its AVs, allowing humans to step in when the vehicles get into unfamiliar situations. Each AV taxi has at least one human monitoring its performance.
Technicians tasked with monitoring self-driving taxis receive an alert when a taxi cannot navigate an unfamiliar situation, such as the presence of emergency vehicles or unexpected construction. A human must then manually reroute the taxi from a remote location. In other words, the self-driving taxi is not self-driving; it relies on human assistance and cannot handle even minor interruptions to its routine without a real person stepping in.
Remote assistance of their self-driving cars was something that companies such as Waymo and Cruise did not want you to know. They instead created an illusion of total autonomy and thus generated interest in their technologies to raise billions of dollars to build what they claimed was a sustainable self-driving taxi fleet.
But again, it’s a lie and a dangerous one.
Without assistance from humans, AVs are simply not capable of operating on roads alongside vehicles piloted by people, who bring intuition and critical thinking that a machine cannot replicate. Just as with artificial intelligence, these machines are only as smart as their programming. They can quickly recognize patterns, but only patterns that were programmed in advance.
The takeaway here is that self-driving taxis are not capable of piloting skillfully and safely among human drivers, even with people watching from remote locations and making the intuitive decisions that only humans can make. The use cases for autonomous vehicles will never be realized until the technology catches up and transforms AVs from a menace to society into something more useful…and safe.
We’re not there yet.
There are two hard facts that stand in the way of the self-driving car: