Opinion

I Left My Intelligence in San Francisco

When humans make mistakes, they explain, understand, and correct them. “Thinking” machines, surrounded by tech apologists and lobbyists, are getting a free pass for their course-correction deficit.
Aftermath of Artificial Stupidity
In early August, the San Francisco Giants and Cruise announced a jersey patch partnership. (Image: MLB.TV)


By Junko Yoshida

What’s at stake:
Calif. Governor Gavin Newsom can remain a cheerleader for tech. The San Francisco Giants can proudly wear a Cruise jersey patch. Cruise can continue to push its narrative that humans are terrible drivers. But at stake is the safety of a non-consenting public forced to share the roads with inscrutable machine drivers.

Bryan Reimer, a research scientist at the MIT AgeLab, recently sent me a link to a TED Talk he gave five years ago.

In it, he reminded the audience: “We almost always forget that our infrastructure was really designed and built for human drivers.”

When it comes to autonomous vehicles, ‘technologists, automakers, politicians and drivers see the world through different lenses.’

Bryan Reimer

For a robotaxi company like Cruise, it is easy to paint a rosy picture of increased automation where fewer terrible human drivers exist. But the roads will always be shared by a mix of driverless cars, automation-assisted vehicles and plain old human drivers. The infrastructure where “we are still expected to travel in Model Ts” is not going to transform seamlessly into a system that enables future mobility, said Reimer.

In his TED talk, he worried about “how many individuals and organizations don’t understand the level of complexity in the transformation of our system [that] is required to enable safe, convenient, enhance[d] mobility.” In his view, “Technologists, automakers, politicians and drivers see the world through different lenses.”

Five years later, Reimer’s angst is coming true on the public streets of San Francisco, where little has been done to harmonize technologists, automakers, politicians and drivers.

Robotaxis are a farce
It is funny when a robotaxi gets stuck in wet cement. We can lament the stupidity of a robotaxi entering an intersection on a green light only to get licked by a fire engine blasting its siren.

In his talk, Reimer predicted robots causing traffic jams, a prophecy certain to undermine “our trust in the future of automated systems.”

Five years later, a Waymo driverless taxi with one door not quite shut froze on a street in San Francisco for several minutes. Result: a traffic jam.

In August, a Cruise driverless taxi – supposedly with wireless bandwidth issues – stalled on two narrow streets and backed up cars for several blocks. Just to be clear, these gridlock incidents have become a sort of robotaxi trademark in San Francisco.

It is now apparent that AI-driven machines – touted as smarter than us – make the sort of astoundingly absurd mistakes that most humans with common sense would instinctively avoid.

Note: Artificial instinct has yet to be invented.

Aftermath of mistakes
Humans, of course, make lots of bonehead moves behind the wheel. We know, immediately, when we’ve screwed up. We didn’t need Cruise to buy a full-page “Humans Are Terrible Drivers” ad in the New York Times.

I don’t think I am alone in taking offense.

Like people, robocars make mistakes. But these are different kinds of mistakes. The resulting surprises are hardly likely to make human drivers feel trustful of machine drivers. Most disconcerting to me, as someone accustomed to driving defensively, is my sheer unfamiliarity with the uncommon mistakes being invented by smart machines.

The common driver exclamation, “What the hell is that idiot doing?” takes on a whole new meaning.

Even worse is a fundamental difference in behavior between smart people and smart machines in the aftermath of stupid mistakes.

When smart people make stupid mistakes, they “understand” how and why they blundered. If possible — and as fast as possible — they “correct” their mistakes.

In contrast, when smart machines driven by artificial intelligence screw up, they seem incapable of explaining the mistakes, and why they did what they did. Do they even know they made a mistake?

The opaque explanation provided by a developer of such smart machines is no help.

After the robotaxi-fire engine collision, the humans at Cruise boasted of their safety record and explained, “The AV’s ability to successfully chart the emergency vehicle’s path was complicated by the fact that the emergency vehicle was in the oncoming lane of traffic, which it had moved into to bypass the red light.” So, what did the company learn? Cruise humbly admitted “that we’ll always encounter challenging situations, which is why continuous improvement is central to our work.”

A few more trucks and they’re going to have this problem licked!

Nobody should forget that robotaxis are operating on public streets among involuntary guinea pigs — human drivers, pedestrians and cyclists. When accidents happen, corporate platitudes (“our primary concern remains with our passenger and their well-being”) offer scant consolation to, for example, the terrified passenger trapped in the Cruise taxi while it was being crushed by the fire truck.

What matters is what comes next.

Has it occurred to Cruise and its robocar cohorts that they cannot win public trust until they can demonstrate the willingness to correct their behavior, rather than damning the torpedoes and racking up ever more robotaxi miles?

Where are the AA meetings for machine drivers?
After repeated drunk-driving accidents, remorseful drivers have been known to join Alcoholics Anonymous (AA) and undertake the famous Twelve Steps to atonement and recovery.

Where can machine drivers go for Autonomous Accidents (AA) meetings? And what twelve steps can a robotaxi take to avoid further encounters with fire engines, ambulances and wet cement?

Humans are fallible. But, usually, we spot our errors and try to prevent the next one. This basic human function apparently has yet to be instilled in supposedly “thinking” smart machines.

I understand that we are a society enamored with robotics, automation, artificial intelligence, and automotive technologies. Right now, California Governor Gavin Newsom seems to believe that good policy always favors the Silicon Valley technology lobby, a position that fails to reflect that public roads are for everyone — both people and machines — to share. And we, the people, were there first.

Earlier this month, the San Francisco Giants and Cruise announced a jersey patch partnership. The deal is intended to “accelerate [an] electrified transportation system and improve road safety and environmental sustainability throughout San Francisco.” Supposedly, this will heighten consumer sympathy toward the robotaxi. But if Cruise keeps getting in the way of first responders and blocking traffic, the jersey patch might only serve to remind fans of why they were late for the game.

Bottom line:
Robotaxis making non-human mistakes might be forgiven, once. But repetitions, exacerbated by the complacency of the machines’ corporate operator, are less tolerable. A failure, or refusal, to make course corrections is good reason to get these lemons off the street.

Bryan Reimer’s TED talk in 2018: “There’s more to the safety of driverless cars than AI”


Junko Yoshida is the editor in chief of The Ojo-Yoshida Report. She can be reached at junko@ojoyoshidareport.com.

