Truth & Consequences

Level 3 Cars? 'No Customer Buys It'

BMW CEO’s declaration, “No customer buys it,” serves as the death knell for the self-driving dream.


By Colin Barnden

What’s at stake:
If automakers won’t assume liability for janky automated driving technology and are ready to throw customers under the bus when anything goes wrong, why then should anyone buy their vehicles?

The history books will record that January 2023 was the month the “self-driving dream” died. The venue was CES in Las Vegas, and the executioner was none other than BMW’s CEO Oliver Zipse.

Quoted in an article on the BMW Blog, Zipse admitted:

A Level 3 system, whether at 60, 80 or 120 kilometers per hour, which constantly switches off in the tunnel, switches off when it rains, switches off in the dark, switches off in fog – what’s that supposed to mean? No customer buys it.

That was the moment the commercial reality of a wacky race to put robo-drivers in cars was exposed for all to see. Those words, “No customer buys it,” once said, can never be unsaid.

Not only did Zipse trash the fantasy of privately-owned autonomous passenger vehicles, he also finally junked the levels of driving automation described in SAE International’s standard J3016.

Level 5 is a fantasy. As we can see in San Francisco, Level 4 is for robo-clowns. Zipse admitted no customer will pay for Level 3. Level 2 has been in production for several years, and Levels 1 and 0 for several decades.

What a colossal waste of time and resources.

None of this will come as a surprise to readers of The Ojo-Yoshida Report. Last July we observed “J3016 is not fit for purpose and must be withdrawn,” advising automakers seeking to demonstrate their automation competency to:

Focus R&D efforts on supervised automation systems mated with robust driver monitoring. Think GM Super Cruise and Ford BlueCruise, rather than Tesla’s Full Self-Driving.

Last August, we warned, “Volvo needs to radically rethink Ride Pilot” and reiterated that car buyers want supervised automation systems like BlueCruise, not unsupervised autonomous driving features like Volvo’s Ride Pilot.

We have also highlighted “Steve’s Law,” attributed to Apple’s Steve Jobs, who explained:

You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you are going to sell it.

We observed that in seeking to develop Ride Pilot’s “unsupervised autonomous driving” function, Volvo simply pursued the wrong customer experience.

Zipse’s mea culpa occurred just months after we raised “Unanswered Questions in the Aftermath of Fatal BMW Crash.” We asked: Is it wise to enable a hands-free automated driving function on undivided, winding country roads? Is that really safe? Who gets to decide if that is safe? And how safe is safe enough?

The risks of hands-off driving on undivided roads evidently outweigh the benefits. For instance, what if the automated steering did the wrong thing at the worst time?

In response to media queries, BMW said its test vehicle involved in the fatal crash was “equipped with Level 2 driver assistance systems,” and that “the driver always remains responsible.”

The Rub
Therein lies the rub. Some automakers are now developing and deploying poorly designed automated driving technology, safe in the knowledge that when anything goes wrong in real-world operation, their lawyers and PR teams can throw the poor sucker sitting behind the wheel under the bus. It is so easy that it takes just five words:

“The driver always remains responsible.”

We’ve reported on this trend, observing, “Carmakers like Tesla are enabling people to abuse their systems, but then blaming them for accidents. They are willfully programming human drivers to fail.”

We’ve also pointed out that consumers should question the callousness of automakers using a human driver as a “component” of their safeguards, noting that the objective is to shield the automaker from liability rather than to protect drivers.

Serious questions remain unanswered following the fatal crash of the BMW iX “test vehicle” last August near Römerstein, Germany. We asked: “Who ensures the public’s safety when automakers test experimental automated driving technology on public roads?”

The answer may actually be nobody.

Elsewhere, we concluded: “By putting drivers on the spot, usually when it’s too late for them to react, carmakers can deflect blame for the crashes caused by their vehicles.”

While we remain skeptical of automakers’ claims, we note with some relief that BMW has finally yanked the handbrake and is screeching toward a different strategy. Where one goes, others will surely follow.

Liability
BMW is fortunate that the 2022 fatal test vehicle crash occurred in Germany, not in the U.S. While the U.S. National Highway Traffic Safety Administration has failed to protect the traveling public from the robo-circus of autonomous development, no such accusations can be leveled at the National Transportation Safety Board.

NTSB has repeatedly demonstrated an unwavering commitment to its mission to investigate fatal transportation crashes, to issue safety recommendations, and to ensure lessons are learned.

At CES, BMW’s Zipse showed he had learned the lessons of Römerstein. Quoted in BMW Blog, he said:

No one wants to be in the shoes of a manufacturer who misinterprets a traffic situation during the liability phase, for example when control is handed back to the driver. We don’t take the risk.

With those words, the executioner’s axe fell. Zipse acknowledged that automakers won’t accept liability for the actions of a robo-driver, leaving the poor devil sitting behind the steering wheel out of luck.

Reporting on Ford’s decision to shut down Argo AI, we reiterated:

As others follow Ford’s lead, the emphasis now will be greater automation of highway driving with heavy reliance on driver monitoring.

We also provided an explanation of the differences between complex and complicated transportation modes, highlighting an underlying problem of robo-drivers lacking the necessary skills to successfully interact with humans on public roads. We named this condition “Autonomous Personality Disorder.”

The real-world challenge of Level 3 driving is known as “mode confusion,” in which the driver loses track of whether human or machine is in control. As Zipse confessed, automakers won’t assume the liability associated with the dreaded “machine-to-human handover.”

That spells the end of autonomous driving in privately-owned passenger vehicles.

All this raises the question: Why doesn’t the automotive industry end its showboating and gaslighting and just install technology to make human drivers safer?

The short answer: It will, eventually.

Bottom line:
The safest driver is an alert, engaged, and unimpaired human with hands on the wheel, eyes on the road, and mind on the task of driving. Expect to hear a lot more about the adoption of robust driver monitoring and pedestrian automatic emergency braking in the months ahead, as the industry’s narrative at last shifts from fantasy to reality.

Colin Barnden is principal analyst at Semicast Research. He can be reached at colin.barnden@semicast.net.

