
CES Robocar Panel: 'Humans Are in the Loop'

CES Panel discusses the reset for the future of automated driving.
Expert panel on 'The Spinning Wheel of Technology – On the Road to Trusted Mobility,' hosted by Veoneer. Panelists from left to right: Bryan Reimer, Research Scientist, Massachusetts Institute of Technology; David Harkey, President, Insurance Institute for Highway Safety; Junko Yoshida, Co-Founder & Editor-in-Chief, The Ojo-Yoshida Report; Ola Boström, VP Government and Regulatory Affairs, Veoneer


By David Benjamin

Call CES 2023 the Consumer Electronics Show of deferred dreams, during which some of high-tech’s loftiest aspirations seem to have hit a snag.

Thursday morning at the West Hall of the Las Vegas Convention Center offered a vivid example of this theme, when a panel of automotive safety experts agreed that the dreamt-of era of autonomous vehicles (AV) is years away, a judgment rooted in the thorny issues of “how safe is safe” and how to get a partly, mostly or “fully” self-driving car to get along with the human “driver” inside.

Veoneer hosted the expert panel on “The Spinning Wheel of Technology – On the Road to Trusted Mobility.”

Junko Yoshida, editor-in-chief of The Ojo-Yoshida Report, expressed the reduced status of the robocar bluntly. Recalling glowing promises by technology companies and automakers that AVs would be on the road in volume by the mid-2020s, Yoshida said, “The emperor has no clothes.”

But she followed up with a silver lining. “We have come to a really good place where we can be honest,” she said, “about the risks and challenges of autonomous vehicles.”

Panel member Bryan Reimer, a research scientist at the Massachusetts Institute of Technology, elaborated by noting that the industries involved in AV development have become aware that they must “balance” the convenience of automation with complicated issues of safety.

That word, “safety,” dominated the discussion, with barely a mention of the automation levels, L1 through L5, that have been bandied about ever since the first AV prototypes emerged. These categories have been defined, redefined and manipulated to the point where they’ve become meaningless. And the promise that self-driven cars will “save lives” has turned out to be hype without substance.

“We’ve given up on a quick solution,” said panelist David Harkey, president of the Insurance Institute for Highway Safety. “I think we all agree that we still have possibilities and we still have hope. But it’s decades away.”

He made the stakes clear, citing an increase in traffic fatalities in the U.S. from 33,000 in 2014 to just under 43,000 in 2021. If AVs can reduce that number, they will be a welcome addition to the driving mix. But the panel’s focus was on safety issues that can be caused by automated driving features.

Reimer said, for example, that when the level of automation requires some driver involvement—usually designated as Level 2 or Level 3—the driver can be confused. “Am I a driver or a rider?” he asked. “It’s not very clear whether I can take my hands off the wheel.”

Humans and automated features must be ‘integrated,’ just like pilots in airplanes, who use automatic pilot functions. ‘The pilot needs to be supported not replaced.’

Bryan Reimer

Yoshida called this moment of indecision a “gray area” that’s dangerous. She said that terms like ADAS (advanced driver assistance systems) and Level 3 are left for drivers to figure out. “They’re not written for people who are actually driving cars,” but for “engineers designing these types of cars.”

Reimer added, “If the driver understands that he’s riding, we’re going the wrong way.”

Regardless of the level of automation, the panelists agreed, the model for vehicles that use automation and artificial intelligence (AI) in the future should be “supporting” rather than replacing the driver at the steering wheel. Humans and automated features must be, said Reimer, “integrated.” He drew a parallel to pilots in airplanes, who use automatic pilot functions. “The pilot needs to be supported not replaced,” he said.

Yoshida noted that, in the early flush of AV enthusiasm, “the whole idea was ‘Let’s take the human out of the loop.’ That proved not to be true.”

The panelists pondered solutions to this quandary of where the human-machine balance falls. Harkey lamented the “misunderstanding” among consumers, fueled by AV promoters, of how soon self-driving vehicles will hit the road, and how automatic they will really be. “How do we properly educate drivers about what these technologies can do?” he asked.

Yoshida politely disagreed, citing the hyperbolic promises made by technology and automotive companies who drew scenarios of passengers lounging in the back seat of a robocar—without a steering wheel—tooling along the interstate at 80 miles an hour. If consumers have been misled, their misunderstanding is not their fault, she said. “Don’t keep sticking it to the drivers.”

Among the solutions Harkey noted are driver monitoring systems (DMS) that mitigate driver inattention by watching and reacting to driver behavior. “If someone’s driving with a drink in one hand and a telephone in the other hand, you’re not going to be in control.”

In an emergency requiring a sudden stop, Harkey noted, an “alert” to the driver tends to be less effective than an automatic emergency braking (AEB) system that hits the brakes without asking the driver for help. However, the hitch in this observation is the frequency of “false positives,” when the AEB does a panic stop that isn’t necessary.

Reimer said that the role of regulators, safety experts and “pseudo-regulators” like Harkey of the Insurance Institute for Highway Safety—and, by extension, the makers of AVs—is to mitigate the damage, “to reduce the force of collisions, not necessarily stop every accident.”

He added that the industry and its regulators need clear benchmarks that establish a balance between safety and convenience.

Responding to a question from moderator Ola Boström, vice president of government and regulatory affairs for Veoneer, the panel’s sponsor, about how autonomous driving in the future can balance safety and convenience, Harkey offered a distinctly non-technological response.

“Think about human beings,” he said, “and about how humans should not die as a result of making mistakes.”

David Benjamin, an author, essayist and veteran journalist, has been examining the human element in high technology for more than 20 years. His novels and non-fiction books, published by Last Kid Books, have won more than a dozen literary awards. Most recently, his coming-of-age novel, They Shot Kennedy, won the 2021 Midwest Book Award in the category of literary/contemporary/historical fiction.

