
When, Where and How AI Should Be Applied

Phil Koopman dissects strengths and weaknesses of machine learning-based AI
Machine Learning Capabilities for Applications


By Junko Yoshida

AI does amazing stuff. No question about it.

But how hard have we really thought about “machine-learning capabilities” for applications? 

Phil Koopman, professor at Carnegie Mellon University, delivered a keynote on Sept. 11, 2024, at the Business of Semiconductor Summit (BOSS 2024), concentrating on big-picture AI capabilities.

Instead of parsing AI’s underlying mechanisms and models, Koopman intentionally lifted his talk to a very high level.

Why? 

Because AI is no panacea.

For anyone developing a new AI-infused product — and most likely you are — it’s crucial to remain vigilant about the strengths and weaknesses of machine learning.

Machine learning-based AI can obviously perform well when applied to “common case” products. But even a common product could end up generating outcomes nobody foresaw or imagined. Remember that “good enough” performance can be bad enough, particularly in safety-critical products.

In his presentation, Koopman stacked up examples of the good, the bad and the ugly in AI applications. Depending on when, how and where machine learning is applied, the results can range from amazing to devastating.

In this era, AI is worshiped as the universal problem-solver. Even when machine learning-based AI fails to produce accurate answers, we often let it pass, trusting that AI will “learn” over time. 

Here’s the fly in the ointment.

As we apply human behavioral terms to machine learning capabilities, saying that AI is “smart,” has “bias,” continues to “learn,” and even “hallucinates,” we tend to forget that machine learning-based AI is built on statistics. Koopman stressed the fallacy of our “projecting ‘truth’ and awareness into AI.” 

AI, in fact, is not our friend. It certainly isn’t human.

As Koopman made clear, “Machine Learning-based AI has no self-awareness.” AI doesn’t even “understand” what it is doing. 

Koopman’s talk bluntly brings AI’s capabilities — too often left unquestioned — down to earth. He said that AI’s “hallucinating” should be called, more accurately, “bullshitting.”


Junko Yoshida is the editor in chief of The Ojo-Yoshida Report. She can be reached at [email protected].

Copyright permission/reprint service of a full Ojo-Yoshida Report story is available for promotional use on your website, marketing materials and social media promotions. Please send us an email at [email protected] for details.
