Opinion

What’s Sam Altman’s Game?

Enough with the handwringing; it’s time to shut down public access to AI
Sam Altman, CEO of OpenAI


by Girish Mhatre

Twenty years ago this month, President George W. Bush stood on the deck of an aircraft carrier and declared victory in the Iraq war – a war that the United States Congress was duped into supporting with the help of some sketchy “evidence” presented by Secretary of State Colin Powell just a few months earlier. The evidence did not convince everybody: Twenty-three Senators and 133 Representatives refused to sign on. But had the AI tools of today existed back then, the vote might have been unanimous. Today, we can easily manufacture as much evidence as we need to start another war, perhaps even a second civil war.

Nearly two months ago, an open letter from the Future of Life Institute, signed by a virtual who’s who of AI researchers and others, including such tech luminaries as Elon Musk and Steve Wozniak, urged a six-month “pause” in the development of large language models more powerful than OpenAI’s GPT-4. The letter warned that AI tools present “profound risks to society and humanity.” Bill Gates and others demurred, but needlessly, as it turns out. Unclear about what could be done during those six months, and by whom, the letter landed with a thud. Now that the initial flurry of excitement and debate has ebbed, few people are paying it much attention.

The internet will be flooded with false photos, videos and text, and the average person will ‘not be able to know what is true anymore.’

Geoffrey Hinton

Meanwhile, the chorus of cautionary voices grows louder. Geoffrey Hinton, a pioneer of the deep learning techniques that underpin large language models (LLMs), dramatically resigned from his post at Google to warn the world about AI. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton said of his work.

That rationale, increasingly bandied about in AI circles, doesn’t make sense, of course. Your having done it wouldn’t stop anyone else from doing it; perversely, it could even spur rival developments. More likely, Hinton was channeling the “father” of the atomic bomb, Robert Oppenheimer, who famously observed that “when you see something that is technically sweet, you go ahead and do it.”

Like others, Hinton worries that AI may someday replace human workers and that, further down the road, more advanced AI technologies may pose a threat to humanity itself, because they often learn unexpected behavior from the vast amounts of data they analyze.

His immediate concern, however, is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.” Says Hinton, “It is hard to see how you can prevent the bad actors from using it for bad things.”

But while the collective wringing of hands and gnashing of teeth is on public display, nobody has the faintest idea of what to do next. The genie is well and truly out of the bottle.

Sam Altman could be called the Robert Oppenheimer of the age of AI. In his congressional testimony last week, Altman, CEO of OpenAI, the developer of ChatGPT and GPT-4, continued his prevarication. “There’s real danger,” he warned, while tooting his own horn about what he had wrought. “The thing that I try to caution people the most is what we call the ‘hallucinations problem,'” Altman said. “The model will confidently state things as if they were facts that are entirely made up.” Yet while he believes the technology comes with real dangers, he also called it potentially “the greatest technology humanity has yet developed,” one that could drastically improve our lives.

Weirdly, he appeared to be pleading with the lawmakers for government intervention that “will be critical to mitigate the risks of increasingly powerful” AI systems. “Stop me,” he seemed to be saying, or I’ll keep making larger, more powerful – and more dangerous – new products.

Is this some bizarre form of blackmail or simply BS? 

There are certain advantages to being a machine. We humans are limited by our input-output rate — we learn only two bits a second, so a ton is lost. To a machine, we must seem like slowed-down whale songs.

Sam Altman

Perhaps the real issue that Altman seems to be dancing around – a clue to his mindset – can be found in the short history of his company. OpenAI was started as an open-source nonprofit to keep us safe from bad AI – to prevent artificial intelligence from accidentally wiping out humanity – with $50 million in seed capital from none other than Elon Musk. Its mission attracted star talent. But then, in 2019, to the chagrin of many who believed in the original mission, Altman transitioned it to a for-profit, closed-source company. He needed the money, he said, to offer more attractive salaries and to pay for the huge amounts of computing power that advanced projects require.

Elon Musk, who has since resigned from the OpenAI board, likens Altman’s move to a hypothetical nonprofit created to save the Amazon rain forest transforming itself into a lumber company that makes money from the forest. “Is that what I gave the money for?” he asks.

OpenAI’s transition may make it sound as if Altman is in it for the money. He’s not. Rather, it’s power that Altman wants – power to save the world by building intelligent machines. Altman believes humans are innately inferior to machines. “There are certain advantages to being a machine. We humans are limited by our input-output rate – we learn only two bits a second, so a ton is lost. To a machine, we must seem like slowed-down whale songs.”

He added: “We need to level up humans, because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever. What a time to be alive!” And, just in case things go badly, Altman’s prepared. “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

Beware of megalomaniacs with good intentions.

Altman and his ilk (the heads of companies like Midjourney, Stability AI and Hugging Face, plus the purveyors of the handful of open-source ChatGPT clones) could do the right thing: They could voluntarily restrict access to their products.

That’s not likely, at least in the case of OpenAI; it runs counter to a mass-scale experiment that Altman is running. Here’s Altman’s logic, as he expressed it to the podcaster Lex Fridman: “We are building in public and we are putting out technology because we think it is important for the world to get access to this early, to shape the way it’s going to be developed, to help us find the good things and the bad things. And every time we put out a new model – we’ve just released this with GPT-4 this week – the collective intelligence and ability of the outside world helps us discover things we could not have imagined, things that we could have never done internally. And [it exposes] real weaknesses that we must fix.”

Unfortunately, that sounds like letting a group of toddlers play with loaded handguns.

Nuclear takes a lot of money, access to restricted raw materials and know-how.
In contrast, almost anybody can use large models to create havoc.

Altman is doubling down; he’s considering open-sourcing ChatGPT’s code. But to what end? To make life easier for bad guys?

Governments can stop the spread of technology. Nuclear technology, for example, has largely been contained successfully. But nuclear is not a cottage industry. It takes a lot of money, access to restricted raw materials and know-how.

In contrast, as things stand today, almost anybody can use large models to create havoc.

It can happen here. Another presidential election is looming. Imagine the impact of AI tools mass-producing political content, fake-news stories and scriptures for new cults. Democracy and the concept of truth itself may not survive.

If OpenAI and others don’t do it voluntarily, then the Justice Department should shut them down. Altman wants regulation; he can have the only kind that would work – shut down access to these products.

Eliezer Yudkowsky, widely regarded as a founder of the field of Artificial General Intelligence alignment, weighed in with this impassioned essay:

The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

The choke point to enforce this ban would be the infrastructure.

A less extreme position – one that does not throw out the baby with the bathwater – is to allow language models to be used in domain-specific applications, but to shut down general consumer access. Domain-specific AI applications are specialized models trained on data from a particular industry or task, allowing them to understand and generate language specific to that domain.

The recently announced ServiceNow-Nvidia agreement aims to create custom LLMs to inject more intelligence into workflow automation within the enterprise. Adobe’s new text-to-image generative artificial intelligence (AI) model, Firefly, has been trained on vetted content – mainly Adobe Stock images – which makes it less likely to hallucinate than competing models built with stolen or unauthorized content and content drawn from the toxic stew that is the Internet. (Midjourney founder David Holz recently admitted that his company didn’t have permission to use the hundreds of millions of images used to train its AI image generator.) Under the right conditions, enterprises could also “self-host” large language models trained on company-specific data.
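To make the “self-hosting” idea concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and an open model already downloaded onto company hardware; the directory path and prompt below are hypothetical, purely for illustration:

```python
# A minimal sketch of an enterprise "self-hosted" language model:
# the weights live on company hardware, prompts never leave the premises,
# and access can be gated like any other internal system.
# Assumes the Hugging Face transformers library; /opt/models/local-llm is a
# hypothetical path to an already-downloaded open model.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/local-llm"  # hypothetical on-premises model directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)

def complete(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion entirely on local hardware."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Illustrative internal use only; no public endpoint is exposed.
print(complete("Summarize this support ticket for the escalation team: ..."))
```

A model deployed this way, behind the company firewall and fed only vetted, domain-specific data, is the opposite of open consumer access.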

The choke point to enforce this ban would be the infrastructure – the cloud computing platforms (Microsoft, AWS, etc.) would be barred from running publicly accessible AI, and the chip suppliers would not be allowed to sell their chips into datacenters that do. Jensen Huang, CEO of Nvidia, the dominant supplier of GPUs, boasts of having personally delivered OpenAI’s first supercomputer. Let’s make sure he doesn’t do that again. Nvidia is already prohibited from selling certain of its processors to China. Now it would be equally enjoined from selling them into certain datacenters.

What this means, of course, is that if you write professional content like blog posts, articles and technical documents, or any media content, or school essays, or graduate theses, you’ll have to go back to doing it with your native intelligence, without the aid of the artificial variety. That’s a small price to pay when truth itself is at stake.


Girish Mhatre is the former editor-in-chief and publisher of EE Times. The views expressed in this article are those of the author alone and do not necessarily represent the views of the Ojo-Yoshida Report.

