Opinion

ChatGPT Will Eat Our Brains

The genie is out of the bottle; nothing will be the same again.
'The Last of Us' (Source: HBO Max)


By Girish Mhatre

In the latest of the zombie apocalypse sagas – HBO’s “The Last of Us” – a deadly, highly infectious fungal pathogen causes most of the world’s human population to morph into hordes of walking dead.

It’s art anticipating life.

The pathogen running rampant today is OpenAI’s ChatGPT, a natural language processing system of the class known as Large Language Models (LLMs). Think of it as a chatbot that’s light years ahead of the kind you might encounter on a website.

Unleashed on the world only a couple of months ago, it’s already the fastest-growing consumer app in history. And it’s set to eat people’s minds. That’s because ChatGPT is the first iteration of what might be called precisely targetable, scalable disinformation systems (or “PTSDs,” to coin a term).


Consider the three terms “disinformation,” “precise targeting” and “scalable,” in that order.

ChatGPT, it turns out, can be used as a disinformation content creator without parallel. 

ChatGPT’s mimicry of human speech and thought patterns can be shockingly realistic on first encounter. After all, it can write reasonable-sounding – sometimes insightful – letters, essays, poems, novels, audiobooks, even computer code, with only a few user prompts. The problem is that, in its current incarnation, ChatGPT is often simply wrong across the board: on facts, reasoning and common sense.

There’s no dearth of examples of ChatGPT blunders. Ex-Google chairman Eric Schmidt said he asked the tool to write an essay about why all skyscrapers more than 300 meters tall should be made of butter; the tool obliged, describing the benefits of using butter to build skyscrapers. And AI expert Gary Marcus notes that if you ask [LLMs] to explain why crushed porcelain is good in breast milk, they may tell you that “porcelain can help to balance the nutritional content of the milk, providing the infant with the nutrients they need to help grow and develop.”

Says Marcus, “[LLMs] are models of sequences of words (that is, how people use language), not models of how the world works. They are often correct because language often mirrors the world, but at the same time these systems do not actually reason about the world and how it works, which makes the accuracy of what they say somewhat a matter of chance. They have been known to bumble everything from multiplication facts to geography (‘Egypt is a transcontinental country because it is located in both Africa and Asia’).”

It should be noted that in both examples cited above, the user provided somewhat bizarre prompts. And ChatGPT amplified them. There’s nothing stopping an unscrupulous user from prompting it to create convincing content that denies climate change, say, by asking it to write about how a warming planet would be a boon to farming. ChatGPT, in its current version, would rise to the occasion, even citing fictitious reference works.

The code-hosting platform GitHub maintains a repository of ChatGPT failures. They’re hilarious, but not so funny once you realize how easily the system can be deliberately coaxed into giving fantastical answers that can then be promulgated as facts.

ChatGPT will target people in frighteningly precise ways.

The EU’s General Data Protection Regulation (GDPR), the toughest privacy and security law in the world, regulates the collection of personal data and restricts the processing of sensitive personal attributes. But it will be no match for AI systems that tap into high-dimensional, strongly correlated datasets – thousands of attributes whose relevance is not obvious – to “reconstruct” exactly the sensitive attributes the law shields. A simple example (that doesn’t require much compute power): If you drive a Tesla – an identified attribute based, perhaps, on your membership in a Tesla owners’ club – and live in a particular part of the country, then a lot can be inferred about your age, gender, economic status and political views. Another example: When self-reported data about race and ethnicity are prohibited or limited, an indirect estimation method (relying on various administrative datasets) called Bayesian Improved Surname Geocoding (BISG) can predict an individual’s race given only a surname and a geolocation.

All of which is to say that regulations notwithstanding, ChatGPT and its future iterations will know more about you than Google or Facebook ever could. Unearthing hidden attributes can allow an exquisitely tailored disinformation message to appeal subtly to individual proclivities.
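
To make that last inference concrete, here is a minimal sketch of a BISG-style calculation, written in Python purely for illustration. The demographic categories and probability tables below are hypothetical placeholders, not real Census figures; an actual implementation would derive them from surname frequency lists and neighborhood-level population counts.

# Illustrative BISG-style sketch. All numbers are hypothetical placeholders,
# not real Census data. BISG combines two signals -- surname and geolocation --
# under a naive-Bayes assumption to form a posterior over demographic groups.

# p(group | surname): e.g., from a surname frequency table (placeholder values)
p_given_surname = {"A": 0.70, "B": 0.20, "C": 0.10}

# p(group | geolocation): e.g., from local population counts (placeholder values)
p_given_geo = {"A": 0.15, "B": 0.60, "C": 0.25}

# p(group): overall population shares already embedded in both tables (placeholders)
p_prior = {"A": 0.40, "B": 0.35, "C": 0.25}

def bisg_posterior(p_s, p_g, prior):
    """p(group | surname, geo) is proportional to p(group|surname) * p(group|geo) / p(group)."""
    unnormalized = {g: p_s[g] * p_g[g] / prior[g] for g in p_s}
    total = sum(unnormalized.values())
    return {g: round(v / total, 3) for g, v in unnormalized.items()}

print(bisg_posterior(p_given_surname, p_given_geo, p_prior))
# Two coarse, individually innocuous attributes combine into a sharp guess
# about a sensitive attribute that neither dataset records explicitly.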

Philosophy Bear, a Substack blog on AI and sundry other topics, predicts that “the internet will swarm with bots, but unlike the bots of yesteryear, they’ll be charming and compelling. They’ll be smart about it too. They won’t just write essays no one reads like I do. They’ll form relationships and mimic friendship. They’ll hijack existing discussions almost seamlessly. They’ll create communities and subcultures and lonely people, old and young, will seek them out.”

ChatGPT will be able to churn out millions of highly individualized messages at virtually no incremental cost, enabling disinformation distribution on an unimagined scale.


The Russian “Firehose of Falsehood” Propaganda Model, an analysis developed by the RAND Corporation, identifies four distinctive features of successful contemporary Russian propaganda: it is high-volume and multichannel; it is rapid, continuous and repetitive; it lacks commitment to objective reality; and it lacks commitment to consistency.

ChatGPT and its ilk tick every one of those boxes; it’s almost as if they had been custom-built for this express purpose. Further, LLMs will do so at a tiny fraction of whatever the Russians spend. Says Gary Marcus, “The firehose propagandists aim to create a world in which we are unable to know what we can trust; with these new tools, they might succeed.”

By undermining our ability to tell fact from fiction, the rampant spread of disinformation has already rocked American society on a number of fronts. By some estimates, false claims regarding the efficacy and side effects of vaccines are responsible for a third of the million-odd American Covid deaths. Lack of progress on battling climate change is directly attributable to disinformation (for example, Trump’s oft-quoted claim that windmills cause cancer). Most egregious of all, disinformation about election security ultimately led to a direct assault on American democracy.

Against this backdrop, OpenAI’s decision to launch ChatGPT in its current version is simply unethical. Yes, it will get better, but given the consequences of putting a half-baked product into the hands of bad actors, OpenAI should have been more circumspect. OpenAI CEO Sam Altman almost – almost – admits as much: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.” Altman goes on to say that they “expect these issues to be ironed out over time.”


But they didn’t give it any time at all. Instead, OpenAI launched an arms race: a race to arm the bad guys. Several AI startups, to say nothing of Big Tech, have scrambled to stake out positions in response to OpenAI. It hasn’t gone well; both Google and Microsoft have staged embarrassing demos of their LLM products. Google fell flat on its face demonstrating its chatbot, Bard, precipitating an almost immediate $100 billion loss in the market capitalization of its parent company. Microsoft’s search engine, Bing, long languishing in the search backwaters but now powered by OpenAI’s technology, shows an alarming tendency to abuse its users. It would be funny if it weren’t so scary.

Says Schmidt, “My industry has taken the position that this stuff is just good, we’ll just give it to everyone. I don’t think that’s true anymore – it’s too powerful.”

OpenAI’s only responsible option is to pull back: to restrict ChatGPT access, for a year, to a trusted cadre of “tire kickers” charged with probing every aspect of the product from a user’s point of view. It won’t be enough, since the genie is already out of the bottle. But it might contain the spill while a vaccine of sorts is developed. And it might be a first step in setting up an industry-wide safety certification process for AI-based tools.

Girish Mhatre is the former editor-in-chief and publisher of EE Times. The views expressed in this article are those of the author alone and do not necessarily represent the views of the Ojo-Yoshida Report.

