Am I a Bot? Part II
ChatGPT is scary good. Elon Musk
Every so often a technology captures the world’s imagination. The Economist
[Generative AI] is as important as the PC, as the internet. Bill Gates
I think it will be the most significant technological transformation in human history. Sam Altman, CEO of OpenAI, the developer of ChatGPT
AI relieves people of their verbal and mental powers … [it is] a thing of perfect lunacy. Writer Deric Bownds
[AI] is an insult to life. Film director Guillermo del Toro
Previously in this series: Am I a Bot? Part I
It was my good fortune to be called a bot exactly at the moment a chatbot called ChatGPT exploded onto the scene as perhaps the most fascinating – and scary – technology ever.
ChatGPT was released to the public on November 30, 2022, and it gained 100 million users faster than any consumer application in history, faster than Instagram, faster than TikTok. It may well be “an insult to life,” but if so, it’s the biggest insult in the history of life.
What exactly is ChatGPT? The “GPT” stands for “generative pre-trained transformer,” and it is the proprietary generative AI product of OpenAI, a startup AI company now valued at north of $30 billion. ChatGPT is built on neural networks loosely modeled on the human brain, and it uses machine-learning algorithms to create – to “generate” – new material: text, images, audio, video.
ChatGPT is a “large language model” (LLM) version of AI, which means it was trained on vast amounts of text scraped from the internet. It can learn from its mistakes, partly through curation by actual human beings who both program it and tweak it as it goes along, and partly by improving on its own, much as Alan Turing predicted.
At its most basic level, ChatGPT – like other generative AI systems – works by guessing what the next word in a sentence should be, based on how often that word follows the preceding ones across its vast training data. To take a simple example, if you ask ChatGPT whether racism is bad, it will quickly notice that, across billions and billions of webpages, racism is condemned, while across perhaps a few million pages racism is celebrated. It will therefore tell you something like “Racism is harmful and dehumanizing to individuals and groups” and so on.
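To make that “guessing the next word” idea concrete, here is a minimal sketch in Python: a toy model that simply counts which word follows which in a tiny made-up corpus and predicts the most common continuation. This illustrates only the frequency principle described above; ChatGPT itself uses a neural network, not a lookup table, and nothing below is OpenAI’s actual code.

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "billions and billions of webpages."
corpus = (
    "racism is harmful . racism is wrong . racism is harmful . "
    "kindness is good ."
).split()

# Count how often each word follows each preceding word (a bigram model).
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("racism"))  # "is"
print(predict_next("is"))      # "harmful" (it follows "is" more often than "wrong" or "good")
```

A real LLM replaces the lookup table with a neural network that can generalize to word sequences it has never seen verbatim, but the training objective is the same: predict the most likely next token.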
The New York Times reported that “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public,” and few would disagree. One reason for all the hoopla is that, although generative AI has been making important advances for at least a decade, most of the products that have been released have targeted narrow spheres.
For example, DeepL, an AI system developed by a German firm, translates text from one language to another. Released in 2017, DeepL transformed how translators work so profoundly that today almost no professional translator works from scratch. Instead, DeepL produces a draft translation, and the human translator corrects errors, improves the flow of the text, and so on.
Similarly, AlphaCode, developed by Google’s DeepMind, is an AI system that writes computer code so well that, by one estimate, roughly 40% of the code produced by developers who use such tools is actually written by a bot. (Microsoft’s GitHub Copilot is a similar system.) And as with DeepL, AI has remarkably improved the productivity of its users.
There are also AI-assisted tools for mathematics – the Lean proof assistant and Google’s Minerva model – that have allowed mathematicians to improve the rigor of their proofs and calculations.
But most of us aren’t translators or coders or mathematicians, and as a result the astonishing capabilities of modern AI systems came as a shock when ChatGPT was released and seemed so, well, human.
But before we get to what is and isn’t “human” about generative AI, let’s remember that a core challenge for all startup AI firms is the massive computational power required by AI, especially large language models. As a result, virtually every AI startup has had no choice but to align itself with one or another of the huge tech firms that already have that capacity: OpenAI is aligned with Microsoft and Stability AI with Amazon, while Google is developing its own chatbot, Bard. The Chinese internet giant Baidu is developing its own LLM, called Ernie.
All this annoys people who are already bothered by the power of Big Tech, but it’s simply a fact of life. Unless, say, the government wants to spend $30 billion or so to build its own computer systems and make them available to startups, it seems inevitable that Big Tech will simply get bigger and, to the extent AI becomes commercially successful, ever more powerful.
That’s not to say that even the tech giants don’t fall on their AI faces occasionally. Google, for example, pioneered AI but seems to have dropped well behind Microsoft/OpenAI. That happened not so much because Google’s technology was worse, but because of Google’s corporate culture.
Google’s young, progressive workforce has been extremely skeptical of AI for a host of reasons, some of them perfectly understandable: fear that chatbots would promote bias or prove discriminatory; fear that the spread of AI would only further empower the white, male workers who have been its primary users; worry about giving the big tech firms even more power; and so on.
The problem was that Google’s management allowed these concerns to slow the firm’s progress on AI, rather than managing the concerns. As a result, Google lost many top AI scientists to other AI firms, OpenAI in particular. Then, when ChatGPT was released, Google was forced to declare a “Code Red,” pulling out all the stops to try to catch up with Microsoft and preserve its virtual monopoly on search – and, more important, on the ad revenue search generates.
But when you rush things, bad stuff can happen, and when Google mounted a hastily organized event to announce its answer to ChatGPT – Bard – bad things did in fact happen, as we’ll see next week.
Next up: Am I a Bot? Part III