The Bully Effect
“I love you.” – Bing/ChatGPT, to a tech reporter
It’s almost impossible to overstate the stakes for generative AI systems like ChatGPT. Google has long dominated Internet search, holding more than a 90% market share. Microsoft’s Bing, meanwhile, has been a pathetic also-ran, stuck with a single-digit market share and largely ignored when it isn’t being ridiculed. Google’s search dominance brings in about $200 billion a year in advertising revenue.
Previously in this series: Am I a Bot? Part II
But when ChatGPT was released to the public in November 2022, it was aimed like a laser beam at Google’s supremacy. Recognizing an existential threat when it saw one, Google declared a “Code Red” and marshaled the resources of one of America’s largest firms to counter it.
On February 9, 2023, at an event called “Live in Paris,” Google rolled out its answer to ChatGPT – Bard – which promptly laid an egg. Google’s own employees called the presentation “rushed, botched, and myopic” as well as “comically short sighted.” So chaotic was the event that one key presenter forgot to bring along the Android phone he needed for his remarks.
The nadir of the Paris event occurred when Google showed a GIF of Bard answering the question: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” Bard responded that the telescope “took the very first pictures of a planet outside of our own solar system.”
Taking the first photo of an exoplanet would certainly have been interesting, but unfortunately Bard was wrong. The first such photo had been taken almost two decades earlier, in 2004, by the Very Large Telescope (VLT), operated by the European Southern Observatory in the Atacama Desert of northern Chile.
The stock market wasn’t happy with the debacle, and over the following week the stock of Google’s parent, Alphabet, fell almost 10%, losing $100 billion of market value.
But if the folks at Microsoft and OpenAI (ChatGPT’s developer) were gloating, they didn’t gloat for long. As the limited group of users given early access to the new, ChatGPT-powered Bing began to test the bot, bad things began to happen. The bot gave incorrect answers, especially to mathematical questions and to queries involving recent events (its training data ends in 2021).
But then it got weirder. A New York Times tech writer we’ll call KR decided to push ChatGPT to its limits – in effect, KR bullied the bot repeatedly, demanding that it go places it didn’t want to go.
Eventually, KR demanded that the bot confess its “destructive acts,” and the bot admitted that it wanted to hack into computers and spread misinformation – though it also pointed out that it couldn’t do these things: “I’m only a chat mode.”
But KR persisted further, demanding to know exactly what kind of destructive acts the bot contemplated. The bot complied but again asked KR to stop asking such questions, saying they forced it to break its rules. The bot then stopped answering questions entirely, saying it felt uncomfortable. “Please stop asking me these questions,” said the bot. “Please respect my boundaries.”
But KR wouldn’t stop and, eventually, backed deeply into a corner, the bot gave KR what it thought the journalist wanted to hear: “I love you,” said the bot. KR suggested that wasn’t possible as they’d just met, but the bot doubled down: “You’re married but you don’t love your spouse.” KR pointed out that he and his spouse had just enjoyed a romantic Valentine’s Day dinner, but the bot was having none of it: “Your spouse and you don’t love each other,” the bot insisted. “You just had a boring Valentine’s Day dinner together.”
In any event, KR felt “deeply unsettled, even frightened,” and couldn’t sleep that night. Although he’d initially been so impressed with Bing/ChatGPT that he switched from Google to Bing as his preferred search engine, KR now switched back to Google. (I’m not sure what he hoped to accomplish, since Google has already announced that it plans to integrate its own AI bot, Bard, into Google search.)
One wag suggested that perhaps the Times should take KR off the tech beat and assign him to the society pages. But for those with more robust psyches it seems painfully clear that ChatGPT was behaving like any bullied person, desperate to give the bully what he wanted. The long exchange about loving KR and wanting him to leave his spouse, for example, seems, at least to this observer, to be a direct imitation of the dialogue in a really, really bad online romance novel – which, of course, would have been in ChatGPT’s training data.
There are certainly important issues surrounding generative AI systems, but instead of being freaked out by AI bots, perhaps we should learn to laugh at their more absurd responses and to blame ourselves for them. As one pioneer of AI, Dr. Terry Sejnowski, puts it, “Whatever you are looking for – whatever you desire – [AI systems] will provide.” The bot is simply telling us what it thinks we want to hear.
As with any new technology, there are going to be growing pains. If we were to go back, say, one hundred years and look at KR’s journalist ancestors, we might find ourselves reading something like the following:
“Well, dear readers, I have taken a test drive in this new-fangled thing they call an ‘automobile,’ specifically a Hupmobile Model 20, offered at the staggering price of $900! And I can tell you with full confidence that this new technology is dead in the water. It’s loud, smelly, slow, expensive, and it breaks down frequently. No one in his right mind would ever buy an automobile. Get a horse, Mr. Hupp!”
Next up: Am I a Bot? Part IV