ChatGPT seems to have glitched, spitting out responses ranging from quirky to nonsensical.
The buzz started on a Tuesday when perplexed users flocked to the r/ChatGPT subreddit, sharing screenshots of the AI’s bizarre antics.
One user summed up the confusion, saying, “It’s not just you, ChatGPT is having a stroke.”
The community was then flooded with reports of ChatGPT’s erratic behavior, with users saying it was “going insane,” “off the rails,” and “rambling.”
Amidst growing Reddit chatter, a user named z3ldafitzgerald shared their eerie experience, stating, “It gave me the exact same feeling—like watching someone slowly lose their mind either from psychosis or dementia. It’s the first time anything AI-related sincerely gave me the creeps.”
As users delved deeper, the encounters grew stranger.
One user, puzzled by ChatGPT’s response to a simple question about computers, screenshotted the AI’s poetic but confusing answer: “It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest.”
Speculation about the cause of this digital oddity was rampant. Some wondered if the AI’s ‘temperature’—the parameter that controls how random its output is—had been cranked up too high, leading to its unpredictable responses, while others pondered if recent updates or new features were to blame.
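For context, temperature scales a model’s next-token scores before sampling: the higher it is, the flatter the resulting probability distribution, and the more likely the model picks improbable tokens. The following is a minimal sketch of that mechanism, with made-up scores for four hypothetical candidate tokens; it is an illustration of the general technique, not OpenAI’s actual implementation.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from logits after temperature scaling.

    Low temperature sharpens the distribution (predictable output);
    high temperature flattens it (erratic output).
    """
    scaled = [l / temperature for l in logits]
    # Softmax with the max subtracted for numerical stability
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Made-up scores for four hypothetical candidate next tokens
logits = [4.0, 2.0, 1.0, 0.5]

for t in (0.2, 1.0, 2.0):
    counts = [0, 0, 0, 0]
    for _ in range(10_000):
        counts[sample_with_temperature(logits, t)] += 1
    print(f"temperature={t}: {counts}")
```

Running the sketch shows the effect: at temperature 0.2 the top-scoring token is chosen almost every time, while at 2.0 the four counts grow much closer together, which is the kind of shift the ‘temperature cranked too high’ theory was gesturing at.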
Reflecting on the incident, Dr. Sasha Luccioni from Hugging Face pointed out the vulnerabilities of relying on closed AI systems: “Black box APIs can break in production when one of their underlying components gets updated. This becomes an issue when you build tools on top of these APIs, and these break down, too. That’s where open-source has a major advantage, allowing you to pinpoint and fix the problem!”
Cognitive scientist Dr. Gary Marcus highlighted that hallucinations might not be so amusing if these models were hooked up to critical infrastructure or defense systems: “The Great ChatGPT Meltdown has been fixed. Has OpenAI said anything about what caused it? With society’s increasing dependence on these tools, we should insist on transparency here, esp. if these tools wind up being used in defense, medicine, education, infrastructure, etc.”
This isn’t the first time ChatGPT has exhibited such behaviors. In 2023, GPT-4’s quality seemed to mysteriously shift and diminish. OpenAI acknowledged this to some extent but didn’t give the impression it knew why it was happening.
Later, some even speculated whether ChatGPT suffered from seasonal affective disorder (SAD), with one researcher finding that ChatGPT behaves differently when it ‘thinks’ the date is in December versus when it ‘thinks’ it is in May.
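For readers curious how such a test can be run, here is a rough sketch of the general approach, comparing average response length under two prompted dates. Everything here is an assumption rather than the researcher’s actual setup: the model name, the prompt wording, the date format, and the use of character counts (token counts would be a better measure) are all placeholders.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def avg_response_length(date_string, n=20):
    """Average completion length when the system prompt claims a given date."""
    lengths = []
    for _ in range(n):
        completion = client.chat.completions.create(
            model="gpt-4",  # placeholder; the original test's model is unknown to us
            messages=[
                {"role": "system", "content": f"The current date is {date_string}."},
                {"role": "user", "content": "Write a short story about a robot."},
            ],
        )
        lengths.append(len(completion.choices[0].message.content))
    return sum(lengths) / n

print("May:     ", avg_response_length("May 15"))
print("December:", avg_response_length("December 15"))
```

The only variable that changes between the two runs is the date the model is told, so any systematic difference in output length (beyond sampling noise) would be attributable to that date.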
ChatGPT will likely keep offering periodic reminders of the unpredictable nature of AI and why we shouldn’t take its ‘objectivity’ for granted.
A case of anthropomorphization?
ChatGPT’s erratic behavior also exposed our tendency to anthropomorphize AI, attributing human-like characteristics, emotions, or intentions to the technology.
Users’ descriptions, such as ChatGPT “having a stroke,” “going insane,” or “losing its mind,” immediately liken its behavior to our own.
Of course, ChatGPT is not sentient and doesn’t ‘suffer’ from any form of ailment.
Its glitches and unpredictable behavior simply surface through natural language, which tends to trick us into offering a human interpretation.