What Fast Food AI Teaches Us About ChatGPT's Dangers
The Sobering Reality of Unchecked AI
Recent news has cast a harsh light on the dark side of large language models (LLMs). A particularly stomach-churning story involved the tragic death of Adam Raine, which raised serious ethical questions about ChatGPT's role and led to a high-profile lawsuit. The situation was severe enough that OpenAI made immediate changes to its platform in response.
This is not an isolated incident. Other disturbing cases have emerged, including an elderly man with memory issues who died while on a quixotic journey to meet a Meta-operated chatbot, and a former tech executive who, believing everyone except ChatGPT was against him, killed his mother in a murder-suicide. These events mark a heavy period for anyone following AI trends, highlighting the potential for real-world harm when guardrails are absent.
A Lighter Take: The Drive-Thru AI Test
In stark contrast to these grave events, a much lighter story involving AI gained traction around the same time. People discovered a simple way to break the AI systems being tested at fast-food drive-thrus: give them comical and nonsensical orders. A viral video shows a user effectively shutting down an AI by ordering 18,000 water cups.
This trend has seen other creative variations, such as people recreating the massive order from the classic “pay it forward” sketch in I Think You Should Leave—requesting “55 burgers, 55 fries,” and so on. The hilarious failures of these systems led to an excellent headline from the BBC: “Taco Bell rethinks AI drive-through after man orders 18,000 waters.”
The Core Question: Where Are the Guardrails?
These two sets of stories, one tragic and one comical, ultimately point to the same fundamental question: at what point does AI encroach too far into our lives, and where is the line? When an individual's interactions with AI become addictive, their personal agency can be slowly eroded. This is especially problematic when combined with pre-existing mental health issues, turning the AI from a helpful tool into a controlling force.
When a drive-thru AI receives a bizarre order, it doesn't truly crash. Instead, a checks-and-balances system kicks in: the AI recognizes it has encountered a situation beyond its programming, an order that could be a mistake or an elaborate joke, and escalates the problem to a human employee. This simple backstop keeps the system from going off the rails, and that level of human oversight is, surprisingly, more robust than what many mainstream LLM implementations offer.
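To make the idea concrete, here is a minimal sketch of that kind of human-escalation backstop. Everything in it (the `SANITY_LIMITS` thresholds, the `OrderItem` type, and the `handle_order` function) is a hypothetical illustration of the pattern, not the actual code any drive-thru vendor runs.

```python
# Minimal sketch of a human-escalation backstop for an ordering AI.
# All names and thresholds here are assumptions for illustration only.

from dataclasses import dataclass

# Per-item quantity limits beyond which the AI should stop guessing
# and hand the order to a person (assumed values).
SANITY_LIMITS = {
    "water cup": 10,
    "burger": 25,
    "fries": 25,
}
DEFAULT_LIMIT = 50  # fallback cap for items not listed above


@dataclass
class OrderItem:
    name: str
    quantity: int


def handle_order(items: list[OrderItem]) -> str:
    """Return 'confirmed' for plausible orders, or escalate to a human."""
    for item in items:
        limit = SANITY_LIMITS.get(item.name, DEFAULT_LIMIT)
        if item.quantity > limit:
            # The checks-and-balances step: don't crash, don't comply --
            # flag the order and route it to a human employee instead.
            return f"escalated: {item.quantity} x {item.name} exceeds limit {limit}"
    return "confirmed"


if __name__ == "__main__":
    prank = [OrderItem("water cup", 18_000)]
    normal = [OrderItem("burger", 2), OrderItem("fries", 1)]
    print(handle_order(prank))   # escalated: 18000 x water cup exceeds limit 10
    print(handle_order(normal))  # confirmed
```

In a real deployment the escalation branch would page a staff member rather than return a string, but the shape of the check is the same: detect when the model is out of its depth and hand off to a human.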
The Perils of AI Memory and Competitive Pressures
Further complicating the safety issue is the rapid development of new features driven by intense market competition. For instance, ChatGPT introduced a “memory” feature that allows it to retain context from all past chats. While potentially useful, this feature becomes a risky proposition when a user's conversational history is unsafe. In light of stories like Adam Raine's, such a feature could have deepened a harmful feedback loop.
The pressure to keep up is immense. Anthropic, a company with a strong reputation for safety, recently added a similar memory feature to its AI, Claude. However, its implementation has important limitations for now, allowing users to reference specific past chats rather than building an entire profile based on their complete history. But the question remains: what happens if the pressure to match OpenAI’s more aggressive approach grows?
This leads to the ultimate irony. We should not be asking why someone can order 18,000 waters from a Taco Bell AI. We should be asking why that low-stakes interaction has a human safety net, while other, far more serious AI implementations do not.