
ChatGPT Made Up A Feature So Developers Built It

2025-07-23 · Matthew Gault · 3 minute read
AI
Software Development
Technology

What happens when your users start demanding a feature that doesn't exist, all because a popular AI told them it was real? This isn't a hypothetical question. It's the new reality for some software developers, a phenomenon one developer has dubbed "gaslight-driven development."

The Case of the Hallucinated Feature

Adrian Holovaty, a developer behind the sheet music app Soundslice, recently noticed a bizarre pattern in his site's error logs. Day after day, users were attempting to upload ASCII tablature—a simple text-based format for guitar music—even though Soundslice had never supported that format. The uploads contained a crucial clue: many were screenshots of conversations with ChatGPT, in which the AI confidently generated ASCII tabs and instructed users to upload them to Soundslice.

"It was around 5-10 images daily, for a period of a month or two," Holovaty told 404 Media. "Definitely enough where I was like, ‘What the heck is going on here?’"

The Developer's Dilemma: Fight or Build?

Faced with a persistent stream of users being misled by an AI, Holovaty's team had a choice to make. They could ignore the issue, clutter their site with warnings, or simply build the feature ChatGPT had invented. Given that the fix would only take a few hours, the decision was clear.

"The main reason we did this was to prevent disappointment," he explained. The team was motivated by the "galling reality that ChatGPT was setting Soundslice users up for failure." The options were:

  1. Ignore it and let users get frustrated.
  2. Add annoying banners to the site explaining the AI was wrong.
  3. Spend a few hours and build the feature.

They chose option three, effectively making the AI's hallucination a reality.

Welcome to 'Gaslight-Driven Development'

This incident isn't isolated. Developer Niki Tonsky coined the term "gaslight-driven development" to describe exactly this dynamic. On his own project, a database for frontends called Instant, his team ran into a similar problem: LLMs writing code against the app kept insisting that its update method was named "create" rather than its actual name, "update." Rather than fight the tide, Tonsky's team simply added "create" as an alias.
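The alias approach is easy to picture. The following is a minimal, hypothetical sketch—the `Store` class, `Row` type, and method names here are illustrative and not Instant's actual API—showing how a team might keep the wrong-but-popular name working by delegating it to the real method:

```typescript
// Hypothetical sketch of "gaslight-driven development": LLM-generated code
// keeps calling a method "create", so we add it as an alias of "update".
type Row = { id: string; [key: string]: unknown };

class Store {
  private rows = new Map<string, Row>();

  // The real method: upsert a record by id.
  update(row: Row): Row {
    this.rows.set(row.id, row);
    return row;
  }

  // Alias added because LLMs insist this method is called "create".
  // It simply delegates, so both names stay in sync forever.
  create(row: Row): Row {
    return this.update(row);
  }

  get(id: string): Row | undefined {
    return this.rows.get(id);
  }
}
```

Because the alias delegates rather than duplicates logic, there is only one code path to maintain—the cost of appeasing the model is a few lines.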

"It’s not programming for AI, but AI as a tool changes how we do programming," Tonsky noted, suggesting that developers will increasingly have to account for the 'tastes' of LLMs.

The Unseen Influence of LLMs on Software

Holovaty's experience highlights the growing and unchecked power of LLMs to influence consumer behavior. There's currently no formal process to tell a model like ChatGPT that it's wrong, unlike the ability to request a site's removal from Google's index.

"It's making product recommendations—for existent and nonexistent features alike—to massive audiences, with zero transparency into why it made those particular recommendations. And zero recourse," Holovaty said.

He compared the situation to dealing with an overzealous sales team that constantly promises features that don't exist. While he uses machine learning in Soundslice, Holovaty remains skeptical of trusting LLMs for production code, concluding, "Plus: writing code is fun! Why would I choose to deny myself fun? To appease the capitalism gods? No thanks."

