
Using AI Hallucinations To Improve API Design

2025-07-08 · 4 minute read
AI
Software Development
API Design

A novel approach to software development is gaining traction in the programming community: using the 'hallucinations' of AI models like GPT-4 to refine and improve Application Programming Interfaces (APIs). Instead of viewing AI's incorrect assumptions as bugs, developers are treating them as a source of creative and intuitive design ideas.

Harnessing AI's 'Plausible' Guesses

The core concept, shared in a recent online discussion, involves a paradigm shift in how developers interact with AI for coding. Rather than meticulously explaining an API's functionality to an AI, a developer can present it with some example code and ask it to add a new feature. The AI, lacking full knowledge, will guess how the API should work.

"Sometimes it comes up with a better approach than I had thought of," the original poster explained. "Then I change the API so that its code works."

This method leverages what neural networks excel at: generating highly plausible, human-like outputs, even if they aren't factually accurate. This "hallucination" is essentially a form of creativity. If an AI consistently intuits that an API should have a certain function or structure, it suggests that this design is more conventional or user-friendly. This can be a powerful signal that the developer's own design might be confusing or unintuitive.

Conversely, asking an AI to explain what existing code does can also be a valuable test. If it gets the explanation wrong, it's a strong indicator that the API is confusing.

Real-World Examples from the Community

Other developers chimed in with similar experiences, validating this unconventional technique.

One user shared an anecdote about their custom Python image-processing library. They had used an esoteric function name, image_get(), instead of the more common imread() found in popular libraries like SciPy and OpenCV. Whenever they asked ChatGPT for help with scripts, the AI would consistently and incorrectly default to calling mylib.imread(), revealing their deviation from a well-established convention.
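The straightforward response to a signal like this is to adopt the conventional name. A minimal sketch, assuming a hypothetical library (the names `mylib`, `image_get`, and the stubbed body are illustrative, not from the original post), might alias the familiar name while keeping the old one for backward compatibility:

```python
# mylib.py -- hypothetical sketch; the library and function bodies are
# illustrative stand-ins for the commenter's real image-processing code.

def image_get(path):
    """Original, unconventional name for loading an image."""
    # Real code would return pixel data; stubbed here for illustration.
    return f"pixels from {path}"

# Adopt the convention the AI kept guessing: expose the name that
# OpenCV and scikit-image users already expect, without breaking
# existing callers of image_get().
imread = image_get
```

After this change, the AI's "hallucinated" call `mylib.imread(...)` simply works, and so does every human user who reaches for the conventional name out of habit.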

Another developer found a bug while asking an AI to write unit tests. The AI failed, but how it failed was instructive:

"It used nonsensical parameters to the API in a way that I didn't realize was possible (though obvious in hindsight)... It was close enough for me to realize that 'hey, I never thought of that possibility'. I needed to fix the function to return a proper error response for the nonsense."

This highlights the AI's ability to act as an unexpected quality assurance tester, probing for edge cases a human developer might overlook.
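The fix the commenter describes is validating input up front and failing with a clear error rather than misbehaving deeper in the call stack. A minimal sketch, assuming a hypothetical function (the original post names neither the API nor the nonsensical parameters):

```python
# Hypothetical sketch: resize_image and its parameters are illustrative.
# The pattern shown is the one the commenter describes -- an AI-written
# test passed arguments the author never anticipated, so the function
# now rejects them with a proper error.

def resize_image(width, height):
    """Return the validated target dimensions for a resize operation."""
    # Guard against nonsense input (e.g. negative or zero dimensions)
    # instead of letting it propagate into the resize logic.
    if width <= 0 or height <= 0:
        raise ValueError(
            f"dimensions must be positive, got {width}x{height}"
        )
    return (width, height)  # stub for the real resize logic
```

The design point is the error contract, not the arithmetic: once the function documents and enforces what counts as invalid input, both human callers and AI-generated test suites get a deterministic failure instead of undefined behavior.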

AI as a Partner, Not a Replacement

The discussion evolved to cover the broader role of AI as a tool for human augmentation rather than replacement. One commenter drew a parallel to using Grammarly for writing. While accepting all of Grammarly's suggestions would have made their book worse by stripping out nuance and humor, about a third of its suggestions were genuinely helpful for catching passive voice and wordiness, even after multiple human edits.

This sentiment was echoed by others who see AI as a creative partner that can offer suggestions and new perspectives, but which requires human judgment to filter and implement correctly. The risk, as one user noted, is when decision-makers aim to remove the human from the loop entirely, which "almost universally leads to disastrous results."

This led to a deeper conversation about the societal implications of AI. Developers and researchers may create tools intended for assistance, but business pressures can lead to their deployment in full automation, potentially sacrificing quality and jobs for cost savings. This raised questions about developer responsibility and the control they have over their creations.

Limitations and The Path Forward

Participants acknowledged the limitations of this AI-driven design approach. It works best for newer or less popular APIs where the AI doesn't have extensive, rigid training data. Furthermore, while it can make an API more intuitive, it can't help with deeper architectural issues like inefficiency, unreliability, or a lack of composability.

Despite the caveats, the consensus leaned toward the value of this method. An AI's 'guess' is effectively a sample from the mean of collective developer expectations: a forecast of what an average developer would assume an API looks like. Designing toward those guesses paves the cowpaths of code, producing more intuitive and user-friendly software.
