Maine Police AI Photo Editing Attempt Ends in Failure
A Seemingly Simple Edit Goes Wrong
A police department in Maine has become an example of the unforeseen pitfalls of modern technology after an attempt to edit a photo of a drug bust went spectacularly wrong. The Westbrook Maine Police Department used an AI tool to add its official insignia to the photograph, but the officers involved were unaware that the AI didn't just add a logo—it regenerated the entire image, leading to significant and noticeable alterations.
The real photo of the drug bust, left, and the AI image that the cops posted, right.
How a 'Photoshop App' Became an AI Fiasco
The department's social media followers on Facebook were quick to notice that something was amiss. They pointed out garbled text on the evidence packaging and an unnatural, glossy sheen on the items in the photo. Typically, the department places a physical badge next to seized items, but this time it opted for a digital solution that backfired.
After initially denying the image was AI-generated, the department investigated and realized its mistake. The 'photoshop app' used was, in fact, ChatGPT. While primarily known as a chatbot, ChatGPT also features a powerful image generator that can edit user-uploaded photos. However, the police officers did not understand that the tool uses the uploaded photo as a reference or prompt to create a completely new, AI-generated image. The result looks similar but is fundamentally a fabrication.
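To illustrate the behavior at the heart of the mix-up, here is a minimal sketch, offered purely as an assumption of how such a request might look when made through OpenAI's Images API rather than the ChatGPT app the officers used. The file names, prompt, and model choice are hypothetical; the point is that the "edit" comes back as a freshly generated image, not the original photo with a logo pasted on top.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical request: ask a generative model to "add a patch" to an uploaded photo.
with open("evidence_photo.png", "rb") as photo:
    result = client.images.edit(
        model="gpt-image-1",  # illustrative model choice for this sketch
        image=photo,
        prompt="Add the Westbrook Police Department patch to this photo",
    )

# The response is a full regeneration: a brand-new image that merely resembles
# the upload, so packaging text, textures, and even the pictured items may be
# redrawn rather than preserved pixel-for-pixel.
edited = base64.b64decode(result.data[0].b64_json)
with open("edited_photo.png", "wb") as out:
    out.write(edited)
```

Because the original photo serves only as a reference for a new generation, there is no guarantee that any individual detail survives intact, which is exactly what the department's followers noticed.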
Public Backlash and a Department's Apology
Faced with public scrutiny, the department removed the altered photo and issued a formal apology. "After taking the photograph, the officer wanted to add in our department patch to identify Westbrook as the arresting agency," the Westbrook Maine Police Department explained in a Facebook post. "Unbeknownst to anyone, when the app added the patch, it altered the packaging and some of the other attributes on the photograph. None of us caught it or realized it."
The department added, "We apologize for this oversight. It was never our intent to alter the image of the evidence."
The Obvious Telltale Signs of AI
The incident has raised questions about the general public's understanding of AI technology. "The fact that the person who posted it and put it through ChatGPT didn't notice the differences because they were very obvious," local resident Jessica Wellman told WGME news.
Beyond garbling text, the AI-generated image also removed some of the narcotics that were present in the original photo. That discrepancy makes it all the more surprising that the error was not caught internally before the image was posted publicly. The incident is a critical learning moment for the department and a cautionary tale for anyone else using generative AI tools.