How Faulty AI Led to an Innocent Man's Arrest
The NYPD's Billion-Dollar Tech Arsenal
The New York Police Department operates on a scale that rivals the military forces of entire nations. With a budget approaching $6 billion and over 48,000 full-time staff, its resources are comparable to the defense spending of countries like the Philippines and Iraq.
This immense funding has allowed the NYPD to build a sophisticated and controversial collection of surveillance technology. Between 2007 and 2020, the department invested over $2.8 billion into a virtual toy chest of spy gear, which includes advanced tools like stingray phone trackers, predictive crime software, and even secretive X-ray vans. One of the most contentious tools in this arsenal is a facial recognition system that has been in development since 2011, a system that recently resulted in a severe case of mistaken identity.
A Case of Flawed Facial Recognition
As reported by the New York Times, a father named Trevis Williams was wrongfully arrested on April 21. The arrest stemmed from an investigation into a man who had exposed himself to a woman two months prior. When investigators fed grainy CCTV footage of the incident into their facial recognition algorithm, it produced a list of six possible matches.
The AI-generated list featured six men who looked remarkably similar: all were African American with facial hair and dreadlocks. Even so, an investigator acknowledged that a match from the system alone did not constitute probable cause for an arrest and that anyone it identified was merely a potential suspect.
From Possible Match to Wrongful Arrest
Despite the unreliability of the initial AI match, detectives included Trevis Williams's image in a photo lineup presented to the victim. Photo lineups are already notoriously unreliable, and starting from an AI-generated suggestion only compounded the potential for error. When the victim identified Williams with confidence, the police believed they had the probable cause needed to make an arrest.
Williams was later located by transit police in Brooklyn and taken in for questioning. He had been 12 miles from the crime scene at the time of the incident and, more strikingly, was eight inches taller and 70 pounds heavier than the actual suspect. Despite these glaring physical discrepancies and his pleas of innocence, Williams was arrested and jailed for over two days. "That's not me, man, I swear to God, that's not me," he insisted, only to be dismissed by a detective who reportedly replied, "Of course you're going to say that wasn't you."
A Disturbing Pattern Without Safeguards
The charges against Williams were eventually dismissed in July, and the case was closed. However, this incident serves as a chilling example of what can happen when law enforcement uses powerful technology without sufficient checks and balances. This is not an isolated event. At least three other Black individuals have faced similar wrongful arrests due to faulty facial recognition in Detroit.
In response to these errors, legal advocates have successfully pushed for stringent rules governing the use of facial recognition in assembling police lineups in other jurisdictions. The NYPD currently has no such safeguards in place, raising serious questions about future miscarriages of justice fueled by flawed AI.