
AI Deception Fuels Misinformation in Kashmir Protests

2025-10-13 · Masroor GILANI / AFP Pakistan · 3-minute read
Artificial Intelligence
Misinformation
Fact Check

Against a backdrop of real and volatile protests in Pakistan-administered Kashmir, a striking image began to circulate on social media, supposedly capturing the essence of the demonstrations. The powerful visual, however, was not a genuine photograph but an AI-generated fake designed to mislead.

The Spread of a Viral Protest Image

Social media platforms, including X (formerly Twitter), Facebook, and Threads, were instrumental in the spread of the fabricated image. One post on X claimed, "People have come out on streets at this time in Rawalakot, Azad Kashmir after lighting the torches." The image, showing a large crowd with motorcycles, flags, and blazing torches, was shared by accounts such as Jkbsnews, which has over 950,000 followers, lending it a false sense of credibility.

Screenshot of the false post with a red X added by AFP

The Real Story Behind the Unrest

The circulation of the fake image coincided with actual, deadly protests across the region. The demonstrations began in late September as citizens demanded an end to lavish perks for the political elite, including free electricity and luxury vehicles. The situation escalated, with up to 6,000 protesters clashing with security forces, leaving at least six civilians and three officers dead. The protests were called off on October 4 after the government pledged to reduce the cabinet size and investigate the violence.

Unmasking the AI-Generated Fake

Despite the realistic context, investigators quickly found conclusive evidence that the viral image was a digital fabrication. The most obvious clue was a star-shaped watermark belonging to Google's Gemini AI model, subtly placed within the image.

A reverse image search on Google further confirmed this, as the platform automatically labeled the picture as "Made with Google AI." To erase any doubt, Google's SynthID Detector tool analyzed the image and identified digital watermarks, confirming it was created with the tech giant's generative model.

Screenshot of the Google Image search result, with the Gemini logo highlighted by AFP
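For readers who want to run similar first-pass checks themselves, the sketch below shows a minimal Python routine a fact-checker might script before turning to Google's own tools: it dumps any embedded metadata and computes a perceptual hash that can be compared against known originals or fed into a reverse image search. It assumes the Pillow and imagehash packages are installed, and the file name is hypothetical; note that SynthID watermarks are embedded in the pixels themselves and can only be confirmed with Google's SynthID Detector, not with generic metadata inspection.

```python
# First-pass checks on a suspect image; a sketch, not AFP's actual workflow.
# Requires: pip install Pillow imagehash
from PIL import Image
import imagehash


def inspect_image(path: str) -> None:
    img = Image.open(path)

    # 1) Dump any embedded EXIF metadata. Some generators leave software tags
    #    here, but SynthID lives in the pixel data, so an empty or clean EXIF
    #    block proves nothing on its own.
    exif = img.getexif()
    if not exif:
        print("No EXIF metadata found.")
    for tag_id, value in exif.items():
        print(f"EXIF tag {tag_id}: {value}")

    # 2) Compute a perceptual hash. Comparing it with hashes of candidate
    #    originals (or submitting the image to a reverse image search) helps
    #    establish whether the picture circulated earlier in another form.
    print("Perceptual hash:", imagehash.phash(img))


if __name__ == "__main__":
    # Hypothetical file name for illustration only.
    inspect_image("viral_protest_image.jpg")
```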

Beyond the digital markers, the image contained telltale visual inconsistencies common in AI-generated content. Flames appeared to be emerging directly from protesters' hands rather than from torches, and the lighting on the road was unnaturally uniform. These details are classic signs that the image was fabricated.

This incident is one of many cases in which AI-generated content has been used to create and spread misinformation, highlighting the growing challenge of distinguishing fact from fiction in the digital age. AFP has previously debunked other false claims involving AI-generated content.
