Digital Deception Fuels India-Pakistan Tensions
When a deadly attack struck the tourist town of Pahalgam in Indian-administered Kashmir last month, igniting military conflict between India and Pakistan, a different kind of war simultaneously erupted online—a battle for truth itself.
Platforms like X, WhatsApp, Facebook, and YouTube became inundated with fake videos powered by artificial intelligence, repurposed old war footage, and entirely fabricated narratives. This flood of false information spread rapidly, fuelling fear, outrage, and widespread confusion on both sides of the border.
Among the most pervasive were digitally manipulated images depicting supposed strikes.
One particularly viral AI-generated fake, viewed millions of times, purported to show Rawalpindi Stadium in northern Pakistan devastated after an attack. Another piece of disinformation falsely suggested that Pakistani Prime Minister Shehbaz Sharif had conceded defeat.
"This was electronic warfare," stated Raqib Hameed Naik, executive director at the Center for the Study of Organized Hate in Washington DC. His organization compiled a database of hundreds of such misleading posts. "It was weaponised primarily to fabricate false narratives of military success with fictional visual evidence and to feed hyper-nationalist sentiment, baying for war and more blood," Mr Naik explained. "The goal was to manipulate public opinion — the war of perception is everything that matters in modern warfare."
The Escalation of Deepfakes and Fabricated Triumphs
One of the most concerning trends in disinformation during the Pahalgam crisis was the emergence of sophisticated deepfakes. These are typically created using AI to superimpose or manipulate video, audio, or images.
An AI-generated video appeared to show Pakistan Army spokesperson General Ahmed Sharif Chaudhary admitting to the loss of two fighter jets.
"The lip sync was nearly perfect," commented Nighat Dad, founder of the Digital Rights Foundation (DRF), a Pakistan-based NGO. "The only thing that gave it away was the dialect of Urdu and some words that Indians typically mispronounce in Urdu. Honestly, it was one of the most convincing deepfakes I've seen." The objective was evident: to undermine Pakistani morale and exaggerate Indian successes. Ms Dad believed it was effective, noting that the video, shared across thousands of Indian accounts, even surfaced in mainstream news debates before being debunked.
"Misinformation is helping change the narrative, it's helping win wars," Ms Dad asserted. Another viral post used video game footage, complete with dramatic music and nationalistic captions, to falsely claim Indian jets had downed Pakistani aircraft over Bhuj. "It was crafted to look like a decisive military win," said Mr Naik. "But it was just a flight simulator."
Old footage of downed fighter jets was also circulated, claiming Pakistani aerial victories. Groups from both nations acknowledged that the disinformation campaign was not a one-sided affair. Pakistan's deputy prime minister and local media also shared a fabricated article, supposedly from the UK's The Daily Telegraph, which praised Pakistan's Air Force as the "king of the skies," despite the newspaper never having published such a piece.
When Falsehoods Permeate Mainstream Channels
Both misinformation—misleading content shared without deceptive intent, often by ordinary users—and disinformation—the deliberate spread of false information to manipulate or harm—were rampant during the conflict. Crucially, disinformation didn't just emanate from fringe accounts; verified users and even mainstream media amplified unverified content.
In a prominent case, a video of a couple dancing on a Kashmiri hillside, originally posted by the couple themselves, was erroneously claimed to be their "final moments" before being killed in the Pahalgam attack. Major Indian channels broadcast this video without proper verification.
The following day, the couple posted on Instagram clarifying: "Hey guys, we are alive … we had to delete our original post because it sparked so much hatred." Sara Imran, a research associate with DRF, noted, "Even with the couple debunking the video, it still spread like wildfire." Some in the online community even defended the spread of false information when Ms Imran pointed out the misuse, with one person replying, "If it psychologically hurts the other side, it doesn't matter if it's fake."
Recycled Content Ignites Battles of Perception
In other instances, old footage from a naval drill was circulated as an Indian Navy attack on Karachi port, while clips from Israeli air strikes in Gaza were misrepresented as Indian strikes on Pakistan. These false claims initially sparked panic in Karachi and Peshawar, with residents fearing imminent attacks.
Once the immediate panic subsided, some locals responded by posting videos of themselves calmly drinking tea, juxtaposed with the sensationalist headlines, thereby mocking the false reports. "Meme culture became a response," said DRF's Ms Dad. "Pakistanis countered misinformation with humour, sarcasm, and jokes." However, she also documented rampant misinformation that fuelled hate speech in both Pakistan and India, including genocidal threats to starve, invade, or bomb Pakistan. Her organization found several fake claims of Kashmiri locals sheltering terrorists. Efforts to counter these falsehoods often faced roadblocks and censorship.
A System Under Strain: The Failure of Moderation
On April 28, India took action by blocking 17 Pakistani YouTube channels and several X accounts. These included journalists, media outlets like Dawn News and Samaa TV, and even official Pakistani government handles. The Indian government and X did not respond to requests for comment, while YouTube requested further details.
"Civil society has long criticised India's blocking regime for its opacity, lack of transparency and absence of due process," Ms Dad remarked, adding criticism for "blanket bans based on identity or viewpoint such as targeting accounts for being Pakistani or critical of India."
Despite the efforts of community fact-checkers and organizations like AFP Factcheck, the sheer volume of misinformation was overwhelming.
BOOM Live, an independent fact-checking initiative, recommended that media outlets implement stricter verification protocols, especially during conflicts. It also suggested that social media platforms enhance their algorithms to detect and flag misleading content more effectively.
"The system of checks and community notes [on X] failed miserably, especially during the conflict," Mr Naik stated. His research identified X as the central battleground. Despite mass circulation, very few posts were flagged or removed across various social media platforms. The ABC also contacted Meta (Facebook's parent company) but received no response in time for publication.
"Of the 437 posts we examined [on X], 179 came from verified accounts. Only 73 had community notes," he reported. Mr Naik's attempts to reach out to X for comment were unanswered.
Even more troubling, according to Mr Naik and Ms Dad, was the muted response from platform moderation systems. "It's about civilians knowing the truth and differentiating between what is a lie," Mr Naik emphasized. "Truth becomes the casualty of war and cross-border disinformation and fact-checking units are needed."