AI Fakes Are Tricking Millions: How to Stay Safe
Linh, a 42-year-old media professional, found herself about to congratulate a soldier couple on Facebook for their newborn quadruplets. Seconds later, a sense of unease prompted her to delete the comment. On closer inspection, the heartwarming image was a sophisticated AI generation, illustrating a sentimental post. This wasn't her first brush with such deception; she had previously mistaken an AI video titled “Retirees meet summer vacationers” for genuine footage. Even for someone frequently encountering AI-generated content, Linh confessed that the rapid advancement and realism of AI technology now make it incredibly challenging to distinguish authentic content from artificial creations.
The Alarming Rise of Hyper-Realistic AI
Experts in the field confirm Linh's experience. Advanced AI tools such as Google Veo 3, Kling AI, DALL-E 3, and Midjourney can now generate photos and videos that are virtually indistinguishable from reality. Do Nhu Lam, Director of Training at the Institute of Blockchain and Artificial Intelligence (ABAII), explains that these tools leverage sophisticated multimodal technologies and advanced language models, allowing them to synchronize visuals, audio, facial expressions, and natural human motion into remarkably convincing content.
The Double-Edged Sword: AI's Potential and Pitfalls
Lam recognizes the immense potential of AI in fields like content creation, advertising, entertainment, and education. However, this remarkable ability to mirror reality also dangerously blurs the distinction between what is authentic and what is fabricated. This leads to substantial ethical dilemmas, security vulnerabilities, and challenges in information governance.
The deceptive power of AI is already evident. The AI-generated image of quadruplets that Linh nearly fell for garnered almost 300,000 interactions and attracted over 16,000 comments. While a few discerning users flagged the image as fake, most were taken in, and many wholeheartedly congratulated the fictitious parents.
This trend is not isolated. AI-generated videos are becoming increasingly prevalent across Facebook groups. The recent introduction of tools like Google Veo 3 has significantly enhanced video realism, particularly in synchronizing lip movements with audio, making it even more challenging to identify fakes.
Staying Vigilant in the Age of AI Deception
AI-generated media presents substantial dangers, particularly for individuals who are vulnerable or less familiar with technology. Vu Thanh Thang, Chief AI Officer at SCS Cybersecurity Corporation, cautions that criminals are actively using AI for various malicious purposes. These include perpetrating scams, spoofing biometric identification, and impersonating individuals, which can deceive security systems like eKYC (electronic Know Your Customer) and facilitate the spread of misinformation through fake videos of public figures.
Thang further highlighted that businesses are not immune to these threats. AI-powered deepfakes can be used to impersonate employees to circumvent security measures, manipulate facial recognition systems, or even mimic company executives to tarnish reputations or initiate fraudulent transactions.
Do Nhu Lam identifies three primary risks AI poses to individuals: financial scams, defamation through fabricated content, and the misuse of personal information. For businesses, the risks are equally severe. Lam pointed to a recent case where Arup, an engineering firm, suffered a USD 25 million loss. An employee at their Hong Kong office was deceived into transferring company funds during a meeting that involved deepfake video representations of colleagues.
Beyond financial and personal harm, a more insidious consequence is the erosion of public trust. When distinguishing between authentic and artificial content becomes nearly impossible, faith in media outlets and other reliable sources of information inevitably declines. Lam cited a 2024 report from the Reuters Institute, which found that global trust in news consumed on digital platforms has plummeted to its lowest level in a decade, a decline largely attributed to the proliferation of deepfakes.
“We’re no longer discussing the potential risk of fake content – it is a full-blown reality,” Thang stated emphatically. He strongly encouraged the public to increase their digital literacy and adopt protective measures. This includes gaining a better understanding of how AI technologies operate and learning how to navigate the digital world safely alongside them.
Both Lam and Thang offer crucial advice for users: verify information before acting on it, learn how to identify fabricated media, be cautious about sharing personal information online, and report any content that appears fake or harmful. “Only through knowledge and constant vigilance can individuals protect themselves and help foster a safer digital environment in this new age of AI,” Lam concluded.