TikToker Blames AI For Costly Travel Mistake
A cautionary tale for the modern traveler has gone viral: always double-check your sources, especially when one of them is an AI. One influencer learned this lesson the hard way, and her experience has sparked a major online conversation about the pitfalls of over-relying on artificial intelligence for critical information.
A Costly Mistake Blamed on AI
On August 13, Spanish content creator Mery Caldass posted a tearful TikTok video from an airport after being denied boarding for her flight to Puerto Rico. The reason? She was missing the correct travel documentation. In the video, Caldass squarely placed the blame on ChatGPT, an AI tool used by millions.
“I asked ChatGPT and he said no,” she explained in Spanish, recalling how she had asked the AI assistant if a visa was necessary for her trip. Realizing her error too late, she lamented, “That’s what I get for not getting more information…I don’t trust that one anymore.” In a strange twist, she even suggested the AI might have been getting “his revenge” for a time she had previously insulted the bot.
The Visa vs. ESTA Mix-Up
Here’s where the details matter. Puerto Rico is a US territory, so US entry requirements apply. For Spanish citizens like Caldass, who can travel under the US Visa Waiver Program, a visa is not needed for stays under 90 days. So, technically, ChatGPT was correct. However, it failed to provide the crucial next piece of information: Visa Waiver Program travelers must still complete an Electronic System for Travel Authorization (ESTA) application online before their trip to confirm their eligibility to enter the United States. That omission proved to be a trip-ending mistake.
Online Backlash and a Lesson in Media Literacy
The influencer's story quickly spread across social media, with many users in a Reddit thread expressing bafflement at her reliance on AI for such important information.
“We're so cooked,” read one popular comment. Another user added, “I wouldn't even trust ChatGPT to tell me if the sky is blue let alone rely on it for airport/flight info.”
The conversation highlighted a broader concern. “People will do anything but look at the actual government websites,” one person wrote. Another user suggested the issue stems from low media literacy, noting that people trust AI without verifying the data sources, even though the technology can be “entirely wrong several times and quoting questionable sources.”
The Pitfalls of Trusting AI Unconditionally
While AI assistants can be incredibly useful for tasks like brainstorming, making lists, or helping with phrasing, this incident serves as a stark reminder of their limitations. Large language models are not infallible databases of fact; they can misunderstand context, provide incomplete information, or simply be wrong.
When stakes are high—involving international travel, legal matters, or financial decisions—the best practice remains the same as it was before AI: consult official, primary sources. A quick visit to the official US government travel website would have saved a lot of tears and the cost of a missed flight.