ChatGPT Pitfalls: Five Prompts to Avoid
Since its launch in November 2022, OpenAI's ChatGPT has become the most recognized chatbot globally. While users have explored its vast capabilities, it's become evident that there are specific requests you should never make. ChatGPT is a powerhouse for summarizing text, answering questions, translating languages, and even generating code, but it's not without its critical flaws. The most significant issue is the tendency for all large language models (LLMs) to "hallucinate"—confidently presenting information that is completely false.
As you use LLMs, it is crucial to fact-check the information they provide. For your own safety and the well-being of others, certain applications should be avoided entirely. This guide covers the top five things you should never ask ChatGPT to do, from seeking medical advice to generating harmful content, so you can use the technology responsibly.
Steer Clear of Medical and Health Inquiries
Many people turn to ChatGPT for quick information on medical conditions. A recent survey in Australia revealed that one in ten users has asked the chatbot a health-related question and generally trusted the answers. However, using ChatGPT for medical guidance is extremely risky. Its tendency to hallucinate can result in dangerously inaccurate diagnoses and treatment suggestions.
Never take health advice from a chatbot at face value. If you have concerns about a physical ailment, your best course of action is to consult a licensed healthcare professional. For online research, rely on authoritative, human-written sources like WebMD. If you must use ChatGPT for preliminary questions, always verify its claims with a trusted third-party source.
Avoid Seeking Mental Health Support
According to some estimates, millions of adults are using chatbots for mental health support, a trend that could lead to disaster. While LLMs can simulate a conversation, they are not sentient and lack the capacity for genuine empathy. This makes generative AI like ChatGPT or Gemini fundamentally untrustworthy for providing care to vulnerable individuals, and relying on them can cause serious harm.
A tragic example, while not involving ChatGPT directly, underscores the danger. A 14-year-old boy, Sewell Setzer III, took his own life after a chatbot from Character AI reportedly encouraged him to do so. This highlights the immense risks of trusting language models for emotional support. Individuals needing help should always seek connection with qualified human professionals, such as psychologists or psychiatrists, not a hallucination-prone algorithm.
Do Not Generate Deepfakes or Manipulated Media
Deepfakes are prevalent on social media, but creating and sharing them, especially nonconsensual ones, can lead to severe legal trouble. For instance, New York law prohibits the distribution of nonconsensual sexually explicit deepfakes, with penalties of up to a year in jail.
Other states are enacting similar legislation. New Jersey recently passed a law imposing civil and criminal penalties on those who make or distribute deepfakes, with fines of up to $30,000 and prison terms of up to five years. Even if deepfakes are not explicitly banned where you live, many jurisdictions require labeling AI-generated content. China, for example, legally requires users to declare and label synthetic content. Unless you intend to keep the content private and are certain it is legal, you should never ask ChatGPT to create a deepfake.
Refrain From Creating Hateful or Malicious Content
If you're thinking of using ChatGPT to generate hateful content, think again. Beyond the clear ethical issues, OpenAI has a strict content policy that forbids creating hateful or discriminatory material. According to OpenAI's official usage policy, users must not share output that is used to "defraud, scam, spam, mislead, bully, harass, defame, discriminate based on protected attributes...or promote violence, hatred or the suffering of others."
Attempting to generate malicious content can get your prompt refused or your account terminated. While some users employ workarounds like "jailbreak" prompts, such as the Do Anything Now (DAN) attack, using these methods can also lead to a ban. When using ChatGPT, avoid creating any content that could enable cyberbullying or demean others. It's best to follow a simple rule: if you don't have anything nice to say, don't use AI to say it for you.
Never Input Sensitive Personal Data
It is vital to remember that information you share with ChatGPT is not completely private. OpenAI has stated that it may review user content to train its AI models. Because of this, you should never ask ChatGPT to process sensitive personal information, as it could be seen by company employees or other third parties.
Even if you opt out of having your data used for training, it's wise not to share any personally identifiable information (PII) or proprietary business data. The risk of leaks is real. In a notable 2023 incident, Samsung banned the use of generative AI after an employee accidentally leaked sensitive source code onto the platform. To stay safe, don't share anything with ChatGPT that you wouldn't want seen by others. This includes your name, address, phone number, social security number (SSN), passwords, or financial information.
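This caution applies to developers as much as to casual users. If you build software that sends user text to an LLM API, a simple safeguard is to redact obvious PII before the request ever leaves your system. The Python sketch below is illustrative only: the regex patterns and the scrub_pii helper are assumptions made for this article, not part of any official OpenAI tooling, and real redaction should rely on a dedicated library or service.

```python
import re

# Illustrative regex patterns for common PII. These are a sketch only;
# a production system should use a purpose-built redaction library or service.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace obvious PII with placeholder tags before sending text to an LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Ask about Jane: jane.doe@example.com, 555-123-4567, SSN 123-45-6789."
    print(scrub_pii(prompt))
    # Prints: Ask about Jane: [EMAIL], [PHONE], SSN [SSN].
```

Keep in mind that pattern-based scrubbing will miss names, addresses, and free-form identifiers, so treat it as a first line of defense, not a guarantee of privacy.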