
11 Critical ChatGPT Mistakes to Avoid

2025-09-08 · Nelson Aguilar · 5 minute read
ChatGPT
AI Ethics
Artificial Intelligence

AI tools like ChatGPT have woven themselves into the fabric of our daily lives, helping millions of people brainstorm ideas, draft reports, and even plan their meals. With its vast capabilities, it's tempting to offload more and more tasks to the chatbot. However, just because you can doesn't always mean you should.

Large language models are known to sometimes generate incorrect or outdated information while sounding completely confident. This isn't a major problem for low-stakes tasks like brainstorming a party theme, but it can create serious issues in sensitive areas like personal finance, health, or legal matters. Knowing when not to use ChatGPT is just as important as knowing how to use it effectively.

(Disclosure: CNET's parent company, Ziff Davis, has filed a lawsuit against OpenAI, the creator of ChatGPT, alleging copyright infringement in the training of its AI systems.)

To help you avoid potential pitfalls, here are 11 specific situations where relying on an AI chatbot could do more harm than good.

1. Diagnosing Physical Health Issues

While it's tempting to input your symptoms into ChatGPT out of curiosity, the results can range from benign to terrifying, often causing unnecessary anxiety. For example, a simple lump on the chest might be presented as a potential sign of cancer, when it could be a common and non-cancerous lipoma, as a licensed doctor would know. While AI can help you organize a symptom timeline or draft questions for your doctor, it cannot perform a physical exam, order lab tests, or provide a real diagnosis. Always consult a qualified medical professional for health concerns.

2. Managing Your Mental Health

ChatGPT can offer grounding techniques or act as a sounding board, but it is not a substitute for a real therapist. An AI lacks lived experience, cannot read body language or tone, and has no capacity for genuine empathy; it can only simulate it. A licensed therapist is bound by professional and legal codes to protect you. An AI is not, and its advice can misfire, overlook critical red flags, or reinforce biases from its training data. For deep, personal work, rely on a trained human professional. If you or someone you know is in crisis, please contact the 988 Suicide & Crisis Lifeline in the US or your local hotline.

3. Making Urgent Safety Decisions

In an emergency, every second counts. If a carbon monoxide alarm goes off, your first instinct should be to get to safety and call 911, not to ask a chatbot if you're in danger. AI cannot smell gas, see smoke, or dispatch emergency services. It can only process the limited information you provide, which is too little, too late in a crisis. Use ChatGPT as a post-incident tool for understanding what happened, never as a first responder.

4. Getting Personalized Financial or Tax Advice

ChatGPT can explain financial concepts like an ETF, but it has no knowledge of your personal financial situation—your income, debts, tax status, or retirement goals. Its training data may not include the most recent tax codes or economic changes, making its advice potentially outdated. Sharing sensitive financial details with a chatbot is also a significant security risk. For matters involving your money and potential IRS penalties, consult a professional CPA or financial advisor.

5. Handling Confidential or Regulated Data

Never input sensitive or confidential information into a public AI tool. This includes client contracts, medical records, trade secrets, or any personal data covered by privacy laws like CCPA or GDPR. Once you enter information into the prompt, you lose control over where it's stored, who might review it, or if it will be used to train future AI models. If you wouldn't post it in a public forum, don't paste it into ChatGPT.

6. Engaging in Illegal Activities

This should go without saying, but do not use AI chatbots for any purpose that is illegal. Beyond the obvious legal exposure, your conversations carry no legal privilege, and providers can be compelled to hand over chat logs in response to court orders.

7. Cheating on Academic Work

While using AI to cheat might seem like an easy shortcut, the risks are high. Plagiarism detectors like Turnitin are constantly improving their ability to spot AI-generated text, and educators are becoming adept at recognizing its distinct style. The consequences—suspension, expulsion, or even having a professional license revoked—are severe. Use ChatGPT as a study partner for brainstorming or understanding complex topics, not as a ghostwriter to do the work for you.

8. Monitoring Breaking News and Real-Time Information

ChatGPT can now pull current information from the web, but it is not a live feed. It cannot provide continuous, streaming updates on a developing story. For breaking news where speed is critical, you are still better off relying on official news sites, live data feeds, and push alerts from trusted sources.

9. Relying on It for Gambling

Using ChatGPT for gambling advice is a risky bet. The AI can hallucinate player statistics, misreport injuries, and provide incorrect records. While it might get lucky on occasion, it cannot predict future outcomes and its data may be flawed. Don't rely on a chatbot to make financial wagers.

10. Drafting Legally Binding Documents

ChatGPT can be helpful for understanding basic legal concepts, like a revocable living trust. However, you should never ask it to draft an actual legal document, such as a will. Estate and contract laws vary significantly by state and even county. A small mistake, like a missing witness signature clause, could render the entire document invalid in court. Use AI to prepare questions for your lawyer, then have that professional draft the legally sound document.

11. Creating Original Art

This is a subjective point, but there is a strong ethical argument against using AI to generate art and passing it off as your own work. AI can be a powerful tool for supplementing the creative process—helping with brainstorming, headlines, or overcoming creative blocks. However, using it to replace the human element of creation and claiming the output as your own is ethically questionable.
