
ChatGPT Sharing Habits That Risk Your Job And Safety

2025-06-17 · John Sundholm · 4 minute read
AI Security
ChatGPT
Data Privacy

We are increasingly using AI tools like ChatGPT for everyday tasks, sometimes without even realizing it, as with voice assistants such as Siri and Alexa. While these tools are fantastic for streamlining our work, a security expert warns that we often overlook what we share with them and how that information might be used later.

The Dangers of Oversharing with AI

It's common to view AI tools like Claude or ChatGPT as harmless digital assistants. However, experts from the digital security firm Indusface warn that we are overly complacent about the privacy of data entered into these platforms. This concern is supported by statistics: studies reveal that 38% of regular users have admitted to sharing sensitive work data, such as customer information and confidential legal or financial details, with AI tools without employer consent.


AI tools are non-human entities that cannot be trusted to keep secrets, and the platforms behind them are susceptible to hacking. Data breaches involving this type of information reportedly surged by 60.4% between February and April 2023, a period of rapid user growth following ChatGPT's launch in November 2022.

These breaches jeopardize sensitive job-related details along with the personal information that is often intertwined with work data. And even if data privacy feels like an abstract concern, the consequences are concrete: a breach can lead to termination if an employee is found responsible for it.

Here are five types of information Indusface advises against sharing with AI tools at work to prevent these risks.

1. Work Files Like Reports and Presentations

These documents are typically filled with sensitive information about your company and yourself. Research indicates that up to 80% of Fortune 500 employees use AI tools like ChatGPT for tasks such as drafting emails, reports, and presentations, and those drafts frequently contain confidential data.

Security experts recommend removing all sensitive information before uploading such files to AI tools. Large language model (LLM) providers may retain the data you submit, and information that makes its way into training data can resurface in responses to other users.
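As a rough sketch of what "removing sensitive information first" can look like in practice, the Python snippet below redacts a few common patterns before a document ever leaves your machine. The patterns and the `redact` helper are illustrative assumptions, not Indusface's tooling; a real workflow would rely on a dedicated data-loss-prevention scanner.

```python
import re

# Illustrative patterns only: a real document needs a proper
# data-loss-prevention (DLP) pass, not a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Hypothetical example document
report = "Contact Jane Doe at jane.doe@acme-corp.com or +1 (555) 012-3456."
print(redact(report))
# -> Contact Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED].
```

Even a crude pass like this keeps the most obviously identifying strings out of a prompt; anything subtler, such as names or project code words, still needs a human review.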

2. Passwords and Access Credentials

For decades, we've been told never to share passwords. Yet, people frequently provide them to LLMs for assistance with tasks, and AI features are integrated into many password management tools.

Indusface cautions, "It’s important to remember that [LLMs] are not designed with confidentiality in mind. Rather, the purpose is to learn from what users input, the questions they ask, and the information they provide." Exercise caution.
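To make the warning concrete, here is a small, hypothetical pre-send check that refuses to submit a prompt when it appears to contain a credential. The three patterns are illustrative examples only; real secret scanners ship with far larger rule sets.

```python
import re

# Hypothetical heuristics for spotting credentials in a prompt.
# Real secret scanners (the kind built into CI pipelines) use far
# more extensive rule sets than these three examples.
SECRET_HINTS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # shape of an AWS access key ID
]

def looks_like_secret(prompt: str) -> bool:
    """Return True if the prompt appears to contain a credential."""
    return any(p.search(prompt) for p in SECRET_HINTS)

prompt = "Why does my login fail? password=hunter2"
if looks_like_secret(prompt):
    raise SystemExit("Refusing to send: prompt appears to contain a credential.")
```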

3. Personal Details Like Full Name and Address


Sharing personal information like your full name, address, and even photos of yourself or others might seem natural when using AI as an assistant. However, security experts warn this practice significantly increases your vulnerability to fraud.

Indusface states that any information fraudsters could use to impersonate you, or to create deepfakes of you or your associates, should not be shared with ChatGPT. Such incidents can damage your finances and your reputation, harm your employer, and create legal exposure for both you and your company.

4. Financial Information

While LLMs like ChatGPT can be helpful for understanding complex financial topics or exploring a financial analysis, security experts advise against relying on them for decisions. Beyond the security issues, LLMs are designed primarily to process language rather than numbers, and their numerical accuracy can be unreliable.

Using them for financial analysis may result in inaccurate responses, leading to serious errors that could cost you your job.

5. Company Code or Other Intellectual Property

Tech companies often include highly exploitative intellectual property clauses in their terms of service. For example, any text you write in an online word-processing platform could be used to train LLMs.

In a business context, this means any sensitive company secrets or intellectual property within the information you share with AI tools is vulnerable. Using ChatGPT for coding assistance is increasingly common, but Indusface warns that this code could be "stored, processed, or even used to train future AI models, potentially exposing trade secrets to external entities." This, too, could lead to job loss.

The key takeaway is that while these AI tools seem like technological marvels, they must be used with extreme caution. Their creators' business models often depend on users not taking these precautions.

Some find AI's advice compelling enough to make significant life changes, as seen in cases where individuals developed delusions from using ChatGPT as a spiritual guide, a wife ended her marriage based on ChatGPT's analysis, and an employee quit their job on ChatGPT's recommendation.
