Five Ways To Secure Your ChatGPT Conversations
The Hidden Risks of Everyday AI
Artificial intelligence tools like ChatGPT have seamlessly integrated into daily life for millions, who use them for handling sensitive personal, medical, and professional information. However, cybersecurity experts warn that this convenience comes with a significant risk. Careless use could expose your private data and compromise your privacy.
Recent research in cybersecurity reveals that it's relatively straightforward for a skilled hacker to access data shared with these models. While OpenAI is in a constant battle to patch vulnerabilities, it's a persistent cat-and-mouse game where attackers are always searching for the next exploit.
Five Steps to Protect Your Data on ChatGPT
Fortunately, you can take proactive measures to safeguard your information. The National Cyber Directorate recommends five simple steps to significantly reduce the risk of exposing your personal data when using AI chatbots.
1. Opt Out of Model Training
When you use ChatGPT, both the free and paid versions have a default setting that allows OpenAI to use your conversations to train its future models. If this setting is enabled, any personal or proprietary business information you input can be stored and might reappear in the model's responses to other users down the line.
What to do: Navigate to Profile > Settings > Data Controls
and turn off the option labeled “Improve the model for everyone.”
2. Be Cautious with Shared Links
ChatGPT offers a feature to share your conversations via a public link. While convenient, once you share this link, you lose all control over who sees it and where it's distributed. Deleting the original conversation from your account does not disable a link that has already been shared.
What to do: Never share chat links that contain any private or sensitive information. As of now, there is no feature to set or limit access permissions for these shared links.
3. Supervise AI Agents
Advanced AI “agents” can perform automated tasks on your behalf, like browsing websites or making online purchases. These agents operate without human judgment and can be tricked into clicking malicious links or submitting your information to phishing websites.
What to do: Always provide your AI agents with clear and specific instructions on what they are and are not permitted to do. Avoid entering passwords, financial data, or other sensitive credentials on any website accessed by the agent, and always double-check that the site is legitimate.
4. Watch for Prompt Injection Attacks
Prompt injection is a sophisticated cyberattack where a hacker embeds malicious instructions within a seemingly harmless webpage, document, or link. When your AI agent processes this content, it can be tricked into executing harmful commands without your knowledge, potentially compromising your data or accounts.
What to do: As with the previous step, it's crucial to write clear, restrictive prompts for any AI agent. For added security, you can even use a separate AI model to help you craft safer and more secure prompts.
5. Enable Two-Factor Authentication (2FA)
Two-factor authentication is a critical security layer for your OpenAI account. It ensures that even if a hacker manages to steal your password, they cannot log in without a second form of verification, typically a temporary code sent to your phone.
What to do: In your account, go to Settings > Security > Multi-factor authentication
and enable the option. Using a dedicated authentication app is considered the most secure method for 2FA.
By following these essential guidelines, you can significantly minimize the risk of data exposure and continue to use powerful AI tools more safely and securely.