When you interact with a chatbot, your conversations may be used to improve the underlying AI systems. Share sensitive information with ChatGPT, for example, and it could end up being used to train OpenAI's models. The same concern applies to other platforms, such as Google's Gemini, where uploaded documents may be read by human reviewers to improve the model.
Many AI models have been trained on extensive data collected from the internet without user consent, raising significant privacy issues. Fortunately, some companies offer options to opt out of having your interactions used for AI training:
Google Gemini
By default, Google retains conversations with Gemini for up to 18 months and may allow human reviewers to read them. Users aged 18 and older can change this on the Gemini website under the Activity tab, where they can turn off chat recording or delete previous conversations. Note that data already selected for review may continue to be stored separately.
Meta AI
Meta's AI chatbots on platforms like Facebook and WhatsApp draw on public interactions but not private messages. Users in the EU and UK can formally object to their data being used for AI training by filling out a form in Meta's privacy settings. Users in the U.S. have no equivalent option and must go through a more cumbersome process to request that data scraped by third parties be excluded.
Microsoft Copilot
Currently, there is no opt-out option for personal users of Microsoft Copilot. Users can delete their interaction history via the Microsoft account settings.
OpenAI's ChatGPT
ChatGPT users can stop their conversations from being used for training via the Data Controls section of Settings. Users without an account can reach the same setting through the help menu on the website. Opted-out conversations still appear in chat history but are not used to improve the models.
Grok by X
Elon Musk's Grok chatbot uses X user data for training by default. To opt out, you must adjust the setting manually under "Privacy and safety" on the desktop version of X; the option isn't available in the mobile app.
Claude by Anthropic
Claude does not typically train on personal data, but users can grant permission for specific interactions to be used in training, for example by rating a response with the thumbs up/down buttons, or they can contact the company directly.
Overall, being proactive about your data privacy can help you navigate the evolving landscape of AI interactions.