OpenAI is considering adding an encryption feature to protect users’ data on ChatGPT, according to CEO Sam Altman. He stated that this feature would initially apply to temporary chats.
Temporary chats are not saved in the user’s history and, importantly, are not used to train OpenAI’s AI models. Altman emphasized that this step aims to enhance privacy because users often share sensitive information with ChatGPT. He also noted that conversations with ChatGPT currently do not have the same legal privacy protections as those with doctors or lawyers, highlighting the need for encryption.
However, implementing this feature comes with challenges:
- Legal issues: Last year, a court ruling in a copyright infringement case required OpenAI to retain temporary and deleted chats. Privacy advocates have criticized this decision.
- Technical challenges: Experts say implementing full end-to-end encryption in chatbots is complex. Unlike in regular messaging apps, where the server merely relays ciphertext between users, the AI itself is a party to the conversation and must read the plaintext for its models to generate a response.
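The technical obstacle above can be sketched with a toy example. This is a hypothetical illustration, not OpenAI's design: the `xor_cipher` stand-in replaces real authenticated encryption (e.g. AES-GCM), and `toy_model` stands in for an LLM. The point is that a relay server can forward ciphertext without the key, but a chatbot backend cannot respond without decrypting first.

```python
# Toy sketch (hypothetical): why classic end-to-end encryption breaks
# down when the AI itself must read the message.
# xor_cipher is a stand-in for real authenticated encryption; never use
# a repeating-key XOR in practice.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: the same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def toy_model(prompt: str) -> str:
    """Stand-in for an LLM: it can only answer plaintext it can read."""
    return f"Echo: {prompt}"

key = b"shared-secret"
ciphertext = xor_cipher("My medical question".encode(), key)

# In an E2E messaging app, the server relays `ciphertext` and never holds
# `key`. A chatbot cannot work that way: the model yields nothing useful
# from ciphertext, so the serving side must hold the key and decrypt
# before inference -- which is exactly what weakens the E2E guarantee.
plaintext = xor_cipher(ciphertext, key).decode()
reply = toy_model(plaintext)
```

This is why approaches like Apple's Private Cloud Compute focus on constraining what the server can do with decrypted data, rather than keeping the server from ever seeing it.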
Some companies have partially addressed this issue. For example, Apple’s Private Cloud Compute technology processes requests for Apple Intelligence on servers while limiting broad access to user data.
Overall, OpenAI’s plans are promising for user privacy, but technical and legal obstacles remain. The company has not yet provided a timeline for when these issues will be resolved.