Elad Shulman.
Opinion

The OpenAI security breach: The information you share with ChatGPT may leak

The massive amount of data accumulated in ChatGPT and similar platforms is a goldmine for hackers who are working harder than ever to extract it. Who will win the race for your ChatGPT data, and what must AI companies do to prevent information leaks?

OpenAI may have reported the recent security incident in a way that downplays its scale, but the reality is far more concerning. While this wasn’t a direct breach of OpenAI’s own servers, it did involve the exposure of data through Mixpanel, a third-party analytics provider used by OpenAI.
The vulnerability allowed attackers to access general information about users (primarily developers) who interacted with OpenAI’s API: names, email addresses, browser information, operating systems, and approximate locations. End users were, at least in theory, not affected.
Elad Shulman, CEO of Lasso (Photo: Sharon Gadassi)
This incident, along with others like it, underscores the relentless effort by hackers to reach the holy grail of the AI era: ChatGPT user accounts. AI companies are investing heavily in securing their own servers, but hackers have discovered the real weak point: third-party vendors and supply-chain dependencies.
Even if the breach did not include passwords or personal conversations, it still exposed data that attackers can use for impersonation (phishing), personal intelligence gathering, or creating a false sense of trust. For example, if someone knows which service you used and on what operating system, they can send you an email that looks completely legitimate, leading you to enter a password or download a malicious file.
Beyond that, such a breach weakens the overall sense of security in AI-based systems. As we increasingly rely on AI in our everyday lives, work, healthcare, and personal information management, the sensitivity to data leaks rises, even when they originate from systems adjacent to the core platform. That is why even non-developer users should know about the incident, understand what was exposed, and take precautions, such as checking for suspicious messages and being more vigilant.
As the CEO of a company focused on AI security, I see this incident as an important reminder that solutions like governance, visibility and monitoring, data protection, and policy enforcement are not “nice-to-have” features; they are critical components. The incident highlights the importance of an additional security layer for organizations, since the model providers themselves, OpenAI, Gemini, and others, are focused on improving the core product rather than on building security solutions.
A breach into AI user accounts isn’t a question of if, but when. The most sensitive information could leak the moment hackers discover a single weakness, and we all know that moment will come eventually. To defend against it, AI companies must implement as many layers of protection as possible, including external ones.
Elad Shulman is the Co-Founder and CEO of Lasso Security.