Kobi Nissan.
Opinion

The Trojan horse is already in the office: Blocked ChatGPT? Your real problem is Zoom and Gmail

"Instead of playing a cat-and-mouse game with employees, organizations need tools that can map not only which applications are installed, but which AI features are actually in use, what data they process, and under what permissions," writes Kobi Nissan, CEO of MineOS.

Over the past year, organizations have invested significant effort in deciding how to handle artificial intelligence. Many managers have blocked access to specific tools and issued guidelines to employees, believing they have taken the right steps to stay in control.
The intention is good, but the sense of control can be misleading.
While organizations lock the front door against AI they perceive as "dangerous," for instance by imposing a sweeping ban on ChatGPT and similar tools, AI is already finding its way in elsewhere. It doesn't arrive as a new tool that requires a procurement decision, but as a built-in feature within the most familiar and widely used enterprise tools: Gmail offering to draft emails, Zoom summarizing meetings, CRM systems generating automatic insights, and similar capabilities in virtually every other major enterprise platform.
Kobi Nissan, Co-founder and CEO of MineOS.
(Photo: Hagar Bader)
It is time to admit: the familiar ritual of sending warnings to employees and mandating password changes no longer reflects how risk actually shows up today. The real danger doesn’t arrive as a burglar in the dark, but as a lawful tenant living inside the tools we already trust and use every day.
The New Trap: The Illusion of Transparent Adoption
The defining challenge today is no longer the classic “Shadow IT” of the 2000s, the rogue employee downloading pirated software in the shadows. The more sophisticated challenge now is what can be described as “Invisible Adoption.”
Ask yourself: what happens to your data assets when an employee clicks the “Help me write” button that appeared in their Gmail this week? Where does the transcript go when an executive assistant asks Zoom to automatically summarize a sensitive board meeting? And what happens to the data inside the corporate CRM when it suggests that a sales representative draft a reply to a strategic client?
In most cases, you simply don't know. The tech giants have changed the rules of the game: AI is no longer a "tool" that requires deliberate installation, but a "feature" that is often enabled by default. It is simply there: available, tempting, and, above all, official-looking. A reasonable employee assumes that if a function appears inside the corporate Office suite or an approved browser extension, someone must have approved it.
In practice, there was no deliberate decision, just a lack of attention.
The Hidden Cost: Training the Competition
The result is a paradox many organizations face today: companies spend a fortune on external cyber risk controls and compliance with a growing set of AI and privacy regulations, from the EU AI Act to other regulatory frameworks. Yet at the same time, their intellectual property (IP) is leaking out through the most legitimate and trusted channels.
Without your knowledge, your data, code, business strategy, and client lists may be fed into the training models of major tech providers. In effect, you are paying vendors to use your data to train models that may one day serve your competitors.
This is “Shadow AI” in its most sophisticated and dangerous form, precisely because it is transparent. It doesn’t require employees to be hackers; it only requires them to click “I Agree” on new terms of use that appeared this morning in software they’ve been using for a decade.
From Defense to Governance: Guardrails Instead of Gates
The answer isn’t a return to pen and paper. AI-driven efficiency is now table stakes, and any organization that tries to stifle it will quickly discover that employees will simply bypass the controls.
The approach to managing AI must change from a "blocking" mindset to one built around guardrails.
My recommendation to managers is to replace the question “How do I block?” with “How do I see?” Instead of playing a cat-and-mouse game with employees, organizations need tools that can map not only which applications are installed, but which AI features are actually in use, what data they process, and under what permissions.
Only then can organizations regain control, set clear boundaries with vendors, and say: "Enough. This data is mine."
The Trojan horse is already inside the office. It didn’t arrive through a malicious phishing email, but through a routine version update of software you already trust. The choice is yours: keep looking away, or start managing it.
The writer is Co-founder & CEO of MineOS, a platform for Data Privacy & Risk Management and AI Governance.