
Opinion
90% of companies have already adopted AI - so why do only 20% manage to extract real value?
“AI has effectively become a ‘stress test’ for organizational data infrastructure. Ultimately, the challenge organizations face today is not an excess of AI, but a lack of readiness for it,” writes Eran Barak, co-founder and CEO of MIND.
What began as a promise of revolution has quickly become reality. Over the past year, artificial intelligence has entered almost every organization - not as a long-term strategic initiative, but as a day-to-day tool. Employees use it to write, analyze, develop, and communicate, while executives already expect it to deliver measurable business outcomes. Yet as adoption has expanded, a different realization is becoming clearer: the core problem is no longer the technology itself - but the data behind it.
This is clearly reflected in the numbers. A study we conducted among 124 Chief Information Security Officers (CISOs) at large enterprises shows that around 90% of organizations are already using GenAI tools, and about 60% are operating internally developed autonomous systems. And still, only about 20% of AI-driven projects meet their business objectives. In other words, the challenge is no longer whether or how to adopt AI - but how to generate real value from it.
At first glance, when value is not realized, it may seem logical to look for the problem in technology itself. But a deeper look reveals that the issue lies elsewhere: in the data. For years, organizations have accumulated vast amounts of information without truly managing it - unsorted repositories, overly broad access permissions, and layers of legacy data that were never properly handled. In the past, when systems were more limited, this could be tolerated. But in the AI era - where every piece of accessible data can instantly be used, processed, or leaked at scale - this becomes a critical barrier.
In this context, “trust in data” is not a theoretical concept. It reflects the level of confidence that AI systems are using information in a correct, controlled, and secure manner. According to our research, about 65% of CISOs report that they are not confident in their organization’s data security controls. When trust is low, AI slows down, stalls, and in some cases even creates more risk than value.
This leads to an insight that has yet to fully permeate many organizations: the barriers to Agentic AI and GenAI are not model limitations - they are data problems. The pace of AI adoption is far faster than organizations’ ability to control the data it relies on. Many organizations still try to treat AI as just another system to “protect” from the outside - through access restrictions, policies, or blocking mechanisms. But AI does not operate within clear boundaries. It manifests through employees, external tools, and autonomous agents that connect to multiple data sources. In this reality, controlling access alone is no longer sufficient.
What ultimately determines success rates is how well the data itself is managed. The key question is whether the organization knows what its sensitive data is at any given moment, where it resides, and who has access to it.
This is also where the practical challenge comes into play - the reality organizations face daily: employees uploading information to AI tools to work faster, systems connecting disparate data sources without clear context, and projects that stall because data quality or boundaries cannot be trusted. Although most organizations have policies and guidelines around AI, in practice they struggle to enforce them at the speed at which autonomous systems operate. In such conditions, AI not only fails to deliver value - it also introduces uncertainty and risk.
This likely explains why only around 20% of organizations today are considered “mature enough” to deploy AI at scale. All others operate within an inherent tension: on one hand, they are already using AI; on the other, they do not trust the infrastructure it relies on. This gap explains why many projects remain stuck in pilot phases or fail to progress beyond isolated use cases.
This leads to a question that is becoming more relevant than ever: how can organizations truly prepare for AI and use it in a practical, safe, and effective way?
The answer begins with the data: mapping and understanding sensitive information, reducing excessive access permissions, and implementing solutions that understand context rather than simply blocking activity. This is not a replacement for AI, but a prerequisite for making it work.
In conclusion, AI has effectively become a “stress test” for organizational data infrastructure. Ultimately, the challenge organizations face today is not an excess of AI, but a lack of readiness for it. In a world where almost everyone has already adopted the technology, the advantage will not go to those who started first - but to those who truly understand what happens to their data when AI comes into play - and what it takes to control it.
Eran Barak is the co-founder and CEO of MIND.
