
“We've seen a 30% increase in successful email scams in the last two years”

Ayelet Kutner, CTO of At-Bay, was speaking at a panel held as part of Calcalist and Commit's AI Week. The panel also included Ariel Salpeter, Enterprise Cloud BU Director at Google Cloud; Dima Tatur, VP of Cybersecurity at Commit; and Ariel Assaraf, Co-Founder and CEO of Coralogix.

Artificial intelligence is rapidly reshaping both the offensive and defensive landscape of cybersecurity, a shift highlighted by experts Ayelet Kutner (At-Bay), Ariel Salpeter (Google Cloud), Dima Tatur (Commit) and Ariel Assaraf (Coralogix) during a panel at Calcalist and Commit’s AI Week. Together, they described a field where attackers are accelerating faster than defenses, legacy tools are losing effectiveness, and organizations face mounting risks as they rush to adopt AI-driven systems.
Will artificial intelligence change the cyber field?
Ayelet Kutner, CTO, At-Bay: “We are a company that combines technology, cyber-attack prevention solutions, and insurance. We currently insure about 40,000 businesses. As an insurance provider, we see firsthand how artificial intelligence is changing what attackers can do. It affects the effectiveness of cyber solutions and gives us visibility into how 40,000 companies experience this impact.”
AI Week panel (from left): Ariel Assaraf, Ariel Salpeter, Dima Tatur, Ayelet Kutner.
(Photo: Ynet studio)
“The most obvious change is the dramatic improvement in attackers’ ability to carry out email fraud. We all remember the early days of email scams: Nigerian prince emails and other clumsy attempts that were clearly fake. Today, attackers can easily produce polished, persuasive emails, and these emails often contain accurate personal information about the victim. AI makes it possible to collect data online and craft highly tailored fraud attempts.”
How detectable is it?
According to Kutner, “What we’re seeing is that older email-security solutions are becoming less and less effective. These attacks are no longer built from recognizable templates, so template matching simply works far less well. Only solutions that analyze context and assess the likelihood of different attack patterns have any chance of stopping them. We've seen a 30% increase in successful email scams in the last two years.”
There is an interesting paradox today. On one hand, organizations want to adopt AI now; on the other, adopting AI widens the information-security gap. How does this manifest?
Ariel Salpeter, Google Cloud, Enterprise Cloud BU Director: “This is the trillion-dollar question. Everyone understands that AI is creating a genuine technological and business revolution. Nearly all organizations are examining AI systems in one form or another. But according to a study we conducted, 74% of organizations are unable to move from the MVP or POC stage into production. They hit a wall, and the barrier is information security: fear of data leakage, fear of LLM hallucinations, and overall uncertainty. It’s a true paradox. AI brings enormous creative capabilities, but it also creates new attack surfaces and new ways to break systems.”
How well protected is that attack surface?
“The concern is real. And because organizations cannot afford to delay AI adoption, they must implement solutions that are secure by design. Agents must operate within secure environments that inherit the security controls of the organization. Companies need visibility into what the agents are doing and which LLM they are running, and they need frameworks that provide the confidence to deploy AI tools across departments at scale.”
To what extent do you feel the paradox between agent adoption and security needs?
Dima Tatur, VP Cybersecurity, Commit: “On one hand, organizations are in a frantic race to adopt and deploy agents, internally and for customers. On the other hand, everyone is developing agents: cloud providers offering studio tools, startups building anything they want, even kids in a garage. It has turned into a Wild West. These agents connect to organizational systems, data stores, payment systems, almost everything. And often we don’t know who developed the agent or what logic they put inside. It’s impossible to know how the agent interacts with the LLM. I think cyberattacks targeting agents will only grow.”
Is this easy to protect?
Tatur: “Security teams are used to working with known threat vectors. But in this world, the vectors are unclear. Everyone builds agents at home. Protocols allow broad interactions between agents and organizational systems, and with the LLM itself, and there are no clear models for protection. This is why many Israeli companies are emerging with new approaches to securing agents. We still don’t fully understand how best to protect agents and generative-AI systems; in the near future we will see several schools of thought emerge.”
Have you experienced any significant cyberattacks recently?
Ariel Assaraf, Co-Founder and CEO of Coralogix: “We recently experienced an attack where someone forged a thread of emails between me and a company requesting payment, replaced my email address in the thread, and diverted a payment to other accounts, claiming I had approved the invoice. It’s very difficult to recover from such attacks, but we managed.”
“I agree that AI-driven threats are increasing. Coralogix provides monitoring and security solutions. Earlier this year, we acquired a company specializing in AI monitoring and securing AI systems. We understood that AI is not software in the traditional sense. In software, an attack typically looks like an attack. In AI, it may not. Companies train models using sensitive internal data, then expose a chatbot to the outside world. Users can ask questions that reveal context. It’s like putting a senior employee who knows secrets into an interrogation room and hoping they don’t slip.”
“We worked with a major bank where an attacker crafted a sequence of questions that resulted in significant data leakage. The only effective response today is model-versus-model defense.”
Can something like this be prevented?
“We met with a bank that insisted it had no issues because employees could only ask approved questions and the model was allowed to give only approved answers. But in that scenario, you don’t really have a chatbot at all. True protection requires creating a model that acts as a judge, evaluating each question and answer and determining whether it meets defined security criteria.”
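To make the “judge” pattern Assaraf describes more concrete, here is a minimal sketch of a second model screening each question-and-answer pair against security criteria. This is an illustration only, not Coralogix's implementation: the `call_llm` function, the policy text, and all names are hypothetical placeholders.

```python
# Minimal sketch of an "LLM-as-judge" guardrail for a customer-facing chatbot.
# Assumes a generic chat-completion function `call_llm(prompt) -> str`; wire it
# to whatever model provider or internal gateway you actually use.

JUDGE_POLICY = """You are a security judge for a banking chatbot.
Given a user question and a draft answer, reply with exactly one word:
ALLOW if the exchange reveals no customer data, credentials, or internal
system details, otherwise BLOCK."""


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (provider-specific in practice)."""
    raise NotImplementedError("connect this to your LLM provider")


def judge(question: str, draft_answer: str) -> bool:
    """Ask a second model whether the question/answer pair meets the policy."""
    verdict = call_llm(f"{JUDGE_POLICY}\n\nQuestion: {question}\nAnswer: {draft_answer}")
    return verdict.strip().upper().startswith("ALLOW")


def answer(question: str) -> str:
    draft = call_llm(question)            # primary chatbot model drafts a reply
    if judge(question, draft):            # judge model screens the full exchange
        return draft
    return "I can't help with that request."   # refuse rather than risk leakage
```

The point of the pattern, as described in the panel, is that the chatbot keeps answering freely while a separate model evaluates every exchange, rather than restricting users to a fixed list of approved questions.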