Guy Dagan.
Opinion

War, AI, and the human factor

"Advanced defense systems can detect malware and block technical attacks, but they cannot prevent every psychological manipulation, nor can they always keep pace with rapidly evolving technologies. The first line of defense remains the individual," writes Guy Dagan, a cybersecurity expert in the Cyber Division of the Yael Group.

Operation “Roaring Lion” offers a glimpse into a new kind of battlefield. It is a global arena where cyber units, AI tools, states, and a wide range of actors all play a role. Yet, somewhat counterintuitively, this emerging world of conflict is making the human factor more important, not less. The role of each of us as a link in the defensive chain is only growing.
The reason lies largely in the dramatic evolution of artificial intelligence, particularly in the realm of social engineering. Attackers now possess capabilities we have never seen before. Paradoxically, this technological leap has pushed the individual back to the center of the battlefield.
Guy Dagan (Photo: Tamar Matzapi)
The conflict unfolding in cyberspace today is not only technological; it is also psychological. Alongside technical attacks on computer systems, influence campaigns and social engineering operations are designed to shape human behavior — that of employees, managers, and ordinary citizens. Artificial intelligence has become a key tool in attackers’ arsenals because it enables them to manipulate the human factor with unprecedented precision and persuasiveness. At the same time, the integration of AI into defensive technologies has made purely technical attacks harder to carry out.
One of the most significant developments is the ability to generate highly personalized content at scale. In the past, phishing emails were generic and relatively easy to spot. Today, AI allows attackers to produce messages that appear entirely credible: written in natural language, tailored to professional or personal contexts, and often incorporating information about the target gathered from publicly available sources online. The result is social engineering that is almost individually tailored — and far more likely to succeed.
And the technology does not stop at text. Advances in AI now make it possible to generate deepfakes and cloned voices that convincingly imitate a person’s appearance or speech. An attacker can create a video or phone call that appears to come from a senior executive, a colleague, or another authority figure, delivered in a context that looks entirely legitimate. Detecting the manipulation can become extremely difficult.
Increasingly, attackers are also using AI not only as a tactical tool but as a strategic one. Artificial intelligence is no longer just helping draft phishing emails or generate deepfakes. Attackers are consulting AI systems on how to approach individuals, how to frame messages, and how to manipulate targets psychologically.
Alongside targeted attacks on organizations, states are also conducting large-scale influence operations online. Iran, for example, has long operated networks of fake accounts and coordinated social media campaigns aimed at shaping public discourse, spreading specific narratives, and amplifying social and political tensions. With AI in the picture, the ability to generate enormous volumes of content — posts, comments, articles, even entire profiles — grows dramatically. The digital space can be flooded with messages that appear authentic but are in fact part of coordinated influence campaigns.
Another layer of risk emerges when entire societies operate under sustained pressure, fatigue, and stress. In such conditions, our ability to remain alert and critically assess information weakens. Information overload, the urgency of real-time events, and the pressure to respond quickly create fertile ground for manipulation. People may act automatically — clicking a link, forwarding a message, or taking action without stopping to verify whether the information is reliable.
Attackers understand this dynamic well and exploit it deliberately. Social engineering campaigns are designed to trigger urgency, invoke authority, or apply emotional pressure. When a message appears credible and seems to come from a familiar source — especially when reinforced by convincing audio or video — the likelihood that a target will act without verification increases dramatically. In this sense, AI is not merely a technological tool; it is a psychological force multiplier.
All of this makes the human factor more critical than ever. Advanced defense systems can detect malware and block technical attacks, but they cannot prevent every psychological manipulation, nor can they always keep pace with rapidly evolving technologies. The first line of defense remains the individual: the ability to pause, question, recognize warning signs, and resist automatic reactions when something feels off.
In an era where artificial intelligence enables increasingly sophisticated and persuasive attacks, defense cannot rely on technology alone. At a time when pressure is high, fatigue is widespread, and information flows nonstop, human awareness becomes even more essential. Ultimately, even in the age of AI, the human being remains the central — and sometimes the final — link in the chain of cyber defense.
Guy Dagan is a cybersecurity expert in the Cyber Division of the Yael Group.