“Everything that we do with AI that makes our lives better also makes life better for the attackers,” Microsoft exec warns

Rob Lefferts, Corporate Vice President for Threat Protection at Microsoft, says AI is accelerating cybercrime and erasing the old signals that once helped users spot malicious emails.


Rob Lefferts. (Photo: Yariv Katz)


When Rob Lefferts surveys the global cybersecurity landscape, he sees an economy under siege. “Some of the more reputable numbers I’ve seen have been about $10.5 trillion a year,” he said, referring to the estimated annual cost of cybercrime worldwide. “If you were to rank that as a nation’s GDP, it would be the third largest on the planet behind the U.S. and China.” The figure, he added, is growing at double-digit annual rates: “If things continue, it will keep going.”
Lefferts, Corporate Vice President for Threat Protection at Microsoft, has spent the past decade flying to Israel three or four times a year. A significant part of his team sits inside Microsoft’s Israel R&D Center, and he makes no secret of why the frequent trips are necessary: the epicenter of the company’s most advanced cybersecurity work is now in Israel.
In an interview at the Global Economy Conference, Lefferts spoke about how artificial intelligence is reshaping cyberattacks, and how quickly the balance between attacker and defender is shifting.
The arrival of large language models, he warned, has already transformed the attacker’s playbook. “One of the first laws of cybersecurity is that everything that we do that makes our lives better also makes life better for the attackers,” he said. That principle, he added, is “absolutely true in the space of AI.”
For years, phishing emails were riddled with grammatical errors, often betraying their origin. Those signals are now gone. “No attacker has spelling mistakes anymore because all of the phishing lures are generated by AI,” Lefferts said.
More troubling is what comes next. Attackers can already scrape LinkedIn and other public profiles to launch highly personalized spear-phishing campaigns. But Lefferts expects far greater sophistication: “Rather than just getting you to click on a link and do something by accident, I’ll just set up a chatbot that will have a conversation with you for months… and play a long con with AI. Attackers will be able to do that at scale.”
AI models running locally, beyond the reach of OpenAI, Anthropic, or other developers, will make such misuse difficult to control. Lefferts was blunt: “I don’t think attackers are using public AI services for their attacks.” The real challenge, he said, is preparing defenders for an era in which AI-powered threats evolve continuously.
If attackers are scaling, defenders will need to scale faster. “Cybersecurity is inherently a big data problem,” Lefferts said. “It is about finding that needle in the haystack.” Generative AI, he argued, is finally capable of operating at that scale.
Imagine, he said, a network watched over by “24/7 agents… simulating attacks, learning how they would work, and hardening your environment so that you're ready when that attack comes and you can respond at machine speed.”
Industry forecasts may already underestimate how quickly this shift is coming. Lefferts cited an analyst estimate of 1.3 billion autonomous AI agents deployed in corporate environments by 2028. But CEOs he speaks with take a more aggressive view: “I know many CEOs who plan to have 1 billion agents in their own company by the end of next year,” he said.
Chief information security officers see the same number very differently. “They’re terrified,” he admitted. With every new agent comes a new potential entry point, “a huge attack surface,” and even a new category of social engineering. “How do I trick your AI into doing something it wasn’t supposed to do?”
Asked whether the future of cybersecurity amounts to AI agents fighting AI agents in a fully automated cloud war, Lefferts did not rule it out. “Robots in the cloud. Maybe,” he said. But in the next three to five years, he expects a hybrid model: AI augmenting human analysts, not replacing them.
There are two reasons. First, the global cyber workforce is short 4 million workers, leaving organizations unable to staff their security operations. Second, Lefferts said, “humans and agents working together actually get better results… It’s not only faster, but it’s more accurate,” because automation takes over monotonous tasks and allows analysts to focus on strategy and judgment.
For Microsoft, Israel is not just another development hub; it is central to the company’s strategy for navigating the AI-cybersecurity convergence. “That has now been a decade” of deep engagement, Lefferts said. Israel offers “a wealth of cybersecurity talent that is driving forward and innovating,” making it “a place where it’s critical for Microsoft to invest.”
The collaboration, he added, is fundamental to how Microsoft is developing the next wave of security tools and AI technologies.
Lefferts’ parting advice was surprisingly human. His first recommendation: before deploying AI or agents, organizations must focus on the basics: zero-trust architecture, multifactor authentication, and maintaining hygiene across their systems.
His second recommendation was equally blunt: embrace AI early. “You have to understand what it can do for you,” he said, and ensure that security teams “are ready to jump on the next new trends as they emerge.”