
Google VP: “The only way to win cyberwarfare is to let defenders fix vulnerabilities as fast as attackers exploit them”

Royal Hansen says AI is forcing Google to fight hackers with the same tools the attackers use: autonomous agents that find and patch flaws at machine speed.

Chatbots that harvest knowledge and produce code, and autonomous agents that detect attacks and counter them: Google’s approach to defending against cyberattacks is to use the same AI tools that hackers themselves use. “The new models can identify scams and block them, or find vulnerabilities and fix them,” says Royal Hansen, a vice president at Google who leads the technology giant’s cybersecurity, privacy, and safety efforts. “The beauty is that Gemini does it naturally. We just tell it: ‘Now check for these types of problems.’”
Hansen has been involved in artificial intelligence since the late 1990s, and in the seven years before joining Google he worked primarily at financial institutions such as Goldman Sachs and American Express. In his current role, he focuses on the engineering and technical side of cybersecurity in the AI era, while also working with policymakers, governments, and clients to promote appropriate solutions and policies.
Google VP Royal Hansen (Ben-Gurion University)
In an interview with Calcalist, he spoke about the main challenges Google faces in cybersecurity, how AI can be leveraged to confront the threats it creates, and why it is so important for doctors to engage with artificial intelligence.
According to Hansen, Google’s biggest cybersecurity challenge is complexity. “We have to keep up with innovation in artificial intelligence and in the field of agents, where we operate on the front lines,” he said. “The irony is that we also use AI within our security and safety systems to protect ourselves. This is the only way to keep up with this complexity. Think about spam and phishing: we’ve been handling them in Gmail using AI for years, not just since chatbots arrived. Now we also have to use the most advanced tools. For example, we took Gemini and created a special version trained on security-specific materials in addition to the base model.”
How do you build a defense strategy in the era of modern language models, when we don’t know today’s threats, or tomorrow’s?
“We use the same AI to write security patches or fixes and implement them autonomously. We try to think like attackers and use the same tools. That allows us to act not only quickly, but at scale. I don’t need a person for every security patch.
“For example, we have Project Zero. We founded it 15 years ago, after Chinese hackers broke into Google. We said we wouldn’t wait for that to happen again, and we hired 20 of the best hackers in the world, whose entire job is to find security holes, not just in Google products, but in widely used software. And like a good spy, we ‘burn’ those vulnerabilities, we expose them so the bad actors can’t exploit them.
“We gave this team the most advanced tools from Gemini and DeepMind, and now they can find zero-day vulnerabilities without touching the code. The AI models find the vulnerabilities for them.”
How does that work?
“In much the same way that Gemini understands software code and knows how to continue it, write a module, or test it. The model builds a representation of the code, tracks how data flows through it, then examines each interface for anomalies. It tries feeding unexpected input into those interfaces while recording how the code behaves.
“We know the classic types of vulnerabilities, so we trained the model to test for them. The beauty is that Gemini already does this naturally. People use it to write code, we just turn it around and say: ‘Now test for these problems.’”
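What Hansen describes tracks what security engineers call fuzzing: feed an interface mutated, unexpected inputs and record every one that makes the code misbehave. Below is a minimal sketch of that loop, with a deliberately planted bug; the parse_record interface is a hypothetical stand-in, not Google’s tooling, and a model-driven system would aim its mutations at suspect interfaces rather than drawing them blindly at random:

```python
import random

def parse_record(data: bytes) -> None:
    """Hypothetical interface under test; stands in for any code path
    that accepts external input. The planted flaw: a length field the
    parser trusts without a bounds check."""
    if not data:
        return
    length = data[0]          # untrusted length prefix
    body = data[1:]
    if length > len(body):    # would be an out-of-bounds read in a real parser
        raise IndexError("read past end of buffer")

def mutate(seed: bytes) -> bytes:
    """Flip, insert, or drop random bytes: the 'unexpected input'
    Hansen describes feeding into each interface."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        roll = random.random()
        if roll < 0.5 and data:
            data[random.randrange(len(data))] = random.randrange(256)
        elif roll < 0.8:
            data.insert(random.randrange(len(data) + 1), random.randrange(256))
        elif data:
            del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Feed mutated inputs into the interface and record how the code
    behaves, keeping every input that triggers a failure."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception:
            crashes.append(candidate)   # kept for triage and patching
    return crashes

if __name__ == "__main__":
    found = fuzz(b"\x00\x01\x02\x03")
    print(f"{len(found)} crashing inputs recorded")
```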
“The second project is called Project Mender. Once a vulnerability is found and its location in the code is understood, the system is asked to write a patch to close it, run tests to verify the fix, and then integrate it into the product. To succeed, it’s not enough to find vulnerabilities, you have to give people tools to fix them at the same pace attackers find and exploit them. Right now these tools are only available within Google, but eventually they will be made available more broadly, as part of products or as open source.”
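The find-patch-verify loop Hansen outlines can be sketched roughly as follows. Everything here is an assumption for illustration: generate_patch stands in for a model call, and `make test` for whatever verification suite the real system runs; this is not Mender’s actual pipeline.

```python
import subprocess

def generate_patch(vuln_report: str, source: str) -> str:
    """Stand-in for a model call that, given a vulnerability report and
    the affected source, proposes a fix as a unified diff. Hypothetical:
    the real system's interface is not public."""
    raise NotImplementedError

def tests_pass(repo_dir: str) -> bool:
    """Run the project's test suite ('make test' is an assumption here);
    a candidate patch is accepted only if behavior is preserved."""
    return subprocess.run(["make", "test"], cwd=repo_dir).returncode == 0

def mend(vuln_report: str, source: str, repo_dir: str, attempts: int = 5):
    """Find -> patch -> verify loop: retry until a candidate fix both
    applies cleanly and survives the tests, then hand it off for review."""
    for _ in range(attempts):
        patch = generate_patch(vuln_report, source).encode()
        check = subprocess.run(["git", "apply", "--check", "-"],
                               cwd=repo_dir, input=patch)
        if check.returncode != 0:
            continue                      # malformed diff; ask the model again
        subprocess.run(["git", "apply", "-"], cwd=repo_dir, input=patch)
        if tests_pass(repo_dir):
            return patch.decode()         # ready for review and integration
        subprocess.run(["git", "checkout", "--", "."], cwd=repo_dir)  # revert
    return None
```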
How are bad actors trying to exploit Gemini in cyberattacks?
“Our intelligence teams have looked at this. Initially, attackers use Gemini as a knowledge worker: ‘Tell me about this software package,’ ‘Who uses it?’ ‘Where can I find it?’ It’s like a search engine. But that’s just the beginning.
“Now we’re seeing attempts to use it for writing malware or attack code. If you go back 40 years to the first computer worms, it’s the same pattern: exploiting automation. The difference now is that the barrier to entry is much lower. All you need is an internet connection, you don’t need to know programming or cybersecurity.”
How do you distinguish between a bad actor and a legitimate security researcher?
“This problem exists in many areas. We need to stop child exploitation material, fake websites, and fraud; malicious code is another example. We block certain levels of use when we believe activity crosses the line from information into harm. But it’s a delicate balance.
“I don’t rely on blocking forever, because there are other models, not ours, that don’t impose these restrictions and can be downloaded locally. That’s why I emphasize Project Mender. The only way to win is to give defenders everywhere the ability to find and fix vulnerabilities as quickly as attackers can exploit them. You can’t stand still and hope they won’t find you.”
How do you address LLM-driven social engineering, fake news, or scams such as ‘pig butchering’?
“We’ve used AI for over a decade to detect phishing, because manual detection at this scale is impossible. The new models are excellent at identifying scams even without large training datasets. We define policies in plain language, for example, ‘identify and block scams that attempt to withdraw cryptocurrency.’ The model looks for behavior that fits the spirit of the policy. It’s remarkable because it catches patterns we hadn’t even anticipated.”
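The plain-language-policy pattern Hansen describes amounts to putting the policy text itself into the model’s instructions and letting the model judge behavior against its spirit. A minimal sketch, assuming a generic text-in/text-out model call; the `complete` callable, the policy wording, and the prompt are illustrative, not Google’s production setup:

```python
POLICY = (
    "Identify and block scams that attempt to withdraw cryptocurrency: "
    "messages that pressure the recipient to move funds, reveal a wallet "
    "seed phrase, or 'verify' a wallet on an external site."
)

PROMPT = """You are a fraud screener. Policy:
{policy}

Message:
{message}

Answer with exactly one word, ALLOW or BLOCK, judging by the spirit
of the policy rather than its literal wording."""

def screen(message: str, complete) -> bool:
    """Return True if the message should be blocked. `complete` is any
    text-in/text-out LLM call; it is a placeholder, not a real API."""
    verdict = complete(PROMPT.format(policy=POLICY, message=message))
    return verdict.strip().upper().startswith("BLOCK")
```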
The challenge, he adds, is defining the line between authentic images and AI-generated ones. “Every digital image undergoes some form of processing. The boundaries are still evolving, because you don’t want to flag every minor enhancement. We’re working on the technical side, but there are also legal and policy questions. Part of the reason I came to Israel was to meet with people from different sectors, including Ben-Gurion University, on projects that combine technology, law, and public policy.”
What do policymakers still misunderstand about AI?
“Many people simply don’t use it enough to have a meaningful discussion. I tell my team: be curious, use it. It’s a very democratic technology. You’ll learn it with your kids, at work. The more people use it, the better the policy conversations become.
“A discussion about AI in healthcare is completely different from one about AI in energy. I need doctors to use it so we can talk seriously about AI in medicine. I’m not a doctor.”
But isn’t medicine too sensitive an area?
“On the contrary, it’s a perfect example. I want doctors to experiment with it in their personal lives so they understand its capabilities and limitations. The same goes for materials scientists or engineers. AI is a tool everyone will eventually learn to use in their own way.”
What do you expect to see in AI over the coming year?
“I expect increased use of AI to find security vulnerabilities, by both defenders and attackers. I also expect the early emergence of self-protecting applications: systems that identify and fix their own problems. These will shape the cybersecurity conversation. Organizations running legacy software will be more exposed, because their options are limited. The gap will widen between those investing in modern defenses and those trying to protect outdated systems.”
Do your meetings in Israel offer any unique insights?
“I met here with the Wiz team and many others. I like coming to Israel because people immediately grasp the importance of the issue. You don’t have to explain why cybersecurity matters. There’s an understanding that cyber isn’t an end in itself, it enables medicine, water, and energy systems to function efficiently. That baseline awareness isn’t always present elsewhere.”