Hirundo raises $8M in Seed funding to help AI forget its mistakes

Israeli startup tackles hallucinations, bias, and toxic data in trained models with “machine unlearning” approach.

As generative AI systems become deeply embedded in enterprise operations, the risks of deploying flawed models, riddled with bias, misinformation, or even confidential data, have become harder to ignore. Israeli startup Hirundo, which claims to “make AI forget,” has raised $8 million in Seed funding to address exactly that problem.
The round was led by Maverick Ventures Israel, with participation from SuperSeed, Alpha Intelligence Capital, Tachles VC, AI.FUND, and Plug and Play Tech Center.
Founded in 2023, Hirundo is pioneering a discipline known as machine unlearning, which surgically removes unwanted data and behaviors from already-trained AI models. These include hallucinations (plausible-sounding but incorrect outputs), embedded biases, and adversarial vulnerabilities. The company says its process avoids the need for costly and time-consuming retraining, and preserves overall model performance.
Hirundo’s technology works across both generative and non-generative systems, and is already being piloted by multinationals in finance, healthcare, and defense. The founding team includes former Technion Dean of Computer Science Prof. Oded Shmueli, Rhodes Scholar and entrepreneur Ben Luria, and data lineage expert Michael Leybovich.
"Broader adoption of AI is limited by hallucinations and undesired behaviors which make models too risky to deploy in enterprise-level applications. With Hirundo, models can be remediated instantly at their core, working towards fairer and more accurate outputs," said Ben Luria, CEO & Co-Founder of Hirundo. "Hirundo's solution operates like a form of AI model "neurosurgery," pinpointing where in a model's billions of parameters hallucinations originate or toxic knowledge encoded, and precisely removing it. We ensure data is reliably deleted, model accuracy is assured, and the process is scalable and repeatable."
The company’s approach stands apart from traditional mitigation strategies like fine-tuning or prompt-level guardrails, which tend to filter or redirect outputs rather than address root causes. Hirundo instead identifies “directions” in the model’s internal structure that correlate with problematic behavior and deletes them. It has reportedly achieved up to 70% bias removal and a 55% reduction in hallucinations on open-source models like Llama and DeepSeek-R1.
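Hirundo has not published its algorithm, but the “directions” description resembles a technique from interpretability research sometimes called directional ablation: find a vector in a model’s activation space that correlates with an unwanted behavior, then project that component out. The sketch below is purely illustrative of that general idea, not Hirundo’s implementation; the tensors and the way the direction is obtained are hypothetical placeholders.

```python
import torch

def ablate_direction(activations: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project out one unwanted direction from a batch of hidden states.

    activations: (batch, hidden_dim) hidden states from some layer.
    direction:   (hidden_dim,) vector correlated with the unwanted behavior.
    """
    d = direction / direction.norm()               # normalize to a unit vector
    coeffs = activations @ d                       # component of each row along d
    return activations - coeffs.unsqueeze(-1) * d  # subtract that component

# Toy usage with placeholder values (a real pipeline would estimate the
# direction, e.g., by contrasting activations on biased vs. unbiased inputs).
hidden = torch.randn(4, 768)       # hypothetical layer activations
bias_dir = torch.randn(768)        # hypothetical "bias direction"
cleaned = ablate_direction(hidden, bias_dir)

# After ablation, the activations have no remaining component along the direction.
unit = bias_dir / bias_dir.norm()
print(torch.allclose(cleaned @ unit, torch.zeros(4), atol=1e-5))  # True
```

The appeal of this style of intervention, and plausibly the reason Hirundo contrasts it with fine-tuning, is that the projection only removes the component aligned with the unwanted direction, leaving the orthogonal components, and hence most of the model’s learned behavior, untouched.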