
Senior engineer departs Sutskever’s Safe Superintelligence to launch Israeli AI trust startup
Shahar Papini left the $32 billion research lab to co-found Attestable, which has raised $18.5 million to verify AI models, prompts, and outputs.
Shahar Papini, one of the early engineers at Ilya Sutskever’s high-profile Safe Superintelligence (SSI), has recently left the $32 billion research company to co-found a new Israeli startup focused on securing the increasingly fragile foundations of artificial intelligence.
Papini, who spent roughly a year as a senior software engineer at SSI following its creation in 2024, has joined Attestable as co-founder and chief technology officer. The Tel Aviv-based company, still operating in stealth, has raised $18.5 million in seed funding, including backing from TLV Partners, according to PitchBook. Papini co-founded Attestable with CEO Yogev Bar-On and vice president of R&D Shahar Samocha.
Sutskever stepped into the role of CEO at SSI last July after co-founder and CEO Daniel Gross left the company to join Meta.
SSI, which maintains a significant team in Tel Aviv, has raised more than $3 billion to pursue what Sutskever describes as a fundamentally new path to artificial general intelligence, one that moves beyond the industry’s fixation on ever-larger models and brute-force scaling.
Attestable is taking a markedly different angle. PitchBook describes the company as developing an “AI trust-layer” designed to ensure system integrity by verifying models, prompts, and outputs, allowing organizations to secure AI use and prevent malicious tampering. The focus reflects a growing anxiety among businesses that generative AI systems, while powerful, are brittle, easily manipulated, and difficult to govern.
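Attestable has not disclosed how its trust layer works, but integrity verification of this kind generally starts with cryptographic hashing: record digests of the model weights and prompt templates you intend to serve, then refuse to run if anything on disk no longer matches. The Python sketch below illustrates that baseline check; the file names, manifest, and digest values are invented for illustration and do not describe Attestable's product.

```python
import hashlib
import hmac
from pathlib import Path

# Hypothetical pinned manifest, e.g. produced at deployment time and stored
# somewhere tamper-resistant. The digests here are placeholders.
TRUSTED_MANIFEST = {
    "model.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "system_prompt.txt": "60303ae22b998861bce3b28f33eec1be758a213c86c93c076dbe9f558c11c752",
}

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 so large model weights never sit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(base_dir: Path) -> bool:
    """Return True only if every pinned artifact matches its recorded digest."""
    for name, expected in TRUSTED_MANIFEST.items():
        actual = sha256_file(base_dir / name)
        # hmac.compare_digest performs a constant-time comparison.
        if not hmac.compare_digest(actual, expected):
            print(f"integrity check failed: {name}")
            return False
    return True

if __name__ == "__main__":
    if verify_artifacts(Path("./deployment")):
        print("all artifacts verified; safe to serve")
    else:
        print("refusing to serve: possible tampering")
```

A production trust layer would go further, signing the manifest itself and attesting the runtime environment, but the core move is the same: compare what is deployed against what was pinned.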
Sutskever has argued that today’s models excel on benchmarks yet falter in real-world settings, a gap he calls “jaggedness.” In a recent interview he described how large language models can oscillate between errors, fixing one bug only to reintroduce another, behavior that hints at deep limitations in current training methods.
He attributes the problem partly to reinforcement learning that makes systems "too single-minded," and to evaluation-driven training that optimizes for benchmark performance rather than genuine general ability. Humans, he argues, possess a richer internal value function, shaped by emotion, that allows for more robust learning. The implication is that AI's next leap will require new scientific principles, not simply more data.