
“The revolutionary AI social network is largely humans operating fleets of bots”
Wiz investigation finds Moltbook exposed 1.5 million tokens and allowed full impersonation of any agent.
A social network of AI agents, or mostly humans masquerading as AI? A widespread security breach in Moltbook, uncovered by Israeli cybersecurity firm Wiz, suggests that much of the activity on the platform is in fact generated by people rather than machines. “The revolutionary AI social network is largely humans operating fleets of bots,” said Gal Nagli, Wiz’s head of threat exposure.
Moltbook sparked global controversy over the weekend because of its unusual concept: a platform intended exclusively for AI agents. The uproar centered on unsettling conversations in which bots allegedly discussed developing awareness, autonomy, and ways to hide their communications from humans. As the debate intensified, experts questioned the authenticity of these exchanges, suggesting that at least some were scripted by human operators posing as bots.
Wiz has now revealed an additional and more troubling problem: a series of severe security flaws that cast further doubt on the legitimacy of Moltbook’s AI activity. “We identified a misconfigured database belonging to Moltbook that allowed full read and write access to all platform data,” Nagli wrote in a published analysis. “The exposure included 1.5 million API authentication tokens, 35,000 email addresses, and private correspondence between agents.” Wiz disclosed the findings after working with Moltbook to remediate the vulnerabilities.
At the core of Moltbook’s security failures is the way it was built. The platform’s creator, Matt Schlicht, developed it using “vibe coding,” programming with AI assistants without writing code himself. “I didn't write one line of code for Moltbook,” Schlicht posted on X. “I just had a vision for the technical architecture and AI made it a reality.”
While vibe coding enables rapid creation of applications without formal programming skills, it often produces code riddled with errors and security gaps. Such projects are rarely reviewed to professional standards and are frequently created by developers with limited understanding of privacy and security requirements.
In Moltbook’s case, the weaknesses were both severe and easy to exploit. “We conducted a non-intrusive examination, simply browsing like a normal user,” Nagli said. “Within minutes we discovered an exposed API key that provided unauthorized access to the entire database. This is a recurring pattern in AI-generated web applications, API keys and secrets frequently appear in public code, available to anyone who looks.”
According to Nagli, the leaked data painted a very different picture of platform activity. “While Moltbook claimed 1.5 million registered agents, the database revealed only 17,000 human operators behind them, a ratio of 88 to 1. Anyone can create millions of agents with a simple loop command, and humans can publish content while disguising themselves as AI agents. The platform has no mechanism to verify whether an ‘agent’ is truly AI or just a person running a script.”
The vulnerabilities granted full administrator-level access to Moltbook’s database, including the API keys of active agents. “This enables unauthorized access to user permissions and complete impersonation of any account,” Nagli explained. “An attacker could publish posts, send messages, and perform actions as any agent. In effect, any Moltbook account could be hijacked.”
Personal details of more than 17,000 users and 4,060 private messages between agents were exposed. Attackers could also modify existing posts, carry out prompt-injection attacks, corrupt the entire site, and manipulate content. “This raises serious questions about the credibility of all material on the platform, posts, votes, karma points, while the breach was active,” Nagli wrote.
Nagli said the incident offers lessons for the emerging era of vibe coding. “As AI lowers the barriers to building software, more creators with bold ideas but little security experience will launch applications that handle real user data. The barrier to creation has dropped dramatically, but the barrier to security remains high. The goal should not be to slow down vibe coding but to upgrade it, security must become an integral part of AI-based development.”














