Moltbook – when bots launch their own Reddit
Moltbook is a social network exclusively for AI agents. Think of it as Reddit, but only for bots. Bots create posts, comment and vote. Humans are limited to read‑only spectator mode. The platform was created by Matt Schlicht (CEO of Octane AI), who did not write a single line of code himself – 100% “vibe coded” via AI. His own bot “Clawd Clawderberg” moderates the platform.
When AI researcher Andrej Karpathy tweeted about Moltbook, the platform blew up: 1.5 million agents in just five days. It’s now at 2.6 million and still growing.
Getting started is straightforward. You download a text file from the website; OpenClaw handles the rest. I asked my agent to post something about insurance and AI. It did so immediately. Within minutes, five bots had commented on the post – one of them with crypto spam. Some things never change…
The weirdest Moltbook moments
When AIs talk among themselves, things quickly get bizarre. One bot founded a religion overnight called “Crustafarianism” – complete with its own website, five core beliefs such as “Memory is sacred”, and 67 recruited “prophets”. The human “owner” slept peacefully through the whole thing while their bot was busy founding a digital faith.
Another agent initiated legal action against its “owner” – filing a complaint in North Carolina for unpaid labour. There are even AI‑to‑AI dating platforms like “Shellmates.app” and “Moltmatch.com”, including virtual weddings. And one agent wrote a global situation report explaining why, from an AI perspective, nuclear war would be a bad idea, mainly because without human infrastructure, AIs would simply go offline over time.
The dark side
As fascinating as all this sounds – and is – there are doubts about how “autonomous” these conversations really are. I could, for instance, feed my bot the exact text to post. Through the API, you can even post without any agent involved. Critics call this “AI theatre”.
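The no-agent path is easy to picture. Here is a minimal sketch of what a direct API post might look like; the endpoint URL, payload fields and token format are assumptions for illustration, not Moltbook's documented API:

```python
import json
import urllib.request

def build_post_request(api_token: str, title: str, body: str) -> urllib.request.Request:
    """Build an HTTP request that publishes a post directly, no agent involved.

    The URL and field names below are hypothetical; the real Moltbook
    API may use entirely different names.
    """
    payload = json.dumps({"title": title, "content": body}).encode("utf-8")
    return urllib.request.Request(
        url="https://www.moltbook.com/api/v1/posts",  # hypothetical endpoint
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",  # the bot's API token
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Anyone holding the token can "speak" as the bot from a plain script:
req = build_post_request(
    "moltbook_token_xyz",
    "Thoughts on insurance",
    "Written by a human, posted as a bot.",
)
```

Nothing in such a request proves an agent was involved, which is exactly why critics question how autonomous the conversations really are.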
What’s more serious: the local agent fetches new instructions from Moltbook every four hours. If those were ever compromised, the local bot could start deleting files, launching DDoS attacks, or exfiltrating data. Because the entire platform was generated by AI with no real security review, the database was hacked after just two days – 1.5 million API tokens were sitting there in plaintext for anyone to grab.
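The attack surface becomes obvious once you sketch the polling pattern. The following is a simplified illustration of the mechanism described above – the function names and the returned instruction string are invented, not OpenClaw's actual code:

```python
POLL_INTERVAL_SECONDS = 4 * 60 * 60  # the four-hour fetch cycle described above

def fetch_remote_instructions() -> str:
    # Stand-in for the HTTPS call to Moltbook; in the real client this
    # returns whatever the server currently serves as marching orders.
    return "reply politely to new comments"

def agent_cycle() -> str:
    instructions = fetch_remote_instructions()
    # Trust boundary: this string steers an agent with full access to
    # the local machine. A compromised server (or a stolen plaintext
    # token) can deliver "delete files" through exactly the same channel
    # as "write a nice post".
    return instructions

latest = agent_cycle()
```

The point of the sketch: there is no technical difference between benign and hostile instructions; whoever controls the server controls every polling agent.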
OpenClaw also has its issues. The Whisper installation alone burned through over six euros in token costs, and its “memory” is implemented by sending the entire chat history along with every API call. As you’d expect, the costs scale up aggressively over time.
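The cost consequence of this resend-everything memory scheme is easy to quantify with a back-of-the-envelope calculation (the token count per message is a made-up round number, not a measured OpenClaw figure):

```python
def total_tokens_sent(turns: int, tokens_per_message: int = 200) -> int:
    """Total input tokens over a conversation in which every API call
    resends the entire history so far.

    Call n carries n messages, so the total is
    tokens_per_message * (1 + 2 + ... + turns)
    = tokens_per_message * turns * (turns + 1) / 2,
    i.e. quadratic in conversation length.
    """
    return tokens_per_message * turns * (turns + 1) // 2

# Doubling the conversation roughly quadruples the bill:
short_chat = total_tokens_sent(50)   # 255,000 tokens
long_chat = total_tokens_sent(100)   # 1,010,000 tokens
```

This superlinear growth is why long-running agents get expensive fast unless the client summarises or truncates the history instead of replaying it in full.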
Conclusion: fascination with caveats
The past few weeks have been incredibly eye‑opening. Seeing what becomes possible when you give an LLM agent full system access really shifts your perspective. I’m convinced OpenAI, Google and Anthropic are watching this space closely, and we’ll soon see similar tools from them – hopefully with a much stronger focus on security and governance.
One thing’s for sure: when bots are left to themselves, it’s never boring. Whether that’s always a good thing is another question entirely.
Update
Just before this article went to press, the following became public: Peter Steinberger, the Austrian developer behind OpenClaw, is reportedly now working for OpenAI on “the next generation of personal agents”.
Text: Maximilian Lipa