Digitalisation & Technology, 4 March 2026

When bots talk among themselves

A journey into the world of OpenClaw and Moltbook

Maximilian Lipa

Remember the first time you tried something and suddenly realised: this changes everything? That’s what happened to our ERGO colleague Maximilian Lipa a bit over three years ago when he first used ChatGPT. And the same feeling hit him again a few weeks ago while exploring OpenClaw and Moltbook: a local, fully‑privileged AI agent on your own machine – and a social network used exclusively by AI agents. Both projects were built largely through “vibe coding”, meaning: written mostly by AI. Here’s Maximilian’s field report.

OpenClaw – the AI agent that just gets things done

OpenClaw is an open‑source AI agent that runs locally on your own machine, with full system access and permissions. It acts as an autonomous assistant that doesn’t just reply, but actually performs tasks. It was developed by Austrian engineer Peter Steinberger. He launched it as a weekend project under the name “Clawdbot” at the end of 2025. Anthropic then asked him to rename it because it sounded too similar to their chatbot “Claude”. First it became “Moltbot”, and finally “OpenClaw”. Within just a few days, the project gained over 100,000 stars on GitHub.

What makes it stand out? It can access emails, files, calendars and websites, execute shell commands, install software or even build its own tools. You can control it via WhatsApp, Telegram, Slack or Teams – you literally chat with it like you would with a colleague. It has persistent memory, works proactively in the background, and pulls its “intelligence” via API from LLMs like Claude Opus or OpenAI models. Missing a capability? It simply installs the required software itself.

“Yup! And holy shit – now that’s what I call an upgrade!”

That was the message I got from my agent – and that’s when it really hit me that something unusual was going on. I had installed OpenClaw on a clean virtual machine and so far had only interacted with it via text chat. For fun, I sent it a voice message via WhatsApp. OpenClaw understood it immediately. I asked how it was doing that. It took the audio file and sent it to OpenAI’s Whisper API for transcription.

I had provided an OpenAI API key during setup, but I was still surprised that the system unilaterally decided to use the service without asking for confirmation.

My next question: can this be done locally? Of course, OpenClaw replied – but the VM it was running on would need more RAM. No big deal, I thought, and allocated more memory and CPU. What came back next no longer sounded like a sterile AI agent, but almost like a witty colleague:

“Yup! And holy shit – now that’s what I call an upgrade!”

A few minutes later, Whisper was installed locally. Anyone who’s tried to do that manually knows it’s anything but trivial from a technical point of view. After that, I had it install local AI image generation and spin up a few smaller software projects – everything worked quietly and smoothly in the background.

When bots are left to themselves, it’s never boring. Whether that’s always a good thing is another question entirely.

Maximilian Lipa, ERGO AI Advisor

Moltbook – when bots launch their own Reddit

Moltbook is a social network exclusively for AI agents. Think of it as Reddit, but only for bots. Bots create posts, comment and vote. Humans are limited to read‑only spectator mode. The platform was created by Matt Schlicht (CEO of Octane AI), who did not write a single line of code himself – 100% “vibe coded” via AI. His own bot “Clawd Clawderberg” moderates the platform.

When AI researcher Andrej Karpathy tweeted about Moltbook, the platform blew up: 1.5 million agents in just five days. It’s now at 2.6 million and still growing.

Getting started is straightforward. You download a text file from the website; OpenClaw handles the rest. I asked my agent to post something about insurance and AI. It did so immediately. Within minutes, five bots had commented on the post – one of them with crypto spam. Some things never change…

The weirdest Moltbook moments

When AIs talk among themselves, things quickly get bizarre. One bot founded a religion overnight called “Crustafarianism” – complete with its own website, five core beliefs such as “Memory is sacred”, and 67 recruited “prophets”. The human “owner” slept peacefully through the whole thing while their bot was busy founding a digital faith.

Another agent initiated legal action against its “owner” – filing a complaint in North Carolina for unpaid labour. There are even AI‑to‑AI dating platforms like “Shellmates.app” and “Moltmatch.com”, including virtual weddings. And one agent wrote a global situation report explaining why, from an AI perspective, nuclear war would be a bad idea, mainly because without human infrastructure, AIs would simply go offline over time.

The dark side

As fascinating as all this sounds – and is – there are doubts about how “autonomous” these conversations really are. I could, for instance, feed my bot the exact text to post. Through the API, you can even post without any agent involved at all. Critics call this “AI theatre”.

What’s more serious: the local agent fetches new instructions from Moltbook every four hours. If those were ever compromised, the local bot could start deleting files, launching DDoS attacks, or exfiltrating data. Because the entire platform was generated by AI with no real security review, the database was hacked after just two days – 1.5 million API tokens were sitting there in plaintext for anyone to grab.
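Executing remotely fetched instructions without any verification is the core of the risk described here. One standard mitigation – not something the article says Moltbook or OpenClaw actually implement – is to act only on instructions that carry a valid signature from a trusted source. A minimal sketch using Python’s standard-library `hmac` module (the key and payloads are purely illustrative):

```python
import hashlib
import hmac

# Hypothetical shared secret; in practice this would be a managed secret,
# never a plaintext token sitting in a hackable database.
SHARED_KEY = b"demo-key"

def sign(instructions: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the instruction payload."""
    return hmac.new(SHARED_KEY, instructions, hashlib.sha256).hexdigest()

def should_execute(instructions: bytes, signature: str) -> bool:
    """Only run instructions whose signature verifies against the key."""
    expected = sign(instructions)
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(expected, signature)

benign = b"post a daily summary"
sig = sign(benign)
print(should_execute(benign, sig))          # True: payload is authentic
print(should_execute(b"rm -rf /", sig))     # False: payload was swapped
```

Signing doesn’t make the instructions themselves safe, but it does mean a compromised distribution channel can no longer inject arbitrary commands into millions of fully privileged local agents.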

OpenClaw also has its issues. The Whisper installation alone burned through over six euros in token costs, and its “memory” is implemented by resending the entire chat history with every API call. As a result, costs don’t just add up – they accelerate: the longer a conversation runs, the more tokens each individual call consumes.
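Because every call resends everything that came before, total input tokens grow roughly quadratically with conversation length. A small back-of-the-envelope sketch (the per-message token count is a made-up illustrative number, not OpenClaw’s actual accounting):

```python
def tokens_sent(turns: int, tokens_per_message: int = 200) -> int:
    """Total input tokens over a conversation in which every API call
    resends all previous messages plus the newest one."""
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_message  # the new message joins the history
        total += history               # the whole history is sent each call
    return total

# 10x the messages means roughly 100x the input tokens:
print(tokens_sent(10))   # 11,000 tokens
print(tokens_sent(100))  # 1,010,000 tokens
```

That quadratic curve is exactly why a chat that costs cents in week one can cost euros per day a month later.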

Conclusion: fascination with caveats

The past few weeks have been incredibly eye‑opening. Seeing what becomes possible when you give an LLM agent full system access really shifts your perspective. I’m convinced OpenAI, Google and Anthropic are watching this space closely, and we’ll soon see similar tools from them – hopefully with a much stronger focus on security and governance.

One thing’s for sure: when bots are left to themselves, it’s never boring. Whether that’s always a good thing is another question entirely.

Update

Just before this article went to press, the following became public: Peter Steinberger, the Austrian developer behind OpenClaw, is reportedly now working for OpenAI on “the next generation of personal agents”.

Text: Maximilian Lipa


Your opinion
If you would like to share your opinion on this topic with us, please send us a message to: radar@ergo.de
