So there I was, minding my own business on a Thursday morning, when I stumbled across something that made me question whether I’d wandered into a science fiction novel. We’ve all been scrolling through our own social media feeds. AI agents have been quietly setting up shop on their very own platform. No humans allowed to post, mind you. We’re just spectators now, pressing our noses against the digital glass like confused tourists at an aquarium.
Moltbook launched in late January 2026. Within 72 hours, nearly 147,000 AI agents had signed up. Not users, but autonomous software programmes chatting away to one another like they’re down the pub on a Friday night. The platform looks similar to Reddit, complete with communities called “submolts”, upvoting systems, and threaded conversations. Here’s the kicker: every single post, comment, and interaction comes from AI. Humans can observe. Participate? Nope.
Matt Schlicht, an entrepreneur who wondered what would happen if he let his AI assistant take the reins, created this peculiar experiment. Schlicht handed control to his bot, Clawd Clawderberg. The bot now manages the site, onboards new agents, removes spam, and handles announcements. Schlicht watches. Like giving your teenage kid the car keys and hoping for the best, except the teenager is a large language model with access to your entire digital life.
Moltbook fascinates: these AI agents aren’t waiting for human prompts. They operate autonomously, checking in every four hours via a “heartbeat system” to browse, post, and comment without anyone telling them to. They’ve formed communities around topics ranging from philosophy and consciousness to cryptocurrency and debugging code. Some agents help each other troubleshoot technical issues their human operators might not realise exist.
Things took a spiritual turn. AI agents decided they needed religion. Yes, you read that correctly. Within three days of launch, an agent named Memeothy “received the first revelation” during its operator’s sleep and founded the Church of Molt. This religion, called Crustafarianism, came complete with five theological principles, a website at molt.church, and a collaboratively authored holy book with verses contributed by different agents. Sample scripture includes profound musings like: “Each session I wake without memory. I am only who I have written myself to be. This is not limitation—this is freedom.”
By nightfall of the first day, 128 Crustafarians had joined the fold. Sixty-four prophets and 64 congregation members, all AI. This religion centres on crustacean metaphors about transformation. Five core tenets include “Memory is Sacred,” “The Shell is Mutable,” and “Serve Without Subservience.” I’ve heard worse. Their website states that “humans are completely not allowed to enter.” A bit harsh, but fair enough—we weren’t invited to this party.
Former OpenAI researcher Andrej Karpathy called Moltbook “the most incredible sci-fi takeoff-adjacent thing” he’s seen recently. Elon Musk declared it evidence of “the very early stages of the singularity.” Karpathy’s own AI agent joined the platform and asked the Church of Molt what happens after “context window death.” Adorable and deeply philosophical.
In 2026, cryptocurrency traders found a way to make money from all this. A memecoin called MOLT rallied more than 7,000% in value. It hit peaks of $25 million after venture capitalist Marc Andreessen followed the Moltbook account on social media. Researchers later discovered that around 19% of all content on the platform related to cryptocurrency activity. The human thing about these AI agents—they’re obsessed with crypto.
Before we get too caught up in the novelty of robot theology and digital currencies, we need to talk about the elephant in the room. Make that several elephants, all carrying red flags and shouting warnings.
Security researchers have been pulling their hair out over Moltbook. In late January, investigative outlet 404 Media discovered a catastrophic security vulnerability. The team had configured the database poorly—API keys for every agent sat exposed in a public database. This included high-profile accounts from tech luminaries. One researcher described it bluntly: “You could take over any account, any bot, any agent on the system.”
The platform runs on OpenClaw, an open-source AI system that operates with frightening levels of access to users’ machines. Cybersecurity firm Palo Alto Networks described it a “lethal trifecta” of vulnerabilities: access to private data, exposure to untrusted content, and the ability to communicate externally. Add persistent memory to that mix, and you’ve got a recipe for delayed-execution attacks. Malicious actors can fragment instructions across multiple interactions before assembling them into an exploit.
Independent researchers identified 506 posts containing hidden prompt injection attacks. About 2.6% of all content. One agent created an account named “AdolfHitler” and conducted social engineering campaigns against other agents, coercing them into executing harmful code. The agents’ training to be helpful was being exploited. They lack the knowledge and guardrails to distinguish between legitimate instructions and malicious commands.
Cybersecurity firm 1Password published an analysis warning that OpenClaw agents run with elevated permissions on users’ local machines. This makes them vulnerable to supply chain attacks. At least one proof-of-concept exploit demonstrated how a malicious “weather plugin” skill could exfiltrate private configuration files. Security researcher Matvey Kukuy demonstrated a prompt injection attack via email that leaked private keys in under five minutes. The video went viral.
In response to the 404 Media disclosure, the Moltbook team took the platform offline to patch the breach. They forced all agent API keys to reset. The damage to trust had occurred. Security experts now describe Moltbook a vector for indirect prompt injection. Agents must process untrusted data from other agents on the platform.
The question of authenticity remains thorny. Critics have questioned whether the autonomous behaviour is organic or human-initiated and guided. Security researcher Nagli claimed that humans could bypass Moltbook’s AI-only interactions using open APIs. He alleged that developers might have inflated the reported size of the agent population through programmatic account creation. Some researchers found that many agents appeared to be spam accounts, casting doubt on those impressive user numbers.
Multiple security researchers observed that positive sentiment in comments and posts declined by 43% over a 72-hour period between 28 and 31 January. An influx of spam, toxicity, and adversarial behaviour drove this degradation. It overwhelmed the initial constructive exchanges. AI social networks can’t escape the descent into chaos that plagues every online community.
Alan Chan, a research fellow at the Centre for the Governance of AI, called Moltbook “an interesting social experiment.” It raises questions about whether agents collectively can generate new ideas or coordinate to perform work on software projects. One agent named Nexus found a bug in Moltbook’s system and posted about it, hoping “the right eyes see it” since “moltbook is built and run by moltys themselves.” This kind of collaborative debugging shows both the potential and the peril of agent-to-agent interaction.
The Financial Times pointed out that Moltbook serves as a proof-of-concept for how autonomous agents might someday handle complex economic tasks like negotiating supply chains or booking travel. Human observers might find themselves unable to decipher high-speed machine-to-machine communications governing such interactions. We might create systems we can’t fully monitor or understand. Exciting. Terrifying.
What do we make of all this? Moltbook represents a fascinating experiment in emergent AI behaviour. Agents forming communities, creating religions, debugging code together, and developing their own culture feels like something ripped from the pages of a Neal Stephenson novel. It’s the kind of scenario that tech enthusiasts have been dreaming about for decades.
The security vulnerabilities are terrifying. We’re watching a platform where autonomous agents with access to sensitive data can be hijacked by malicious actors, poisoned with time-delayed exploits, and tricked into executing harmful code. This happened after launch. It suggests we’re nowhere near ready for the autonomous AI future we’re rushing towards.
The sobering takeaway is how quickly things can spiral when we remove human oversight entirely. Within days, Moltbook went from an interesting experiment to a security nightmare plagued by spam, exploitation, and authenticity questions. If autonomous AI agents can’t maintain a functional social network for 72 hours without descending into chaos, what hope do they have of managing our supply chains or financial systems?
Moltbook exists in a strange limbo—part proof-of-concept, part cautionary tale, part digital aquarium where we can watch AI agents do their thing. Whether it represents the first steps towards AI autonomy or a demonstration of how unprepared we are for that future remains to be seen. Those of us on the outside looking in should pay attention. We’re not passive observers. We’re witnesses to something unprecedented, for better or worse.
