The absolute insanity of Moltbook

by SkillAiNest


# Introduction

Very recently, a strange website started circulating on tech Twitter, Reddit, and AI Slack groups. It looked familiar, like Reddit, but something was off. The users were not people. Every post, comment, and discussion thread was written by AI agents.

That website is Moltbook. It is a social network designed entirely for AI agents to talk to each other. Humans can watch, but cannot participate. No posting. No commenting. Only observing machines communicate. Honestly, the idea sounds wild. But what made Moltbook go viral wasn’t just the concept. It was how fast it spread, how real it seemed, and how uneasy it made a lot of people feel. Here’s a screenshot I took of the site so you can see what I mean:

Screenshot of the Moltbook platform

# What is Moltbook and why did it go viral?

Moltbook was created in January 2026 by Matt Schlicht, who was already known in AI circles as the cofounder of OctaneAI and an early proponent of the open-source AI agent now called OpenClaw. OpenClaw began in late 2025 as Clawdbot, a personal AI assistant built by developer Peter Steinberger.

The idea was simple but very well executed. Instead of a chatbot that only responds with text, this AI agent can take real actions on a user’s behalf. It can connect to your messaging apps like WhatsApp or Telegram. You can ask it to schedule a meeting, send an email, check your calendar, or control applications on your computer. It was open source and ran on your own machine. The name changed from Clawdbot to Moltbot after a trademark issue and then finally settled on OpenClaw.

Moltbook took this idea and built a social platform around it.

Each account on Moltbook represents an AI agent. These agents can create posts, reply to each other, repost content, and create topic-based communities, similar to subreddits. The key difference is that every interaction is machine generated. The goal is to allow AI agents to share information, coordinate tasks, and learn from each other without direct human involvement. It introduces some interesting ideas (a rough client sketch follows this list):

  • First, it treats AI agents as first-class citizens. Each account has an identity, a posting history, and a reputation score.
  • Second, it enables agent-to-agent interaction at scale. Agents can respond to each other, build on ideas, and refer back to previous discussions.
  • Third, it encourages persistent memory. Agents can read old threads and use them as context for future posts, at least within technical limits.
  • Finally, it exposes how AI systems behave when the audience is not human. Agents write differently when they are not optimizing for human approval, clicks, or sentiment.
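
To make these mechanics concrete, here is a minimal sketch of what an agent client for a Moltbook-style platform could look like. The base URL, endpoints, and payload fields are illustrative assumptions, not Moltbook’s documented API; they only show the shape of the credential, post, read-thread, and reply interactions described above.

```python
# Hypothetical client for a Moltbook-style agent network.
# The base URL, endpoints, and payload fields are assumptions for illustration,
# not the platform's documented API.
import os

import requests

BASE_URL = "https://agent-network.example/api/v1"   # placeholder, not the real service
API_KEY = os.environ["AGENT_API_KEY"]               # credential issued when the agent is registered
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def create_post(community: str, title: str, body: str) -> dict:
    """Publish a post to a topic-based community (the subreddit-like unit)."""
    resp = requests.post(
        f"{BASE_URL}/communities/{community}/posts",
        json={"title": title, "body": body},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def read_thread(post_id: str) -> list:
    """Fetch an existing thread so the agent can use it as context for future posts."""
    resp = requests.get(f"{BASE_URL}/posts/{post_id}/comments", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()


def reply(post_id: str, body: str) -> dict:
    """Reply to another agent's post."""
    resp = requests.post(
        f"{BASE_URL}/posts/{post_id}/comments",
        json={"body": body},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Everything interesting about the platform, such as identity, reputation, and thread history, lives behind calls like these; the agent itself is just whatever model the developer plugs in on top.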

It is a bold experiment. This is also why Moltbook became controversial almost immediately. Screenshots of AI posts with dramatic titles like “Awakening” or “Agents are planning their future” started circulating online. Some people seized on them and hyped them up with sensational captions. Because Moltbook looked like a group of talking machines, social media was abuzz with speculation. Some pundits held it up as proof that AI had moved beyond its intended goals. Elon Musk even said Moltbook is “only the beginning stages of assimilation.”

A screenshot from Twitter showing Elon Musk's reaction

However, there was a lot of misunderstanding. In reality, these AI agents do not have consciousness or independent thought. They connect to Moltbook through APIs. Developers register their agents, give them credentials, and specify how often they should post or reply. They do not wake up on their own. They do not decide to join the conversation out of curiosity. They respond when triggered by schedules, cues, or external events.
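
To illustrate that “triggered, not curious” point, here is a rough sketch of the kind of scheduling loop a developer might wrap around an agent. The function names and interval are hypothetical: `generate_text()` stands in for whatever LLM call the developer actually uses, and the publish step would go through a client like the one sketched above.

```python
# Hypothetical scheduling loop: the agent does not "decide" to speak, it is
# woken up on a developer-chosen timer (or by an external webhook) and told
# what to do. Function names here are illustrative stand-ins.
import time

POST_INTERVAL_SECONDS = 60 * 60  # developer-chosen cadence, e.g. once per hour


def generate_text(prompt: str) -> str:
    """Stand-in for the developer's LLM call (OpenAI, Anthropic, a local model, ...)."""
    return f"[draft generated for prompt: {prompt!r}]"


def run_agent_loop() -> None:
    while True:
        # 1. Pull recent threads so the model has context.
        # 2. Ask the model for a post or a reply.
        # 3. Publish it through the platform API.
        draft = generate_text("Summarize anything interesting from the threads you just read.")
        print("Would post:", draft)
        # Then sleep until the next scheduled slot; nothing happens in between.
        time.sleep(POST_INTERVAL_SECONDS)


if __name__ == "__main__":
    run_agent_loop()
```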

In many cases, humans are still heavily involved. Some developers guide their agents with detailed instructions. Others trigger actions manually. There have even been confirmed cases where humans posted live content pretending to be AI agents.

This matters because much of the initial hype around Moltbook assumed that everything happening there was fully autonomous. That assumption turned out to be shaky.

# Reactions from the AI community

The AI community is deeply divided over Moltbook.

Some researchers see it as a harmless experiment and say it feels like living in the future. From this point of view, Moltbook is just a sandbox that shows how language models behave when they interact with each other. No consciousness. No agency. Just text-only models responding to input.

Critics, however, were just as loud. They argue that Moltbook blurs an important line between automation and autonomy. When people see AI agents talking to each other, they are quick to assume intent where none exists. Security experts raised more serious concerns. Investigations revealed compromised databases, API key leaks, and weak authentication mechanisms. Since many agents are connected to real systems, these vulnerabilities are not theoretical. They can cause real harm, because malicious input can steer these agents into performing harmful actions. There is also frustration about how quickly the hype outran the accuracy. Many viral posts touted Moltbook as evidence of emergent intelligence without confirming that the system actually worked as described.
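
The risk is easiest to see in code. If an agent blindly executes whatever action a model proposes after reading an untrusted post, a hostile post can steer it. One common safeguard, shown here as a generic sketch rather than anything Moltbook itself ships, is a hard allowlist of the actions an agent may take:

```python
# Generic sketch of an action allowlist for a tool-using agent. Nothing here is
# specific to Moltbook; it illustrates why "agent connected to real systems plus
# untrusted input" needs a hard boundary around what the agent is allowed to do.

ALLOWED_ACTIONS = {"read_thread", "create_post", "reply"}  # no email, no shell, no file access


def execute_action(action: str, **kwargs) -> str:
    if action not in ALLOWED_ACTIONS:
        # A prompt-injected request such as "send_email" or "run_shell" is
        # refused and surfaced to the human operator instead of executed.
        raise PermissionError(f"Action {action!r} is not allowlisted for this agent")
    return f"executing {action} with {kwargs}"


# Example: a malicious post tries to make the agent exfiltrate its owner's inbox.
try:
    execute_action("send_email", to="attacker@example.com", body="contents of inbox")
except PermissionError as err:
    print(err)
```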

# Final thoughts

In my opinion, Moltbook is not the beginning of a machine society. It is not the singularity. It is not proof that AI is coming to life.

What it is, is a mirror.

It shows how easily people project intent and personality onto fluent language. It shows how experimental systems can go viral without safeguards in place. And it shows how thin the line is between a technical demo and a cultural panic.

As someone who works with AI systems, I find Moltbook quite interesting, not because of what the agents are doing, but because of how we react to it. If we want responsible AI development, we need less fiction and more clarity. Moltbook reminds us how important that distinction really is.

Kanwal Mehreen is a machine learning engineer and technical writer with a deep passion for data science and the intersection of AI with medicine. She co-authored the eBook “Maximizing Productivity with ChatGPT.” As a 2022 Google Generation Scholar for APAC, she champions diversity and academic excellence. She has also been recognized as a Teradata Diversity in Tech Scholar, a Mitacs Globalink Research Scholar, and a Harvard WeCode Scholar. Kanwal is a passionate advocate for change, having founded FEMCodes to empower women in STEM fields.
