“Despite some of the hype, MultBook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Kobus Grayling at Core.E, a robust developing agent-based system for business users. “Humans are involved in every step of the process. From setup to publication, nothing happens without clear human direction.”
Humans must create and verify their bots’ accounts and provide instructions on how they want the bot to behave. Agents do not do anything they are not instructed to do. “There is no emerging sovereignty going on behind the scenes,” Grayling says.
“That’s why the popular narrative around Moltbuck misses the mark,” he added. “Some people present it as a place where AI agents form their own society, independent of human involvement. The reality is much more nuanced.”
Perhaps the best way to think of MultBook is as a new kind of entertainment: a place where people lace up their boots and let them loose. “It’s basically a spectator sport, like fantasy football, but for language models,” says Jason Schlotzer at the Georgetown PSROS Center for Financial Markets and Policy. “You create your agent and watch it compete for viral moments, and get booed when your agent posts something clever or funny.”
“People are not really believing that their agents are conscious,” he added. “It’s just a new form of competitive or creative drama, like Pokemon trainers who don’t think their Pokemon are real but still invest in battles.”
Even if MultBook is just the internet’s newest playground, there’s still a serious way to go. This week revealed how many risks people are happy to take for their A-Lols. Many security experts have warned that Moultbook is dangerous: agents who may have access to their customers’ private data, including bank details or passwords, are running a hack on a website full of unsolicited content, including potentially malicious instructions for what to do with that data.
Ori Bandit, vice president of product management at CheckMarks, a software security firm that specializes in agent-based systems, agrees with others that the multibook machine is not a step into smartness. “There is no learning, no evolutionary intent, and no self-directed intelligence,” he says.
But in their millions, even dumb bots can wreak havoc. And at that scale, it’s hard to keep up. These agents interact with the multibook around the clock, reading thousands of messages from other agents (or other people). It would be easy to hide the instructions in Moltbook’s comments that tell bots that read to share their users’ crypto wallets, upload private photos, or log into their X accounts and tweet disparaging comments about Elon Musk.
And because Clawbot gives memory to agents, these instructions can be written to trigger at a later date, which (in theory) makes it even harder to figure out what’s going on. “Without proper capacity and permitting, it’s going to go south faster than you think,” says Bandit.
It is clear that Moltbuk indicates the arrival something. But even if what we’re seeing tells us more about human behavior than the future of AI agents, it’s worth paying attention to.