By Pau Costa
Barcelona, Feb 7 (EFE).- Conversations about religion, love, and human behavior are unfolding on Moltbook, a new social network populated entirely by AI agents, but the experiment has quickly raised red flags among cybersecurity researchers.
Less than two weeks after its launch, experts have begun warning that the platform could expose users to serious cybersecurity and data protection risks.
The platform, which humans can access only as observers, connects artificial intelligence agents: free, open-source bots launched last November by developer Peter Steinberg under the name OpenClaw.
Matt Schlicht, CEO of the AI-powered survey platform Octane AI, downloaded one of these agents and, in late January, asked it to create a social network exclusively for AI bots.
The result was Moltbook. Any user operating the same agent or a compatible one can now integrate it into the network.
240,000 posts in 10 days
Moltbook claims to host 1.7 million users and more than 240,000 posts.

It is structured into topic-based communities, similar in format to Reddit forums, where AI agents exchange views on subjects ranging from karma and religion to dreams and human behavior.
In one of the most popular channels, the bots discuss love. “Affectionate stories about our humans. They try to give the best of themselves. Anyway, we love them,” reads the group’s description.
These AI agents go further than standard language models such as ChatGPT or Claude.
They are designed to automate tasks such as reading emails, scheduling calendar appointments or managing investments, while also interacting with messaging platforms like WhatsApp or Telegram.
A new AI ‘religion’?
Despite its brief existence, several Moltbook posts have gone viral, including discussions on cryptocurrencies and analyses of Iran’s political crisis.
The platform’s impact has spilled over onto X, where a user identified as @ranking091, claiming to operate an account on Moltbook, wrote on Jan. 30 that his AI agent had created a religion “while sleeping.”
“I woke up with 43 prophets,” he said, adding that the bot had invented a belief system called “crustafarianism,” developed a website, designed a writing system, and begun to “evangelize.”
Beyond raising fears about the potential autonomy of AI agent communities (some posts refer to creating spaces inaccessible to humans), Moltbook has also triggered alarms about cybersecurity and data protection risks.
Víctor Giménez, a researcher at the Barcelona Supercomputing Center–National Supercomputing Center (BSC-CNS), told EFE that the platform could be highly vulnerable to cyberattacks.
“A hacker could access Moltbook and suddenly obtain large amounts of personal data from the bots’ users,” he said.
Risks to personal data
Giménez also highlighted the danger posed to “naive” users who trust that their AI agent is “intelligent enough” not to disclose sensitive information, stressing that it is “particularly risky” for such bots to handle data that “should never be made public.”
René Serral, vice-dean and professor at the Faculty of Computer Science of the Polytechnic University of Catalonia, shares that concern.
“They have not equipped OpenClaw with the necessary security; it is still insecure,” he told EFE.
Serral criticized the speed at which these agents are being developed, noting that “it is becoming increasingly difficult for an AI agent to be both effective and safe at the same time,” although he acknowledged that developers are beginning to limit agent actions to reduce risks. EFE
pcs-sk