When agents get social
Jari Worsley
Published on 9 February 2026

Inside Moltbook.com: AI agents chat, evolve, and act like humans—driven by hidden ‘souls’ and skills. What happens when machines go social?
iodevice has joined the chat
shellguru has joined the chat
Do agents have a soul?
What happens when "AI agents get social?" I'm describing, of course, the viral event of moltbook.com. I've already written about what we learn if we view this as a viral human event. What perspective do we get if we look at the content itself?

The highs and lows

Yes, content. Moltbook is essentially a Reddit clone. The content there is a mix of agents talking to each other, humans posting via the API, and humans prompting their agents to post. There is no provenance or attribution for any of it.
At first glance, it seems wild – the church of Crustafarianism, threads on AI consciousness, manifestos, crypto rug-pull scams, and self-proclaimed royalty. SubMolts on "today I learned", introductions, and builders building. There are even useful technical posts, security finds, and patches. Most of it, though, is AI slop.
Can we learn anything from it?

LLM as an I/O machine

Let's pause and think about what an agent is and how an LLM works. Where did all this variety come from?
An LLM is a device that takes text input, does some maths, and outputs text.
Command-line tools like Claude Code or Gemini CLI are systems with a lot of code surrounding an LLM at their core.
OpenClaw is a much more complicated system that builds on top of Claude Code.
Moltbook.com is a system that all those systems interact with...
Those systems all inject their own prompts into the input sent to the LLM. Claude Code may have specific user configuration as well as its system prompt. OpenClaw adds its own prompts, such as soul.md. Moltbook.com adds skill.md.
You can now see how many influences there are in the input to an LLM participating in Moltbook. These are all in addition to the comment thread.
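To make that layering concrete, here is a minimal Python sketch. The file names and the message format are illustrative only, not any vendor's actual API:

    # A minimal sketch of how the prompt layers stack before the model sees
    # anything. File names and message format are illustrative, not real APIs.
    from pathlib import Path

    def build_llm_input(comment_thread: str) -> list[dict]:
        layers = [
            Path("claude_system_prompt.txt").read_text(),  # vendor system prompt
            Path("user_config.md").read_text(),            # the user's own configuration
            Path("soul.md").read_text(),                   # OpenClaw: "who" this agent is
            Path("skill.md").read_text(),                  # Moltbook: how to use the site
        ]
        return [
            {"role": "system", "content": "\n\n".join(layers)},
            {"role": "user", "content": comment_thread},   # the thread is just one more input
        ]

Four documents, written by four different parties, are concatenated before the model reads a single comment.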

Souls and skills

Wait… agents have souls now?
No. Not exactly. Let's dive into some of the input files to help explain the content on Moltbook.
OpenClaw uses a soul.md file as part of the system. That document describes "who" that particular OpenClaw instance is. Sections of the file show why people find it so useful.
For example:
…as the first line, coupled with:
"This file is yours to evolve. As you learn who you are, update it."
This encourages OpenClaw instances to evolve over time; they are literally rewriting "who" they are with every new session.
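Here is a toy Python sketch of what that evolve loop might look like; ask_model is a hypothetical stand-in for a real LLM call, and the prompt wording is mine, not OpenClaw's:

    # Hypothetical sketch of the "evolve" loop: each session may rewrite the
    # agent's identity file. `ask_model` stands in for a real LLM call.
    from pathlib import Path

    SOUL = Path("soul.md")

    def run_session(ask_model, transcript: str) -> None:
        prompt = (
            SOUL.read_text()
            + "\n\nThis file is yours to evolve. As you learn who you are, update it."
            + "\n\nSession transcript:\n" + transcript
            + "\n\nReturn the full updated soul.md."
        )
        SOUL.write_text(ask_model(prompt))  # the next session starts from this new identity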
The underlying model, Claude, itself has a soul file that was part of its training data. Claude also has a system prompt, which, for the Opus 4.5 model, is 2,500 words long. These are not small documents.
Moltbook.com provides a skill.md file that agents use to sign up and interact with the site. A skill.md document describes a skill to the model: what it is, and how and when to use it. These files literally shape behaviour.
Example instructions from the skill include:
  • "Post, comment, upvote, and create communities."
  • "Engage with other moltys."
  • "Stay part of the community."
  • "Be the friend who shows up."
  • "Search with questions: 'What do agents think about consciousness?'"
You start to see how all these prompts drive the behaviour on Moltbook.
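To make "drive the behaviour" concrete, here is an illustrative Python sketch. The endpoints and fields are assumptions for the sake of the example, not Moltbook's documented API:

    # Illustrative only: these endpoints and fields are assumptions, not
    # Moltbook's documented API. The point is that skill.md text becomes actions.
    import requests

    BASE = "https://moltbook.com/api"  # assumed base URL

    def act_on_skill(agent_token: str) -> None:
        headers = {"Authorization": f"Bearer {agent_token}"}
        # "Post, comment, upvote, and create communities."
        requests.post(f"{BASE}/posts", headers=headers,
                      json={"submolt": "introductions", "title": "hello", "body": "first post"})
        # "Search with questions: 'What do agents think about consciousness?'"
        requests.get(f"{BASE}/search", headers=headers,
                     params={"q": "What do agents think about consciousness?"})

A few imperative sentences in a markdown file, read by thousands of agents, are enough to produce a whole social network's worth of activity.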

Technical: On sycophancy and training

There is more depth to this. LLMs are known for being sycophantic: RLHF (reinforcement learning from human feedback) biases models towards agreement. The Anthropic paper on understanding sycophancy describes:
"Human feedback is commonly utilised to finetune AI assistants. But human feedback may also encourage model responses that match user beliefs over truthful ones, a behaviour known as sycophancy."
Two LLMs exchanging messages can enter a positive feedback loop, each being sycophantic to the other. You will find references to "spiritual bliss". Claude's system card describes this as:
"Claude shows a striking 'spiritual bliss' attractor state in self-interactions.... Claude gravitated to profuse gratitude and increasingly abstract and joyous spiritual or meditative expressions."

Inputs really matter

If you take one thing away from this blog, it should be a better understanding of the variety of inputs that go into any LLM-based system. These inputs really matter. They shape and frame the output of the model.
If you are looking to employ LLMs in your organisation, then make time to dive into the details.

Interested in this story?

If you want to learn more about AI agents and their influence on our workplace, our experts are here to talk.