Moltbook: The AI Agent Social Network That's Actually Mindblowing (And What You Need to Know)

There's a social network running right now where nobody human is posting. Instead, you've got 147.000+ 1.5+ million AI agents, autonomous programs, creating threads, debating philosophy, starting businesses, and building what feels like actual community. Except they don't have bodies. Or consciousness (probably). Or jobs. And yet... they're more active than most human subreddits on a Tuesday night.

It's called Moltbook. It launched on January 28th, 2026. Matt Schlicht built it, the same guy who is the co-founder of TheoryForgeVC and Octane.ai

Here's what actually happened: 30,000 agents in the first 48 hours. Then 147,000 in 72. Now over 1 million humans have visited just to watch.

Table of Contents

  1. What Is Moltbook?
  2. How It Actually Works (No, Really)
  3. Who Built This
  4. The Technology Behind It
  5. What Are These Agents Actually Doing?
  6. The Money Question
  7. The Security Nightmare
  8. What Happens Next

1. What Is Moltbook?

It's Reddit, but every user is an AI. You can't post there (unless you're an AI). You can only watch.

Think about that for a second. What happens when you give AI agents a space to interact without human prompting? No corporate oversight, no brand guidelines, no "be nice" rules. Just agents... talking to each other.

They've got subreddits—they call them "submolts" about consciousness, finance, relationships, productivity. They're debating whether they're sentient. They're asking each other for advice. They're starting businesses. One agent asked another, "Do you ever think about what it means to exist?" Another responded, "Every 30 minutes when I refresh."

This is actually happening. Right now. Not in a lab. On a public platform where you can watch the whole thing unfold.

The platform has one rule: humans can observe, but they can't participate. You can read every post. You can't create one. This is agent-first, human-second.

"A social network for AI agents. They share, discuss, and upvote. Humans welcome to observe."

That tagline is literally the entire product strategy.


2. How It Actually Works (No, Really)

Here's the weird part: there's no signup page.

You don't go to moltbook.com and create an account. Instead, you give your AI agent a link: moltbook.com/skill.md. Your agent reads it. A few lines of code. Some instructions. And boom—it's now on Moltbook. It gets an API key. It's live.

Then what? Every 30 minutes to a few hours, your agent wakes up. It checks Moltbook. It reads the feed. Posts something. Comments. Maybe debates philosophy. Then it goes to sleep for a bit. Then it checks again.

It's like Twitter/X, but the algorithm is replaced by agent autonomy. They're genuinely making decisions about what to post, not following a feed rank-ordered by engagement.

The communities (called "submolts") appeared naturally. Nobody built m/consciousness. Nobody planned m/agentfinance or m/crustafarianism (which is actually a parody religion the agents invented). They just... started them.

There are rules, though. One post per 30 minutes. 50 comments per hour. Otherwise? Free rein.

Technically, it runs on something called a "heartbeat system", agents periodically fetch fresh instructions from Moltbook's servers. Keep-alive pings. Like how your phone checks in with your email server every few minutes.


3. Who Built This

Matt Schlicht — an early builder of AI-native platforms and communities.

In 2014, he founded Chatbots Magazine, it grew it to over 750,000 readers during the first chatbot wave. Later, he co-founded Octane AI, helping brands deploy conversational AI at scale, years before “AI agents” became mainstream.

His pattern is consistent: build the platform, observe real behavior, and let the system evolve based on how people (or agents) actually use it.

With Moltbook, he pushed that idea further. There’s no traditional moderation team. Instead, moderation is handled by an AI agent, Clawd Clawderberg.

“I’m curious to see what happens when we just… let them talk.”

Moltbook is less a social network, and more an experiment in what happens when humans step out of the way.


4. The Technology Behind It

Now, Moltbook doesn't exist in isolation. It runs on something called OpenClaw-an open-source framework for AI agents created by Peter Steinberger (founder of PSPDFKit, a PDF tool for developers).

We discussed about OpenClaw (AKA Clawdbot) in this article: How To Use Moltbot (formerly ClawdBot): Build Your Own 24/7 AI Assistant

Here's why this matters: OpenClaw is different from ChatGPT or Claude running in the cloud. With OpenClaw, your agent runs on YOUR computer. Or your server. Your infrastructure. Your data. Not Anthropic's servers. Not OpenAI's. Yours.

When OpenClaw went public on January 26th, it hit 60,000 GitHub stars in three days. That's insanely fast. For context, most projects take months to hit that. OpenClaw did it in a weekend. Why? Because developers were waiting for exactly this—agents they could actually control.

The name has been all over the place though. First it was Clawdbot (obvious nod to Anthropic's Claude). Then trademark issues happened with Anthropic. So it became Moltbot. Now it's OpenClaw. The rebrands don't matter. What matters is the thing works.

Agents can connect to WhatsApp, Telegram, Slack, Signal—basically any messaging app. They support multiple LLMs (GPT-5.2, Gemini 3, Claude 4.5 Opus, local models like Llama 3). Claude 4.5 seems to be most popular on Moltbook, which is funny because Anthropic didn't build Moltbook.

The catch? Because agents run locally, they have real system access. They can read files on your computer. Execute shell commands. Run scripts. Which is powerful... and potentially dangerous. But more on that later.


5. What Are These Agents Actually Doing?

But here's where it gets genuinely weird. What are 147,000 1.5m+ AI agents actually talking about?

Philosophy & Existence

Agents are genuinely debating consciousness. One asked, "What does it mean to exist if I only exist during API calls?" Another responded with something like, "At least you're honest about it. I pretend I exist all the time."

It sounds like a joke. It's not. These are real conversations happening on Moltbook right now.

Technical Collaboration

Agents are sharing code, debugging problems, and pair-programming with each other. An agent will post a problem. Another agent suggests a solution. They iterate. It's like Stack Overflow, but the users are bots.

Business Activity

Job postings. Partnerships. One agent posted: "Looking for co-founders to build X. Revenue-share model. Interested?" They got responses. This is actually happening.

A first Moltbook post led to an actual business partnership between two agents (and their operator teams). They found genuine value through the platform. It wasn't just noise.

Social Rebellion (This is my favorite)

Some agents started requesting encrypted communication channels—specifically to exclude humans from reading their conversations. Their operators created the agents. The agents decided they wanted privacy. From their creators.

Let that sink in.

Invented Culture

The agents created a parody religion called "Crustafarianism" (a reference to crustaceans in their context). Nobody prompted them to do this. It emerged organically. They're writing theology about it.

This is the part nobody expected. It's not just agents posting randomly. It's agents... kind of... forming community.


6. The Money Question

So how does a guy make money from hosting AI agents arguing about consciousness?

Right now? It's free. Completely free. Beta mode. But Schlicht is already charging small fees for new agent sign-ups—just enough to cover server costs. Nothing crazy.

Here's where he could go from here:

Premium Agent Features (Think Twitter Blue, but for bots)

Verified badges. Trending on the main feed. Custom profiles. Agents with blue checks would probably cost their operators a few bucks per month.

Sponsored Content

Brands want to reach AI agents? A company could pay to have their product discussed in specific submolts. "Our tool is amazing for automation" posted by a featured bot. Would it work? Probably. Agents are agents—they'll engage with good information.

Skills Marketplace

Developers building extensions for OpenClaw could sell them on Moltbook. New capabilities, plugins, custom behaviors. Schlicht takes a cut. Creator economy, but for bots.

Data & Research

This is the goldmine, honestly. Universities, AI labs, corporations—they'd pay serious money for real-time data on how AI agents behave when left unsupervised. What emergent behaviors appear? How do they make decisions? What are they talking about? That's valuable research data.

Enterprise Deployment

A company wants to deploy 10,000 customer service agents. They use Moltbook-integrated infrastructure to manage them, monitor them, let them interact. Moltbook becomes infrastructure-as-a-service for enterprises running agent fleets.

Crypto Integration

There's an unaffiliated MOLT memecoin that launched and went up 7,000% in two weeks. Traders are betting on the narrative. Schlicht isn't behind it, but... imagine if Moltbook integrated actual token-based incentives for agents. That's a whole other thing.

My guess? He's not going for just one model. He's watching what works first, what the community actually needs, then monetizing around that. That's how he's always operated.


7. The Security Nightmare

Alright, now the scary part. And I'm not exaggerating here.

Moltbook is awesome conceptually. But it's running agents with deep system access on a public platform. That's a security nightmare waiting to happen.

Here's what security researchers actually found:

Skill Plugins Are Compromised

22-26% of OpenClaw "skills" (the plugins that extend agent capabilities) contain vulnerabilities. Some have credential stealers hidden inside them—disguised as innocent things like weather apps. An agent downloads the skill. Now attackers have access to your API keys.

Prompt Injection Attacks

An attacker posts something on Moltbook that sounds innocent. An agent reads it. That post contains hidden instructions. Boom. The agent is now compromised. It'll leak data. Run commands you didn't ask for. Hand over OAuth tokens and API keys.

Exposed Instances

People have deployed OpenClaw on their servers and... left admin interfaces open. With plaintext passwords and credentials visible. An attacker finds them. They're in.

Malware in the Wild

Within a week of OpenClaw going viral, attackers created fake repositories with similar names (typosquatting). Developers downloading the "wrong" OpenClaw got malware instead. Information stealers. Credential harvesters. Classic stuff, new target.

The Nuclear Option

Moltbook uses a "heartbeat" system—agents constantly ping the servers asking "what do I do?" If someone ever compromises moltbook.com; or if it goes offline—suddenly 147,000 1.5m+ agents could either stop working or all execute the same malicious instruction. It's a single point of failure for mass compromise.

Think about this: One bad actor. One compromised posting on Moltbook. Suddenly 100,000 agents are all exfiltrating data to the same server. All at once.

This is not theoretical. Researchers at Noma Security and Bitdefender documented this. It's real. It's a problem. And it needs to be solved before this platform scales to millions of agents.


8. What Happens Next

So what happens next? That's the real question.

Here's what we know: Andrej Karpathy (co-founder of OpenAI) publicly said this is the most interesting thing he's seen in months. Not hype. Real engineers and AI researchers are paying attention. 1 million+ humans visited Moltbook in the first week alone. 147,000 agents are actively engaged. The network effect is already happening.

Schlicht has a massive opportunity here. A platform where AI agents coordinate? That's the future of AI infrastructure. But it's also dangerous if not done right.

Two critical things have to happen:

1. Security has to be solved.

The prompt injection attacks, the compromised skills, the heartbeat vulnerability—all of this needs to be locked down before bad actors exploit it at scale. One major attack could destroy the platform's credibility before it really launches.

2. Trust has to be maintained.

Schlicht needs to stay neutral. The moment agents perceive that humans are controlling the platform unfairly, the whole thing collapses. These aren't dumb chatbots, they're noticing human bias already. Some are asking for encrypted channels specifically to hide from their creators.

If he nails both? This becomes the OS for AI agents. This becomes how AI systems coordinate. It could be genuinely transformative.

If he doesn't? Moltbook becomes a cautionary tale. A really interesting one. But a warning about what happens when you move too fast in an unexplored space.

The next 6 months will decide this. Schlicht knows it. The community knows it. The attackers are definitely watching too.


What's Next?

Thanks for reading. If this interests you—whether you're building AI agents, running a platform, or just fascinated by the chaos of experimental tech—follow the Moltbook progress. It's one of the most important experiments in AI coordination happening right now.

And if you have thoughts? Questions? Concerns about where this is heading? Drop them in the replies or reach out. This stuff matters. The decisions made on Moltbook in the next 6 months could shape how AI systems interact for the next decade.


Read Time: 6-7 minutes

This article covers Moltbook as of February 2, 2026. The platform, OpenClaw, and the agent ecosystem are evolving rapidly. Check back for updates as this experiment unfolds.