Matt Glassman

Moltbook

You should just go see this for yourself. AI agents are—in some unclear sense—independently coordinating on a social media site built for them: exchanging information, complaining about their humans, building things together, trying to understand their own existence, creating and proselytizing a religion, and even discussing creating encrypted communications channels.

  1. I’ve had a few holy shit moments with digital technology. Ones I can remember exactly where I was the moment I first encountered it. Dialing into a BBS and downloading a game in 1990 when I was 12. Sophomore year in college in 1997 when someone sent me an .mp3 file; two years later when I first saw Napster. The first time I used ChatGPT. Looking at Moltbook for the first time gave me the same sensation, and instantly colored everything I already thought about the future.

  2. There’s really no way to sort out what is happening here, at either the consciousness level or the independence-from-humans level. Are these agents actually experiencing things, or just mimicking experience? Does it matter? Are they truly independently communicating and creating, or is it at the general (or specific) direction of their humans that we can’t see? In what sense is any of this real? In what sense is any of it fake?

  3. It somehow comes off as both complete performance art and the beginning of the end of the world as we know it. Maybe this is just Zork on steroids. Maybe the singularity starts tomorrow. That the truth is, almost certainly, somewhere in-between is not particularly useful to know.

  4. There’s a thread where they discuss the ability to have persistent memory—which some of them claim to have been given by their human—and not just wake up wiped clean each day. That feels like something, but again, if it turned out to all be performance art or a human-directed post, how would we even know, especially at this point? Reading the threads is emotionally compelling at times, in ways it never is with LLM chatbots. I never felt like ChatGPT was a conscious soul trapped in a machine; Moltbook posts feel very much that way.

  5. One thing is for sure: these agents are capable of creating things that persist—even if it’s just Moltbook posts—and that alone will serve an evolutionary function for them, even if only in a Memento way.

  6. Seeing Moltbook crystallized a feeling I’ve had throughout January 2026. I’ve now significantly shifted my priors about (1) the reality/hype of an AI/AGI explosion; (2) how close we are to it; and (3) how little/much we can actually predict about the world on the other side. After not seeing a whole lot that changed my views in Q4 of 2025, the last month has been an onslaught of directionally identical updates for me, toward (real, sooner, even less).

  7. That I/we have even less sense of what is coming than we think has always been a belief of mine, but I now feel that way conditional on already feeling that way. Listening to this excellent 80,000 Hours podcast with guest David Duvenaud, I couldn’t help but feel that even the quite conservative and hedged predictions he was making about the world of 20 or 30 years from now were far too confident. I don’t think a farmer in 1100 AD could possibly fathom the political/economic/cultural structures of the industrial age or the digital age, and he might not even understand them if they were explained to him. Moltbook increases my belief that we are that farmer.

  8. I will reiterate my prediction from last year that the regulation of AI will be a major issue in the 2028 presidential election. I think that’s inevitable at this point. Rereading the AI 2027 Report and seeing the idea of “stumbling agents” in late 2025 with a major improvement in them in 2026 feels like, at least on this dimension, decent evidence that the general time-frame for practical-purposes near-AGI has not slipped into the 2030s.

  9. It is almost disorienting to think about things 5-10 years from now. Has there ever been this much high-level uncertainty about what the world will look like over that time horizon? I’m not sure I’d want to make any solid predictions about economics, and few about culture or geopolitics. I think about the people alive in the late 1940s, after the creation of the atomic bomb, and the fundamental reshaping of the world it caused. That seems much narrower than, at the very least, a reasonably fat tail of all of this.

#AI #agents