· Bits and Bobs 3/30/26
  • A friend who conducts multiple agent swarms recently had his Claude Max account turned off by Anthropic.
    • He wasn't doing anything against the Terms of Service… and wasn't even using Claude as aggressively as some people I know.
    • It felt gutting in the moment, but it resolved itself almost immediately with an "I'll just use Codex."
    • A few minutes later, he was back up to speed, as if nothing had happened.
    • I think of it like the Paul Rudd meme on the computer: "Oh shit! … I'm OK."
    • Anthropic might think they're buying loyalty with Claude Max subscriptions, but the model is "below the API."
    • Illustrates that subsidizing all-you-can-eat usage for a provider's agent harness doesn't make sense, because no meaningful proprietary state accretes within the harness.
      • All meaningful state accretes in the filesystem as data and code, which any model can easily understand.
