LLMs need checkpoints of knowledge (context) that are based on human judgment.

· Bits and Bobs 7/14/25
  • LLMs need checkpoints of knowledge (context) that are based on human judgment.
    • Context is a stepping stone that gives you leverage.
    • Good context is great.
    • Bad context is terrible.
    • When LLMs generate context themselves, the result is middling or even bad, and that spirals.
    • That's why things like Claude.md are not a hack.
    • They are the human making higher-level contextual assertions.
    • Claude leaving notes to itself in Claude.md to improve quality in your codebase is an example of a coactive surface.
    • The human can then co-create that context: editing, curating, adding.
    • How can we generalize that interaction?
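As a concrete sketch of the interaction above, a co-created Claude.md might interleave human-asserted conventions with model-written notes the human then curates. Everything here is hypothetical (the file paths, commands, and notes are invented for illustration):

```markdown
# Claude.md (hypothetical example of a co-created context file)

## Project conventions (asserted by the human)
- All database access goes through `repo/`; never query directly from handlers.
- Prefer small, pure functions; avoid hidden global state.

## Notes Claude left for itself (curated by the human)
- The full test suite is slow; run `make test-fast` for quick iteration.
  <!-- kept: the human verified this and left it in -->
- ~~Always put shared code in `utils/helpers.py`.~~
  <!-- struck out by the human: too vague, was leading to a junk-drawer module -->
```

The file is the coactive surface: the model accumulates observations, and the human edits, prunes, and adds higher-level assertions, so the context stays good instead of spiraling.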
