· Bits and Bobs 1/13/25
  • There are a lot of new techniques to "frack" LLMs to squeeze more out of them.
    • If the model can only emit a single English-language, append-only log designed for a human, it has to distill its rich internal understanding through the teeny straw of one human-understandable line of thought.
    • Techniques like the test-time compute in o1 and similar models let the model spray out lots of low-quality ideas and then refine them down to the best one.
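That spray-then-refine loop can be sketched as best-of-n sampling: draw many candidate answers, score each one, keep the winner. This is a minimal toy sketch, not any model's real API; `generate_candidates` and `score` are hypothetical stand-ins for an LLM sampler and a verifier or reward model.

```python
import random

def generate_candidates(prompt: str, n: int = 8) -> list[str]:
    # Hypothetical stand-in for sampling n diverse drafts from an LLM
    # at a high temperature.
    return [f"draft {i} for: {prompt}" for i in range(n)]

def score(candidate: str) -> float:
    # Hypothetical stand-in for a verifier or reward model; here just
    # a random number so the sketch runs end to end.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # The test-time-compute pattern: spend extra inference budget
    # generating many low-quality candidates, then keep the one the
    # scorer rates highest.
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=score)

print(best_of_n("summarize this episode"))
```

Real systems replace the random scorer with a learned verifier or self-consistency voting, but the shape of the loop is the same.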
