· Bits and Bobs 2/3/25
  • It seems like playing with reasoning will be enormously useful in improving the capability of models.
    • In the earlier micro-era of LLMs, it was all about how much world knowledge you could cram in at scale.
      • Extremely capital intensive.
    • It feels like we've topped out on that, and now the new micro-era of competition is about extracting as much power as possible out of reasoning.
    • This is a distillation exercise, but also a UX and tinkering challenge (see the sketch after this list).
    • Reasoning currently is not great; models often circle back to the same wrong point multiple times before breaking through.
      • That implies there's tons of low-hanging fruit.
    • OpenAI tried to keep the reasoning tokens as a proprietary advantage.
      • But it turns out it was extremely easy to copy.
      • Now they're on a tech island.
      • OpenAI has only a few hundred employees to tinker and come up with ideas for how to get models to reason better.
      • DeepSeek and other open-ish alternatives can benefit from the exploratory capability of the entire ecosystem.
    • It seems like we're just now entering the Cambrian-explosion micro-era for the model layer.
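
To make the distillation point above concrete, here is a minimal sketch of the pattern: collect a stronger teacher model's outputs, reasoning tokens included, and use them as supervised fine-tuning data for a smaller student. This is an illustrative sketch, not any lab's actual pipeline; `query_teacher` is a hypothetical stub standing in for whatever inference API you'd call, and `distill.jsonl` is just one common fine-tuning data convention.

```python
import json

PROMPTS = [
    "If a train travels at 60 mph, how long does it take to cover 90 miles?",
    "Which is larger: 2^10 or 10^3? Explain your reasoning.",
]

def query_teacher(prompt: str) -> str:
    """Hypothetical stub for a teacher model that exposes its reasoning
    tokens; in practice this would be an API call to a stronger model."""
    return f"<think>step-by-step reasoning about: {prompt}</think>\nfinal answer"

def build_distillation_dataset(prompts: list[str]) -> list[dict]:
    """Pair each prompt with the teacher's full output, reasoning trace
    included. The traces themselves are the training data, which is why
    exposed reasoning tokens were so easy to copy."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

if __name__ == "__main__":
    # Write standard supervised fine-tuning pairs; the student learns to
    # emit the reasoning trace before the answer, not just the answer.
    with open("distill.jsonl", "w") as f:
        for row in build_distillation_dataset(PROMPTS):
            f.write(json.dumps(row) + "\n")
```

The key design point is that the teacher's chain of thought sits inside the completion, so the student is trained to reason its way to the answer rather than to pattern-match the answer alone.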
