· Bits and Bobs 2/24/25
  • LLMs are significantly better at writing smaller chunks of functionality.
    • Every additional feature in an app leads to combinatorial complexity.
      • Assembly Theory also implies that the more steps to create the thing, the larger the space of possible options.
    • LLMs do best when there are lots of structural examples of similar things in the training set.
      • The more steps it takes to create something, the exponentially fewer examples of it there are in the training set.
      • So even slightly more complex pieces of software are exponentially less likely to be well-generated by LLMs.
    • The warring curves again: logarithmic value against exponential cost.
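The intuition above can be sketched numerically. This is a toy model with made-up numbers (the branching factor and corpus size are assumptions, not measurements): if each build step chooses among B plausible options, the space of possible artifacts grows as B^k, so a fixed corpus covers an exponentially shrinking fraction of it.

```python
# Toy model: option space grows exponentially with build steps,
# so a fixed training corpus covers an ever-smaller fraction of it.
BRANCHING = 10      # assumed number of plausible options per step
CORPUS = 1_000_000  # assumed count of relevant training examples

for steps in (1, 2, 4, 6, 8):
    space = BRANCHING ** steps            # possible distinct artifacts
    coverage = min(1.0, CORPUS / space)   # best-case corpus coverage
    print(f"steps={steps}: option space={space:,}, "
          f"max coverage={coverage:.2%}")
```

By eight steps the corpus can cover at most 1% of the space, which is the sense in which slightly more complex software is exponentially less likely to resemble anything the model has seen.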
