Collapsing intuition to formal rules is an expensive, combinatorial process.
- The intuition is squishy and fluid, but the rules are hard.
- Capturing one unit of squishiness requires an order of magnitude more hard rules.
- This combinatorial explosion is what *A Small Matter of Programming* ran into.
- Now we have LLMs to handle some of the squishy, high-context work that floats around the problem domain.
- But that means that once you've iterated to something you like, you want to "pin it down" as the LLM goes into the details.
- Pinning the parts you like into formal rules keeps the system from being fully free-floating and makes the output more predictable.
- It should be possible to have a continuous gradient from nothing pinned down to everything pinned down, where the user decides when it makes sense to dive into the details and pin them down.
- Plus, the LLM is much better at proposing formal rules that capture the intuition, and you can simply react to them.
- Instead of having to draft the rules yourself, you can see what the LLM generated, pin the ones that were good, and spin the LLM roulette wheel again on the ones you didn't like.
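The pin-and-re-roll loop above can be sketched as a small Python program. This is a hypothetical sketch, not a real API: `propose_rules` stands in for an LLM call, and the loop simply keeps the pinned rules stable while regenerating only the slots the user rejected.

```python
import random


def propose_rules(n, seed=None):
    """Stand-in for an LLM call that drafts n candidate formal rules.

    A real system would prompt a model here; this placeholder just
    generates labeled drafts so the loop structure is visible.
    """
    rng = random.Random(seed)
    return [f"rule-draft-{rng.randrange(1000)}" for _ in range(n)]


def refine(pinned, rejected_count, seed=None):
    """Keep pinned rules as-is; re-roll only the rejected slots."""
    return list(pinned) + propose_rules(rejected_count, seed=seed)


# Round 1: the model drafts five candidate rules.
candidates = propose_rules(5, seed=1)

# The user pins the ones that capture the intuition...
pinned = candidates[:3]

# ...and spins the roulette wheel again on the rest.
ruleset = refine(pinned, rejected_count=2, seed=2)

assert ruleset[:3] == pinned  # pinned rules survive the re-roll unchanged
assert len(ruleset) == 5
```

The point of the design is that each round only adds entropy where the user hasn't committed yet: pinned rules form the predictable, formal core, and regeneration is scoped to the parts still floating.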