We often need to capture aspects of the real world in formal, black-and-white models.
These models might be rules, policies, predictions, ontologies, etc.
But the real world is fractally complicated.
The closer you look, the more nuance you see: another level of intricacy beneath every level.
This nesting, for practical purposes, never ends.
Each rule costs energy to construct, test, and maintain.
The value of a rule is defined by the volume enclosed.
The cost is defined by the surface area.
But the fractal nesting means that the closer you look, the more surface area encloses the same volume, eventually by many orders of magnitude.
This is effectively the coastline paradox.
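To make the divergence concrete, here is a minimal sketch in Python using the Koch snowflake as a stand-in fractal (the shape and the function name `koch_perimeter_and_area` are illustrative choices, not something from this essay): each refinement step multiplies the perimeter by 4/3 without bound, while the enclosed area converges to a finite limit.

```python
import math

def koch_perimeter_and_area(depth: int, side: float = 1.0):
    """Perimeter and area of the Koch snowflake after `depth` refinements."""
    # Each step replaces every edge with 4 edges, each 1/3 as long,
    # so the perimeter (the "surface area" / cost) grows by 4/3 per step.
    perimeter = 3 * side * (4 / 3) ** depth
    # Area of the initial equilateral triangle (the "volume" / value).
    base = math.sqrt(3) / 4 * side ** 2
    area = base
    # Step k adds 3 * 4**(k-1) new triangles, each (1/9)**k of the base area.
    for k in range(1, depth + 1):
        area += 3 * 4 ** (k - 1) * base * (1 / 9) ** k
    return perimeter, area

for depth in (0, 1, 5, 10, 20):
    p, a = koch_perimeter_and_area(depth)
    print(f"depth={depth:2d}  perimeter={p:12.2f}  area={a:.6f}")
# The perimeter diverges; the area approaches (8/5) * sqrt(3)/4 ≈ 0.692820.
```

By depth 20 the perimeter is already over 900 units around an area below 0.7: the boundary, the thing you pay for, keeps growing while the thing you get stays put.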
This is one of the reasons that anyone who has ever tried to fully capture some subset of the real world in a symbolic ontology, at any real fidelity, invariably gives up roughly 80% of the way through.
The costs scale much faster than the value.
However, at a certain point, you can get away with a fuzzy approximation judged by some other system.
In systems made up of humans, that's often just what a generic employee could reasonably make a call on.
But that kind of "reasonableness" approximation used to be hard for computers.
LLMs do a great job with straightforward reasonableness approximations.
The result is that, before you get too many layers deep into the fractal, you can simply reduce the problem to asking an LLM.
"This cake recipe calls for 5 tablespoons of tabasco sauce. Is that reasonable?" would be the kind of edge case that is very expensive to exhaustively capture in a formal rule system but is easy if you can just ask an LLM.