LLMs might, for the first time, make it possible to have fuzzy protocols without humans in the loop.
Before computers, there were all kinds of fuzzy protocols that allowed some slop and imprecision and "use your judgement" (e.g. how checks and credit cards were cleared).
There was a human in the loop, so you could rely on their judgement at the edges.
Then computers came along. They could do things at clock speeds many orders of magnitude faster than humans could, but with a tradeoff: they couldn't handle fuzziness. At all.
When you designed a protocol for a computer, you had to be extremely clear about precisely how to handle every little edge case and error.
Defining a protocol already requires coordinating an ecosystem of distributed senders and receivers, which is hard; this precision requirement 100x'd the difficulty.
There are tons of very successful protocols that we've created in the last few decades, but at extreme cost and toil to define them.
There are a ton of possibly-useful protocols that don't exist, but could... with massive amounts of effort.
But LLMs, for the first time, can work roughly on the clock cycle of computers (or at the very least, a new one can be spun up at any hour, at low marginal cost). And they can also handle fuzzy, reasonable behaviors the way a human could.
This might usher in a new flourishing of protocols: by allowing them to be fuzzier, we can radically reduce how hard they are to get off the ground.
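To make the idea concrete, here is a minimal sketch of what such a fuzzy protocol handler might look like, using the check-clearing example above. Everything here is hypothetical: `llm_judge`, `clear_payment`, and the specific rules are invented for illustration, and the model call is a stub.

```python
def llm_judge(request: dict) -> bool:
    """Hypothetical stand-in for a real model call: a real implementation
    would prompt an LLM with the protocol's intent and the ambiguous
    request, then parse its verdict."""
    raise NotImplementedError

def clear_payment(request: dict, judge=llm_judge) -> str:
    """Accept or reject a payment request.

    The strict path handles the common case with precise rules, at
    machine speed. Anything ambiguous falls through to a judgement
    call instead of being rejected outright -- the edge cases a rigid
    protocol would have had to enumerate exhaustively up front.
    """
    amount = request.get("amount")
    # Strict path: precise, mechanical rules.
    if not isinstance(amount, (int, float)) or amount <= 0:
        return "reject"
    if amount <= 1000 and request.get("signature") == "valid":
        return "accept"
    # Fuzzy path: delegate the edge case to the adjudicator.
    return "accept" if judge(request) else "reject"
```

The design point is the fallthrough: the protocol spec only has to nail the common case precisely, and can describe the rest in fuzzy, intent-level terms for the adjudicator, which is what makes it cheap to start.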