Writing code is expensive. It requires an expensive, specialist human. Running code is cheap. LLMs are more expensive than normal code, but can write bespoke code that can be run cheaply.
... techniques like folksonomies. What if you could have LLM-assisted JIT schemas? LLMs interpreting things like OpenAPI specs on two sides and writing bespoke translation code. A massive number of services today document themselves with...
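A minimal sketch of that JIT-translation idea, assuming two hypothetical user schemas. The `build_translation_prompt` helper and the `translate` function it might elicit are both illustrative; the actual model call is elided:

```python
import json

def build_translation_prompt(source_schema: dict, target_schema: dict) -> str:
    """Assemble a prompt asking an LLM to write bespoke translation code
    between two services' OpenAPI-style schema fragments (a sketch; the
    call to a model API is left out)."""
    return (
        "Write a Python function `translate(record)` that converts a record\n"
        f"matching this source schema:\n{json.dumps(source_schema, indent=2)}\n"
        f"into this target schema:\n{json.dumps(target_schema, indent=2)}\n"
        "Return only the code."
    )

# The kind of bespoke translator the LLM might emit for two hypothetical
# user schemas. Expensive to write once, then cheap to run forever:
def translate(record: dict) -> dict:
    return {
        "full_name": f"{record['first_name']} {record['last_name']}",
        "email_address": record["email"],
    }
```

The LLM sits at translation-writing time, not in the request path, which is what makes the economics work.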
...ive accomplishment and benefit... but also leaky, hard to reason about cleanly. LLMs can hold quite a large context window and patiently sift through it. Code written for LLMs will have less abstraction, less leverage. Code written by...
People are launching platforms for building things with LLMs faster than people are building useful LLM-native apps. As an industry we learned the "in a gold rush sell pickaxes" lesson, and now everyone is doin...
... AI-native your application is: Could it have ever plausibly been viable before LLMs if you had enough capital?
The way LLMs find great ideas is different from the way humans do. A human with a high IQ (a Newton) could think deeply about a problem for extended periods of time. A Newto...
...te code. Those 9% are the people who are on the precipice of being activated by LLMs. "I could never code!" transforms into "look, I made this!"
A friend's teenage son is learning to code in an age of LLMs. He can build extremely impressive applications. He asks Claude or another LLM to write the "goop" – the black-boxed, magical incantations he needs t...
...you say, not what you mean. That's one of the reasons that programming is hard. LLMs are good at doing what you mean, not what you say.
...rs. We all agreed that the hardest thing as an advanced user about working with LLMs today is the copy/pasting of data in and out of the LLM. You need to copy in all of the context the LLM needs to make a good decision, and then copy ...
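The context-gathering half of that copy/paste loop can be sketched as a tiny script. The file-based format here is illustrative, not any particular tool's:

```python
from pathlib import Path

def gather_context(paths: list[str]) -> str:
    """Concatenate the files an LLM needs into one paste-able blob,
    a sketch of the manual 'copy in all of the context' step."""
    parts = []
    for p in paths:
        # Label each file so the LLM knows where each chunk came from.
        parts.append(f"--- {p} ---\n{Path(p).read_text()}")
    return "\n\n".join(parts)
```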
LLMs have a hard time with Rust's borrow checker. The semantics of the borrow checker are totally implicit, hard to reason about. LLMs are much better whe...
LLMs can get confused by the meandering path a conversation took to the right answer. You need to continually prune the conversation history and r...
LLM literacy is a thing. LLMs are not tools for lay people. They're expert tools. They're easy for anyone to pick up and quickly get surprisingly good answers. But to wield them a...
... came up with to describe the (strange, unintuitive) fact that state-of-the-art LLMs can perform extremely impressive tasks (e.g. solve complex math problems) while simultaneously struggling with some very dumb problems."
It's very hard to teach LLMs to be good at math. As with the "Which is bigger: 9.9 or 9.11" question from a few days ago that led Andrej to his "jagged intelligence" tweet. LLMs ...
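The 9.9 vs. 9.11 trap is worth spelling out: as decimals 9.9 is larger, but read as version numbers "9.11" comes after "9.9", and models trained on both conventions blur them. A quick sketch of the two readings:

```python
# As decimal numbers, 9.9 is larger than 9.11:
print(9.9 > 9.11)  # True

# Read as dotted version strings, "9.11" comes after "9.9":
def version_key(s: str) -> tuple:
    """Compare versions component-by-component as integers."""
    return tuple(int(part) for part in s.split("."))

print(version_key("9.11") > version_key("9.9"))  # True
```

Both comparisons are correct in their own convention, which is exactly the ambiguity that trips up a model pattern-matching on surface form.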
LLMs are sponges that absorb the implied grammar of a system by soaking in millions of examples, no matter how complex the grammar. So a good formal gramm...
Software is expensive to write, so you get one size fits none tools. But LLMs are great at duct taping together not-too-complex software on demand.
A product that uses LLMs has to assume the LLM will be hilariously, disastrously, unpredictably wrong sometimes. If it's not resilient to that, the product isn't viable.
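One common shape for that resilience, sketched with a hypothetical required `"summary"` field: treat the LLM's output as untrusted, validate it, and fall back to a safe default rather than crashing.

```python
import json

def resilient_parse(llm_output: str, default: dict) -> dict:
    """Validate untrusted LLM output; fall back to a safe default when
    it is wrong. (A sketch; a real product might also retry with a
    repair prompt before giving up.)"""
    try:
        data = json.loads(llm_output)
        if not isinstance(data, dict) or "summary" not in data:
            raise ValueError("missing required field")
        return data
    except (json.JSONDecodeError, ValueError):
        return default
```

The product's behavior is then defined by the validator and the fallback, not by the model's good days.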
...ling back on hallucinating one on demand. This allows ratcheting up in quality; LLMs as the floor. The LLM doesn't have to get it right all the time; it has to get it right some of the time, and then humans in the loop help sift throu...
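The ratchet can be sketched as a cache of human-approved answers in front of the model. All names here (`curated`, `review_queue`, `hallucinate`) are illustrative stand-ins:

```python
curated = {}       # human-approved answers: the ratchet
review_queue = []  # LLM drafts awaiting a human in the loop

def hallucinate(query: str) -> str:
    """Stand-in for an LLM call (hypothetical)."""
    return f"draft answer for {query!r}"

def answer(query: str) -> str:
    # Prefer a human-vetted answer; otherwise the LLM is the floor.
    if query in curated:
        return curated[query]
    draft = hallucinate(query)
    review_queue.append((query, draft))  # humans sift these later
    return draft
```

Each approved draft moves from the queue into `curated`, so quality only ratchets upward while the LLM keeps the floor from ever being empty.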
Simon Willison's frame on LLMs: imitation intelligence. I love this frame! Both in terms of imitation meat. Not quite the real thing, a bit off in a way that makes you a bit queasy...