We're running out of features to spec!
...ion debt of product teams is going down as we can burn through the backlog with LLMs. Engineers were the bottleneck so PMs had time to plan and think. [bp] But not any more!
...that means the power accretes on their turf. A raw deal for users. But now with LLMs, software isn't precious.
LLMs can do fiddly slogs that are meticulous and require expertise. Vercel is all about lots of small fiddly details for a great developer experience. You...
...nlikely to me. But a bottom-up AGI, where every person can marshal the power of LLMs to create compounding tools for themselves, feels way more plausible to me. It would also be a runaway process that's hard to reason about; the mul...
...eces and layers and the outcome still feels like a miracle. Markets, evolution, LLMs, and life itself all have this characteristic.
I wonder if the whole focus on AGI is downstream of LLMs talking like humans. Like, the idea of a planetary scale omniscient personality is easy to imagine, and also terrifying. Chatbots make this kind of f...
... Coders see coding as an end unto itself. Before, they looked the same. But now LLMs reveal the difference. Builders love LLMs, and coders hate them.
Systems that assume LLMs will be nearly perfect won't work. LLMs will never be perfect. Resilient systems assume that they can be confused.
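A minimal sketch of what "assume they can be confused" looks like in practice: validate every response, retry on malformed output, and fall back to a safe default rather than trusting the model. The `call_llm` function and the sentiment-classification task are hypothetical stand-ins, not any real API.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call; returns raw text.
    return '{"sentiment": "positive"}'

def classify_resiliently(text: str, retries: int = 2) -> str:
    """Assume the model can be confused: validate every response,
    retry on malformed output, and fall back to a safe default."""
    allowed = {"positive", "negative", "neutral"}
    for _ in range(retries + 1):
        raw = call_llm(f"Classify sentiment as JSON: {text!r}")
        try:
            label = json.loads(raw).get("sentiment")
        except json.JSONDecodeError:
            continue  # confused output: try again
        if label in allowed:
            return label
    return "neutral"  # safe default when the model stays confused
```

The point is the shape, not the task: the system stays correct even when individual model calls are garbage.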
LLMs can be made to be default-converging. Given any input, if it's scoped small enough, they will do what a reasonable person would with that information. S...
...b. The expected danger of them is the multiplication of "powerful" and "naive." LLMs with access to sensitive data are the most confusable deputies ever!
...umbling along as cold war, but will become a hot one as the power of unleashing LLMs on your data heats up.
...t mitigate the danger of malicious skills. Or that mitigate the danger of naive LLMs confusing themselves. But still nothing in the market that credibly mitigates prompt injection.
...en makes the case that our current best practice design process is obsoleted by LLMs. The process that was the best practice fundamentally assumes that software is extremely expensive to create.
StrongDM and OpenClaw are downstream of where LLMs hit a new scaling threshold of agentic ability. They were inevitable; the time had come. They were at the right place at the right time to surf the w...
...ke a chatbot, but that's incidental. A couple of years ago people tried to wrap LLMs and create agents, but it was premature. The model quality wasn't there yet. So they concluded it wasn't possible. But it just wasn't ready yet. Now ...
...of durable ends. Those are the form factors that enable the potential energy of LLMs to blossom best.
...f you try to review every line of AI code you will go crazy. It's not possible. LLMs can produce code so quickly that the only way to tackle it is to use LLMs to review it.
People use LLMs as interpreters more than compilers. An LLM as compiler could make guardrails for itself that then limit bad behavior.
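One way to picture the compiler framing: instead of calling the model per input (interpreter), call it once to emit a deterministic artifact, then run that artifact behind guardrails. The `llm_compile_extractor` function is a hypothetical sketch, stubbed with a fixed result.

```python
import re

def llm_compile_extractor(description: str) -> str:
    # Hypothetical one-time "compile" call: the model emits a regex,
    # not an answer. Stubbed here with a plausible result.
    return r"\b\d{4}-\d{2}-\d{2}\b"

# Compile once: the model's output is now a fixed, reviewable artifact.
pattern = re.compile(llm_compile_extractor("extract ISO dates"))

def extract_dates(text: str) -> list[str]:
    # Guardrail: deterministic code runs per input; the LLM never
    # sees user data at runtime, so it can't be confused by it.
    return pattern.findall(text)
```

The compiled artifact can be reviewed, tested, and versioned; the interpreter pattern offers no such checkpoint.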
...red to write before: lightly held, possibly throwaway. But it's possible to use LLMs to help write code in much more disciplined ways that give compounding advantage, and that is not a flippant or unserious exercise. Agentic engineeri...
...mall enough you can clear the threshold where any LLM can answer it reasonably. LLMs are more expensive than mechanistic code, but much more flexible and able to handle variance. When you have a working amalgam of LLMs and mechanistic...
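A toy sketch of such an amalgam, with the model call stubbed out: mechanistic code does the exact, cheap work (splitting, filtering, ordering), and the expensive flexible call is scoped down to one narrow yes/no question per item. The ticket-triage task and `llm_judge` function are hypothetical; the stub uses a keyword heuristic where a real system would make a small model call.

```python
def llm_judge(snippet: str) -> bool:
    # Hypothetical tightly scoped model call: "is this a complaint?"
    # Stubbed with a keyword heuristic for the sketch.
    return any(w in snippet.lower() for w in ("broken", "refund", "angry"))

def triage(tickets: list[str]) -> list[str]:
    # Mechanistic code handles structure exactly and for free;
    # the LLM handles only the fuzzy judgment it is uniquely good at.
    return [t for t in tickets if t.strip() and llm_judge(t)]
```

Keeping each model call small and single-purpose is what makes the flexible part affordable and the whole amalgam predictable.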