LLMs are a general-purpose data solvent.
LLMs are a general-purpose data solvent. To extract structured data from unstructured input is extraordinarily expensive to do mechanistically. Each scrap...
1,598 mentions · 615 chunks · 117 episodes
LLMs are a general-purpose data solvent. To extract structured data from unstructured input is extraordinarily expensive to do mechanistically. Each scrap...
...ity, but often a gilded turd[we]. This effect gets stronger the more believably LLMs can give superficially high-quality answers[wf] on more topics.
The Chatbot frame leads LLMs to be treated like genies. Genies are simultaneously god-like and also a slave. The default LLM presentation of a human-ilke superintelligence trappe...
...ds and adapts to them seamlessly. I think that would be the killer use case for LLMs. Chatbots are (compelling!) demos of LLMs, but ultimately, for most use cases, not the right modality. There are some use cases that will always be b...
...ction is similar to how chain of thought works. One problem with using multiple LLMs in a conversation though: LLMs always respond to every message. In a 1:1 conversation, this is reasonable: one person talks, then the other one does,...
I wish LLMs would sometimes speak in a lo-fi mode when they weren't very sure. LLMs have this uniformly professional tone, but they are often not particularly au...
A pattern to work well with software generated by LLMs: start with the smallest artifact that works and then build on top of it. If the first iteration doesn't work, don't try to keep building on it. Iter...
LLMs are significantly better at writing smaller chunks of functionality. Every additional feature in an app leads to combinatorial complexity. Assembly T...
...because of reasoning missing, but also sensing. Reasoning is easy now thanks to LLMs, so real-world sensing is the long pole. Even if there physically is a camera in the location, the idea of connecting it to a system that can always ...
...ization. Sounds like one voice but is actually inhuman. Not too dissimilar from LLMs and why their "view from nowhere" voice sounds hollow.
...r humans vs centering the models. A test if you've done this: if you turned off LLMs, would the system still work (just with more friction)?
LLMs allow qualitative nuance at quantitative scale.[yf] Before, to get scale, we had to throw away a lot of nuance to get scalar values that could be eas...
The original autocompletion LLMs are "System 1" models. The reasoning models are "System 2" models. What are the "System 3" models? Systems that plug into the emergent, online, colle...
An implication of LLMs allowing perfectly adaptable media: less marketing, more selling. Think of a traveling salesman selling a vacuum back in the day. Or think of a makeu...
LLMs don't have memories of their interactions with humans.[yr] Another way that the "LLMs are basically a virtual human" mental model is wrong. LLMs have...
Voice input to legacy computer systems felt excruciating, but voice commands to LLMs feels like flying. When we talk it's a stream of consciousness. It's non-linear; with ums, ahs, corrections, and disfluencies. Stream of consciousnes...
LLMs make it so any text is "executable," so a possible injection attack. This is because it allows english to be converted, explicitly or implicitly, to ...
If the web wasn't open then LLMs could never have been created in the first place.
...hat's possible. An ecosystem of emergent collective intelligence, lubricated by LLMs, is a super-linear business. The quality of the LLM sets the floor of what is possible. The floor that the collective intelligence can accrete on top...
... via bridges, then it's way more likely to be survivable. Software generated by LLMs today are little islands, isolated from everything else you want to do. LLMs can only do shitty software in the small (without a human significantly ...