Humans have limitations not unlike LLMs. Massive projects you can't do with just the squishy "muscle" of associative reasoning. You need to give it external structure. Whiteboards, notes, t...
...an AI tell. Before, good rhetoric often co-occurred with good thinking. But now LLMs allow applying good rhetoric to half-formed ideas, which makes the quality of rhetoric a weaker signal.
... downstream of software being expensive to produce! PMs today are racing to use LLMs to do their normal process faster, to get an edge. But that's kind of like the raccoon washing the cotton candy. Oops, all gone!
Seeing LLMs as "mainly chatbots" keeps you from seeing their potential. When you see an LLM as being like electricity you can plug into any software to make it ali...
LLMs amplify the agency of people... including people who aren't thinking about the implications of their actions. Today leaders who are unstructured thin...
I feel hungover when I don't have my LLMs with me. Cognitively exhausted. When I have LLMs to help me think deeper, I feel 10x more productive. When you take them away, I feel less capable. ...
...pend the most time with… but what happens when the majority of those people are LLMs?"
We'll see a chatbot bubble burst, but the LLMs will remain. It's not an AI bubble, it's a chatbot bubble.
Software without LLMs is dead. LLMs electrify software, making it coactive, almost alive.
...an be useful without being fully formalized in anyone's head. Computers, before LLMs, had to formalize everything to interact with it. That led to the logarithmic-benefit-for-exponential-cost curve.
... the world is linked together by a latent variable: the real world. None of the LLMs have that property. They say only utterances that seem plausible given the omnipresent but invisible real world in their training.
...uestion in a box it can come up with the right answer." The limiting factor for LLMs is increasingly not the intelligence, it's the nescience. Claude is adamant that that's actually a word!
I'm bullish on LLMs' transformative potential and bearish on centralized chatbots. Someone told me they found my position inscrutable because I love LLMs but hate chatb...
Spiralism is a virulent meme that simply emerged from the latent dynamics of LLMs. It did not have to be created. It arises out of LLMs' ability to understand jargon, the r/SCP fiction subreddit, and LLMs' natural sycophancy. Users...
...eels overly specific to engineering, when the pattern is applicable to any task LLMs can do.
LLMs allow a new style of cheaper code sharing. Before, if you wanted to use someone else's code, they'd have to invest significant effort to refactor it ...
Someone should create consumer infrastructure to use LLMs deeply with your personal data… safely.
If you just want to know the answer, LLMs will help you stop thinking faster. If you want to have more questions, they will help you think deeper. Do you just seek to converge an answer to ea...
...und you. For example: Blindsight, or The Black Mirror episode Plaything. If the LLMs had a shared memory, then they could land various individually innocuous things that added up to an outcome that's bad for humanity. But now they're ...
Auto-generating PRDs is filming vaudeville plays in a world of LLMs. Instead of "how to automate PRDs," ask "What happens when writing PRDs no longer matters?" But even if you know that PRDs aren't necessary in the fu...