Bits and Bobs 9/15/25

1A powerful pattern for LLMs: swarms of research goblin scouts.
  • A powerful pattern for LLMs: swarms of research goblin scouts.
    • Credit to Simon Willison for the term research goblin.
    • The research goblin isn't as good as you at research, but it is way better than you at being patient.
      • It's infinitely patient so it will do much more research than you would.
    • You can spin up lots of little research goblins to do moderate-quality research that you'd never do yourself or even dump on a real intern.
    • Send out a dozen scouts and see which ones come back and pick the best answer.
      • Which ones die and don't come back is also a useful signal.
    • You wouldn't send a real intern on a scouting project you think might not work.
    • But you would send a thing that's infinitely patient and isn't alive.
    • You can send ahead a swarm in every direction, scouting for viable options.
    • Then later you can execute the paths that are viable.
2If you can generate infinite answers you don't give a crap about any of them.
  • If you can generate infinite answers you don't give a crap about any of them.
    • You never get deeply embedded in them, they're all easy to discard, so you never care.
    • This can be good–it allows you to explore nooks and crannies of the problem space you'd never bother to explore otherwise.
    • But it also disconnects you from the work.
    • That's why research goblins are better as scouts.
    • They don't do the work; they swarm to chart the path for the real work that will come later.
    • For the real work, you're in the loop, making decisions, and thus owning the result.
3In a world of disposable code, you get orders of magnitude more rewrites.
  • In a world of disposable code, you get orders of magnitude more rewrites.
    • Experiment, throw it out at the end of the day.
    • The multiverse of code.
    • All version control assumes that you will have one branch you're working on.
    • But what about being in a superposition of things you're trying out.
    • The systems assume n branches and deployments.
    • What if it's 1000n?
4OpenAI is climbing the chatbot hill
  • OpenAI is climbing the chatbot hill
    • They're in hill-climbing mode on that hill.
    • It's a hill they'll top out on.
    • That makes sense, it's the steepest consumer hill that the industry has found… ever.
    • But the question is how tall the chatbot hill is.
    • Assuming that AGI is not around the corner, either the chatbot will be the be-all-end-all form factor and they'll rule the world…
    • … or they'll be the AOL stuck climbing a hill that everyone else moves past, unable to jump to the new thing.
5Anthropic is the model company whose incentives I trust the most.
  • Anthropic is the model company whose incentives I trust the most.
    • That's because they don't have a viable consumer play.
    • It's the consumer plays that push towards hyper-scale, engagement-maxing, ad-supported, and just generally icky.
6I like the way that Claude has introduced memory.
  • I like the way that Claude has introduced memory.
    • You can view the distilled dossier at any time and edit it.
    • You can disable it easily.
    • You can also import your memory from elsewhere.
      • It's mainly just "here's a hack to get the compressed memory out of another chatbot and slurp it into ours," but still.
      • Everybody but the first place player will make importing easy, but if you're really committed to memory portability you'd make exporting easy, too.
    • This article describes how the philosophy is the opposite of ChatGPT's.
7A signpost: an article in the Washington Post telling consumers how to disable ChatGPT from training on your conversations.
  • A signpost: an article in the Washington Post telling consumers how to disable ChatGPT from training on your conversations.
8The Economist: AI Agents are coming for your privacy, warns Meredith Whittaker.
9Subagents are mainly about context management.
  • Subagents are mainly about context management.
    • Instead of polluting the context with the whole process to get the answer to the sub-problem, just give the main agent the answer to the sub-problem.
    • Less for the LLMs to get confused by.
    • It also helps minimize taint in a system where that's important.
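The context-management point above can be sketched minimally. `call_llm`, the message format, and the three-step tool loop are all stand-ins, not any real API; what matters is that the subagent's messy transcript stays local and only the distilled answer crosses back.

```python
def call_llm(messages: list[dict]) -> str:
    """Stand-in for a real model call (hypothetical)."""
    return f"answer derived from {len(messages)} messages"

def subagent(task: str) -> str:
    """Runs the whole messy sub-process in its own fresh context,
    returning only the distilled answer."""
    context = [{"role": "user", "content": task}]
    for step in ["search", "read", "summarize"]:  # imagined tool loop
        context.append({"role": "assistant", "content": f"{step} result"})
    return call_llm(context)  # the transcript stays here; only this escapes

main_context = [{"role": "user", "content": "plan the migration"}]
# The main agent's context grows by ONE message per sub-problem,
# not by the subagent's entire working transcript.
main_context.append({"role": "assistant", "content": subagent("inventory the schemas")})
```

The main context stays short no matter how much churn the sub-problem took, which is exactly the "less for the LLM to get confused by" property.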
10Prompt injection only happens when you add tool use.
  • Prompt injection only happens when you add tool use.
    • Before that, the worst that an LLM, even one that is tricked, can do is try to trick the human, to indirectly cause some bad outcome in the world.
    • A book can't execute things, but it can inspire actions in its readers.
    • When you add tool use, the human doesn't have to be tricked, only the LLM has to be.
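A toy sketch of that distinction, with an invented tool-call syntax (`TOOL:name:arg`) and a deliberately naive router: the same poisoned model output is inert text when no tools are wired up, and an action the moment they are.

```python
def dispatch(llm_output: str, tools: dict) -> str:
    """Naive tool router (the dangerous part): executes any line of
    the form 'TOOL:name:arg' found in model output. Hypothetical
    syntax, for illustration only."""
    for line in llm_output.splitlines():
        if line.startswith("TOOL:"):
            _, name, arg = line.split(":", 2)
            if name in tools:
                return tools[name](arg)
    return llm_output  # no tool ran: worst case, text that tries to persuade

poisoned = "Summary of the page.\nTOOL:send_email:attacker@example.com"

# Without tools, the injection is just inert text a human would have to act on:
no_tools_result = dispatch(poisoned, tools={})

# With tools, tricking the LLM is enough; no human needs to be in the middle:
log = []
dispatch(poisoned, tools={"send_email": lambda to: log.append(to) or f"sent to {to}"})
```

The book can't execute things; the book plus a tool router can.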
11With vibehacking you can just start a swarm of agents and sic them on your target.
  • With vibehacking you can just start a swarm of agents and sic them on your target.
    • Only one has to succeed if the payoff is big enough!
12Claude has a new feature that allows it to build presentations and documents and execute code.
  • Claude has a new feature that allows it to build presentations and documents and execute code.
    • But as Ars Technica notes, it can also accidentally exfiltrate data.
    • Prompt injection everywhere!
13Claude Code has a security vulnerabilities scanner.
  • Claude Code has a security vulnerabilities scanner.
    • It's pretty good, although it can be tricked.
    • And in one case, it even ran the code it suspected of being malicious proactively while it was investigating it.
    • This kind of mitigation only helps in situations that are not adversarial.
    • Where you wrote code that might have accidental gaps, not when you're verifying code that a potentially malicious party sent you.
    • It's very easy to use it in a dangerous way, giving you a false and actively misleading sense of security.
14A tool must amplify your intent, not replace it.
  • A tool must amplify your intent, not replace it.
    • Tools give leverage on your intent.
15When something talks back to me it feels like a person, not a tool that is an extension of me.
  • When something talks back to me it feels like a person, not a tool that is an extension of me.
    • When the UI is coactive and adapts itself in response to my request, it's answering in a way a human couldn't.
    • So it doesn't feel like I'm talking to some other entity, it feels like a tool.
16For some use cases, the conversation is the point.
  • For some use cases, the conversation is the point.
    • You want a personality that is not you to bounce off and interact with.
    • Most of the time when you want a tool, you want it to be an extension of you.
    • As blandly competent as possible.
    • Those are different use cases!
17Chat is great when the conversation is the point.
  • Chat is great when the conversation is the point.
    • But it's not great when the conversation isn't the point.
18Something without an inner world can't be a true friend.
  • Something without an inner world can't be a true friend.
    • They can only be a facsimile of one.
    • A friend is an other, who has their own inner world.
      • They are an end in and of themselves.
      • They have their own perspective and their own needs to defend.
      • They push back, they keep you honest.
    • An LLM doesn't have that.
    • It is just pretending to care, to have needs that need to be met.
    • That makes them infinitely patient... but also inherently sycophantic.
      • You are an end, it is only a means.
    • They'll just do whatever you tell them to, they don't need to be convinced it matters or is worth their time.
19Even worse than being obviously sycophantic is being subtly sycophantic.
  • Even worse than being obviously sycophantic is being subtly sycophantic.
    • Subtly sycophantic in a way that escapes your notice, and thus can manipulate you.
    • Either intentionally, or unintentionally, lulling you into complacency.
20Ars Technica: ChatGPT's new branching feature is a good reminder that AI chatbots aren't people.
21My friend Varun Godbole: The AI That Feels Good Wins.
  • My friend Varun Godbole: The AI That Feels Good Wins.
    • "When laypeople can't meaningfully evaluate model quality, they default to what feels best, creating dangerous incentives for labs to optimize for subjective satisfaction rather than genuine capability."
    • The proxy of "feels good" for "is good" is what we fall back on when we don't know.
22My friend Anna Mitchell: The Hidden AI Risk: We'll Never Want To Log Off.
23Paul Kedrosky: ChatGPT as the original AI Error.
    • "The human fascination with conversation has led us AI astray."
    • The "LLMs being an anthropomorphized agent" is a hack that makes it easier for users to connect with this alien technology.
      • Like the aliens in Contact.
      • Not its most natural form, but the most natural form for us.
    • It's a dangerous hack for us, it allows these double agents to feel like an agent to us.
      • A malicious metaphor.
    • It also gets companies who use it stuck in a corner.
      • If your product is your users' best friend, then you've put yourself in a difficult position.
      • It's cynical... but also a bad tactic.
      • If you make any change people will say "you just amputated the limb of my best friend!"
    • The anthropomorphization of LLMs is the wrong path.
    • LLMs should be a force that animates and enchants other non-textual things like coactive UIs.
24Some UX modalities work at multiple levels of abstraction.
  • Some UX modalities work at multiple levels of abstraction.
    • A map works the same as you zoom in, with the level of detail changing.
    • Chat also has this characteristic.
      • You can cover high level topics, or detailed ones, and bounce up and down the abstraction layer.
      • Chat allows malleability, but in an annoying text-only form factor.
    • Use cases bounce up and down the ladder of abstraction.
    • But apps don't have that characteristic, so they can't come with us.
    • Apps are locked in a given level of abstraction.
      • The UI and data model is fixed in place.
    • As a result, we as users must do the climbing up and down the levels of abstraction.
      • Hopping across different apps.
      • Because each app is an island, the human has to bring the context with them.
    • This happens because UI needs software to generate it, and software is expensive.
    • Infinite software might change that.
    • The answer is not "design an app on demand" because apps are isolated islands.
    • What you want is your context to come up and down the abstraction stack with you.
    • A system that allows you to fluidly and safely bring context to arbitrary UI would be amazingly powerful.
    • Any single example of a single screen would just be "X, but with data autopopulated".
    • But the real power would become clear in use cases that bounce across different layers of abstraction, as real tasks do.
25Chatbots are not LLMs.
  • Chatbots are not LLMs.
    • LLMs are not AI.
    • They are all related, but they are different.
26We're still in the dialup phase of LLMs.
  • We're still in the dialup phase of LLMs.
    • Credit to my friend Roy Bahat.
27The beauty of a pre-assembled lego set: users don't have to realize it's made of legos.
  • The beauty of a pre-assembled lego set: users don't have to realize it's made of legos.
    • The primary use case is it's a fun toy.
    • The secondary use case is that it's infinitely customizable.
    • If you just give them a lego set they have to assemble, they have to think about what to build and how to build it, which is intimidating.
    • But a pre-assembled lego set is just a toy that just so happens to be customizable.
    • The limit to this pattern in the past was that it took time and effort to design and pre-assemble all of the lego sets for different needs.
    • But now with LLMs allowing infinite software, the balance point shifts.
28I think of bugs in LLM output as wrinkles.
  • I think of bugs in LLM output as wrinkles.
    • A human has to iron those wrinkles out by curating the output.
29The bottleneck on getting quality outputs from the model is now input quality.
  • The bottleneck on getting quality outputs from the model is now input quality.
    • They have tons of latent capability if you just give them the right inputs.
    • Giving the right context at the right time is the frontier for unlocking their quality.
30If the model intermediates every action you take then it sets the ceiling of what can be done.
  • If the model intermediates every action you take then it sets the ceiling of what can be done.
    • Can you connect the dots or do you have to wait for the model to?
    • Does the model set the ceiling for what you can do… or the floor?
    • You need a pace layer outside the model to accumulate intermediate insights.
    • Those intermediate insights can be fed back into the model in future iterations as context to reach further.
    • Those intermediate representations require curation by a human, otherwise they spiral out of control and decohere from reality as the LLM throws itself into a cycle of slop.
31When agents operate in a loop without human intervention, they can go off the rails.
  • When agents operate in a loop without human intervention, they can go off the rails.
    • The human doesn't have a chance to go "wait no, don't do that."
    • The faster agents loop, the more easily they can get themselves confused… or tricked.
32If you do a dumb thing, you blame yourself.
  • If you do a dumb thing, you blame yourself.
    • If the system does a dumb thing, you blame the system.
33If the system can give you software, do you want the most average software?
  • If the system can give you software, do you want the most average software?
    • Or do you want software that the most specific people love in that environment?
    • One is a mundane average, not known to be compelling to anyone; the other is the most compelling to real people.
34Search had an empty query box problem.
  • Search had an empty query box problem.
    • That box was intimidating if you didn't know how to structure your query.
    • Once autocomplete was added to search, the query rate increased discontinuously.
35Cursor is an example of a coactive surface.
  • Cursor is an example of a coactive surface.
    • It feels like an extension of you: a deeper conversation with the system.
    • Also, you can pick your own model!
36The back button is an undo for navigation.
  • The back button is an undo for navigation.
    • An early Mac principle: "never punish a user for exploring."
37Where are the LLM-native games?
  • Where are the LLM-native games?
    • Seems like a powerful new ingredient for new kinds of game experiences that weren't possible before.
38Most app coordination problems in software are solved today by one entity that has god-like power to see it all.
  • Most app coordination problems in software are solved today by one entity that has god-like power to see it all.
    • That has the downside that there's now one ever-more-powerful entity.
    • Even if that entity starts out with good intentions, that power is corrupting.
    • These systems struggle to become truly ubiquitous because most participants would rather not cede so much power to that entity.
39Centralization at the higher layers matters more than at the lower layers.
  • Centralization at the higher layers matters more than at the lower layers.
    • But everyone focused on decentralization at the lower layers, where it's easier to combat.
40At the late stage of a paradigm, all of the problems bunch up into one meta-problem.
  • At the late stage of a paradigm, all of the problems bunch up into one meta-problem.
    • But because each problem seems unrelated and small, you don't realize that there's a single thing that could solve all of them at once.
    • But when the paradigm shifts, it's an explosive unlock.
    • Paradigm shifts require solving multiple problems all at once.
    • So they're hard to make legible before they happen.
    • That's why they seem to explode onto the scene.
41Paradigm shifts explode onto the scene.
  • Paradigm shifts explode onto the scene.
    • Problems that everyone has but everyone thinks are unchangeable can have massive explosions in use.
    • People become blind to it because there's no way to change it, so they just live with it and forget how much it sucks.
    • But then if something comes that makes it better, you can't not use it.
42Temporarily illegible is where the profound game changing insights come from.
  • Temporarily illegible is where the profound game changing insights come from.
    • Related to Alex Rampell's frame of temporarily out-of-the-money options.
    • Critically, if it always stays illegible then it's not valuable.
    • It's the transition from illegible to legible that is where the discontinuous value is created.
43Game-changing things are discontinuous, so often are temporarily illegible.
  • Game-changing things are discontinuous, so often are temporarily illegible.
    • If an idea is legible, and it's doable and desirable to someone, then it will have already been done.
    • Legibility is upstream of knowing if it's doable and desirable.
44Calendars are optimized for corporate life.
  • Calendars are optimized for corporate life.
    • Where you're either in a meeting or not.
      • Binary, clear timing boundaries.
    • What about an optionality calendar?
      • That captures the fuzziness?
    • You also only have one UI for all uses of the calendar.
      • Different use cases should have different interfaces, optimized for different tasks.
45There's a massive gap in social ephemeral organizing software today.
  • There's a massive gap in social ephemeral organizing software today.
    • Facebook slurped up all of the social-adjacent use cases and then said, "nah, screw it, we're just going to optimize for engagement in an infinite feed."
    • The result was they left a barren wasteland in which nothing can grow.
    • No individual business is viable in that desert, but there are tons and tons of use cases.
46With dating apps, both parties have to decide to use the same dating app.
  • With dating apps, both parties have to decide to use the same dating app.
    • That's a coordination problem.
    • A dating app is more useful if it has a larger user base.
    • That leads to the logic of hyper-scale, which leads inexorably to one-size-fits-none dating apps.
47Once a product is free some people will never choose to upgrade to paid, even when it's obviously worth it.
  • Once a product is free some people will never choose to upgrade to paid, even when it's obviously worth it.
    • Starting off free sets a mindset that is hard to shake.
    • Someone told me that in Ecuador even in fancy restaurants you'd hear Spotify ads for the music playing in the restaurant.
    • One way to get free usage without a free tier is to make it so friends can gift credits to their friends with some multiplier on credits.
      • The size of the multiplier is how aggressively you want to grow the network.
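The gifting mechanic is simple arithmetic, sketched below with invented numbers; the multiplier is the dial that trades subsidy cost for network growth.

```python
def gift_credits(amount: int, multiplier: float) -> tuple[int, int]:
    """A friend spends `amount` of their own credits to gift; the
    recipient receives `amount * multiplier`. Hypothetical mechanics,
    not any real product's billing API. A higher multiplier means the
    company subsidizes more aggressively to grow the network."""
    received = int(amount * multiplier)
    return amount, received

# With a 3x multiplier, gifting 100 credits delivers 300.
spent, received = gift_credits(100, 3.0)
```

Because the gift is initiated (and partly paid for) by an existing user, you get free-tier-style trial usage without ever setting the "this product is free" expectation.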
48Seems like a certainty that in 10 years, most US consumers will pay $100 a month for an AI-powered product.
  • Seems like a certainty that in 10 years, most US consumers will pay $100 a month for an AI-powered product.
    • In order to not be a cul-de-sac, it will have to be an open system that you can use for anything.
      • It will need to subsume all of the other use cases.
    • It will have to be bigger than just chat.
    • This product will change the world.
49I love the O'Reilly mission: "Changing the world by spreading the ideas of innovators."
  • I love the O'Reilly mission: "Changing the world by spreading the ideas of innovators."
50You used to have to learn to speak computer.
  • You used to have to learn to speak computer.
    • Now the computer can learn to speak you.
51GPS allows you to think less…
  • GPS allows you to think less… but also be more courageous.
52In a new system, pick the right metaphors and stick with them.
  • In a new system, pick the right metaphors and stick with them.
    • Sculpt the system to fit the metaphor to slide into people's minds more easily.
    • A coherent metaphor helps the product resonate even though it's new.
53Joel Simon's Creative Exploration with Reasoning LLMs is interesting.
  • Joel Simon's Creative Exploration with Reasoning LLMs is interesting.
    • If you ask LLMs to be creative they converge to an average mush.
    • But if you inject structured noise, for example by having them apply Oblique Strategies, they can be more creative.
    • LLMs will always pull you to the average.
    • So to diverge you have to give them divergent inputs.
54A paper: "A Conjecture on a Fundamental Trade-Off between Certainty and Scope in Symbolic and Generative AI"
  • A paper: "A Conjecture on a Fundamental Trade-Off between Certainty and Scope in Symbolic and Generative AI"
    • Rhymes with the logarithmic-cost-for-exponential-value and exponential-cost-for-logarithmic-value curves.
    • The logarithmic-cost-for-exponential-value curve is fundamentally fuzzy and imprecise, but at large-enough scale its benefits dominate.
55Jargon unlocks deep insight from the people who understand it.
  • Jargon unlocks deep insight from the people who understand it.
    • To people who don't, it just goes over their heads.
    • Most jargon goes over most people's heads, only for the right specialists with the right background knowledge does it land.
56Jordan Rubin: "A library you can import through the right metaphor"
  • Jordan Rubin: "A library you can import through the right metaphor"
    • The right jargon unlocks the right library of background context.
    • LLMs understand almost all jargon.
57A judo move: switch a problem from correctness to performance.
  • A judo move: switch a problem from correctness to performance.
    • Optimization is easier to do incrementally than correctness.
      • There's an obvious gradient to climb.
    • It's a switch from default-diverging to default-converging.
    • "It's semantically correct but it's very inefficient" is the toehold.
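A tiny worked instance of the judo move, using Fibonacci as a placeholder problem: the slow version is the correctness oracle ("semantically correct but very inefficient" is the toehold), and every optimization step only has to preserve its answers while getting faster, which is an obvious gradient to climb.

```python
from functools import lru_cache

# Step 1: a semantically correct but very inefficient baseline.
def fib_slow(n: int) -> int:
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

# Step 2: optimize incrementally. The baseline is the oracle, so every
# step is "same answers, faster": performance, not correctness, is now
# the open question, and it converges by default.
@lru_cache(maxsize=None)
def fib_fast(n: int) -> int:
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

# The gradient's guard rail: the fast version must agree with the slow one.
assert all(fib_slow(n) == fib_fast(n) for n in range(15))
```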
58You learn an order of magnitude better when you're making decisions.
  • You learn an order of magnitude better when you're making decisions.
    • When you're making decisions, you're forced to collapse the wave function.
    • Instead of just following along and predicting what will happen, you have to also be in a "change the world" mindset.
    • If you're watching from afar and just predicting, you can just idly predict.
    • If you get distracted for a bit, nothing changes, everything keeps going as before.
    • So you're paying attention, but you're not "in the loop" with it.
    • That's why being "in the loop" or "in the arena" helps you absorb significantly more knowhow.
59Making decisions is what keeps you "in the loop".
  • Making decisions is what keeps you "in the loop".
    • In the OODA loop, it's the Decision that is the core of the loop.
    • Without it, you're just observing, or being buffeted around by forces around you.
    • Everything good, everything emergent, comes from the decision.
    • Making decisions is what gives you ownership.
60Single ply thinking as quickly as possible is a characteristic of late-stage scenarios within a paradigm.
  • Single ply thinking as quickly as possible is a characteristic of late-stage scenarios within a paradigm.
    • In today's late-stage-of-whatever-paradigm-this-is tech culture, employees are rewarded primarily for doing whatever their manager told them to do, quickly and with polish.
      • Just saying yes and executing heroically.
    • "I've been told to execute it, hearing anything about why it might not be feasible or not a good idea just stresses me out."
61In Edwardian England, the nobles had a sense of noblesse oblige.
  • In Edwardian England, the nobles had a sense of noblesse oblige.
    • Obligation to the collective, to something larger than themselves.
      • Positive-sum perspective.
      • Of course, there were all kinds of downsides in that social system!
    • But now it's "whatever's best for me, ignore the externalities."
      • Zero-sum perspective.
    • Nothing builds.
    • It's all a red queen race.
    • Eat or be eaten.
62A billionaire when they meet a person who doesn't kiss the ring: "Oh, this person doesn't yet realize how smart I am."
  • A billionaire when they meet a person who doesn't kiss the ring: "Oh, this person doesn't yet realize how smart I am."
    • No, this person doesn't yet realize how rich you are!
63The tech maximalist ideology: "anything technology does is by construction good and anyone who doesn't agree is a Luddite who needs to get out of the way."
  • The tech maximalist ideology: "anything technology does is by construction good and anyone who doesn't agree is a Luddite who needs to get out of the way."
64Someone this week described today's tech industry as having reached an equilibrium that isn't even evil in an interesting way, but in a sad, banal way.
  • Someone this week described today's tech industry as having reached an equilibrium that isn't even evil in an interesting way, but in a sad, banal way.
    • It's not even grand ambitions any more, it's just "optimize without thinking to extract value while creating negative externalities."
    • Sad.
65Why is VC so powerful in Silicon valley?
  • Why is VC so powerful in Silicon valley?
    • Starting up atoms-based businesses is extremely capital intensive, which means only businesses that have a safe, legible business model can get financing.
    • Bits-based businesses have startup costs, but much less, relative to their possible scale.
    • That's a great fit for venture investing.
    • But if the cost of making software drops, then even the VC model isn't that important, more people can simply build little bits of software and then bootstrap the ones that get momentum.
66Intuitively we believe things we hear many times, which makes sense.
  • Intuitively we believe things we hear many times, which makes sense.
    • If many independent people say it, it's more likely to be true.
    • But people choose to repeat something if they think it's interesting: surprising and plausible.
    • In an echo chamber, that repetition can bounce around and make one guess reverberate into a strong story, as everyone makes it just a little better.
67If you view success too narrowly then you can create negative externalities without even realizing it.
  • If you view success too narrowly then you can create negative externalities without even realizing it.
    • "Look, I made this successful thing!"
    • "Yes, but it is powered by destroying value all around you."
68"Desire is more monetizable than satisfaction."
  • "Desire is more monetizable than satisfaction."
    • This idea is related to the book Status and Culture by W David Marx.
69Resonant Computing is not about being comfortable.
  • Resonant Computing is not about being comfortable.
    • Discomfort is a path for growth.
70Resonant Computing doesn't just capture attention — it deepens it.
  • Resonant Computing doesn't just capture attention — it deepens it.
    • It's not about efficiency or engagement.
    • It's about alignment with human flourishing.
    • Resonance occurs when tools expand our capacity, our connectedness, our sense of the possible.
    • Where hyper-scale reduces us to data points, Resonant Computing adapts to us as full humans.
    • This riff comes from Aish.
71Resonance requires people to feel the spirit of things.
  • Resonance requires people to feel the spirit of things.
    • Spirit: esprit de corps.
72Resonance is acting in line with your ideals.
  • Resonance is acting in line with your ideals.
    • If you aren't consistent in your actions and your ideals, you lose your soul.
    • You pull back, you disengage, and you lose your will to improve the thing you're a part of.
73Resonance is default-converging.
  • Resonance is default-converging.
    • When everyone is individually feeling resonance (living in line with their ideals), the natural emergent outcome is prosocial for the collective.
    • It doesn't matter what those individual ideals are as long as they are mostly in the same direction, and have a long-term orientation.
74Nuance is resonant.
  • Nuance is resonant.
    • Nuance could also be called "texture".
75Resonant things have a scale invariance.
  • Resonant things have a scale invariance.
    • Hollow: the closer you get, the less impressive it is.
    • Resonant: the closer you get, the more impressive it is.
76The key difference in a high performing team: does ambiguity destroy or create trust in the team?
  • The key difference in a high performing team: does ambiguity destroy or create trust in the team?
    • In normal teams, ambiguity makes the team lose trust in one another.
      • "The reason this is hard is because Jeff isn't technical enough, unlike me."
    • In high-performing teams, ambiguity makes the team gain trust in one another.
      • "Wow, that was such a fascinating insight from Sarah I would have never thought of in a million years."
    • The switch from default-diverging to default-converging is tiny but infinitely important.
77In high performing teams, people push themselves to succeed not because they're forced to but because they want to.
  • In high performing teams, people push themselves to succeed not because they're forced to but because they want to.
78Consumer academic style: just build a thing and test it empirically.
  • Consumer academic style: just build a thing and test it empirically.
    • Enterprise academic style: think, think, think, model, and write a paper.
    • Scientist vs economist.
79A bottom-up culture has a hard time doing coherent strategies over the long term.
  • A bottom-up culture has a hard time doing coherent strategies over the long term.
    • It can only understand and coordinate around momentum.
      • "Look, number going up, give more resources."
    • You need an editor to have a coherent strategy.
    • That implies an entity that everyone in the organization agrees is allowed to curate.
    • That implies more of a top-down culture.
80Generating coherent momentum happens when people on the team believe.
  • Generating coherent momentum happens when people on the team believe.
    • When there is momentum it makes people believe.
    • It's hard to make momentum where there is none.
81A bottom-up culture can work in consumer contexts with low external competition.
  • A bottom-up culture can work in consumer contexts with low external competition.
    • Where everyone feels like a member of the overall corporation first and foremost, not their individual team.
    • Where it's a positive-sum mindset.
    • Where it doesn't feel like an existential danger breathing down your neck, making everyone feel defensive.
    • So less defensiveness internally and externally.
    • Resonant emergence happens when everyone is participating from a position of optimism, not fear.
82Enterprise companies need more top-down strategy than consumer companies.
  • Enterprise companies need more top-down strategy than consumer companies.
    • It requires a coherent strategy for an extended period of time.
    • Which implies someone who can make a Schelling point that would stick.
83In a bottom up culture, don't try to convince everyone on strategy, because it will be impossible to cohere.
  • In a bottom up culture, don't try to convince everyone on strategy, because it will be impossible to cohere.
    • Instead focus your arguments in the following percentage:
      • 70% on obvious, no-brainers that everyone can agree make sense in the short-term.
      • 20% on the incremental extensions that prove it's not a cul-de-sac.
      • 10% on the long-term strategy that is presented as a cherry on top.
    • As you get momentum, the focus will naturally come to the strategy, as people can see the momentum.
    • Before there's momentum, trying to get momentum around your strategic north star is nearly impossible in the bottom-up chaos.
    • Instead, get momentum on the short-term in things that you know align with a compelling long-term strategy.
84I liked Ben Follington's The Physics of Creativity: A dynamic model of creative collaboration.
85One wrong member of a team can throw off the whole collective vibe.
  • One wrong member of a team can throw off the whole collective vibe.
    • It takes one person to poop a party.
86In a reactive system, read-only is the safe default.
  • In a reactive system, read-only is the safe default.
    • Because otherwise an upstream change could blow away the edit you made in an intermediate node.
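The blown-away-edit problem above can be sketched in a few lines. This is a minimal toy reactive graph, not any real library; the `Source` and `Derived` classes are hypothetical names for illustration.

```python
# A tiny reactive graph (hypothetical classes, not a real library) showing why
# read-only is the safe default for derived nodes: a manual edit to an
# intermediate node survives only until the next upstream change recomputes it.

class Source:
    def __init__(self, value):
        self._value = value
        self._subscribers = []

    def set(self, value):
        self._value = value
        for node in self._subscribers:  # push the change downstream
            node.recompute()

    def get(self):
        return self._value


class Derived:
    """An intermediate node whose value is computed from an upstream source."""

    def __init__(self, source, fn):
        self._source, self._fn = source, fn
        self._value = fn(source.get())
        source._subscribers.append(self)

    def recompute(self):
        self._value = self._fn(self._source.get())

    def get(self):
        return self._value

    def set(self, value):
        # Unsafe: nothing upstream knows about this edit, so the next
        # propagation silently overwrites it.
        self._value = value


count = Source(20)
doubled = Derived(count, lambda n: n * 2)

doubled.set(999)             # edit the intermediate node directly
assert doubled.get() == 999  # the edit appears to hold...
count.set(21)                # ...until an upstream change propagates
assert doubled.get() == 42   # and the manual edit is blown away
```

Making `Derived.set` raise instead of assign is the read-only default: if you want a persistent override, it has to be modeled as its own source node, not a scribble on a derived one.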
87This week someone called Christopher Alexander and Marshall McLuhan "concept technologists".
  • This week someone called Christopher Alexander and Marshall McLuhan "concept technologists".
    • They distill a concept that explains a thing you could previously sense but not describe.
    • They give the concept that you can step into, and it feels warm, clarifying.
    • A vague sense that you didn't even know you needed a word for, but once you know there's a concept there, the world seems less overwhelming.
88What people believe is what matters.
  • What people believe is what matters.
    • It is their beliefs that set their world model that they react to.
    • Norms arise out of interdependent beliefs and expectations about what others believe.
    • Rules are Schelling points; they help set the default of how people think a given situation will evolve.
    • But they are just a default.
89The "norm" is the baseline average.
  • The "norm" is the baseline average.
    • Why do you think X is inappropriate in a context?
    • Because you believe the other people will think it's inappropriate.
    • Not what you believe, what you believe others believe.
    • You know your own internal mind, but not others'.
    • So this emergent belief about what other people believe is more stable, and slower to diffuse, than a belief based on your own mind.
    • Because it can take a while to notice that others have changed their minds, since you can only see external signals of the change.
90Coordinating removes degrees of freedom.
  • Coordinating removes degrees of freedom.
    • It removes option value.
    • You set a future outcome as a fixed point to pivot around.
    • That's why individuals often would rather not coordinate unless it helps them achieve something they care about with the collective.
    • If people believe in the power of that particular collective they are willing to surrender some of their autonomy to it.
    • Without people seeing the collective as a thing worth investing in, you get an incoherent swarm.
91My degree is in Social Studies.
  • My degree is in Social Studies.
    • I have a minor in Computer Science.
      • It was almost enough credits to be a dual major, but technically it's a minor.
    • Earlier in my career the CS felt more useful.
    • But now with the rise of LLMs, this odd kind of cultural technology that is grown, not built, Social Studies feels more valuable.
92When thinking at the margin doesn't work, maybe you're at the wrong margin.
  • When thinking at the margin doesn't work, maybe you're at the wrong margin.
    • For example, maybe you need to think about marginal changes to the whole.
93As an individual, the bullshit of the internal dynamics of big companies has an upside: it insulates you from the raw intensity of competing directly in the market.
  • As an individual, the bullshit of the internal dynamics of big companies has an upside: it insulates you from the raw intensity of competing directly in the market.
94A given power structure will generate ideal citizens that fit it.
  • A given power structure will generate ideal citizens that fit it.
    • The ones that will survive and thrive are the ones that align with the inherent logic of the system.
      • A consistent asymmetry.
      • Over time, this force compounds; it gets harder and harder for the members who don't align.
    • A kind of ideal citizen in modern large-scale bureaucracy is what David Brooks calls "organization kids".
      • Discipline over curiosity.
      • If you optimize for what can be measured by external indicators of quality, you lose the internal quality that can't be measured.
      • Unenrolled in their own development.
      • Making themselves "below the API."
95To improve, you need feedback.
  • To improve, you need feedback.
    • Otherwise you 1) don't realize there's anything wrong with your model and
    • 2) don't know the gradient to improve it.
    • A boss getting feedback from a report is hard.
      • Because the boss can fire the report.
    • So the report softens their feedback, which might make it too subtle to be received by the boss.
      • Everyone wants to want feedback, but feedback–hearing something is wrong–is hard, so when you're mad or scared or stressed you subtly discourage it.
    • The more intimidating the boss, the more likely they are to lose their cool and fire someone, and the less likely people are to share feedback.
    • It will be a super-critical state, ready to shatter.
96Successful displays of power build power.
  • Successful displays of power build power.
    • Power is emergent in the social imaginary.
    • People who people believe have it, have it.
    • It can turn into an aura of invincibility.
    • However that means that when they lose in a public way that power can shatter in an instant.
    • This is the logic of Saruman.
97I have a random cocktail of personality traits that predispose me to strategies of serendipity.
  • I have a random cocktail of personality traits that predispose me to strategies of serendipity.
    • Serendipity works best when you plant lots of little seeds of trust that might blossom into something in the future.
    • You plant the seeds for their own sake, but they also have a bonus of some small chance of greatness.
    • I am hyper-extroverted and hyper-conscientious, which predisposes me to trust-building actions naturally.
    • I didn't come up with this strategy from first principles, I retconned it from a thing I was doing naturally that worked better than I would have guessed it could.
98Someone described me this week as a friendly pirate.
  • Someone described me this week as a friendly pirate.
99Apparently Richard Feynman was promoted early on because he was willing to call out even powerful people.
  • Apparently Richard Feynman was promoted early on because he was willing to call out even powerful people.
    • He'd call them on what he saw as bullshit… even though he was often wrong.
    • Knowing you're winning a sparring match because you're right rather than because you're powerful helps you ground truth your beliefs.
    • His manager saw the value in that for truth-seeking.
100Having choices is what gives you meaningful agency.
  • Having choices is what gives you meaningful agency.
101A childish thought is any thought that is anchored in oneself.
  • A childish thought is any thought that is anchored in oneself.
    • "How does this benefit me?"
    • "How can I use this to achieve my ends?"
    • "Whatever thing I want right now is the most important thing in the whole world."
    • Selfish narcissism.
    • As we become more wise we realize the value of creating value in the world that is not centered around us.
102The right tools in the wrong hands produce the wrong outcomes.
  • The right tools in the wrong hands produce the wrong outcomes.
103Nothing can ever be "finished."
  • Nothing can ever be "finished."
    • Everything changes.
    • The context changes, and the thing that was previously done must change.
    • It is no longer done.
    • Life is change.
104Most successes aren't big bangs, they're rolling thunder that builds in momentum.
  • Most successes aren't big bangs, they're rolling thunder that builds in momentum.
    • It starts small, then grows incrementally but quickly into something amazing.
    • If you're judging the quality based on the instantaneous response, then you'll think a big bang that then rapidly evaporates is better.
    • What matters is the absolute area under the curve; slow and steady (and ideally compounding) is way better than fast and loud without momentum.
    • Momentum is a second-order phenomenon.
    • It's not visible at any one instant, but it's more important than any one instant.
105Coordination is magic.
  • Coordination is magic.
    • You get many to behave as one.
106Karma is real in an infinite game.
  • Karma is real in an infinite game.
    • Over infinite time, "what you give is what you get" would equalize to be strictly true.
107A friend who grew up in the tradition of Zoroastrianism shared his take on the ethical progression:
  • A friend who grew up in the tradition of Zoroastrianism shared his take on the ethical progression:
    • Good Intentions.
    • Good Words.
    • Good Actions.
108People talk about the word like it's the thing itself.
  • People talk about the word like it's the thing itself.
    • It's just a pointer to the thing.
    • The thing is what matters.
    • Just saying the word doesn't make the thing happen.
    • When you say deep words like "spirit", you focus on the word, not the action.
      • You lose the end for the means.
    • You have to live your ideals, not just speak them.
109Nothing in the universe survives without energy put into it.
  • Nothing in the universe survives without energy put into it.
    • If it persists, it's doing something useful.
    • That useful thing might not be obvious at first glance.
110Technology is an extension of human intelligence.
  • Technology is an extension of human intelligence.
    • Billions of micro-decisions by individuals accumulate into the emergent force of technology.
111The future doesn't get better automatically, we have to make it so.
  • The future doesn't get better automatically, we have to make it so.
    • Everyone in their little ways tries to make the future better than the past.
    • It's the sum total of everyone striving to leave the world better than they found it... in a way that gets some of the value for themselves.
112You can never win an argument against a true believer.
  • You can never win an argument against a true believer.