Bits and Bobs 2/24/25
1. I think this tweet about finding the right UX for AI-native tools is directionally correct.
- I think this tweet about finding the right UX for AI-native tools is directionally correct.
- I want interfaces that are intelligent, not in a human way, but in a way where the tool anticipates my needs and adapts to them seamlessly.
- I think that would be the killer use case for LLMs.
- Chatbots are (compelling!) demos of LLMs, but ultimately, for most use cases, not the right modality.
- There are some use cases that will always be best served by a chatbot-like interaction (e.g. intellectual brainstorming).
- But most use cases that use LLMs will not be best served by a chatbot.
- For most users, the most transformative way to use LLMs will not be chatbots.
- The killer use case of LLMs is yet to be discovered.
2. I want my tools to help me thrive, not just engage.
- I want my tools to help me thrive, not just engage.
3. Tools are supposed to feel like extensions of us.
- Tools are supposed to feel like extensions of us.
- A good tool literally feels like an extension of our body.
- Our minds are very, very good at establishing this illusion.
- When you learn to ride a bike, the bike feels, in a very real way, like an extension of your body.
- The edge of us is the edge of our directly manipulated intention.
- Boundaries evaporate between our bodies and the edge of the tool.
- The test of how good a tool is: how fully does the boundary between the tool and you feel like it evaporates?
- A tool that is itself agentic can never have this evaporation of boundary; there is always the "other" to reason about.
4. I want to take software for granted.
- I want to take software for granted.
- I want my data to come alive, in a way that I don't even have to think about how the software is being created.
- My data in my tool simply helps me accomplish meaningful things, responding to my intentions in a way that feels like an extension of me.
5. AI should feel less like talking to a god, and more like an enchanted tool.
- AI should feel less like talking to a god, and more like an enchanted tool.
- AGI is often envisioned as a chatbot.
- How backwards!
- All of that awesome power in a little personified god in a box.
- AI should be about empowerment of people, enchanting our tools.
6. The ability for users to write Turing-complete code within a multi-user platform is typically hard (to write) and dangerous (to allow).
- The ability for users to write Turing-complete code within a multi-user platform is typically hard (to write) and dangerous (to allow).
- But if you could make it easy and safe, whole new categories of experiences could become possible.
7. Apps are not about people.
- Apps are not about people.
- Everything else is, ultimately, about people.
- Every heavily used tool that has even incidental multi-user support becomes inherently a social experience of meaning-making.
- We think of "social" today as primarily a broadcast / engage loop, the insatiable social vortex.
- But that's not some fundamental, inescapable fact.
- You can only create or share experiences that fit within code the platform owner's engineers wrote.
- This is part of what causes the collapse towards #content and the engagement vortex.
- Social software experiences today are anti-social.
- Social should be about co-creating meaning and value in the world.
- Social experiences today are not Turing-complete.
- If we had a way to safely allow bottom-up Turing-complete experiences, we could reinvent what social tools can be.
- Social as it was meant to be.
- Human-scale.
- Cozy.
- Collaborative.
8. Dozens of companies over the years have pitched: 'an intelligent personal assistant for your email'.
- Dozens of companies over the years have pitched: 'an intelligent personal assistant for your email'.
- All of them have mostly fallen flat.
- But at some point, one of them will actually work.
- The reason it keeps popping up as a pitch is because everyone wants it so much.
- Tech early adopters have become more skeptical of the pitch just because they've heard it so much and it's never worked, so they assume the next one pitching it also won't work.
- Most consumers never hear about the ones that don't work, so the framing would still feel 'fresh' to them.
9. A sweet spot for a tool adding value with emails: the emails you filter but don't read.
- A sweet spot for a tool adding value with emails: the emails you filter but don't read.
- Email that's definitely important, you likely read right away.
- Most email goes into your inbox and you never read it.
- Writing a filter for an email implies it might be valuable, but you likely don't sift through it.
- Something that helps sift through that to find useful nuggets for you among the chaff would be very useful!
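- A minimal sketch of that sifting step, in Python. `score_relevance` is a hypothetical stand-in for an LLM-backed relevance call; here it's a keyword stub so the sketch runs:

```python
# Sketch: surface "nuggets" from email you filtered but never read.
# A real version would replace score_relevance with an LLM judgment
# of "is this a nugget for this particular user?"

def score_relevance(message, interests):
    # Stub: count interest keywords; a real scorer would call an LLM.
    body = message["body"].lower()
    return sum(1 for kw in interests if kw in body)

def find_nuggets(filtered_messages, interests, threshold=1):
    """Return filtered-but-unread messages worth a human look."""
    scored = [(score_relevance(m, interests), m) for m in filtered_messages]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for score, m in scored if score >= threshold]
```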
10. Today we are inundated by a cacophony of information trying to draw our attention.
- Today we are inundated by a cacophony of information trying to draw our attention.
- Every time a given communication channel saturates (e.g. email) a new one opens up (e.g. app notifications).
- At the beginning that new channel is a blessed refuge for only the most important stuff.
- But over time it saturates too (the tyranny of the marginal use case), and it becomes just another channel for millions of bits of information to constantly fly at your face.
- The loudest stuff gets your attention, not the most meaningful.
- Imagine an information stream that knew what you wanted and could help sift through all of the channels to help highlight what was actually important for you.
11. Encouragement without curation is randomizing.
- Encouragement without curation is randomizing.
- There needs to be judgment on what subset to encourage, otherwise it's just diffusion energy that accelerates entropy.
- You need to have a high taste, high judgment curation function to target your "yes, and" energy to an intentional subset.
12. Claude feels divergent, ChatGPT feels convergent.
- Claude feels divergent, ChatGPT feels convergent.
- Claude is willing to follow you on whatever wild flight of fancy you have.
- "What an astute observation" to even the most ridiculous points you make.
- ChatGPT feels like it wants to reel you into its conception of the right answer much more actively.
- Claude has great creative "bounce," and is a fun, discursive thought partner.
- But be careful; if you aren't curious enough to find and dig into disconfirming evidence Claude will happily "yes, and" you off a cliff.
13. It could be useful to have multiple LLM participants in a conversation.
- It could be useful to have multiple LLM participants in a conversation.
- Even if it's the same model, each instance could wear a different "hat" in the conversation, and the interplay between them could generate useful new insights that the LLM acting as a single "individual" couldn't have.
- The process of "thinking" within an LLM is different from the process of outputting text and then responding to text already in the conversation.
- The LLM outputting text in one "voice" to then pass on to be absorbed by a later model can wring more insight out of it than a single-shot generation could.
- Distilling the fuzzy internal vibes to specific words collapses the wave function in a way that reduces ambiguity but forces it to lock in a specific POV.
- This dynamic of giving space to reflect and collapse the wave function is similar to how chain of thought works.
- One problem with using multiple LLMs in a conversation though: LLMs always respond to every message.
- In a 1:1 conversation, this is reasonable: one person talks, then the other one does, and it always ping-pongs back and forth.
- LLMs are hyper-optimized for this behavior; it's basically impossible to get them to not do it.
- But in a multi-person conversation, the rules for when someone should speak are way different.
- Each participant has to understand if they have a useful-enough thing to add to the conversation, or would just distract the flow of the conversation.
- In humans there are tons of social cues we're constantly looking at to figure out if we overstepped in a conversation; LLMs don't have that.
- LLMs today will simply respond every time they are "spoken" to, even if they have nothing interesting to say.
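- A minimal sketch of one way around this: give each hat an explicit SPEAK-or-PASS gate before it's allowed to reply. `ask_model` is a hypothetical stand-in for a real LLM call, stubbed so the sketch runs:

```python
# Sketch of a multi-hat conversation loop where each participant must
# opt in before speaking, rather than replying to every message.

def ask_model(system_prompt, transcript):
    # Stub: a real implementation would call an LLM here. The stub
    # always declines the floor, modeling a hat with nothing to add.
    if "SPEAK or PASS" in system_prompt:
        return "PASS"
    return "(a reply in this hat's voice)"

HATS = ["optimist", "skeptic", "synthesizer"]

def run_turn(transcript):
    """Offer each hat the floor, but only if it opts in."""
    for hat in HATS:
        gate = ask_model(
            f"You are the {hat}. Answer SPEAK or PASS: do you have "
            "something genuinely useful to add?",
            transcript,
        )
        if gate.strip().upper() == "SPEAK":
            reply = ask_model(f"You are the {hat}. Make your point.", transcript)
            transcript.append((hat, reply))
    return transcript
```

The cheap gate call approximates the social cue humans use to decide whether speaking would add to, or distract from, the flow.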
14. When was the last time you got accidentally tricked by a hallucinated fact from an LLM and didn't catch it?
- When was the last time you got accidentally tricked by a hallucinated fact from an LLM and didn't catch it?
- You look at its response and think "Look, it knew the answer to X" then as you look more closely you realize it just hallucinated something plausible and you didn't even notice.
- Even when you know it can happen, you don't think to check.
- Because by definition its hallucinated answers all look totally plausible, so they don't get flagged by your quick smoke-test glance.
15. I wish LLMs would sometimes speak in a lo-fi mode when they weren't very sure.
- I wish LLMs would sometimes speak in a lo-fi mode when they weren't very sure.
- LLMs have this uniformly professional tone, but they are often not particularly authoritative.
- A rule that all PMs know: when looking for feedback, show mocks at the level of visual fidelity you want feedback on.
- If you want feedback on precise styling, show pixel-perfect mocks.
- If you want feedback on the overall features and flow, use a style, like Balsamiq mocks, that looks hand-drawn.
- If an LLM isn't sure, I'd rather have it make some spelling mistakes, write in all lower case, and generally sound unprofessional.
- Deep Research communicates in deeply cited multiple page reports.
- The first impression that gives is extraordinary but it often fails to be impressive the closer you look.
- Performative rigor!
16. A map can be useful even if it's a bit incorrect or smudged.
- A map can be useful even if it's a bit incorrect or smudged.
- It can help get you oriented in novel domains.
- If you think of Deep Research's output as a smudged map, it can still be useful, especially for domains that you're a novice in.
- Just don't take it too literally.
17. Liquid media sublimates a gas; liquid software dissolves a solid.
- Liquid media sublimates a gas; liquid software dissolves a solid.
- The gas of fuzzy human intention can be sublimated into a fluid that can be poured and manipulated.
18. One way to get broad appeal is to be generic.
- One way to get broad appeal is to be generic.
- You dumb it down to be good enough for as large a market as you can.
- This leads to the tyranny of the marginal user.
- This is the only approach to scale in a proprietary, top-down system.
- Another approach is to have an ecosystem of lots of bottom-up emergent niches, created by various actors within the ecosystem.
- The ecosystem as a whole has broad appeal, but any given experience within it is hyper niche.
- Way more niche than would ever have been worth designing and building in a top-down structure.
- The hyper-optimized niche ensures it's a great offering for precisely the users in that niche.
- The swarm of niches ensures broad coverage overall.
- The best of both worlds.
19. A closed system early in a disruptive era can't hope to keep up.
- A closed system early in a disruptive era can't hope to keep up.
- The sum total of the exploratory innovation in the open ecosystem will dominate the proprietary option at the beginning.
- At the beginning of a disruptive stage, the needles in the haystack of new good ideas haven't been found yet.
- In the later stage, once most of the new good ideas have been found, the power shifts to the entity that can best execute and improve the good ideas.
- The entity in the role of AOL gets tricked into thinking it can dominate because it fired the starter pistol and got an early lead.
- Logarithmic return for exponential cost. Early benefit, but towards a low ceiling.
- It's the warring curve again.
20. It's better to start with a bottom-up mess that you can then rank than to have only clean, top-down constructed use cases.
- It's better to start with a bottom-up mess that you can then rank than to have only clean, top-down constructed use cases.
- A big box of random legos.
- Overwhelming, but in an inspiring way.
- If there's not enough stuff, then you're out of luck if your use case doesn't work.
- If you have a big bag of random legos to rummage through, there's a solution in there somewhere if you look hard enough.
- You can create a ranking function to suggest the best legos from the bag, and get the best of both worlds.
- Ranking on top of an open ended ecosystem is a strategically great position.
- You get both ubiquity (on top of a broad ecosystem) and differentiated quality (your proprietary ranking on top).
- As the creator of the ranker, you get ranking quality that improves faster than your employees alone could improve it.
- The investment of your employees adds linear returns, giving you a linearly increasing edge over other ranking functions.
- If your ranking gets better in proportion to the scale of activity in the ecosystem, as the ecosystem gets better, your effective ranking quality improves at a compounding rate.
21. One of the primary scarce resources in digital contexts: namespaces.
- One of the primary scarce resources in digital contexts: namespaces.
- Namespaces are the points where the ecosystem coordinates.
- Everyone goes to the namespace that everyone else cares most about, which makes it rivalrous.
- There is one Barack Obama article on Wikipedia because Wikipedia has one main namespace.
- This is what forces the random percolating energy about that topic to be convergent vs divergent.
- Everyone has to collaboratively debate for their perspective to "win" and be absorbed into the single article.
22. If the LLM doesn't understand what you wrote, there's a good chance your readers won't either.
- If the LLM doesn't understand what you wrote, there's a good chance your readers won't either.
24. "If you don't like it, fork it!"
- "If you don't like it, fork it!"
26. LLMs can help structure unstructured data.
- LLMs can help structure unstructured data.
- Unstructured data is underpriced in the market due to it being less useful when unstructured.
- But now it can be structured!
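- A minimal sketch of the shape of this: ask the model for strict JSON, then validate before trusting it. `extract_json` stands in for a real LLM call; a tiny regex stub keeps the sketch runnable:

```python
# Sketch: use an LLM to pull structured fields out of free text.
import json
import re

SCHEMA = {"vendor": str, "amount": float}

def extract_json(text):
    # Stub: a real version would prompt an LLM with something like
    # "Extract vendor and amount from this text; answer in JSON."
    m = re.search(r"\$(\d+(?:\.\d+)?)\s+to\s+(\w+)", text)
    return json.dumps({"vendor": m.group(2), "amount": float(m.group(1))})

def structure(text):
    record = json.loads(extract_json(text))
    # Validate against the schema before trusting model output.
    assert set(record) == set(SCHEMA)
    return record
```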
27. It is not possible to fully enumerate in human language all of the edge cases of a real-world phenomenon.
- It is not possible to fully enumerate in human language all of the edge cases of a real-world phenomenon.
- You get the logarithmic return for exponential cost curve.
- This curve collapses under its own weight.
- Each incremental thing to extend it costs more than it creates value.
- It's underwater.
- If you can reduce it to a level of precision where everyone on earth would agree, then you can simply ask an LLM and not have to go into deeper formalization.
- LLMs allow a cutoff at a reasonably high floor of "good enough," making many more scenarios viable.
28. A pattern to work well with software generated by LLMs: start with the smallest artifact that works and then build on top of it.
- A pattern to work well with software generated by LLMs: start with the smallest artifact that works and then build on top of it.
- If the first iteration doesn't work, don't try to keep building on it.
- Iterate until it works, then build on it.
- Also true for human-built things!
- Only build on top of things that work.
- Putting more on top of a thing that doesn't work makes it more likely to never work.
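- The pattern can be sketched as a loop: regenerate the current layer until its test passes, and only then start the next layer. `generate` is a hypothetical stand-in for an LLM code generator:

```python
# Sketch of the "smallest artifact first" loop: only build on top of
# things that already work.

def grow(layers, generate, max_tries=5):
    """layers: list of (spec, test) pairs, smallest first."""
    built = []
    for spec, test in layers:
        for _ in range(max_tries):
            artifact = generate(spec, built)
            if test(artifact):
                built.append(artifact)  # only build on what works
                break
        else:
            # Never stack more layers on a broken foundation.
            raise RuntimeError(f"could not make {spec!r} work; stop here")
    return built
```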
29. LLMs are significantly better at writing smaller chunks of functionality.
- LLMs are significantly better at writing smaller chunks of functionality.
- Every additional feature in an app leads to combinatorial complexity.
- Assembly Theory also implies that the more steps to create the thing, the larger the space of possible options.
- LLMs do best when there are lots of structural examples of similar things in the training set.
- The more steps to create it, the exponentially fewer options there are in the training set.
- So slightly more complex bits of software are exponentially less likely to be well-generated by LLMs.
- The warring curves of logarithmic value for exponential cost again.
30. The logarithmic benefit for exponential cost curve creates a charismatic trap.
- The logarithmic benefit for exponential cost curve creates a charismatic trap.
- In the beginning, you get huge amounts of value for small amounts of effort.
- You then commit to that approach, but as you get further you start getting less and less return for more and more effort.
- There's never a good time to switch to a bottom-up exponential-value-for-logarithmic-cost curve, so you get stuck.
31. LLM-generated software is a charismatic trap.
- LLM-generated software is a charismatic trap.
- It looks cool but has a low ceiling.
- The idea of generating full, complex apps that are useful enough to exist as an isolated data island hits a low ceiling.
32. Some lenses are not multi-ply but 1.5 ply.
- Some lenses are not multi-ply but 1.5 ply.
- They give the superficial appearance of a multi-ply idea.
- Superficially compelling, but the closer you look the more empty it seems.
- Convincing only to people who don't know what multi-ply thinking looks like in that context.
33. I think "Agents" is a 1.5 ply frame for software in the era of AI.
- I think "Agents" is a 1.5 ply frame for software in the era of AI.
- It sounds insightful because it's not just chatbots, it's a step beyond.
- But the more you pull on the thread, the more letting balls of LLM agency take real actions on your behalf runs into limits.
34. Marketing a horizontal platform to consumers is hard.
- Marketing a horizontal platform to consumers is hard.
- Consumers, unless they hear a description of a precise problem they have, won't think "maybe that would work for me."
- Consumers are busy and distracted.
- One path is to market five vertical use cases in a trenchcoat, not even mentioning the horizontal platform underneath.
35. Why do you get consent dialogs when you use services?
- Why do you get consent dialogs when you use services?
- Why doesn't the service get consent dialogs about the terms that you assert if it wants to work with you?
- Because software is expensive, and the creator of the software has the power to define terms.
- "Don't like it? That's OK, just don't use it."
- The software is scarce which means the software creator wins.
- But we can flip that in a world where software is an afterthought.
36. Modern OSes treat the app like a black box, and primarily control its access to resources.
- Modern OSes treat the app like a black box, and primarily control its access to resources.
- As far as the OS is concerned, it doesn't know or care what pixels the app shows within its rectangle on the screen.
- But the OS can mediate access to sensitive resources, like the camera or notifications.
- More modern OSes can do things like give you an option to "Grant location access only while using the app."
- Imagine if you could extend this kind of mediation down to more granular subsets of the app's functionality.
- You'd get precise, niche control over subsets of the app's functionality: grant a bit of location data to one part of the app but not the others.
- In the limit you'd get the app broken up into tiny bits of grains of sand that could be poured, almost like a liquid, into any number of differently shaped containers.
- The OS would then have fine-grained legibility over all of the sensitive behaviors of the app and how they could be combined.
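- A minimal sketch of what per-grain permissions might look like. The names (`PermissionTable`, the grain ids) are invented for illustration, not any real OS API:

```python
# Sketch: permissions granted per grain of app functionality rather
# than per app, so the OS has fine-grained legibility.

class PermissionTable:
    def __init__(self):
        self._grants = set()  # (grain_id, resource) pairs

    def grant(self, grain_id, resource):
        self._grants.add((grain_id, resource))

    def check(self, grain_id, resource):
        return (grain_id, resource) in self._grants

table = PermissionTable()
table.grant("trip-planner", "location")  # one grain gets location...

def read_location(grain_id, table):
    # ...and every other grain of the same app is denied.
    if not table.check(grain_id, "location"):
        raise PermissionError(f"{grain_id} may not read location")
    return (37.77, -122.42)  # stand-in coordinates
```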
37. Most of the world isn't legible to computers.
- Most of the world isn't legible to computers.
- Humans can locomote themselves to physical locations in the world and look / hear / touch.
- But computers are by default blind and deaf and have to have special eyes and ears positioned and connected in the world.
- These eyes and ears are most commonly statically positioned.
- A lot of problems get way harder if you say "imagine you can't see anything, how would you do X task".
- A lot of things that are easy for humans are hard for computers not just because reasoning is missing, but because sensing is.
- Reasoning is easy now thanks to LLMs, so real-world sensing is the long pole.
- Even if there physically is a camera in the location, the idea of connecting it to a system that can always watch it and take actions is potentially terrifying.
38. A pattern to dull the downside risk of agents: have them only write "drafts".
- A pattern to dull the downside risk of agents: have them only write "drafts".
- The drafts still need to be activated by a human before executing the action in the real world.
- This provides a natural checkpoint to cap the most egregious downside risk.
- But this now means that users have to constantly check back in when the agent has a proposed task to do.
- If most of their useful actions (and research-gathering ones) could plausibly be dangerous, the agent and the user operating it will alternate between twiddling-thumbs mode, each waiting for the other.
- The amount of time the human spends actually doing the action gets smaller and smaller as the quality of recommendations and filters gets better.
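- The drafts pattern can be sketched as a queue where the agent may only propose, and nothing runs until a human activates it. The names here are illustrative, not a real agent framework:

```python
# Sketch of the "drafts only" pattern: the agent proposes actions
# freely, but the human checkpoint gates all execution.

class DraftQueue:
    def __init__(self):
        self.pending = []
        self.executed = []

    def propose(self, action):
        self.pending.append(action)  # agent writes a draft; nothing runs

    def activate(self, index):
        action = self.pending.pop(index)  # the human checkpoint
        self.executed.append(action())

q = DraftQueue()
q.propose(lambda: "email sent")
# Nothing has happened yet; the human reviews the draft, then:
q.activate(0)
```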
39. Transformative platforms often have their "order a pizza" demo.
- Transformative platforms often have their "order a pizza" demo.
- Here's one in the MCP ecosystem.
- In the early days of the web, when pages were mostly static, there was a pizza ordering demo that allowed you to order a pizza from a webpage.
- The demo was mostly smoke and mirrors: behind the scenes it was a cgi-bin script that sent a fax to a specific pizza place in Palo Alto.
- But still, it got people's imaginations going and seeing the potential of what this new thing could accomplish.
- The pizza demo helps set a beacon of what is possible to inspire others in the ecosystem to make it real.
40. Once someone sees a screenshot of your product, it becomes easier to copy.
- Once someone sees a screenshot of your product, it becomes easier to copy.
- Even if you described it comprehensively in language, it still feels abstract and hard to grasp concretely.
- That can be bad ("I don't understand it until I see it!") but it can also be good; people can get intrigued and feel alignment with the values, even before anyone can see it to copy it.
41. Some people write software because it's the only means to a particular end.
- Some people write software because it's the only means to a particular end.
- But if they could get that end without writing software they'd do that instead.
- Some people write software because they enjoy writing software; it's an end in and of itself.
- Most people write software because it's the best means to achieve a goal of theirs.
42. Cluetrain.com is a blast from the past of earlier eras of the web, but still deeply relevant.
- Cluetrain.com is a blast from the past of earlier eras of the web, but still deeply relevant.
- "Corporations do not speak in the same voice as these new networked conversations. To their intended online audiences, companies sound hollow, flat, literally inhuman."
- Companies are the hive mind, the average, of the organization.
- Sounds like one voice but is actually inhuman.
- Not too dissimilar from LLMs and why their "view from nowhere" voice sounds hollow.
43. I loved "If you're so smart, why can't you die?"
- I loved "If you're so smart, why can't you die?"
- Dives into LLM intelligence's fundamental character and limitations.
- One of the freshest, most thought-provoking things I've read in a while.
44. Kevin Kelly's 50 years of travel tips is excellent.
- Kevin Kelly's 50 years of travel tips is excellent.
- "The most significant criteria to use when selecting travel companions is: do they complain or not, even when complaints are justified? No complaining! Complaints are for the debriefing afterwards when travel is over."
- "Perfection is for watches. Trips should be imperfect. There are no stories if nothing goes amiss."
- "If you detect slightly more people moving in one direction over another, follow them. If you keep following this "gradient" of human movement, you will eventually land on something interesting—a market, a parade, a birthday party, an outdoor dance, a festival."
45. I thought this tweet about RLHF and taste was insightful:
- I thought this tweet about RLHF and taste was insightful:
- "the problem with RLHF is that a lot of humans:
- A. lack taste
- B. have different tastes
- A makes it bad, B makes it average"
46. When people interact with a big black box that has important effects on their life but is inscrutable, they tend to develop superstitious beliefs about how it works.
- When people interact with a big black box that has important effects on their life but is inscrutable, they tend to develop superstitious beliefs about how it works.
47. Sometimes a company will indemnify customers for any downside produced by their use of the product.
- Sometimes a company will indemnify customers for any downside produced by their use of the product.
- This can lead to much more usage of the product, because it removes the downside.
- But be careful: if you are the largest user, or loom way larger than the provider itself, then the provider could go out of business and you'd be left holding the bag.
- Sometimes companies get the thresholds in their models wrong, mispricing the worst-case scenario, and oops, they go out of business, and everyone else is left holding the bag.
48. Going from infinitesimal trust to zero trust requires infinite energy.
- Going from infinitesimal trust to zero trust requires infinite energy.
- A logarithmic benefit for an exponential cost.
- It turns out that you can go from "a fair bit of trust" to "not that much trust" to "barely any trust at all" quite cheaply!
- It's the very last step that's a doozy.
49. Big data problems are inherently challenging.
- Big data problems are inherently challenging.
- But cozy data problems are easier.
- Sometimes you can simply do it the obvious way... and at small enough scales, it's good enough!
50. The catalyst is the thing you didn't know you needed to make the reaction work.
- The catalyst is the thing you didn't know you needed to make the reaction work.
- The discontinuous secret.
- The missing key that unlocks the possibility you didn't realize was there.
51. Everyone wants to believe their subjective view is objectively true.
- Everyone wants to believe their subjective view is objectively true.
- Sarumans force their subjective view to be manifested in those who work for them.
- "You are replaceable, all that matters is your loyalty to me and your ability to perform the tasks that I assign you to a quality level I find satisfactory."
52. It takes one person to poop a party.
- It takes one person to poop a party.
- Imagine a dinner party with seven guests, all of whom show up with a desire to have an active, exploratory, open-ended discussion.
- An infinite game.
- Imagine one guest is approaching the conversation as a game to be won, maximizing points.
- Instantly the conversation collapses from an infinite game to a finite one.
- A confident and savvy host will politely push that party pooper to the sidelines of the conversation and regain control, allowing the infinite vibe to blossom again.
53. Imagine someone discovers a powerful lever that will cause the outcome they and others desire.
- Imagine someone discovers a powerful lever that will cause the outcome they and others desire.
- They look around and see that no one else has pulled this obvious lever.
- A Saruman will declare: "It must be that I'm the only one bold enough to pull the lever."
- A Radagast will answer: "No, you're the only one dumb enough to not see the indirect downside cost if you pull it, or shameless enough to not care."
- Maybe there's a non-obvious reason that this obvious lever hasn't been pulled?
54. "This is a big problem therefore it's an important one."
- "This is a big problem therefore it's an important one."
- Those two dimensions are distinct!
55. A red ocean is mature, the competition has ramped up.
- A red ocean is mature, the competition has ramped up.
- Lots of competitors and predators.
- Blue ocean is immature, open-ended.
- Blue oceans, if they're fertile, don't stay blue for long.
56. Single threaded ownership cuts through bureaucracy.
- Single threaded ownership cuts through bureaucracy.
- The single owner can counteract the dulling consensus forces of the bureaucracy.
- But be careful; that single owner can make a massive mess.
- If you swarm the single-owners intentionally, checking ambition with ambition, you can get the best of the swarm innovating (resilience, adaptability) without the downside of empowering the most active person to dominate everyone else.
- But the downside is you get a chaotic jumble!
57. People who are entirely focused on short term concerns will do things that are self-evidently a terrible idea from a slightly broader perspective.
- People who are entirely focused on short term concerns will do things that are self-evidently a terrible idea from a slightly broader perspective.
- If someone has a screw loose and is focusing entirely on short-term, watch out, with enough leverage they can do a ton of damage.
58. In organizations there's a trade off between efficiency of output and coherence.
- In organizations there's a trade off between efficiency of output and coherence.
- If you want everyone in "producing" mode all of the time, they can't be in "waiting" mode, waiting for the coordination point in another team to be reached.
- By default if you allow everyone to run at full speed at all times you get an incoherent mess.
- A way to balance both goals is to have a clear, ambitious goal for everyone to sight off of so it's messy but default converging.
- When people are waiting, everyone's twiddling their thumbs (but the performative version, twiddling by running around in circles). But at least the outcome is coherent.
- Which is more important in your context, coherence or resource utilization?
59. If everyone knows that everyone knows it can't work, it can't work.
60. The market can fail to deliver good outcomes when all of the buyers' "want-to-want" and "want" are misaligned in a consistent way.
- The market can fail to deliver good outcomes when all of the buyers' "want-to-want" and "want" are misaligned in a consistent way.
- If you have a thing everyone "wants to want" (e.g. operating efficiency of appliances) but no one actually "wants" (they buy whatever has the lowest purchase cost) then the market will fail to deliver.
- There won't be options that align what users "want to want" and what they "want" so the options will get more and more aligned to simply what they "want".
- Companies that want to compete to deliver the "want to want" can't, so they're pulled towards catering to the "want" or going out of business.
- "Want to want" is often longer-term; "want" is often hyper-short-term satisfaction.
- But if the government sets a regulation that sets a floor for all providers to align the "want to want" it can fix the market failure.
- Now competition works, and everyone competes to deliver the thing that best aligns the "want to want" with the "want".
- For example, ban plastic straws and now there's rigorous competition to provide a straw that is durable and cheap but also compostable.
61. I wish I had a few sliders to change the personality of the LLM.
- I wish I had a few sliders to change the personality of the LLM.
- Maybe formality, verbosity, cleverness, etc.
- One exercise I like to do in a new problem domain: try to imagine the MECE set of geek-mode sliders necessary to "describe" the full latent space that the product and all of its myriad use cases, now and into the future, cover.
- Then reconceptualize your current islands of functionality as regions in the latent space, and try to make it possible for users to smoothly slide between different regions.
- Similar in vibe to https://thesephist.com/posts/latent/#swimming-in-latent-space
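- A minimal sketch of sliders mapped onto a system prompt. The slider names and prompt phrasing are invented for illustration:

```python
# Sketch: a few 0-to-1 sliders mapped onto a system prompt, so a user
# can move smoothly through the "personality" latent space.

SLIDERS = ("formality", "verbosity", "cleverness")

def personality_prompt(**settings):
    parts = []
    for name in SLIDERS:
        value = settings.get(name, 0.5)  # unspecified sliders sit mid-range
        assert 0.0 <= value <= 1.0, f"{name} slider out of range"
        level = "low" if value < 0.33 else "high" if value > 0.66 else "medium"
        parts.append(f"{name}: {level}")
    return "Adopt this personality -- " + "; ".join(parts)
```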
62. The most important part of my knowledge management process is the weekly ritual of synthesis.
- The most important part of my knowledge management process is the weekly ritual of synthesis.
- I have a particular flow of taking notes, curating them, and organizing them in my own home-grown knowledge management tools.
- But the single most load bearing part is the discipline to take a few hours once a week to riffle through the notes and take the time to synthesize them.
- That's the hardest part to recreate.
63. One of the best feelings in the world: momentum on a thing you think matters.
- One of the best feelings in the world: momentum on a thing you think matters.
- When you feel it together as a team, it's transcendent.
- A powerful, auto-catalyzing force.
- But it can be asymmetrically spoiled by one Debbie Downer.
- One person who's clearly not engaged or clearly doesn't care.
64. Momentum is often a proxy for meaning.
- Momentum is often a proxy for meaning.
- The mid-life crisis often happens when you get to a point where your momentum stalls out; you don't fall but you're no longer climbing.
- That lack of momentum causes a lack of meaning.
- That also means that sometimes you can get momentum, but in a direction that turns out to not be fundamentally meaningful.