· Bits and Bobs 3/3/25
  • I thought the Stratechery interview with Ben Evans on AI was interesting. A few of my highlights:
    • "[LLMs] are good at things that don't have wrong answers."
    • "With an intern, the power of the intern is, you can tell them why they did it wrong. … One of the challenges with these models is, you can't really teach them. You're dependent on, 'Hopefully my feedback gets back into the next training run and it gets better.' It's a weird inversion where the way to get more uses of these models is not to teach the models, you have to teach yourself how to use the model and understand its limitations and what it can be good at so that you give it appropriate jobs in the future."
    • "I look at Grok and I think, okay, in less than two years, you managed to produce a state-of-the-art model… What this tells us is [LLMs are] a commodity."
    • "If you went to 1996, 1997 and said the entire future of the Internet is the feed, people wouldn't know what you were talking about. Like a BBS forum? No, it's not going to be in chronological order, it's going to be algorithmically ranked, it's going to be personalized to every single person, and that's actually the entire foundation of the consumer Internet is the algorithmic, individualized feed, but no one could imagine it years into the Internet, and I wouldn't be surprised if in 2040 or 2045, there's this explosion in entirely new categories of applications we can't think of, that if we went back to this podcast conversation, it'd be like, 'Man, you guys had no idea.'"
    • "There's just a really stark fundamental difference between 100% accuracy and 99% accuracy."
    • "I feel like [OpenAI and Anthropic] have gone to market ahead of product-market fit. I feel like the prompt looks like a product but isn't, or it's only a product for certain segments, and certain kinds of people, and certain use cases."
    • "The GUI is a way of surfacing what the computer can do, [so] that you don't have to memorize commands. But the other thing is that the GUI is the sort of instantiation of a lot of institutional knowledge about what the user should be doing here."
    • "The Linux approach, you start with the tech and then put buttons on the front. The Apple approach, you start with the buttons and then build the tech behind it."
    • "LLMs just give you the answer, unlike a Google, which there was a two-way relationship [with the publisher]. Yes, we're pulling the information from you, but we're also giving you traffic. So there is a payoff here and there is an incentive for you to keep creating stuff. Is it just intrinsic to AIs, whether in the case of analysts or in the case of web pages, where it's a one-time harvest and there's a real paucity in terms of seeding what's next?"
    • "Creativity is … doing something which scores wrong in a machine learning system. You are doing something that's wrong that doesn't match the pattern, but doesn't match the pattern in a good way. And so all this push to make the LLMs less error-prone and more accurate is, if you squint, indistinguishable from squashing out, 'we've got to get Galileo out of the system, he's hallucinating.'"
    • "The original idea for the plot for the Matrix was that the people would collectively be the compute… all the human brains collectively were the brain that was running the Matrix, which makes much more sense. That's clearly how Google works, that's how Instagram works, that's how TikTok works; they're aggregating what people do and this is what LLMs do."
    • "Does the model sit at the top and run everything else or do you wrap the model underneath as an API call inside traditional software?"
      • To which I counter: why does the surrounding software have to be traditional software?
      • Why can't it be a new kind of AI-native software?
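
The two shapes in that last question can be sketched minimally in Python. Everything here is a hypothetical stand-in — `call_llm`, `search_database`, and the tool-routing are illustrative, not any real model API or application code:

```python
# All names here are illustrative stand-ins, not a real API.
def call_llm(prompt: str) -> str:
    return f"<answer to: {prompt!r}>"

def search_database(query: str) -> str:
    return f"rows matching {query!r}"

# Shape 1: the model sits on top. It drives the loop, and traditional
# code is exposed to it as callable tools.
def model_on_top(request: str, tools: dict) -> str:
    plan = call_llm(f"Which of {sorted(tools)} answers {request!r}?")
    # In a real agent, the reply in `plan` would select the tool; the
    # choice is hard-coded here to keep the sketch self-contained.
    return tools["search"](request)

# Shape 2: the model sits underneath. Ordinary software owns the
# control flow, and the model is one function call inside it.
def traditional_app(request: str) -> str:
    results = search_database(request)        # deterministic step
    return call_llm(f"Summarize: {results}")  # LLM as a component

print(model_on_top("best sushi", {"search": search_database}))
print(traditional_app("best sushi"))
```

An "AI-native" answer might dissolve the dichotomy entirely, with model and software sharing control flow rather than one wrapping the other.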
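
The 99%-versus-100% point above is starker than it sounds, because per-step error compounds across chained steps. A quick Python illustration (the step counts are made up for the sake of the example):

```python
# If each step of a task succeeds independently with probability p,
# an n-step chain succeeds with probability p ** n.
def chain_success(p: float, n: int) -> float:
    return p ** n

for n in (1, 10, 50):
    print(f"{n:>2} steps at 99% each -> {chain_success(0.99, n):.1%} overall")
# A 99%-accurate step looks fine in isolation, but fifty chained steps
# succeed only about 60% of the time; 100% accuracy has no such decay.
```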
