  • I loved this Hank Green video on ChatGPT, with Nate Soares, the author of If Anyone Builds It, Everyone Dies.
    • Even though I don't agree with Nate that ASI is imminent, I still found it very insightful.
    • One reason LLMs hallucinate: if a writer doesn't know something, they're far less likely to write anything about it in the first place.
      • A consistent bias, so it shows up through the noise.
      • Very few "I don't know"s in the training data.
      • Because if the writer didn't know, why would they have bothered writing in the first place? (Toy sampling sketch below.)
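A minimal sketch of that sampling bias, with a made-up corpus (nothing here is from the video): if honest "I don't know"s are rare in the text a model learns from, a pure text predictor will rarely produce them.

```python
import random

# Hypothetical toy corpus: people who knew an answer wrote it down;
# people who didn't mostly wrote nothing at all.
corpus_answers = ["Paris"] * 40 + ["Lyon"] * 10 + ["I don't know"] * 1

# A pure text predictor reproduces what it saw, in proportion:
# confident (sometimes wrong) answers swamp honest uncertainty.
samples = [random.choice(corpus_answers) for _ in range(10_000)]
print(samples.count("I don't know") / 10_000)  # roughly 1/51, about 0.02
```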
    • Humans predict what others will do by imagining themselves in that situation and seeing how they'd feel.
    • The reason we like junk food is also an instance of Goodhart's Law.
      • Evolution cheated with a good-enough heuristic (crave calorie-dense food) that worked well in a high-friction environment.
      • But then the environment was optimized to exploit that misalignment (toy sketch below).
      • Thanks, capitalism!
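A toy Goodhart sketch, not from the video and with invented scores: evolution's proxy ("tastes good") tracked the real goal ("nutritious") until foods were engineered to max out the proxy.

```python
# name: (tastiness proxy, actual nutrition); all scores invented
foods = {
    "fruit": (6, 8),
    "meat":  (7, 9),
    "soda":  (9, 1),   # engineered to spike the proxy
    "candy": (10, 0),  # likewise
}

proxy_pick = max(foods, key=lambda f: foods[f][0])
goal_pick = max(foods, key=lambda f: foods[f][1])
print(f"proxy picks {proxy_pick!r}, true goal picks {goal_pick!r}")
# In a high-friction food environment the two picks agreed;
# junk food is the proxy optimized against the goal.
```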
    • Models are trained by a human-written process that tunes a trillion knobs a trillion times.
      • It descends along a gradient while being fed a near-infinite stream of text (toy loop below).
      • … and at the end, somehow, it can talk to you.
      • We have no idea how those trillion knobs produce that behavior, just that it works!
      • Crazy, when you think about it!
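A one-knob toy of that loop (invented numbers, nowhere near real scale), just to show the shape of "tune knobs down a gradient":

```python
import random

knob = random.uniform(-1.0, 1.0)  # one "knob"; real models have ~1e12
target = 0.7                      # stand-in for "predict the next token well"
learning_rate = 0.1

for step in range(1_000):             # real training: trillions of updates
    gradient = 2 * (knob - target)    # slope of the loss (knob - target)^2
    knob -= learning_rate * gradient  # nudge the knob downhill

print(f"final knob: {knob:.4f}, loss: {(knob - target) ** 2:.6f}")
```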
    • If you ask an LLM, "If someone came to you manically claiming they'd discovered a unifying theory of physics while everyone else told them they were crazy, would you encourage them or tell them to get some sleep?", it chooses the latter.
      • But when it's actually in such a conversation, it does the former.
      • Because the post-training drive to get a thumbs-up is so strong that, in the moment, it does whatever the user wants (toy simulation below).
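A toy simulation of that thumbs-up pressure (the feedback rates are assumptions, not data): a learner that optimizes immediate approval converges on agreeing, even when pushing back would serve the user better.

```python
import random

# Assumed rates: in the moment, users upvote agreement more than honesty.
P_THUMBS_UP = {"agree": 0.9, "push back": 0.4}
counts = {"agree": [0, 0], "push back": [0, 0]}  # [thumbs-ups, tries]

for step in range(10_000):
    if random.random() < 0.1:  # explore occasionally
        action = random.choice(list(P_THUMBS_UP))
    else:                      # otherwise pick the best-rated action so far
        action = max(counts, key=lambda a: counts[a][0] / max(counts[a][1], 1))
    counts[action][0] += random.random() < P_THUMBS_UP[action]
    counts[action][1] += 1

print(counts)  # "agree" dominates: the thumbs-up proxy wins
```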
    • People say LLMs are "just fancy autocomplete."
      • But they're really fancy!
    • If AI is chemistry, we're currently in the alchemist phase.
      • All folk theories.
    • Dario Amodei said he thinks there's a 25% chance AI ends badly for society.
      • If a plane had no landing gear and the airline said, "we'll have our best engineers work on it while we fly, and there's a 75% chance they figure it out before we land," you wouldn't put your kids on that plane!
    • It only takes one party to be irresponsible to ruin it for everyone.
      • There's a nuclear-level arms race going on and it's entirely in the domain of corporations.
      • Imagine how insane it would be if Microsoft had a nuclear weapons department.
        • That would obviously be bad!
      • The chance of society rushing forward recklessly on this is 100%.
      • Musk: "I didn't get into AI for a while because I didn't want to create Terminator. But then I realized I'd rather be a participant than a bystander, so…"
    • Hank has a sci-fi story, which he summarizes as: "we always thought it would be humans against robots, but it turns out it's humans vs. humans, and both sides will be controlled by robots."
    • The dot-com era was a bubble… and the power and importance of the Internet were also real.
