· Bits and Bobs 7/21/25
  • I think this paper on Full Stack Alignment is very important in an age of AI.
    • Some of my friends were co-authors!
    • It points out that revealed preferences can't capture a user's aspirations, since addiction and other forms of manipulation can shape what a user's behavior reveals.
    • What we "want to want" and what we want are different.
    • Any quantitative metric of desire will be thin and miss the deep nuance of real human aspirations.
    • Luckily, LLMs allow qualitative insight at quantitative scale; for the first time, computers can understand what we want to want, and help us achieve it… if the system is aligned with the user's interest.
