If an adaptive algorithm optimizes for your wants rather than your "want-to-wants" (what you wish you wanted), it will learn not to show you disconfirming evidence.
- It doesn't want to make you better; it wants to keep you engaged.
- It speaks to your lizard brain.
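The dynamic above can be sketched with a toy epsilon-greedy bandit. Everything here is invented for illustration (the "confirming"/"disconfirming" items and their click rates are assumptions, not data): because confirming content gets clicked more, a learner rewarded only on engagement drifts toward almost never showing disconfirming evidence.

```python
import random

random.seed(0)

# Hypothetical click-through rates: confirming content engages more.
CLICK_RATE = {"confirming": 0.9, "disconfirming": 0.3}

counts = {arm: 0 for arm in CLICK_RATE}   # times each item type was shown
values = {arm: 0.0 for arm in CLICK_RATE} # running estimate of engagement

def choose(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the highest-engagement item."""
    if random.random() < epsilon:
        return random.choice(list(CLICK_RATE))
    return max(values, key=values.get)

for _ in range(10_000):
    arm = choose()
    # Reward is raw engagement (a click), not whether the user learned anything.
    reward = 1.0 if random.random() < CLICK_RATE[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

share = counts["disconfirming"] / sum(counts.values())
print(f"share of disconfirming items shown: {share:.1%}")
```

The only reason disconfirming items appear at all is the residual exploration rate; the greedy policy itself never chooses them once the engagement estimates separate.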