The "a sufficiently powerful general model will solve everything" fallacy.
- Similar to proposing that "instead of making self-driving cars, we should make a humanoid robot that can fit in any car and drive anything. Why bother with self-driving?"
- First, that's the hard way around unless the models are already so good that they can simply do it from the start.
- There's no gradient to climb: either it's viable at today's quality or it's not.
- Second, it can never be anything other than a faster horse.
- It can't do things that aren't human-shaped in the first place.