A utilitarian / reductionist looks at the success of AI systems in chess and sees a model for fixing the world.
Chess engines (at least before the AlphaZero class of models) worked by encoding a tuned utilitarian accounting of board states.
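To make that concrete, here is a minimal sketch of the kind of "utilitarian accounting" a classical engine performs: sum up material values per side and call the difference the score. The piece values and the flat board representation are illustrative assumptions, not any particular engine's implementation (real engines add positional terms, mobility, king safety, and search on top).

```python
# Illustrative material-count evaluation, the simplest form of a
# classical chess engine's utility function. Uppercase = white,
# lowercase = black. Values are the conventional textbook weights.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """Score a position (a list of piece codes) from white's
    perspective: positive means white is materially ahead."""
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# White has a rook and pawn; black has a knight and pawn:
# (5 + 1) - (3 + 1) = +2 for white.
print(evaluate(["K", "R", "P", "k", "n", "p"]))  # → 2
```

The whole scheme works precisely because chess lets you enumerate a closed set of categories and assign each a stable number, which is the assumption the rest of this argument attacks.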
But chess is a perfect-information game: a closed system with clearly defined win conditions and rigid categories.
None of those properties hold in complex environments, which are the ones we live in!
The utilitarian argument of "as long as you have the perfect accounting of misery and thriving points, it's easy" is a smuggled infinity.
Everything past that "perfect" is absurd, because perfection is impossible in complex environments.