Putting in place a financial incentive for an outcome you want is like a monkey's paw.
Highly effective at driving outcomes... but almost certainly not the outcomes you wanted.
No matter how clever you think you're being with the incentives you've built, the emergent real-world situation will select for something you didn't intend.
It's impossible to model, in situ, all of the precisely balanced forces of agents scrambling to get an edge in a recursive bramble.
The swarm will find the actual optimum far faster than the designer of the incentives ever could.
This is Goodhart's law: when a measure becomes a target, it ceases to be a good measure.
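The dynamic can be sketched in a few lines. This is a toy illustration (the agents, values, and scores are all invented for the example): most agents' proxy scores roughly track their true value, but one agent optimizes the proxy directly, and selecting on the proxy rewards exactly that agent.

```python
import random

random.seed(0)

# Honest agents: the proxy score roughly tracks the true value.
agents = [(v, v + random.uniform(-0.5, 0.5)) for v in [1.0, 2.0, 3.0]]

# One agent games the metric: maximal proxy score, minimal true value.
agents.append((0.1, 10.0))

# The incentive system selects on the proxy, not the true value...
best = max(agents, key=lambda a: a[1])

# ...so the gamed entry wins despite having the lowest true value.
print(best)
```

The proxy was a perfectly reasonable measure until an agent was rewarded for maximizing it; then the selection pressure itself broke the correlation.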