Optimizations fundamentally ignore externalities

Optimizations can only optimize across things internal to the model being optimized.
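A toy sketch of this point, with made-up numbers: an optimizer minimizing only the cost terms we chose to put inside the model picks a different answer than one minimizing the true cost, which includes a term we drew outside the boundary.

```python
# Toy illustration (hypothetical costs): the optimizer can only see
# the terms we chose to include in the model.
def internal_cost(x):
    # What the model includes, e.g. production cost, minimized at x = 10.
    return (x - 10) ** 2

def external_cost(x):
    # What we left outside the boundary, e.g. pollution rising with output.
    return 4 * x

candidates = range(0, 21)

# Optimum according to the model alone.
best_internal = min(candidates, key=internal_cost)

# Optimum once the externality is counted.
best_total = min(candidates, key=lambda x: internal_cost(x) + external_cost(x))

print(best_internal)  # 10: looks optimal from inside the model
print(best_total)     # 8: the externality shifts the true optimum
```

The optimizer is not wrong; it is faithfully optimizing the model it was given. The error was made earlier, when the boundary was drawn.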

Everything has externalities; no system has a bright-line boundary.

What is "inside" and "outside" the system is a human opinion about what will be most useful for the problem at hand.

Where to draw the lines of the model is a judgment call, a simplification.

But after making the call, you tend to forget it was a judgment call and come to treat the boundary as an objective fact: you implicitly assume there are no externalities, or that any that exist are minor.

Most systems have far stronger externalities than we realize.