Swarms of 'agents' can come to better conclusions than a single very smart agent.
This holds even when each member of the swarm runs the same model as the single very smart agent.
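A toy illustration of this claim, under assumptions of my own (independent noisy estimates, simple averaging — not anything the text specifies): give every agent the same error-prone "model", and the aggregated swarm answer is reliably closer to the truth than a lone answer of identical quality.

```python
import random

random.seed(0)

TRUE_VALUE = 10.0
NOISE = 2.0       # every agent, swarm member or loner, has the same error scale
TRIALS = 1000
SWARM_SIZE = 25

def estimate():
    """One agent's noisy estimate -- the same 'model' in every case."""
    return TRUE_VALUE + random.gauss(0, NOISE)

single_err = 0.0
swarm_err = 0.0
for _ in range(TRIALS):
    # a single very smart agent answers once
    single_err += abs(estimate() - TRUE_VALUE)
    # a swarm of identical agents answers independently, then averages
    swarm = sum(estimate() for _ in range(SWARM_SIZE)) / SWARM_SIZE
    swarm_err += abs(swarm - TRUE_VALUE)

print(f"mean error, single agent:    {single_err / TRIALS:.2f}")
print(f"mean error, swarm of {SWARM_SIZE}:    {swarm_err / TRIALS:.2f}")
```

The swarm wins only because its members' errors are independent — which is exactly what the boundary argument below is about: structures that keep agents from collapsing into one correlated voice.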
Perhaps the reason for this is the same reason that boundaries emerge in every complex adaptive system.
Within a boundary, signals propagate like a broadcast.
The larger the boundary's volume (the more emitters it contains) and the higher the rate at which they emit, the more cacophonous the background noise.
Everything within the boundary coheres toward the centroid, the average point; signals far from the centroid are impossible to hear.
Boundaries let different regions maintain different centroids with less cacophony, which allows more diverse ideas to be tried before they are drowned out.
The good ideas can then spread out through the boundaries once found.
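This centroid dynamic can be sketched with a minimal opinion-pull model (my own construction, not one from the text): agents hold an "idea" (a number) and are pulled toward the centroid of everyone they can hear. One unbounded pool collapses to a single centroid; the same agents split into bounded groups keep several distinct centroids alive.

```python
import statistics

def step(ideas, groups, pull=0.5):
    """Move each agent's idea toward the centroid of its own group."""
    new = list(ideas)
    for group in groups:
        centroid = sum(ideas[i] for i in group) / len(group)
        for i in group:
            new[i] = ideas[i] + pull * (centroid - ideas[i])
    return new

ideas = [float(i) for i in range(12)]                    # initially diverse ideas
one_pool = [list(range(12))]                             # no boundaries
bounded = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]   # three bounded regions

pooled, parted = ideas, ideas
for _ in range(20):
    pooled = step(pooled, one_pool)
    parted = step(parted, bounded)

# spread of surviving ideas: near zero without boundaries, large with them
print("spread without boundaries:", round(statistics.pstdev(pooled), 3))
print("spread with boundaries:   ", round(statistics.pstdev(parted), 3))
```

Within each bounded group the ideas still cohere, but the three groups settle on three different centroids — diversity that the single pool erases.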
The thoughts within one model or mind are similar: a cacophony of information sloshing around.
When you have to distill a thought into a stream of language to transmit to another model or mind, you have to collapse that wave function into one specific information stream.
Because each distillation picks out only part of the internal cacophony, different distillations of similar states can carry different perspectives.
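As a hedged sketch of that collapse (the state and names here are hypothetical, purely illustrative): treat an agent's internal state as a weighted mix of ideas, and a message as one concrete stream sampled from it — the same rich state can distill into different streams.

```python
import random

random.seed(1)

# hypothetical internal state: a weighted superposition of competing ideas
internal_state = {"idea_a": 0.4, "idea_b": 0.35, "idea_c": 0.25}

def distill(state, length=5):
    """Collapse a weighted internal state into one concrete message stream."""
    ideas, weights = zip(*state.items())
    return tuple(random.choices(ideas, weights=weights, k=length))

# two transmissions from the same internal state
msg1 = distill(internal_state)
msg2 = distill(internal_state)
print(msg1)
print(msg2)
```

The internal state never travels; only a collapsed stream does, and each collapse is a choice about which part of the cacophony to voice.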
The distillation is where the OODA loop (observe, orient, decide, act) emerges: each emission is a decision about what, out of everything sloshing around, to transmit next.
The OODA loop, in turn, is where interaction between distinct agents emerges.
Emergence, in other words, happens because of the distillation required to communicate.