The same-origin model is too coarse.
It assumes operations within an origin are very common and operations across origins are very rare.
Within-origin operations are safe and easy.
Across-origin operations are dangerous and hard, and the same-origin model just kind of throws up its hands and gives no affordance for them.
"If you choose to share information across your domain wall, shrug, you're on your own. That could be fine… or it could be game-over levels of danger."
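A minimal sketch of why the model is binary. This is illustrative Python, not browser code, and the `same_origin` helper is invented; real browsers also normalize default ports and handle opaque origins, which this skips.

```python
from urllib.parse import urlparse

def same_origin(url_a: str, url_b: str) -> bool:
    """The same-origin check is a yes/no predicate over the
    (scheme, host, port) triple -- there is no middle ground."""
    a, b = urlparse(url_a), urlparse(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

# Everything inside the wall is fully trusted...
same_origin("https://app.example/page", "https://app.example/api")    # True
# ...and everything outside it is fully untrusted, with nothing in between.
same_origin("https://app.example/page", "https://other.example/api")  # False
```

The return type is the whole point: a single bit, with no way to express "share this one field, under these conditions, with that origin."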
Why does this happen?
Imagine a use case with data that transits over the origin boundary multiple times.
Each transit across the boundary is scary and hard.
So over time the use case will tend to simplify to remove hops across origin boundaries.
The way to resolve a hop is to move all of the necessary data onto one side of the boundary, removing the need for a hop.
Which side should the data move to?
All else equal, the side that already has more data.
Data gravity!
This weak but consistent force leads to hyper-centralization over time.
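The feedback loop above can be sketched as a toy simulation. Everything here is invented for illustration: the `simulate` helper, the starting numbers, and the rule that each resolved hop moves one unit of data to the side that already holds more.

```python
def simulate(a: int, b: int, rounds: int) -> tuple[int, int]:
    """Toy model of data gravity: two origins hold `a` and `b` units of
    data. Each round, one cross-origin hop is resolved by moving a unit
    onto whichever side is already larger (the cheaper place to put it)."""
    for _ in range(rounds):
        if a >= b and b > 0:
            a, b = a + 1, b - 1
        elif b > a and a > 0:
            a, b = a - 1, b + 1
    return a, b

# Even a modest initial imbalance ends in total centralization:
print(simulate(60, 40, 50))  # -> (100, 0)
```

The rule is deliberately crude, but it captures the dynamic: whichever side is ahead wins every subsequent round, so the smaller side drains to zero.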
But if the value of software lies in the combinatorial possibility of composing code written by different people into new wholes, then across-origin operations should be common.
A system that leaned into making across-origin compositions easy and safe could remake the laws of physics of software.