You know many experts in your network who are the absolute best at a given topic.
A good way to approach hard problems in those areas is to ask yourself, "What would X do in this situation?"
For example, for engineering architectural issues, I ask myself, "What would Dimitri do?"
You can get better and better at this over time.
For each individual situation, form a hypothesis of what Dimitri would say.
Then ask Dimitri.
If you got it right, great! If that happens consistently over time, you've successfully absorbed a simulation of Dimitri's know-how.
If you got it wrong, update your model.
This process is finicky and takes a lot of time.
But imagine that this expert has written quite a lot about their area of expertise.
This is rare, but it happens!
If you somehow had read all of their writing and could recall all of it, your prediction would be way better.
That's hard for humans to do… but easy for LLMs!
Feed it the expert's writing — directly in its context window, or retrieved on demand via embeddings — and it can do a convincing facsimile of that expert's reasoning.
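The retrieval step above can be sketched in a few lines. This is a toy illustration, not a real system: it uses a bag-of-words stand-in for embeddings, and the expert name, sample writings, and prompt wording are all invented for the example. A production version would swap in a learned embedding model and an actual LLM call.

```python
# Toy sketch of retrieval-grounded expert simulation: "embed" each passage
# of the expert's writing, rank passages by similarity to the question,
# and prepend the best matches to the prompt.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def build_prompt(question: str, corpus: list[str], k: int = 2) -> str:
    # Rank the expert's passages by similarity to the question,
    # then stitch the top k into a persona prompt.
    q = embed(question)
    ranked = sorted(corpus, key=lambda p: cosine(q, embed(p)), reverse=True)
    context = "\n".join(f"- {p}" for p in ranked[:k])
    return (
        "You are simulating Dimitri, drawing only on his writing below.\n"
        f"Relevant excerpts:\n{context}\n\n"
        f"Question: {question}"
    )


# Hypothetical snippets standing in for the expert's corpus.
writings = [
    "Prefer boring architecture: fewer moving parts beat clever ones.",
    "Cache invalidation is where most distributed designs go wrong.",
    "Hire for curiosity; skills can be taught.",
]

print(build_prompt("How should we design the cache layer?", writings))
```

The design choice worth noting is that retrieval keeps the simulation grounded: the model answers from the expert's actual words rather than a vague impression of them, which is what makes the facsimile convincing.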
If you had that, you could assemble little virtual rooms of your preferred experts on different topics and have them discuss the answer.
A "Gordon" to discuss product and design issues, a "Dimitri" for the architectural issues, an "Erika" for the emergent org and cultural issues…