LLMs need to be understood via your System 1, not your System 2.
LLMs are inherently complex and hard to reason about.
The only way to use them effectively is to develop significant experiential knowledge with them: knowhow.
Engineers are most familiar with using the CS lens to understand a problem.
But CS is fundamentally a hyper-reductionist lens.
It cannot be used to understand complex phenomena.
I can't tell you how many extremely smart engineers I know who have endeavored to "understand" LLMs by building one themselves.
This takes months of careful study and experimentation, and at the end you get a crappy little model that is orders of magnitude worse than the leading models.
The key question with LLMs is not how they work, but what you can use them for.
The latter question is impossible to answer with CS, especially for a non-ML expert.
The only way to answer that question is to actually play with them, deeply, and often.
Knowing how an LLM works gives you zero insight into how to use one.
To become an LLM wizard, you have to develop the knowhow.