Honestly, you are just confused.
With LLMs, the "knowing" you're describing is trivial and doesn't really constitute knowing at all. It's just the physics of the substrate. When people say LLMs are a black box, they aren't talking about the hardware or the fact that it's "math all the way down." They are talking about interpretability.
If I hand you a 175-billion parameter tensor, your 'knowledge' of logic gates doesn't help you explain why a specific circuit within that model represents "the concept of justice" or how it decided to pivot a sentence in a specific direction.
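To make that concrete, here's a minimal sketch (assuming the Hugging Face transformers library and the small public GPT-2 checkpoint as a stand-in for a 175B-parameter model) of what "handing you the tensor" actually gets you:

```python
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")

# First transformer block's MLP input projection: a 768 x 3072 grid of floats.
weights = model.h[0].mlp.c_fc.weight
print(weights.shape)    # torch.Size([768, 3072])
print(weights[:3, :5])  # a handful of raw values

# Nothing here is labeled "justice", "negation", or "sentence pivot".
# Knowing the matmuls and the hardware underneath tells you nothing about
# which of these numbers, in combination, implement which behavior.
```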
On the other hand, the very professions you cited rely on interpretability. A civil engineer doesn't look at a bridge, dismiss it as "a collection of atoms," and stop there. They can point to a specific truss, explain exactly how it handles tension and compression, and tell you under what conditions it would fail. A software engineer can step through code in a debugger and tell you exactly why a specific if statement fired.
We don't have anything close to that for LLMs, so why would you say we have an idea of what's going on?