An LLM is like jello.
The jello mold sets the constraints that the jello flows into.
In the '70s there were all kinds of novelty jello molds in various shapes.
You need an external structure that sets the laws of physics, a cage, a jello mold for the slime mold to expand into.
LLMs are inherently, fundamentally gullible.
You cannot rely on them for the structure of any part of your security model.
A GPT could configure a solid box to have certain privacy properties, but it can't be the box.
The structure is what makes it safe.
The LLMs are what make it magic.
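The pattern above can be sketched in a few lines: the model proposes, deterministic code disposes. This is a minimal illustration, not a real security layer, and every name in it (`untrusted_llm_suggestion`, `ALLOWED_ACTIONS`, `execute`) is hypothetical.

```python
# The "jello mold": a fixed allowlist defined outside the model's reach.
ALLOWED_ACTIONS = {"read_public_doc", "summarize"}

def untrusted_llm_suggestion() -> str:
    # Stand-in for model output. Because LLMs are gullible,
    # treat whatever comes back as attacker-controlled text.
    return "delete_all_files"

def execute(action: str) -> str:
    # The box: a hard check the model cannot talk its way around.
    # The model can only flow into the shapes this code permits.
    if action not in ALLOWED_ACTIONS:
        return f"refused: {action!r} is outside the allowlist"
    return f"ran {action}"

print(execute(untrusted_llm_suggestion()))  # refused, no matter how persuasive the text
print(execute("summarize"))                 # a permitted shape runs normally
```

The point of the sketch: the privacy property holds even if the model is fully compromised, because the enforcement never consults the model.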