A rule of thumb for when prompt injection might be a problem: if the model can call external tools and can accept untrusted inputs.
But as long as the model can't call external tools, or the data comes only directly from a user or a trusted component, then prompt injection isn't too big of a worry.
But note that although the latter is typically obvious when done directly, it can be very easy to do accidentally indirectly.
For example, if you use RAG in your pipeline to create a summary with an LLM with no tool use (safe) but then pass that summary on to a downstream LLM that allows tool use (potentially unsafe).