Sensitive-information guardrails when using an unhosted LLM
Summary:
With the new 26A feature that allows documents to be uploaded into the Chat, sensitive data could reach a public LLM without the user's knowledge, specifically when the agent uses an LLM that is not hosted in the customer's tenancy (e.g., the full version of OpenAI GPT-5). Are there guardrails to help prevent accidental data leaks to public LLMs?
Content (please ensure you mask any confidential information):
With the new 26A feature that allows documents to be uploaded into the Chat, it is possible for sensitive data to make its way to a public LLM without the user knowing. This concern applies specifically when the agent uses an LLM that is not hosted by the customer in their own tenancy (e.g., the full version of OpenAI GPT-5). Are there guardrails to help prevent accidental data leaks to public LLMs?
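For illustration, one common class of guardrail is a client-side scan that checks an uploaded document for sensitive patterns before it is ever sent to an externally hosted model. The sketch below is hypothetical and not a description of any specific product's behavior; the pattern set (`SENSITIVE_PATTERNS`) and function names are assumptions, and a real data-loss-prevention tool would cover far more categories.

```python
import re

# Hypothetical pattern set illustrating the kind of pre-upload scan
# a guardrail might perform; real DLP tooling is far more thorough.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_document(text: str) -> dict:
    """Return the sensitive-data categories matched in `text`."""
    return {
        name: pattern.findall(text)
        for name, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    }

def safe_to_upload(text: str) -> bool:
    """Allow the upload only if no sensitive pattern matches."""
    return not scan_document(text)
```

A guardrail of this shape would block (or redact and re-prompt) before the document leaves the customer's environment, rather than relying on the external LLM provider's handling of the data.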