Prompt Injection and Data Security Risks
Because Fusion AI Agents are powered by LLMs and can retrieve enterprise data and invoke tools/APIs, they are exposed to prompt injection, an emerging risk across the industry: malicious instructions embedded in user prompts or retrieved documents could attempt to override system instructions or exfiltrate sensitive data.
Could you please share:
- What protections Oracle currently provides to mitigate prompt injection risks in Fusion AI Agents?
- Whether Oracle provides built-in guardrails, input/output filtering, or policy enforcement mechanisms for agents interacting with enterprise data sources
- Any recommended best practices from Oracle for securing RAG sources or preventing malicious instructions embedded in documents or external inputs?
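For context on the third point, here is a minimal sketch of the kind of input filtering we are asking about: scanning retrieved document chunks for injection phrases before they enter the RAG context. The patterns and function names are hypothetical illustrations, not Oracle's implementation; production guardrails typically use trained classifiers rather than regexes.

```python
import re

# Hypothetical injection-phrase patterns for illustration only.
# Real guardrails use ML-based classifiers, not simple keyword lists.
INJECTION_PATTERNS = [
    r"ignore\s+(\w+\s+){0,3}instructions",      # e.g. "ignore all previous instructions"
    r"disregard\s+(the\s+)?system\s+prompt",
    r"reveal\s+(your|the)\s+(system\s+prompt|instructions)",
]

def scan_retrieved_text(text: str) -> list[str]:
    """Return the injection patterns matched in a retrieved document chunk."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]

def is_suspicious(text: str) -> bool:
    """Flag a chunk for quarantine/review before it reaches the agent's prompt."""
    return bool(scan_retrieved_text(text))
```

A flagged chunk would be quarantined or stripped rather than passed to the agent, which is the behavior we are hoping Oracle's built-in filtering provides.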