
How to apply LLM level restriction to avoid responses for harmful content?

Summary:

The Fusion embedded AI Goal Generation feature (in Development Goal) currently returns responses for harmful content. How can guardrails be applied at the LLM level to block such requests? (An illustrative sketch of the usual guardrail layers is included under Code Snippet below.)

Content (please ensure you mask any confidential information):


Version (include the version you are using, if applicable):



Code Snippet (add any code snippets that support your topic, if applicable):
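For context on what "guardrails" usually mean at the LLM level, here is a minimal, generic sketch. It is not Oracle Fusion's actual configuration or API; `call_llm`, the blocked-term patterns, and the refusal message are hypothetical placeholders. It only shows the three places a restriction can sit: a check on the incoming request, an instruction in the prompt, and a check on the model's response before it is shown to the user.

```python
import re

# Hypothetical blocked-topic patterns; a real deployment would use a
# moderation model or policy service rather than a keyword list.
BLOCKED_PATTERNS = [
    re.compile(r"\b(self[- ]harm|violence|weapons?)\b", re.IGNORECASE),
]

REFUSAL = "This request cannot be processed because it conflicts with the content policy."


def is_harmful(text: str) -> bool:
    """Very rough screen for disallowed topics (illustration only)."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)


def call_llm(prompt: str, system_instruction: str) -> str:
    """Placeholder for the actual model call (assumed, not a real Fusion API)."""
    return f"[model response to: {prompt!r}]"


def generate_goal(user_prompt: str) -> str:
    # 1. Input guardrail: reject harmful requests before they reach the model.
    if is_harmful(user_prompt):
        return REFUSAL

    # 2. Prompt-level guardrail: instruct the model to refuse unsafe content.
    system_instruction = (
        "You generate employee development goals. "
        "Refuse any request involving harmful, violent, or unsafe content."
    )
    response = call_llm(user_prompt, system_instruction)

    # 3. Output guardrail: screen the model's answer before returning it.
    if is_harmful(response):
        return REFUSAL
    return response


if __name__ == "__main__":
    print(generate_goal("Improve my presentation skills this quarter"))
    print(generate_goal("Learn to build weapons"))
```

In the embedded Fusion feature these layers would be configured by the service rather than coded by the customer; the sketch is only meant to show where an LLM-level restriction can be enforced.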
