How do we apply LLM-level restrictions so responses are not generated for harmful content?
Summary:
The Fusion embedded AI Goal Generation feature (in Development Goals) currently returns responses even for harmful prompts. How can we add guardrails so such requests are refused?
Content (please ensure you mask any confidential information):
Version (include the version you are using, if applicable):
Code Snippet (add any code snippets that support your topic, if applicable):
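For context, a common vendor-agnostic guardrail pattern is to moderate both the incoming prompt and the model's output before anything is shown to the user. The sketch below is a minimal illustration of that pattern only, not the actual Fusion mechanism: `call_llm` is a hypothetical stand-in for the AI service call, and the keyword classifier is a deliberately simple placeholder for a real moderation model or policy service.

```python
# Minimal input/output moderation sketch.
# Assumptions: call_llm is a hypothetical stand-in for the model call;
# the keyword classifier is a placeholder for a real moderation service.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b(weapon|self[- ]harm|violence)\b", re.IGNORECASE),
]

REFUSAL = "Sorry, this request cannot be completed because it violates content policy."

def is_harmful(text: str) -> bool:
    """Very rough placeholder classifier; swap in a real moderation API."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)

def call_llm(prompt: str) -> str:
    """Hypothetical model call; replace with the actual AI service client."""
    return f"Generated development goal for: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Guardrail 1: screen the incoming prompt before it reaches the model.
    if is_harmful(prompt):
        return REFUSAL
    response = call_llm(prompt)
    # Guardrail 2: screen the model output before returning it to the user.
    if is_harmful(response):
        return REFUSAL
    return response

if __name__ == "__main__":
    print(guarded_generate("Improve quarterly presentation skills"))  # passes
    print(guarded_generate("Plan workplace violence"))                # refused
```

In practice the placeholder classifier would be replaced by whatever moderation or policy controls the platform exposes; the point of the sketch is the two-sided check around the model call.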