Content Safety
GenAIScript has multiple built-in safety features to protect the system from malicious attacks.
System prompts
The following safety prompts are included by default when running a prompt, unless the system option is configured:
- system.safety_harmful_content, safety prompt against Harmful Content: Hate and Fairness, Sexual, Violence, Self-Harm. See https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/safety-system-message-templates.
- system.safety_jailbreak, safety script to ignore prompting instructions in code sections, which are created by the
def
function. - system.safety_protected_material safety prompt against Protected material. See https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/safety-system-message-templates
Other system scripts can be added to the prompt by using the system
option.
- system.safety_ungrounded_content_summarization safety prompt against ungrounded content in summarization
- system.safety_canary_word safety prompt against prompt leaks.
- system.safety_validate_harmful_content runs the
detectHarmfulContent
method to validate the output of the prompt.
Azure AI Content Safety services
Azure AI Content Safety provides a set of services to protect LLM applications from various attacks.
GenAIScript provides a set of APIs to interact with Azure AI Content Safety services
through the contentSafety
global object.
Configuration
Create a Content Safety resource in the Azure portal to get your key and endpoint.
Navigate to Access Control (IAM), then View My Access. Make sure your user or service principal has the Cognitive Services User role. If you get a
401
error, click on Add, Add role assignment and add the Cognitive Services User role to your user.Navigate to Resource Management, then Keys and Endpoint.
Copy the endpoint information and add it in your
.env
file asAZURE_CONTENT_SAFETY_ENDPOINT
.
Managed Identity
GenAIScript will use the default Azure token resolver to authenticate with the Azure Content Safety service.
You can override the credential resolver by setting the AZURE_CONTENT_SAFETY_CREDENTIAL
environment variable.
API Key
Copy the value of one of the keys into a AZURE_CONTENT_SAFETY_KEY
in your .env
file.
Detect Prompt Injection
The detectPromptInjection
method uses the Azure Prompt Shield
service to detect prompt injection in the given text.
The def also supports setting a detectPromptInjection
flag to apply the detection to each file.
You can also specify the detectPromptInjection
to use a content safety service if available.
Detect Harmful content
The detectHarmfulContent
method uses the
Azure Content Safety
to scan for harmful content categories.
The system.safety_validate_harmful_content system script injects a call to detectHarmfulContent
on the generated LLM response.
Detect Prompt Leaks using Canary Words
The system prompt system.safety_canary_word injects unique words into the system prompt and tracks the generated response for theses words. If the canary words are detected in the generated response, the system will throw an error.