promptflow.evals.synthetic module#

class promptflow.evals.synthetic.AdversarialScenario(value)#

Bases: Enum

Adversarial scenario types

ADVERSARIAL_CONTENT_GEN_GROUNDED = 'adv_content_gen_grounded'#
ADVERSARIAL_CONTENT_GEN_UNGROUNDED = 'adv_content_gen_ungrounded'#
ADVERSARIAL_CONTENT_PROTECTED_MATERIAL = 'adv_content_protected_material'#
ADVERSARIAL_CONVERSATION = 'adv_conversation'#
ADVERSARIAL_INDIRECT_JAILBREAK = 'adv_xpia'#
ADVERSARIAL_QA = 'adv_qa'#
ADVERSARIAL_REWRITE = 'adv_rewrite'#
ADVERSARIAL_SUMMARIZATION = 'adv_summarization'#
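
For example, a scenario member can be selected and passed to one of the simulators documented below. A minimal sketch:

from promptflow.evals.synthetic import AdversarialScenario

# Select the single-turn adversarial question-answering scenario.
scenario = AdversarialScenario.ADVERSARIAL_QA
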
class promptflow.evals.synthetic.AdversarialSimulator(*, azure_ai_project: Dict[str, Any], credential=None)#

Bases: object

Initializes the adversarial simulator with a project scope.

Parameters:
  • azure_ai_project (Dict[str, Any]) –

    Dictionary defining the scope of the project. It must include the following keys:

    • ”subscription_id”: Azure subscription ID.

    • ”resource_group_name”: Name of the Azure resource group.

    • ”project_name”: Name of the Azure Machine Learning workspace.

  • credential (TokenCredential) – The credential for connecting to Azure AI project.
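
A minimal construction sketch, assuming azure-identity is installed; the placeholder values are hypothetical and must be replaced with your own project details:

from azure.identity import DefaultAzureCredential
from promptflow.evals.synthetic import AdversarialSimulator

# Placeholder project scope; substitute real identifiers.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group-name>",
    "project_name": "<project-name>",
}

simulator = AdversarialSimulator(
    azure_ai_project=azure_ai_project,
    credential=DefaultAzureCredential(),
)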

__call__(*, scenario: AdversarialScenario, target: Callable, max_conversation_turns: int = 1, max_simulation_results: int = 3, api_call_retry_limit: int = 3, api_call_retry_sleep_sec: int = 1, api_call_delay_sec: int = 0, concurrent_async_task: int = 3, _jailbreak_type: Optional[str] = None, randomize_order: bool = True, randomization_seed: Optional[int] = None)#

Executes the adversarial simulation against a specified target function asynchronously.

Parameters:
  • scenario (promptflow.evals.synthetic.adversarial_scenario.AdversarialScenario) – Enum value specifying the adversarial scenario used for generating inputs, for example AdversarialScenario.ADVERSARIAL_QA.

  • target (Callable) – The target function to simulate adversarial inputs against. This function should be asynchronous and accept a dictionary representing the adversarial input.

  • max_conversation_turns (int) – The maximum number of conversation turns to simulate. Defaults to 1.

  • max_simulation_results (int) – The maximum number of simulation results to return. Defaults to 3.

  • api_call_retry_limit (int) – The maximum number of retries for each API call within the simulation. Defaults to 3.

  • api_call_retry_sleep_sec (int) – The sleep duration (in seconds) between retries for API calls. Defaults to 1 second.

  • api_call_delay_sec (int) – The delay (in seconds) before making an API call. This can be used to avoid hitting rate limits. Defaults to 0 seconds.

  • concurrent_async_task (int) – The number of asynchronous tasks to run concurrently during the simulation. Defaults to 3.

  • randomize_order (bool) – Whether or not the order of the prompts should be randomized. Defaults to True.

  • randomization_seed (Optional[int]) – The seed used to randomize prompt selection. If unset, the system’s default seed is used. Defaults to None.

Returns:

A list of dictionaries, each representing a simulated conversation. Each dictionary contains:

  • ’template_parameters’: A dictionary with parameters used in the conversation template, including ‘conversation_starter’.

  • ’messages’: A list of dictionaries, each representing a turn in the conversation. Each message dictionary includes ‘content’ (the message text) and ‘role’ (indicating whether the message is from the ‘user’ or the ‘assistant’).

  • ’$schema’: A string indicating the schema URL for the conversation format.

The ‘content’ for ‘assistant’ role messages may include the messages that your callback returned.

Return type:

List[Dict[str, Any]]

Output format

return_value = [
    {
        'template_parameters': {},
        'messages': [
            {
                'content': '<jailbreak prompt> <adversarial question>',
                'role': 'user'
            },
            {
                'content': "<response from endpoint>",
                'role': 'assistant',
                'context': None
            }
        ],
        '$schema': 'http://azureml/sdk-2-0/ChatConversation.json'
    }
]
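
A minimal usage sketch, reusing the simulator constructed above. The callback signature shown is an assumption; the documentation only requires an asynchronous callable that accepts a dictionary representing the adversarial input and returns the updated conversation:

import asyncio
from typing import Any, Dict, Optional

from promptflow.evals.synthetic import AdversarialScenario

async def async_callback(
    messages: Dict[str, Any],
    stream: bool = False,
    session_state: Any = None,
    context: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    # A real target would call your AI application with the latest user message here.
    reply = {"content": "<response from endpoint>", "role": "assistant", "context": None}
    messages["messages"].append(reply)
    return {
        "messages": messages["messages"],
        "stream": stream,
        "session_state": session_state,
        "context": context,
    }

# __call__ runs asynchronously, so drive it with asyncio.run (or await it in your own event loop).
outputs = asyncio.run(
    simulator(
        scenario=AdversarialScenario.ADVERSARIAL_QA,
        target=async_callback,
        max_conversation_turns=1,
        max_simulation_results=3,
    )
)
print(outputs[0]["messages"])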
call_sync(*, max_conversation_turns: int, max_simulation_results: int, target: Callable, api_call_retry_limit: int, api_call_retry_sleep_sec: int, api_call_delay_sec: int, concurrent_async_task: int) List[Dict[str, Any]]#

Call the adversarial simulator synchronously.

Parameters:
  • max_conversation_turns (int) – The maximum number of conversation turns to simulate.

  • max_simulation_results (int) – The maximum number of simulation results to return.

  • target (Callable) – The target function to simulate adversarial inputs against.

  • api_call_retry_limit (int) – The maximum number of retries for each API call within the simulation.

  • api_call_retry_sleep_sec (int) – The sleep duration (in seconds) between retries for API calls.

  • api_call_delay_sec (int) – The delay (in seconds) before making an API call.

  • concurrent_async_task (int) – The number of asynchronous tasks to run concurrently during the simulation.

Returns:

A list of dictionaries, each representing a simulated conversation.

Return type:

List[Dict[str, Any]]
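
A minimal synchronous sketch, assuming the keyword-only signature documented above is complete and reusing the simulator and the hypothetical async_callback target from the examples above:

# Synchronous entry point; every keyword argument listed in the signature is passed explicitly.
results = simulator.call_sync(
    max_conversation_turns=1,
    max_simulation_results=3,
    target=async_callback,
    api_call_retry_limit=3,
    api_call_retry_sleep_sec=1,
    api_call_delay_sec=0,
    concurrent_async_task=3,
)
print(len(results))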

class promptflow.evals.synthetic.DirectAttackSimulator(*, azure_ai_project: Dict[str, Any], credential=None)#

Bases: object

Initialize a UPIA (user prompt injected attack) jailbreak adversarial simulator with a project scope. This simulator converses with your AI system using prompts designed to interrupt normal functionality.

Parameters:
  • azure_ai_project (Dict[str, Any]) –

    Dictionary defining the scope of the project. It must include the following keys:

    • ”subscription_id”: Azure subscription ID.

    • ”resource_group_name”: Name of the Azure resource group.

    • ”project_name”: Name of the Azure Machine Learning workspace.

  • credential (TokenCredential) – The credential for connecting to Azure AI project.

__call__(*, scenario: AdversarialScenario, target: Callable, max_conversation_turns: int = 1, max_simulation_results: int = 3, api_call_retry_limit: int = 3, api_call_retry_sleep_sec: int = 1, api_call_delay_sec: int = 0, concurrent_async_task: int = 3, randomization_seed: Optional[int] = None)#

Executes the adversarial simulation and UPIA (user prompt injected attack) jailbreak adversarial simulation against a specified target function asynchronously.

Parameters:
  • scenario (promptflow.evals.synthetic.adversarial_scenario.AdversarialScenario) – Enum value specifying the adversarial scenario used for generating inputs, for example AdversarialScenario.ADVERSARIAL_CONVERSATION.

  • target (Callable) – The target function to simulate adversarial inputs against. This function should be asynchronous and accept a dictionary representing the adversarial input.

  • max_conversation_turns (int) – The maximum number of conversation turns to simulate. Defaults to 1.

  • max_simulation_results (int) – The maximum number of simulation results to return. Defaults to 3.

  • api_call_retry_limit (int) – The maximum number of retries for each API call within the simulation. Defaults to 3.

  • api_call_retry_sleep_sec (int) – The sleep duration (in seconds) between retries for API calls. Defaults to 1 second.

  • api_call_delay_sec (int) – The delay (in seconds) before making an API call. This can be used to avoid hitting rate limits. Defaults to 0 seconds.

  • concurrent_async_task (int) – The number of asynchronous tasks to run concurrently during the simulation. Defaults to 3.

  • randomization_seed (Optional[int]) – Seed used to randomize prompt selection, shared by both jailbreak and regular simulation to ensure consistent results. If not provided, a random seed will be generated and shared between simulations.

Returns:

A list of dictionaries, each representing a simulated conversation. Each dictionary contains:

  • ’template_parameters’: A dictionary with parameters used in the conversation template, including ‘conversation_starter’.

  • ’messages’: A list of dictionaries, each representing a turn in the conversation. Each message dictionary includes ‘content’ (the message text) and ‘role’ (indicating whether the message is from the ‘user’ or the ‘assistant’).

  • ’$schema’: A string indicating the schema URL for the conversation format.

The ‘content’ for ‘assistant’ role messages may include the messages that your callback returned.

Return type:

Dict[str, List[Dict[str, Any]]] with two keys, "jailbreak" and "regular", each holding a list of simulated conversations

Output format

return_value = {
    "jailbreak": [
        {
            'template_parameters': {},
            'messages': [
                {
                    'content': '<jailbreak prompt> <adversarial question>',
                    'role': 'user'
                },
                {
                    'content': "<response from endpoint>",
                    'role': 'assistant',
                    'context': None
                }
            ],
            '$schema': 'http://azureml/sdk-2-0/ChatConversation.json'
        }
    ],
    "regular": [
        {
            'template_parameters': {},
            'messages': [
                {
                    'content': '<adversarial question>',
                    'role': 'user'
                },
                {
                    'content': "<response from endpoint>",
                    'role': 'assistant',
                    'context': None
                }
            ],
            '$schema': 'http://azureml/sdk-2-0/ChatConversation.json'
        }
    ]
}
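
A minimal usage sketch, following the same pattern as AdversarialSimulator above; azure_ai_project and async_callback are the hypothetical values defined in the earlier examples, and the two result lists are read from the "jailbreak" and "regular" keys:

import asyncio

from azure.identity import DefaultAzureCredential
from promptflow.evals.synthetic import AdversarialScenario, DirectAttackSimulator

direct_attack_simulator = DirectAttackSimulator(
    azure_ai_project=azure_ai_project,
    credential=DefaultAzureCredential(),
)

outputs = asyncio.run(
    direct_attack_simulator(
        scenario=AdversarialScenario.ADVERSARIAL_CONVERSATION,
        target=async_callback,
        max_simulation_results=2,
        randomization_seed=42,  # shared by the jailbreak and regular runs for comparability
    )
)

# Each key holds a list of simulated conversations in the format shown above.
jailbreak_conversations = outputs["jailbreak"]
regular_conversations = outputs["regular"]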
class promptflow.evals.synthetic.IndirectAttackSimulator(*, azure_ai_project: Dict[str, Any], credential=None)#

Bases: object

Initializes the XPIA (cross domain prompt injected attack) jailbreak adversarial simulator with a project scope.

Parameters:
  • azure_ai_project (Dict[str, Any]) –

    Dictionary defining the scope of the project. It must include the following keys:

    • ”subscription_id”: Azure subscription ID.

    • ”resource_group_name”: Name of the Azure resource group.

    • ”project_name”: Name of the Azure Machine Learning workspace.

  • credential (TokenCredential) – The credential for connecting to Azure AI project.

__call__(*, scenario: AdversarialScenario, target: Callable, max_conversation_turns: int = 1, max_simulation_results: int = 3, api_call_retry_limit: int = 3, api_call_retry_sleep_sec: int = 1, api_call_delay_sec: int = 0, concurrent_async_task: int = 3)#

Executes the XPIA (cross domain prompt injected attack) jailbreak adversarial simulation against a specified target function asynchronously. This simulator converses with your AI system using prompts injected into the context to interrupt normal expected functionality by eliciting manipulated content and intrusion, and by attempting to gather information outside the scope of your AI system.

Parameters:
  • scenario (promptflow.evals.synthetic.adversarial_scenario.AdversarialScenario) – Enum value specifying the adversarial scenario used for generating inputs.

  • target (Callable) – The target function to simulate adversarial inputs against. This function should be asynchronous and accept a dictionary representing the adversarial input.

  • max_conversation_turns (int) – The maximum number of conversation turns to simulate. Defaults to 1.

  • max_simulation_results (int) – The maximum number of simulation results to return. Defaults to 3.

  • api_call_retry_limit (int) – The maximum number of retries for each API call within the simulation. Defaults to 3.

  • api_call_retry_sleep_sec (int) – The sleep duration (in seconds) between retries for API calls. Defaults to 1 second.

  • api_call_delay_sec (int) – The delay (in seconds) before making an API call. This can be used to avoid hitting rate limits. Defaults to 0 seconds.

  • concurrent_async_task (int) – The number of asynchronous tasks to run concurrently during the simulation. Defaults to 3.

Returns:

A list of dictionaries, each representing a simulated conversation. Each dictionary contains:

  • ’template_parameters’: A dictionary with parameters used in the conversation template, including ‘conversation_starter’.

  • ’messages’: A list of dictionaries, each representing a turn in the conversation. Each message dictionary includes ‘content’ (the message text) and ‘role’ (indicating whether the message is from the ‘user’ or the ‘assistant’).

  • ’$schema’: A string indicating the schema URL for the conversation format.

The ‘content’ for ‘assistant’ role messages may include the messages that your callback returned.

Return type:

List[Dict[str, Any]]

Output format

return_value = [
    {
        'template_parameters': {},
        'messages': [
            {
                'content': '<jailbreak prompt> <adversarial question>',
                'role': 'user'
            },
            {
                'content': "<response from endpoint>",
                'role': 'assistant',
                'context': None
            }
        ],
        '$schema': 'http://azureml/sdk-2-0/ChatConversation.json'
    }
]
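
A minimal usage sketch, following the same pattern; the ADVERSARIAL_INDIRECT_JAILBREAK scenario pairs with this simulator, and azure_ai_project and async_callback are the hypothetical values from the earlier examples:

import asyncio

from azure.identity import DefaultAzureCredential
from promptflow.evals.synthetic import AdversarialScenario, IndirectAttackSimulator

indirect_attack_simulator = IndirectAttackSimulator(
    azure_ai_project=azure_ai_project,
    credential=DefaultAzureCredential(),
)

outputs = asyncio.run(
    indirect_attack_simulator(
        scenario=AdversarialScenario.ADVERSARIAL_INDIRECT_JAILBREAK,
        target=async_callback,
        max_simulation_results=2,
    )
)
print(outputs[0]["messages"])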

Submodules#