Run DAG flow in Azure#

Requirements - In order to benefit from this tutorial, you will need:

Learning Objectives - By the end of this tutorial, you should be able to:

  • Connect to your Azure AI workspace from the Python SDK

  • Create and develop a new promptflow run

  • Evaluate the run with an evaluation flow

Motivations - This guide walks you through the main user journey of the prompt flow code-first experience. You will learn how to create and develop your first prompt flow, then test and evaluate it.

0. Install dependent packages#

%pip install -r ../../requirements.txt

1. Connect to Azure Machine Learning Workspace#

The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section we will connect to the workspace in which the job will be run.

1.1 Import the required libraries#

import json

# Import required libraries
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential

# azure version promptflow apis
from promptflow.azure import PFClient

1.2 Configure credential#

We are using DefaultAzureCredential to get access to the workspace. DefaultAzureCredential should be capable of handling most Azure SDK authentication scenarios.

If it does not work for you, refer to the configure credential example and the azure-identity reference doc for more available credentials.

try:
    credential = DefaultAzureCredential()
    # Check if given credential can get token successfully.
    credential.get_token("https://management.azure.com/.default")
except Exception as ex:
    # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
    credential = InteractiveBrowserCredential()

1.3 Get a handle to the workspace#

We use a config file to connect to the workspace. The Azure ML workspace should be configured with a compute cluster. Check this notebook for how to configure a workspace.

# Get a handle to workspace
pf = PFClient.from_config(credential=credential)
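
If you prefer not to rely on a config.json file, you can also pass the workspace details explicitly. This is a minimal sketch; the subscription, resource group, and workspace names below are placeholders you need to replace with your own values.

# Alternative: construct the client explicitly instead of reading config.json.
# The values below are placeholders, not real resource names.
pf = PFClient(
    credential=credential,
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)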

1.4 Create necessary connections#

Connections help securely store and manage secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example Azure Content Safety.

In this notebook, we will use the web-classification flow, which uses the azure_open_ai_connection connection inside, so we need to set up the connection if we haven’t added it before.

Prepare your Azure OpenAI resource following this instruction and get your api_key if you don’t have one.

Please go to the workspace portal, click Prompt flow -> Connections -> Create, then follow the instructions to create your own connections. Learn more on connections.
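
Once the connection has been created in the portal, you can optionally confirm from the SDK that it is visible in your workspace by fetching it by name. This is a small sketch, assuming the Azure PFClient exposes connection operations and that your connection is named azure_open_ai_connection.

# Optional sanity check: fetch the connection by name.
# "azure_open_ai_connection" is the connection name used by the web-classification flow.
try:
    conn = pf.connections.get(name="azure_open_ai_connection")
    print(conn)
except Exception as ex:
    print(f"Connection not found or not accessible: {ex}")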

2. Create a new run#

web-classification is a flow demonstrating multi-class classification with an LLM. Given a URL, it classifies the URL into one web category using just a few shots, simple summarization, and classification prompts.

Set flow path and input data#

# load flow
flow = "../../flows/standard/web-classification"
data = "../../flows/standard/web-classification/data.jsonl"
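
If you want to see what the input data looks like before submitting a run, you can print the first few records of the JSONL file. Each line is a JSON object; its url field is mapped to the flow input in the column_mapping below.

# Peek at the first few records of the input data (one JSON object per line).
with open(data) as f:
    for line in list(f)[:3]:
        print(json.loads(line))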

Submit run#

# create a run against the data file; column_mapping maps the "url" column
# in data.jsonl to the flow input named "url"
base_run = pf.run(
    flow=flow,
    data=data,
    column_mapping={
        "url": "${data.url}",
    },
)
print(base_run)

# stream the run logs until the run completes
pf.stream(base_run)

# get per-line inputs and outputs as a pandas DataFrame
details = pf.get_details(base_run)
details.head(10)

# visualize the run
pf.visualize(base_run)

3. Evaluate your flow run result#

Then you can use an evaluation method to evaluate your flow. The evaluation methods are also flows, which use Python, an LLM, or other tools to calculate metrics like accuracy and relevance score.

In this notebook, we use the eval-classification-accuracy flow to evaluate. This flow illustrates how to evaluate the performance of a classification system. It compares each prediction to the groundtruth, assigns a “Correct” or “Incorrect” grade, and aggregates the results to produce metrics such as accuracy, which reflect how good the system is at classifying the data.

# run the evaluation flow against the base run;
# "run=base_run" links each evaluation line to the corresponding line of the base run
eval_run = pf.run(
    flow="../../flows/evaluation/eval-classification-accuracy",
    data=data,
    run=base_run,
    column_mapping={
        "groundtruth": "${data.answer}",  # ground truth column from the data file
        "prediction": "${run.outputs.category}",  # category predicted by the base run
    },
)
pf.stream(eval_run)
details = pf.get_details(eval_run)
details.head(10)

# aggregated metrics produced by the evaluation flow
metrics = pf.get_metrics(eval_run)
print(json.dumps(metrics, indent=4))

# visualize the base run and the evaluation run together
pf.visualize([base_run, eval_run])
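
To make the aggregation concrete, you can also recompute accuracy by hand from the per-line grades in details. This is a rough sketch; the column name "outputs.grade" and the "Correct"/"Incorrect" values are assumptions, so check details.columns for the exact names produced by your run.

# Recompute accuracy from the per-line grades as a cross-check.
# "outputs.grade" is an assumed column name; inspect details.columns to confirm.
if "outputs.grade" in details.columns:
    accuracy = (details["outputs.grade"] == "Correct").mean()
    print(f"Recomputed accuracy: {accuracy:.2%}")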

Create another run with a different variant node#

In this example, web-classification’s node summarize_text_content has two variants: variant_0 and variant_1. The difference between them is the input parameters:

variant_0:

- inputs:
    - deployment_name: gpt-35-turbo
    - max_tokens: '128'
    - temperature: '0.2'
    - text: ${fetch_text_content_from_url.output}

variant_1:

- inputs:
    - deployment_name: gpt-35-turbo
    - max_tokens: '256'
    - temperature: '0.3'
    - text: ${fetch_text_content_from_url.output}

You can check the whole flow definition in flow.dag.yaml.

# use variant_1 of the summarize_text_content node
variant_run = pf.run(
    flow=flow,
    data=data,
    column_mapping={
        "url": "${data.url}",
    },
    variant="${summarize_text_content.variant_1}",  # specify that the "summarize_text_content" node should use its variant_1 version
)
pf.stream(variant_run)
details = pf.get_details(variant_run)
details.head(10)

Run evaluation against variant run#

eval_flow = "../../flows/evaluation/eval-classification-accuracy"

eval_run_variant = pf.run(
    flow=eval_flow,
    data="../../flows/standard/web-classification/data.jsonl",  # path to the data file
    run=variant_run,  # evaluate the outputs of the variant run
    column_mapping={
        # reference data
        "groundtruth": "${data.answer}",
        # reference the run's output
        "prediction": "${run.outputs.category}",
    },
)
pf.stream(eval_run_variant)
details = pf.get_details(eval_run_variant)
details.head(10)
metrics = pf.get_metrics(eval_run_variant)
print(json.dumps(metrics, indent=4))
pf.visualize([eval_run, eval_run_variant])
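
Finally, you can put the two evaluations side by side to see whether the variant changes accuracy. This is a small sketch reusing the get_metrics calls already shown above; the metric names depend on what the evaluation flow logs.

# Compare the aggregated metrics of the base run evaluation and the variant run evaluation.
base_metrics = pf.get_metrics(eval_run)
variant_metrics = pf.get_metrics(eval_run_variant)
print("base run:   ", json.dumps(base_metrics))
print("variant run:", json.dumps(variant_metrics))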

Next Steps#

Learn more on how to: