Chat with a class-based flex flow in Azure#
Learning Objectives - Upon completing this tutorial, you should be able to:
Submit a batch run with a flow defined as a Python class and evaluate it in Azure.
0. Install dependent packages#
%%capture --no-stderr
%pip install -r ./requirements-azure.txt
1. Connection to workspace#
Configure credential#
We are using DefaultAzureCredential
to get access to the workspace.
DefaultAzureCredential
should be capable of handling most Azure SDK authentication scenarios.
If it does not work for you, see these references for more available credentials: configure credential example, azure-identity reference doc.
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential

try:
    credential = DefaultAzureCredential()
    # Check if the given credential can get a token successfully.
    credential.get_token("https://management.azure.com/.default")
except Exception as ex:
    # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
    credential = InteractiveBrowserCredential()
Get a handle to the workspace#
We use the config file to connect to the workspace.
from promptflow.azure import PFClient
# Get a handle to workspace
pf = PFClient.from_config(credential=credential)
Create necessary connections#
A connection helps securely store and manage secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example Azure Content Safety.
In this notebook, we will use the basic
flex flow, which uses the connection open_ai_connection
inside, so we need to set up the connection if we haven’t added it before.
Prepare your Azure OpenAI resource following this instruction and get your api_key
if you don’t have one.
Please go to the workspace portal, click Prompt flow
-> Connections
-> Create
, then follow the instructions to create your own connections.
Learn more on connections.
2. Batch run the function as flow with multi-line data#
Create a flow.flex.yaml
file to define a flow whose entry points to the Python class we defined.
# show the flow.flex.yaml content
with open("flow.flex.yaml") as fin:
    print(fin.read())
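For reference, a class-based flex flow definition is typically just an entry pointer plus optional metadata. The snippet below is an illustrative sketch only; the class name ChatFlow and module name flow are assumptions, and the cell above prints the actual file.

```yaml
# Illustrative sketch of a class-based flex flow definition.
# "flow:ChatFlow" (module:class) is an assumed entry -- check the printed file above.
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
entry: flow:ChatFlow
environment:
  python_requirements_txt: requirements.txt
```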
from promptflow.core import AzureOpenAIModelConfiguration
# create the model config to be used in below flow calls
config = AzureOpenAIModelConfiguration(
    connection="open_ai_connection", azure_deployment="gpt-4o"
)
Batch run with a data file (with multiple lines of test data)#
flow = "." # path to the flow directory
data = "./data.jsonl" # path to the data file
# create run with the flow and data
base_run = pf.run(
    flow=flow,
    init={
        "model_config": config,
    },
    data=data,
    column_mapping={
        "question": "${data.question}",
        "chat_history": "${data.chat_history}",
    },
    stream=True,
)
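The column mapping above pulls question and chat_history from each JSON line of the data file. A minimal sketch of what one such line might look like (the field values here are made up for illustration; the real data.jsonl contents may differ):

```python
import json

# One illustrative data.jsonl record exposing the two mapped fields.
sample_line = json.dumps(
    {
        "question": "What is Prompt flow?",
        "chat_history": [],  # prior conversation turns, if any
    }
)

# Each line of the file must parse as a JSON object with those fields.
record = json.loads(sample_line)
print(sorted(record.keys()))  # ['chat_history', 'question']
```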
details = pf.get_details(base_run)
details.head(10)
3. Evaluate your flow#
Then you can use an evaluation method to evaluate your flow. The evaluation methods are also flows, which usually use an LLM to assert that the produced output matches certain expectations.
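Conceptually, a flex-flow evaluator is just a callable Python class: it receives an init-time model config and per-line inputs, and returns a score. The sketch below mimics that shape without calling an LLM — a real checklist evaluator (like the one used next) would ask a model whether each statement holds, and all names here are illustrative:

```python
# Illustrative evaluator shape -- not the actual eval-checklist implementation.
class ChecklistEvaluator:
    def __init__(self, model_config=None):
        # A real evaluator would use model_config to call an LLM.
        self.model_config = model_config

    def __call__(self, answer: str, statements: dict) -> dict:
        # Toy check: count statements whose text appears in the answer.
        hits = sum(
            1 for s in statements.values() if str(s).lower() in answer.lower()
        )
        return {"score": hits / max(len(statements), 1)}


result = ChecklistEvaluator()(
    answer="promptflow supports flex flows",
    statements={"s1": "flex flows", "s2": "dag flows"},
)
print(result)  # {'score': 0.5}
```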
Run evaluation on the previous batch run#
The base_run is the batch run we completed in step 2 above, for the chat flow with “data.jsonl” as input.
eval_flow = "../eval-checklist/flow.flex.yaml"
config = AzureOpenAIModelConfiguration(
    connection="open_ai_connection", azure_deployment="gpt-4o"
)
eval_run = pf.run(
    flow=eval_flow,
    init={
        "model_config": config,
    },
    data="./data.jsonl",  # path to the data file
    run=base_run,  # specify base_run as the run you want to evaluate
    column_mapping={
        "answer": "${run.outputs.output}",
        "statements": "${data.statements}",
    },
    stream=True,
)
details = pf.get_details(eval_run)
details.head(10)
import json
metrics = pf.get_metrics(eval_run)
print(json.dumps(metrics, indent=4))
pf.visualize([base_run, eval_run])
Next steps#
By now you’ve successfully run your chat flow and evaluated it. That’s great!
You can check out more examples:
Stream Chat: demonstrates how to create a chatbot that can remember previous interactions and use the conversation history to generate the next message.