Chat with prompty#
Learning Objectives - Upon completing this tutorial, you should be able to:
Write an LLM application using prompty and visualize the trace of your application.
Understand how to handle chat conversations using prompty.
Batch-run prompty against multiple lines of data.
0. Install dependent packages#
%%capture --no-stderr
%pip install promptflow-devkit
1. Prompty#
Prompty is a file with the .prompty extension for developing prompt templates. The prompty asset is a markdown file with a modified front matter. The front matter is in YAML format and contains a number of metadata fields that define the model configuration and the expected inputs of the prompty.
with open("chat.prompty") as fin:
print(fin.read())
Create necessary connections#
Connection helps securely store and manage secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example Azure Content Safety.
The prompty above uses the connection open_ai_connection inside, so we need to set up the connection if we haven’t added it before. Once created, it is stored in a local database and can be used in any flow.
Prepare your Azure OpenAI resource by following this instruction and get your api_key if you don’t have one.
from promptflow.client import PFClient
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
# client can help manage your runs and connections.
pf = PFClient()
conn_name = "open_ai_connection"
try:
    conn = pf.connections.get(name=conn_name)
    print("using existing connection")
except Exception:
# Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure OpenAI resource.
connection = AzureOpenAIConnection(
name=conn_name,
api_key="<your_AOAI_key>",
api_base="<your_AOAI_endpoint>",
api_type="azure",
)
# use this if you have an existing OpenAI account
# connection = OpenAIConnection(
# name=conn_name,
# api_key="<user-input>",
# )
conn = pf.connections.create_or_update(connection)
print("successfully created connection")
print(conn)
Execute prompty as function#
from promptflow.core import Prompty
# load prompty as a flow
f = Prompty.load("chat.prompty")
# execute the flow as function
question = "What is the capital of France?"
result = f(question=question)
result
You can override the connection with AzureOpenAIModelConfiguration and OpenAIModelConfiguration.
from promptflow.core import AzureOpenAIModelConfiguration, OpenAIModelConfiguration
# override configuration with created connection in AzureOpenAIModelConfiguration
configuration = AzureOpenAIModelConfiguration(
connection="open_ai_connection", azure_deployment="gpt-4o"
)
# override openai connection with OpenAIModelConfiguration
# configuration = OpenAIModelConfiguration(
# connection=connection,
# model="gpt-3.5-turbo"
# )
override_model = {
"configuration": configuration,
}
# load prompty as a flow
f = Prompty.load("chat.prompty", model=override_model)
# execute the flow as function
question = "What is the capital of France?"
result = f(question=question)
result
Visualize trace by using start_trace#
from promptflow.tracing import start_trace
# start a trace session, and print a url for user to check trace
start_trace()
Re-running the below cell will collect a trace in the trace UI.
# rerun the function, which will be recorded in the trace
result = f(question=question)
result
Eval the result#
In this example, we will use a prompt that determines whether a chat conversation contains an apology from the assistant.
eval_prompty = "../eval-apology/apology.prompty"
with open(eval_prompty) as fin:
print(fin.read())
Note: the eval flow returns a json_object.
# load prompty as a flow
eval_flow = Prompty.load(eval_prompty)
# execute the flow as function
result = eval_flow(question=question, answer=result, messages=[])
result
2. Batch run with multi-line data#
from promptflow.client import PFClient
flow = "chat.prompty" # path to the prompty file
data = "./data.jsonl" # path to the data file
# create run with the flow and data
pf = PFClient()
base_run = pf.run(
flow=flow,
data=data,
column_mapping={
"question": "${data.question}",
"chat_history": "${data.chat_history}",
},
stream=True,
)
details = pf.get_details(base_run)
details.head(10)
3. Evaluate your prompty#
Then you can use an evaluation prompty to evaluate the outputs of your prompty.
Run evaluation on the previous batch run#
The base_run is the batch run we completed in step 2 above, for the chat.prompty flow with “data.jsonl” as input.
eval_run = pf.run(
flow=eval_prompty,
data="./data.jsonl", # path to the data file
run=base_run, # specify base_run as the run you want to evaluate
column_mapping={
"messages": "${data.chat_history}",
"question": "${data.question}",
"answer": "${run.outputs.output}", # TODO refine this mapping
},
stream=True,
)
details = pf.get_details(eval_run)
details.head(10)
Next steps#
By now you’ve successfully run your first prompt flow and even evaluated it. That’s great!
You can check out more Prompty Examples.