Getting started with prompty


Learning Objectives - Upon completing this tutorial, you should be able to:

  • Write an LLM application using prompty and visualize the trace of your application.

  • Batch-run a prompty against multiple lines of data.

0. Install dependent packages

%%capture --no-stderr
%pip install promptflow-core

1. Execute a Prompty

Prompty is a file with the .prompty extension used for developing prompt templates. The prompty asset is a markdown file with a modified front matter. The front matter is in YAML format and contains a number of metadata fields that define the model configuration and the expected inputs of the prompty.

with open("basic.prompty") as fin:
    print(fin.read())
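
For reference, a prompty file follows this general shape (an illustrative sketch, not necessarily the exact contents of basic.prompty):

---
name: Basic Prompt
description: A basic prompt that uses the chat API to answer questions
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-4o
  parameters:
    max_tokens: 128
inputs:
  question:
    type: string
---
system:
You are an AI assistant who helps people find information.

user:
{{question}}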

Note: before running the cell below, please configure the required environment variables AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT by creating a .env file. Please refer to ../.env.example as a template.
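
For example, the .env file contains lines like the following (placeholder values):

AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_ENDPOINT=https://<your-resource-name>.openai.azure.com/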

import os
from dotenv import load_dotenv

if "AZURE_OPENAI_API_KEY" not in os.environ:
    # load environment variables from .env file
    load_dotenv()
from promptflow.core import Prompty

# load prompty as a flow
f = Prompty.load(source="basic.prompty")

# execute the flow as function
result = f(question="What is the capital of France?")
result

You can override the model configuration with AzureOpenAIModelConfiguration or OpenAIModelConfiguration.

from promptflow.core import AzureOpenAIModelConfiguration, OpenAIModelConfiguration

# override configuration with AzureOpenAIModelConfiguration
configuration = AzureOpenAIModelConfiguration(
    # azure_endpoint="${env:AZURE_OPENAI_ENDPOINT}",  # Use ${env:<ENV_NAME>} to surround the environment variable name.
    # api_key="${env:AZURE_OPENAI_API_KEY}",
    azure_deployment="gpt-4o",
)

# override configuration with OpenAIModelConfiguration
# configuration = OpenAIModelConfiguration(
#     base_url="${env:OPENAI_BASE_URL}",
#     api_key="${env:OPENAI_API_KEY}",
#     model="gpt-3.5-turbo"
# )

override_model = {"configuration": configuration, "parameters": {"max_tokens": 512}}

# load prompty as a flow
f = Prompty.load(source="basic.prompty", model=override_model)

# execute the flow as function
result = f(question="What is the capital of France?")
result

Visualize the trace by using start_trace

from promptflow.tracing import start_trace

# start a trace session, and print a url for user to check trace
start_trace()

Re-running the cell below will collect a trace in the trace UI.

# rerun the function, which will be recorded in the trace
question = "What is the capital of Japan?"
ground_truth = "Tokyo"
result = f(question=question)
result

Eval the result

Note: the eval flow returns a json_object.

# load prompty as a flow
eval_flow = Prompty.load("../eval-basic/eval.prompty")
# execute the flow as function
result = eval_flow(question=question, ground_truth=ground_truth, answer=result)
result

2. Batch run with multi-line data

%%capture --no-stderr
# batch run requires promptflow-devkit package
%pip install promptflow-devkit
from promptflow.client import PFClient

pf = PFClient()
flow = "./basic.prompty"  # path to the prompty file
data = "./data.jsonl"  # path to the data file

# create run with the flow and data
base_run = pf.run(
    flow=flow,
    data=data,
    column_mapping={
        "question": "${data.question}",
    },
    stream=True,
)
details = pf.get_details(base_run)
details.head(10)
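
Each line of data.jsonl is a JSON object whose keys are referenced in the column mapping. Given the question and ground_truth fields used in this tutorial, the file looks something like this (illustrative rows):

{"question": "What is the capital of France?", "ground_truth": "Paris"}
{"question": "What is the capital of Japan?", "ground_truth": "Tokyo"}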

3. Evaluate your flow

Then you can use an evaluation method to evaluate your flow. Evaluation methods are also flows, which usually use an LLM to assert that the produced output matches certain expectations.

Run evaluation on the previous batch run

The base_run is the batch run we completed in step 2 above, for the basic.prompty flow with "data.jsonl" as input.

eval_prompty = "../eval-basic/eval.prompty"

eval_run = pf.run(
    flow=eval_prompty,
    data="./data.jsonl",  # path to the data file
    run=base_run,  # specify base_run as the run you want to evaluate
    column_mapping={
        "question": "${data.question}",
        "answer": "${run.outputs.output}",  # TODO refine this mapping
        "ground_truth": "${data.ground_truth}",
    },
    stream=True,
)
details = pf.get_details(eval_run)
details.head(10)
# visualize run using ui
pf.visualize([base_run, eval_run])

Next steps

By now you've successfully run your first prompt flow and even run an evaluation on it. That's great!

You can check out more examples:

  • Basic Chat: demonstrates how to create a chatbot that can remember previous interactions and use the conversation history to generate the next message. A minimal sketch of that pattern follows below.
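
For a taste of how that works, a chat-style prompty typically declares a chat_history input next to the question, so each call can carry the previous turns. A minimal sketch, assuming a hypothetical chat.prompty that accepts both inputs:

from promptflow.core import Prompty

# load a chat-style prompty (hypothetical file; see the Basic Chat example for a real one)
chat_flow = Prompty.load(source="chat.prompty")

# previous turns, in promptflow's chat_history format
chat_history = [
    {
        "inputs": {"question": "What is the capital of France?"},
        "outputs": {"answer": "Paris."},
    },
]

# the follow-up question can now be resolved against the history
result = chat_flow(chat_history=chat_history, question="What about Japan?")
print(result)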