Prompty output format#
Learning Objectives - Upon completing this tutorial, you should be able to:
Understand how to handle the different prompty output formats: text, json_object, and all.
Understand how to consume the streaming output of a prompty.
0. Install dependent packages#
%%capture --no-stderr
%pip install promptflow-devkit
1. Create necessary connections#
Connections help securely store and manage secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example Azure Content Safety.
The prompty files used in this tutorial rely on a connection named open_ai_connection, so we need to set it up if we haven't added it before. Once created, the connection is stored in the local database and can be used in any flow.
If you don't have an Azure OpenAI resource yet, prepare one by following this instruction and get your api_key.
from promptflow.client import PFClient
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection

# the client helps manage your runs and connections.
pf = PFClient()

try:
    conn_name = "open_ai_connection"
    conn = pf.connections.get(name=conn_name)
    print("using existing connection")
except Exception:
    # Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure OpenAI resource.
    connection = AzureOpenAIConnection(
        name=conn_name,
        api_key="<your_AOAI_key>",
        api_base="<your_AOAI_endpoint>",
        api_type="azure",
    )

    # use this instead if you have an existing OpenAI account
    # connection = OpenAIConnection(
    #     name=conn_name,
    #     api_key="<user-input>",
    # )

    conn = pf.connections.create_or_update(connection)
    print("successfully created connection")

print(conn)
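If you want to double-check the stored connections outside of Python, the promptflow CLI (installed with promptflow-devkit) can list what is in the local database:

pf connection list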
2. Format prompty output#
Text output#
By default, a prompty returns the message content of the first choice as a string.
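Before executing it, let's inspect the prompty file. As a rough sketch of its shape (the deployment name, parameters, and prompt text below are illustrative placeholders; the actual file printed by the next cell is authoritative), a text-output prompty combines YAML front matter with a Jinja template:

---
name: Text Format Prompt
description: A prompty that returns plain text.
model:
  api: chat
  configuration:
    type: azure_openai
    connection: open_ai_connection
    azure_deployment: gpt-35-turbo
  parameters:
    max_tokens: 128
inputs:
  first_name:
    type: string
  last_name:
    type: string
  question:
    type: string
---
system:
You are a helpful assistant talking to {{first_name}} {{last_name}}.

user:
{{question}}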
with open("text_format.prompty") as fin:
print(fin.read())
from promptflow.core import Prompty
# load prompty as a flow
f = Prompty.load("text_format.prompty")
# execute the flow as a function
question = "What is the capital of France?"
result = f(first_name="John", last_name="Doe", question=question)
# note: the result is a string
result
JSON object output#
Prompty returns the content of the first choice as a dict when both of the following conditions are met:
Define response_format as type: json_object in the parameters.
Specify the expected JSON format in the template.
Note: response_format of type json_object is compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. For more details, refer to this document.
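Concretely, the relevant part of the prompty front matter might look like this (a sketch; the connection and deployment names are placeholders, and the actual file is printed by the next cell):

model:
  api: chat
  configuration:
    type: azure_openai
    connection: open_ai_connection
    azure_deployment: gpt-35-turbo-1106
  parameters:
    response_format:
      type: json_object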
with open("json_format.prompty") as fin:
print(fin.read())
from promptflow.core import Prompty
# load prompty as a flow
f = Prompty.load("json_format.prompty")
# execute the flow as a function
question = "What is the capital of France?"
result = f(first_name="John", last_name="Doe", question=question)
# note: the result is a dict
result
All choices#
When response is set to all, prompty returns the raw LLM response, which contains all of the choices.
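In the prompty front matter, this is a response field on the model section, roughly like this (a sketch with placeholder configuration; the actual file is printed by the next cell):

model:
  api: chat
  configuration:
    type: azure_openai
    connection: open_ai_connection
    azure_deployment: gpt-35-turbo
  response: all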
with open("all_response.prompty") as fin:
print(fin.read())
from promptflow.core import Prompty
# load prompty as a flow
f = Prompty.load("all_response.prompty")
# execute the flow as a function
question = "What is the capital of France?"
result = f(first_name="John", last_name="Doe", question=question)
# note: the result is a ChatCompletion object
print(result.choices[0])
Streaming output#
When stream=true is configured in the parameters of a prompty whose output format is text, the promptflow SDK returns a generator; each item is the content of one chunk.
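In the prompty front matter this is just another model parameter, for example (a sketch with placeholder values; the actual file is printed by the next cell):

model:
  api: chat
  configuration:
    type: azure_openai
    connection: open_ai_connection
    azure_deployment: gpt-35-turbo
  parameters:
    max_tokens: 512
    stream: true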
with open("stream_output.prompty") as fin:
print(fin.read())
from promptflow.core import Prompty
# load prompty as a flow
f = Prompty.load("stream_output.prompty")
# execute the flow as a function
question = "What's the steps to get rich?"
result = f(question=question)
for item in result:
    print(item, end="")
Note: when stream=true is set and the response format is json_object, or response is all, the raw LLM response will be returned directly. For more details about handling streaming responses, refer to this document.
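For example, combining stream=true with response: all hands you the underlying streaming object from the model SDK. Below is a minimal sketch of consuming it, assuming a prompty configured that way, that the raw response iterates as OpenAI ChatCompletionChunk objects, and a hypothetical file name stream_all_response.prompty:

from promptflow.core import Prompty

# assumes a prompty with stream: true in parameters and response: all
f = Prompty.load("stream_all_response.prompty")
raw_stream = f(question="What are the steps to get rich?")

# each chunk carries an incremental delta; skip chunks with no content
for chunk in raw_stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")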
Batch run with text output#
from promptflow.client import PFClient

data = "./data.jsonl"  # path to the data file

# create run with the flow and data
pf = PFClient()
base_run = pf.run(
    flow="text_format.prompty",
    data=data,
    column_mapping={
        "question": "${data.question}",
    },
    stream=True,
)

details = pf.get_details(base_run)
details.head(10)
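For reference, the data file is expected to contain one JSON object per line with a question field, matching the ${data.question} column mapping. The rows below are illustrative; the actual data.jsonl shipped next to this notebook may differ:

{"question": "What is the capital of France?"}
{"question": "What is the capital of Japan?"}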
Batch run with stream output#
from promptflow.client import PFClient

data = "./data.jsonl"  # path to the data file

# create run with the flow and data
pf = PFClient()
base_run = pf.run(
    flow="stream_output.prompty",
    data=data,
    column_mapping={
        "question": "${data.question}",
    },
    stream=True,
)

details = pf.get_details(base_run)
details.head(10)