promptflow.core module#

class promptflow.core.AsyncFlow(*, data: dict, code: Path, path: Path, **kwargs)#

Bases: FlowBase

Async flow is based on Flow and is used to invoke a flow in async mode.

Example:

import asyncio
from promptflow.core import AsyncFlow
flow = AsyncFlow.load(source="path/to/flow.yaml")
result = asyncio.run(flow(input_a=1, input_b=2))
async __call__(*args, **kwargs) Mapping[str, Any]#

Calling the flow as a function in async mode; the inputs should be provided as keyword arguments. Returns the output of the flow. The function call throws a UserErrorException if the flow or the inputs are not valid, and a SystemErrorException if the flow execution fails due to an unexpected executor error.

Parameters:
  • args – positional arguments are not supported.

  • kwargs – flow inputs as keyword arguments.

Returns:

The output of the flow.

async invoke(inputs: dict, *, connections: dict = None, **kwargs) LineResult#

Invoke a flow and get a LineResult object.
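
A minimal usage sketch; the flow path and input names are illustrative:

import asyncio
from promptflow.core import AsyncFlow

flow = AsyncFlow.load(source="path/to/flow.yaml")
line_result = asyncio.run(flow.invoke(inputs={"input_a": 1, "input_b": 2}))
print(line_result.output)  # the LineResult carries the flow output and run info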

class promptflow.core.AsyncPrompty(path: Union[str, PathLike], model: Optional[dict] = None, **kwargs)#

Bases: Prompty

Async prompty is based on Prompty and is used to invoke a prompty in async mode.

Simple Example:

import asyncio
from promptflow.core import AsyncPrompty
prompty = AsyncPrompty.load(source="path/to/prompty.prompty")
result = asyncio.run(prompty(input_a=1, input_b=2))
async __call__(*args, **kwargs) Mapping[str, Any]#

Calling the prompty as a function in async mode; the inputs should be provided as keyword arguments. Returns the output of the prompty. The function call throws a UserErrorException if the prompty or the inputs are not valid, and a SystemErrorException if the execution fails due to an unexpected executor error.

Parameters:
  • args – positional arguments are not supported.

  • kwargs – flow inputs as keyword arguments.

Returns:

The output of the prompty.

class promptflow.core.AzureOpenAIModelConfiguration(azure_deployment: str, azure_endpoint: str = None, api_version: str = None, api_key: str = None, connection: str = None)#

Bases: ModelConfiguration

api_key: str = None#
api_version: str = None#
azure_deployment: str#
azure_endpoint: str = None#
connection: str = None#
classmethod from_connection(connection: AzureOpenAIConnection, azure_deployment: str)#

Create a model configuration from an Azure OpenAI connection.

Parameters:
  • connection (AzureOpenAIConnection) – The Azure OpenAI connection.

  • azure_deployment (str) – The Azure OpenAI deployment name.

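A minimal construction sketch; the deployment name and the ${env:...} references are illustrative and follow the Prompty examples later on this page:

from promptflow.core import AzureOpenAIModelConfiguration

config = AzureOpenAIModelConfiguration(
    azure_deployment="gpt-35-turbo",
    azure_endpoint="${env:AZURE_OPENAI_ENDPOINT}",
    api_key="${env:AZURE_OPENAI_API_KEY}",
    api_version="${env:AZURE_OPENAI_API_VERSION}",
)
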
class promptflow.core.Flow(*, data: dict, code: Path, path: Path, **kwargs)#

Bases: FlowBase

A Flow in the context of PromptFlow is a sequence of steps that define a task. Each step in the flow could be a prompt that is sent to a language model, or simply a function task, and the output of one step can be used as the input to the next. Flows can be used to build complex applications with language models.

Example:

from promptflow.core import Flow
flow = Flow.load(source="path/to/flow.yaml")
result = flow(input_a=1, input_b=2)
__call__(*args, **kwargs) Mapping[str, Any]#

Calling the flow as a function; the inputs should be provided as keyword arguments. Returns the output of the flow. The function call throws a UserErrorException if the flow or the inputs are not valid, and a SystemErrorException if the flow execution fails due to an unexpected executor error.

Parameters:
  • args – positional arguments are not supported.

  • kwargs – flow inputs as keyword arguments.

Returns:

The output of the flow.

invoke(inputs: dict, connections: dict = None, **kwargs) LineResult#

Invoke a flow and get a LineResult object.
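
A minimal usage sketch; the flow path and input names are illustrative:

from promptflow.core import Flow

flow = Flow.load(source="path/to/flow.yaml")
line_result = flow.invoke(inputs={"input_a": 1, "input_b": 2})
print(line_result.output)  # the LineResult carries the flow output and run info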

class promptflow.core.ModelConfiguration#

Bases: object

abstract classmethod from_connection(connection, **kwargs)#

Create a model configuration from a connection.

class promptflow.core.OpenAIModelConfiguration(model: str, base_url: str = None, api_key: str = None, organization: str = None, connection: str = None)#

Bases: ModelConfiguration

api_key: str = None#
base_url: str = None#
connection: str = None#
classmethod from_connection(connection: OpenAIConnection, model: str)#

Create a model configuration from an OpenAI connection.

Parameters:
  • connection (OpenAIConnection) – The OpenAI connection.

  • model (str) – The model name.

model: str#
organization: str = None#
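
A minimal construction sketch; the model name is illustrative, and the ${env:...} reference style follows the Azure OpenAI examples on this page:

from promptflow.core import OpenAIModelConfiguration

config = OpenAIModelConfiguration(
    model="gpt-3.5-turbo",
    api_key="${env:OPENAI_API_KEY}",
)
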
class promptflow.core.Prompty(path: Union[str, PathLike], model: Optional[dict] = None, **kwargs)#

Bases: FlowBase

A prompty is a prompt with predefined metadata like inputs, and it can be executed directly like a flow. A prompty is represented as a templated markdown file with modified front matter. The front matter is a YAML block that contains meta fields like model configuration and inputs.

Prompty example:

---
name: Hello Prompty
description: A basic prompt
model:
    api: chat
    configuration:
      type: azure_openai
      azure_deployment: gpt-35-turbo
      api_key="${env:AZURE_OPENAI_API_KEY}",
      api_version=${env:AZURE_OPENAI_API_VERSION}",
      azure_endpoint="${env:AZURE_OPENAI_ENDPOINT}",
    parameters:
      max_tokens: 128
      temperature: 0.2
inputs:
  text:
    type: string
---
system:
Write a simple {{text}} program that displays the greeting message.

Prompty as function example:

from promptflow.core import Prompty
prompty = Prompty.load(source="path/to/prompty.prompty")
result = prompty(input_a=1, input_b=2)

# Override model config with dict
model_config = {
    "api": "chat",
    "configuration": {
        "type": "azure_openai",
        "azure_deployment": "gpt-35-turbo",
        "api_key": "${env:AZURE_OPENAI_API_KEY}",
        "api_version": "${env:AZURE_OPENAI_API_VERSION}",
        "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
    },
    "parameters": {
        "max_token": 512
    }
}
prompty = Prompty.load(source="path/to/prompty.prompty", model=model_config)
result = prompty(input_a=1, input_b=2)

# Override model config with configuration
from promptflow.core import AzureOpenAIModelConfiguration
model_config = {
    "api": "chat",
    "configuration": AzureOpenAIModelConfiguration(
        azure_deployment="gpt-35-turbo",
        api_key="${env:AZURE_OPENAI_API_KEY}",
        api_version="${env:AZURE_OPENAI_API_VERSION}",
        azure_endpoint="${env:AZURE_OPENAI_ENDPOINT}",
    ),
    "parameters": {
        "max_token": 512
    }
}
prompty = Prompty.load(source="path/to/prompty.prompty", model=model_config)
result = prompty(input_a=1, input_b=2)

# Override model config with created connection
from promptflow.core import AzureOpenAIModelConfiguration
model_config = {
    "api": "chat",
    "configuration": AzureOpenAIModelConfiguration(
        connection="azure_open_ai_connection",
        azure_deployment="gpt-35-turbo",
    ),
    "parameters": {
        "max_token": 512
    }
}
prompty = Prompty.load(source="path/to/prompty.prompty", model=model_config)
result = prompty(input_a=1, input_b=2)
__call__(*args, **kwargs)#

Calling the prompty as a function; the inputs should be provided as keyword arguments. Returns the output of the prompty.

The retry mechanism for prompty execution initiates when a retryable error is detected, including LLM response errors such as InternalServerError (>=500), RateLimitError (429), and UnprocessableEntityError (422). It retries up to 10 times; each retry interval grows exponentially, with the wait time capped at 60 seconds, and the aggregate waiting period across all retries is approximately 400 seconds.
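
A rough sketch of the stated wait schedule; the exact base and any jitter are internal details, so the numbers are only illustrative:

# Exponential backoff capped at 60 seconds per retry, 10 retries total.
waits = [min(2 ** attempt, 60) for attempt in range(1, 11)]
print(waits)      # [2, 4, 8, 16, 32, 60, 60, 60, 60, 60]
print(sum(waits)) # 362 seconds, in the ballpark of the ~400 seconds stated above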

The function call throws a UserErrorException if the prompty or the inputs are not valid, and a SystemErrorException if the execution fails due to an unexpected executor error.

Parameters:
  • args – positional arguments are not supported.

  • kwargs – flow inputs as keyword arguments.

Returns:

The output of the prompty.

estimate_token_count(*args, **kwargs)#

Estimate the token count. The LLM will reject the request when the prompt tokens plus the response tokens exceed the maximum number of tokens supported by the model. This method estimates the total number of tokens in this round of chat.

Parameters:
  • args – positional arguments are not supported.

  • kwargs – prompty inputs as keyword arguments.

Returns:

The estimated total token count.

Return type:

int
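
A minimal usage sketch; the prompty path is illustrative and the text input follows the front-matter example above:

from promptflow.core import Prompty

prompty = Prompty.load(source="path/to/prompty.prompty")
total_tokens = prompty.estimate_token_count(text="Python")
print(total_tokens)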

classmethod load(source: Union[str, PathLike], **kwargs) Prompty#

Directly load a non-DAG flow from a prompty file.

Parameters:

source (Union[PathLike, str]) – The local prompty file. Must be a path to a local file, which will be opened and read. An exception is raised if the file does not exist.

Returns:

A Prompty object

Return type:

Prompty

render(*args, **kwargs)#

Render the prompt content.

Parameters:
  • args – positional arguments are not supported.

  • kwargs – prompty inputs as keyword arguments.

Returns:

Prompt content

Return type:

str
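
A minimal usage sketch; render returns the rendered prompt content as a string, and the text input follows the front-matter example above:

from promptflow.core import Prompty

prompty = Prompty.load(source="path/to/prompty.prompty")
prompt_text = prompty.render(text="Python")
print(prompt_text)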

class promptflow.core.ToolProvider(*args, **kwargs)#

Bases: ABC

The base class for tool classes.

classmethod get_initialize_inputs()#
classmethod get_required_initialize_inputs()#
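
A minimal sketch of a tool class, assuming the common pattern of decorating a method with the tool decorator; the class and method names are illustrative:

from promptflow.core import ToolProvider, tool

class EchoTool(ToolProvider):
    def __init__(self, prefix: str):
        super().__init__()
        self.prefix = prefix

    @tool
    def echo(self, text: str) -> str:
        # Return the input text with the configured prefix.
        return f"{self.prefix}{text}"
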
promptflow.core.log_metric(key, value, variant_id=None)#

Log a metric for the current promptflow run.

Parameters:
  • key (str) – Metric name.

  • value (float) – Metric value.

  • variant_id (str) – Variant id for the metric.
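
A minimal sketch of recording a metric from a tool function, assuming it runs inside a promptflow run; the metric name and computation are illustrative:

from promptflow.core import log_metric, tool

@tool
def aggregate(grades: list):
    # Compute a simple ratio and record it as a metric for the current run.
    accuracy = round(grades.count("Correct") / len(grades), 2)
    log_metric("accuracy", accuracy)
    return accuracy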

promptflow.core.tool(func=None, *, name: Optional[str] = None, description: Optional[str] = None, type: Optional[str] = None, input_settings=None, streaming_option_parameter: Optional[str] = None, **kwargs) Callable#

Decorator for tool functions. The decorated function will be registered as a tool and can be used in a flow.

Parameters:
  • name (str) – The tool name.

  • description (str) – The tool description.

  • type (str) – The tool type.

  • input_settings (Dict[str, promptflow.entities.InputSetting]) – Dict of input settings.

Returns:

The decorated function.

Return type:

Callable
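
A minimal usage sketch; the tool name, description, and function body are illustrative:

from promptflow.core import tool

@tool(name="greet", description="Generate a greeting message.")
def greet(user_name: str) -> str:
    # The decorated function is registered as a tool and can be used in a flow.
    return f"Hello, {user_name}!"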