# LLM

## Introduction

The prompt flow LLM tool enables you to use widely adopted large language models such as OpenAI or Azure OpenAI (AOAI) for natural language processing.

Prompt flow provides two LLM APIs:

- **Completion**: OpenAI's completion models generate text based on provided prompts.
- **Chat**: OpenAI's chat models facilitate interactive conversations with text-based inputs and responses.

> [!NOTE]
> The embedding option has been removed from the LLM tool's API setting. To use the embedding API, use the Embedding tool instead.

## Prerequisites

Create OpenAI resources:

## Connections

Set up connections to the provisioned resources in prompt flow.

| Type        | Name     | API Key  | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| OpenAI      | Required | Required | -        | -           |
| AzureOpenAI | Required | Required | Required | Required    |
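The required fields in the table above can be checked programmatically. The sketch below is an illustration of the table's rules, not the prompt flow SDK; the field names (`name`, `api_key`, `api_type`, `api_version`) mirror the table columns.

```python
# Required connection fields per connection type, as documented above.
REQUIRED_FIELDS = {
    "OpenAI": ["name", "api_key"],
    # AzureOpenAI additionally requires the API type and version.
    "AzureOpenAI": ["name", "api_key", "api_type", "api_version"],
}

def missing_fields(conn_type: str, conn: dict) -> list:
    """Return the required fields absent from the connection settings."""
    return [f for f in REQUIRED_FIELDS[conn_type] if not conn.get(f)]
```

For example, `missing_fields("AzureOpenAI", {"name": "aoai", "api_key": "..."})` reports that `api_type` and `api_version` are still needed.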

## Inputs

### Text Completion

| Name | Type | Description | Required |
|------|------|-------------|----------|
| prompt | string | The text prompt that the language model will complete. | Yes |
| model, deployment_name | string | The language model to use. | Yes |
| max_tokens | integer | The maximum number of tokens to generate in the completion. Default is 16. | No |
| temperature | float | The randomness of the generated text. Default is 1. | No |
| stop | list | The stopping sequence for the generated text. Default is null. | No |
| suffix | string | Text appended to the end of the completion. | No |
| top_p | float | The probability of using the top choice from the generated tokens. Default is 1. | No |
| logprobs | integer | The number of log probabilities to generate. Default is null. | No |
| echo | boolean | Whether to echo back the prompt in the response. Default is false. | No |
| presence_penalty | float | Value that controls the model's behavior with regard to repeating phrases. Default is 0. | No |
| frequency_penalty | float | Value that controls the model's behavior with regard to generating rare phrases. Default is 0. | No |
| best_of | integer | The number of best completions to generate. Default is 1. | No |
| logit_bias | dictionary | The logit bias for the language model. Default is an empty dictionary. | No |
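The optional inputs above map directly onto parameters of an OpenAI-style completions request. As a minimal sketch (an illustration of the documented defaults, not the tool's source), the helper below merges caller overrides over the table's defaults; the resulting dict could then be passed to a completions endpoint, e.g. `client.completions.create(**kwargs)` with the `openai` package.

```python
# Defaults for optional completion inputs, taken from the table above.
COMPLETION_DEFAULTS = {
    "max_tokens": 16,
    "temperature": 1.0,
    "stop": None,
    "top_p": 1.0,
    "logprobs": None,
    "echo": False,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "best_of": 1,
    "logit_bias": {},
}

def completion_kwargs(prompt: str, model: str, **overrides) -> dict:
    """Build request kwargs: caller overrides win over documented defaults."""
    params = {**COMPLETION_DEFAULTS, **overrides}
    params.update({"prompt": prompt, "model": model})
    return params
```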

### Chat

| Name | Type | Description | Required |
|------|------|-------------|----------|
| prompt | string | The text prompt that the language model will respond to. | Yes |
| model, deployment_name | string | The language model to use. | Yes |
| max_tokens | integer | The maximum number of tokens to generate in the response. Default is inf. | No |
| temperature | float | The randomness of the generated text. Default is 1. | No |
| stop | list | The stopping sequence for the generated text. Default is null. | No |
| top_p | float | The probability of using the top choice from the generated tokens. Default is 1. | No |
| presence_penalty | float | Value that controls the model's behavior with regard to repeating phrases. Default is 0. | No |
| frequency_penalty | float | Value that controls the model's behavior with regard to generating rare phrases. Default is 0. | No |
| logit_bias | dictionary | The logit bias for the language model. Default is an empty dictionary. | No |
| function_call | object | Value that controls which function is called by the model. Default is null. | No |
| functions | list | A list of functions the model may generate JSON inputs for. Default is null. | No |
| response_format | object | An object specifying the format that the model must output. Default is null. | No |
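A chat prompt annotates messages with role markers (lines such as `system:` and `user:`). As a simplified sketch of that idea (an assumption for illustration, not the tool's actual parser), the helper below turns such a prompt into the messages list a chat API expects:

```python
def to_messages(chat_prompt: str) -> list:
    """Split a role-annotated prompt into chat messages.

    A line consisting only of a role marker ("system:", "user:",
    "assistant:") starts a new message; following lines are its content.
    """
    roles = {"system:", "user:", "assistant:"}
    messages, current = [], None
    for line in chat_prompt.splitlines():
        if line.strip().lower() in roles:
            current = {"role": line.strip().rstrip(":").lower(), "content": ""}
            messages.append(current)
        elif current is not None:
            current["content"] += line + "\n"
    for m in messages:  # trim trailing newlines from each message body
        m["content"] = m["content"].strip()
    return messages
```

For example, `to_messages("system:\nYou are helpful.\nuser:\nHello!")` yields a system message followed by a user message.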

## Outputs

| API        | Return Type | Description                                  |
|------------|-------------|----------------------------------------------|
| Completion | string      | The text of one predicted completion         |
| Chat       | string      | The text of one response in the conversation |
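Both APIs return a single string, but if you call the underlying OpenAI-style endpoints yourself, that string lives in a different place in each response payload. A sketch using plain dicts to stand in for the response objects:

```python
def extract_text(api: str, response: dict) -> str:
    """Pull the output string from an OpenAI-style response payload."""
    choice = response["choices"][0]
    if api == "Completion":
        return choice["text"]                # completion responses carry `text`
    if api == "Chat":
        return choice["message"]["content"]  # chat responses nest under `message`
    raise ValueError(f"unknown API: {api}")
```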

## How to use the LLM tool

1. Set up and select the connections to your OpenAI resources.
2. Configure the LLM model API and its parameters.
3. Prepare the prompt with guidance.
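The steps above can be sketched end to end. Prompt flow renders prompts as templates; to keep this illustration dependency-free, Python's standard-library `string.Template` stands in for the template engine, and `call_model` stands in for the configured connection and API from steps 1 and 2 (both are assumptions for illustration).

```python
from string import Template

# Step 3: prepare a parameterized prompt ($text is a placeholder variable).
prompt_template = Template("Summarize the following text in one sentence:\n$text")

def run_llm(text: str, call_model) -> str:
    """Render the prompt and hand it to a model-calling function.

    `call_model` abstracts steps 1 and 2: in a real flow it would invoke
    the Completion or Chat API over your configured connection.
    """
    prompt = prompt_template.substitute(text=text)
    return call_model(prompt)
```

Passing a stub such as `lambda p: p` in place of a real model call makes the rendering step easy to verify in isolation.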