Class ChatCompletionsClientAgent

Namespace
AutoGen.AzureAIInference
Assembly
AutoGen.AzureAIInference.dll

Chat completions client agent. This agent is a thin wrapper around Azure.AI.Inference.ChatCompletionsClient that provides a simple interface for chat completions.

ChatCompletionsClientAgent supports the following message types:

  • MessageEnvelope<T> where T is Azure.AI.Inference.ChatRequestMessage: chat request message.

ChatCompletionsClientAgent returns the following message types:

  • MessageEnvelope<T> where T is Azure.AI.Inference.ChatResponseMessage: chat response message.

  • MessageEnvelope<T> where T is Azure.AI.Inference.StreamingChatCompletionsUpdate: streaming chat completions update.
public class ChatCompletionsClientAgent : IStreamingAgent, IAgent, IAgentMetaInformation
Inheritance
object → ChatCompletionsClientAgent

Implements
IStreamingAgent, IAgent, IAgentMetaInformation
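
The snippet below is a minimal usage sketch of the message contract described above: the agent consumes MessageEnvelope<ChatRequestMessage> and returns MessageEnvelope<ChatResponseMessage>. The endpoint, key, and model name are placeholders, and the MessageEnvelope.Create helper and the IMessage<T> cast are assumptions about AutoGen.Core, not part of this reference.

using System;
using AutoGen.AzureAIInference;
using AutoGen.Core;
using Azure;
using Azure.AI.Inference;

// Placeholder endpoint, key, and model name.
var client = new ChatCompletionsClient(
    new Uri("https://<your-endpoint>.inference.ai.azure.com"),
    new AzureKeyCredential("<your-api-key>"));

var agent = new ChatCompletionsClientAgent(
    chatCompletionsClient: client,
    name: "assistant",
    modelName: "<model-name>");

// Input: a ChatRequestMessage wrapped in MessageEnvelope<T>.
IMessage request = MessageEnvelope.Create(new ChatRequestUserMessage("What is 1 + 1?"));

// Output (non-streaming): a MessageEnvelope<ChatResponseMessage>.
IMessage reply = await agent.GenerateReplyAsync(new[] { request });
if (reply is IMessage<ChatResponseMessage> envelope)
{
    Console.WriteLine(envelope.Content.Content);
}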

Constructors

ChatCompletionsClientAgent(ChatCompletionsClient, string, ChatCompletionsOptions, string)

Create a new instance of ChatCompletionsClientAgent.

public ChatCompletionsClientAgent(ChatCompletionsClient chatCompletionsClient, string name, ChatCompletionsOptions options, string systemMessage = "You are a helpful AI assistant")

Parameters

chatCompletionsClient ChatCompletionsClient

chat completions client

name string

agent name

options ChatCompletionsOptions

chat completion options. The options must not contain any messages.

systemMessage string

system message
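
A minimal sketch of this overload, assuming a key-based Azure.AI.Inference client; the endpoint, key, and model name are placeholders, and the ChatCompletionsOptions property names shown (Model, Temperature, MaxTokens) are assumptions about that SDK.

using System;
using AutoGen.AzureAIInference;
using Azure;
using Azure.AI.Inference;

var client = new ChatCompletionsClient(
    new Uri("https://<your-endpoint>.inference.ai.azure.com"),
    new AzureKeyCredential("<your-api-key>"));

// The options carry the model and sampling settings but must not contain
// messages; the agent builds those from the conversation history on each call.
var options = new ChatCompletionsOptions
{
    Model = "<model-name>",
    Temperature = 0.7f,
    MaxTokens = 1024,
};

var agent = new ChatCompletionsClientAgent(
    chatCompletionsClient: client,
    name: "assistant",
    options: options,
    systemMessage: "You are a helpful AI assistant");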

ChatCompletionsClientAgent(ChatCompletionsClient, string, string, string, float, int, int?, ChatCompletionsResponseFormat?, IEnumerable<FunctionDefinition>?)

Create a new instance of ChatCompletionsClientAgent.

public ChatCompletionsClientAgent(ChatCompletionsClient chatCompletionsClient, string name, string modelName, string systemMessage = "You are a helpful AI assistant", float temperature = 0.7f, int maxTokens = 1024, int? seed = null, ChatCompletionsResponseFormat? responseFormat = null, IEnumerable<FunctionDefinition>? functions = null)

Parameters

chatCompletionsClient ChatCompletionsClient

chat completions client

name string

agent name

modelName string

model name, e.g. gpt-3.5-turbo

systemMessage string

system message

temperature float

temperature

maxTokens int

maximum number of tokens to generate

seed int?

seed to use; set it to enable deterministic output

responseFormat ChatCompletionsResponseFormat

response format; set it to Azure.AI.Inference.ChatCompletionsResponseFormatJSON to enable JSON mode.

functions IEnumerable<FunctionDefinition>

function definitions to make available to the model
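
A sketch of this convenience overload with JSON mode and a fixed seed. The endpoint, key, and model name are placeholders, and the parameterless ChatCompletionsResponseFormatJSON constructor is an assumption about the Azure.AI.Inference SDK.

using System;
using AutoGen.AzureAIInference;
using Azure;
using Azure.AI.Inference;

var client = new ChatCompletionsClient(
    new Uri("https://<your-endpoint>.inference.ai.azure.com"),
    new AzureKeyCredential("<your-api-key>"));

// Fixed seed plus JSON response format for (near-)deterministic JSON output.
var agent = new ChatCompletionsClientAgent(
    chatCompletionsClient: client,
    name: "assistant",
    modelName: "gpt-3.5-turbo",
    systemMessage: "You are a helpful AI assistant that always replies in JSON.",
    temperature: 0f,
    maxTokens: 1024,
    seed: 42,
    responseFormat: new ChatCompletionsResponseFormatJSON());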

Properties

Name

public string Name { get; }

Property Value

string

Methods

GenerateReplyAsync(IEnumerable<IMessage>, GenerateReplyOptions?, CancellationToken)

Generate reply

public Task<IMessage> GenerateReplyAsync(IEnumerable<IMessage> messages, GenerateReplyOptions? options = null, CancellationToken cancellationToken = default)

Parameters

messages IEnumerable<IMessage>

conversation history

options GenerateReplyOptions

completion options. If provided, they override any options configured on the agent.

cancellationToken CancellationToken

Returns

Task<IMessage>
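
A sketch of a call, assuming an agent constructed as in the constructor examples above; the GenerateReplyOptions property names used here (Temperature, MaxToken) are assumptions about AutoGen.Core.

using System;
using System.Threading.Tasks;
using AutoGen.AzureAIInference;
using AutoGen.Core;
using Azure.AI.Inference;

public static class GenerateReplyExample
{
    // `agent` is assumed to be constructed as in the constructor examples above.
    public static async Task RunAsync(ChatCompletionsClientAgent agent)
    {
        // Conversation history: MessageEnvelope-wrapped ChatRequestMessages.
        var history = new IMessage[]
        {
            MessageEnvelope.Create(new ChatRequestUserMessage("Summarize Hamlet in one sentence.")),
        };

        // Per-call options; when provided they override the options configured on the agent.
        var overrides = new GenerateReplyOptions
        {
            Temperature = 0.2f, // assumed property name
            MaxToken = 256,     // assumed property name
        };

        IMessage reply = await agent.GenerateReplyAsync(history, overrides);
        if (reply is IMessage<ChatResponseMessage> envelope)
        {
            Console.WriteLine(envelope.Content.Content);
        }
    }
}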

GenerateStreamingReplyAsync(IEnumerable<IMessage>, GenerateReplyOptions?, CancellationToken)

public IAsyncEnumerable<IMessage> GenerateStreamingReplyAsync(IEnumerable<IMessage> messages, GenerateReplyOptions? options = null, CancellationToken cancellationToken = default)

Parameters

messages IEnumerable<IMessage>
options GenerateReplyOptions
cancellationToken CancellationToken

Returns

IAsyncEnumerable<IMessage>
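
A sketch of consuming the streaming overload, assuming an agent constructed as in the examples above. Each yielded item wraps an Azure.AI.Inference.StreamingChatCompletionsUpdate; the ContentUpdate property used below is an assumption about that SDK type.

using System;
using System.Threading.Tasks;
using AutoGen.AzureAIInference;
using AutoGen.Core;
using Azure.AI.Inference;

public static class StreamingReplyExample
{
    // `agent` is assumed to be constructed as in the constructor examples above.
    public static async Task RunAsync(ChatCompletionsClientAgent agent)
    {
        var history = new IMessage[]
        {
            MessageEnvelope.Create(new ChatRequestUserMessage("Write a haiku about the sea.")),
        };

        // Each streamed item wraps an Azure.AI.Inference.StreamingChatCompletionsUpdate.
        await foreach (IMessage update in agent.GenerateStreamingReplyAsync(history))
        {
            if (update is IMessage<StreamingChatCompletionsUpdate> chunk
                && !string.IsNullOrEmpty(chunk.Content.ContentUpdate))
            {
                Console.Write(chunk.Content.ContentUpdate);
            }
        }
    }
}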