Class ChatCompletionsClientAgent
- Namespace
- AutoGen.AzureAIInference
- Assembly
- AutoGen.AzureAIInference.dll
ChatCompletions client agent. This agent is a thin wrapper around Azure.AI.Inference.ChatCompletionsClient to provide a simple interface for chat completions.
ChatCompletionsClientAgent supports the following message types:
- MessageEnvelope<T> where T is Azure.AI.Inference.ChatRequestMessage: chat request message.
ChatCompletionsClientAgent returns the following message types:
- MessageEnvelope<T> where T is Azure.AI.Inference.ChatResponseMessage: chat response message.
- MessageEnvelope<T> where T is Azure.AI.Inference.StreamingChatCompletionsUpdate: streaming chat completions update.
public class ChatCompletionsClientAgent : IStreamingAgent, IAgent, IAgentMetaInformation
- Inheritance
- object ← ChatCompletionsClientAgent
- Implements
- IStreamingAgent, IAgent, IAgentMetaInformation
Constructors
ChatCompletionsClientAgent(ChatCompletionsClient, string, ChatCompletionsOptions, string)
Create a new instance of ChatCompletionsClientAgent.
public ChatCompletionsClientAgent(ChatCompletionsClient chatCompletionsClient, string name, ChatCompletionsOptions options, string systemMessage = "You are a helpful AI assistant")
Parameters
chatCompletionsClient
ChatCompletionsClient: the chat completions client
name
string: the agent name
options
ChatCompletionsOptions: chat completion options; the options must not contain messages
systemMessage
string: the system message
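A minimal construction sketch using this constructor. The endpoint, API key, and model name below are placeholders, and the sketch assumes the Azure.AI.Inference and AutoGen.AzureAIInference packages are referenced:

```csharp
using System;
using Azure;
using Azure.AI.Inference;
using AutoGen.AzureAIInference;

// Placeholder endpoint and key; supply your own Azure AI Inference credentials.
var client = new ChatCompletionsClient(
    new Uri("https://your-endpoint.example.com"),
    new AzureKeyCredential("your-api-key"));

// The options must not contain messages; the agent manages the conversation itself.
var options = new ChatCompletionsOptions
{
    Model = "gpt-4o-mini", // placeholder model name
    Temperature = 0.7f,
    MaxTokens = 1024,
};

var agent = new ChatCompletionsClientAgent(client, "assistant", options);
```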
ChatCompletionsClientAgent(ChatCompletionsClient, string, string, string, float, int, int?, ChatCompletionsResponseFormat?, IEnumerable<FunctionDefinition>?)
Create a new instance of ChatCompletionsClientAgent.
public ChatCompletionsClientAgent(ChatCompletionsClient chatCompletionsClient, string name, string modelName, string systemMessage = "You are a helpful AI assistant", float temperature = 0.7f, int maxTokens = 1024, int? seed = null, ChatCompletionsResponseFormat? responseFormat = null, IEnumerable<FunctionDefinition>? functions = null)
Parameters
chatCompletionsClient
ChatCompletionsClient: the chat completions client
name
string: the agent name
modelName
string: the model name, e.g. gpt-3.5-turbo
systemMessage
string: the system message
temperature
float: sampling temperature
maxTokens
int: maximum number of tokens to generate
seed
int?: random seed; set it to enable deterministic output
responseFormat
ChatCompletionsResponseFormat?: response format; set it to Azure.AI.Inference.ChatCompletionsResponseFormatJSON to enable JSON mode
functions
IEnumerable<FunctionDefinition>?: function definitions available to the model
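The same agent can be created with this convenience constructor, which builds the options internally. The model name is again a placeholder, and `client` is assumed to be an existing ChatCompletionsClient:

```csharp
var agent = new ChatCompletionsClientAgent(
    chatCompletionsClient: client,  // an existing Azure.AI.Inference.ChatCompletionsClient
    name: "assistant",
    modelName: "gpt-4o-mini",       // placeholder model name
    temperature: 0.7f,
    maxTokens: 1024,
    seed: 42);                      // optional: enables deterministic output
```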
Properties
Name
public string Name { get; }
Property Value
string
Methods
GenerateReplyAsync(IEnumerable<IMessage>, GenerateReplyOptions?, CancellationToken)
Generate reply
public Task<IMessage> GenerateReplyAsync(IEnumerable<IMessage> messages, GenerateReplyOptions? options = null, CancellationToken cancellationToken = default)
Parameters
messages
IEnumerable<IMessage>: conversation history
options
GenerateReplyOptions?: completion options; if provided, they override the agent's existing options
cancellationToken
CancellationToken
Returns
Task<IMessage>: the generated reply
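A usage sketch for this method, assuming AutoGen.Core's MessageEnvelope.Create helper to wrap an Azure.AI.Inference.ChatRequestUserMessage (the message type this agent accepts) and an `agent` constructed as shown above:

```csharp
using AutoGen.Core;
using Azure.AI.Inference;

// Wrap an Azure.AI.Inference chat request message in a MessageEnvelope.
var userMessage = MessageEnvelope.Create(new ChatRequestUserMessage("What is 2 + 2?"));

// The reply is a MessageEnvelope<ChatResponseMessage>.
var reply = await agent.GenerateReplyAsync(new[] { userMessage });
```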
GenerateStreamingReplyAsync(IEnumerable<IMessage>, GenerateReplyOptions?, CancellationToken)
Generate streaming reply
public IAsyncEnumerable<IMessage> GenerateStreamingReplyAsync(IEnumerable<IMessage> messages, GenerateReplyOptions? options = null, CancellationToken cancellationToken = default)
Parameters
messages
IEnumerable<IMessage>: conversation history
options
GenerateReplyOptions?: completion options; if provided, they override the agent's existing options
cancellationToken
CancellationToken
Returns
IAsyncEnumerable<IMessage>: a stream of reply messages
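A streaming consumption sketch, assuming each update implements AutoGen.Core's IMessage<T> with a `Content` property wrapping an Azure.AI.Inference.StreamingChatCompletionsUpdate, and `agent` and `userMessage` as in the examples above:

```csharp
using System;
using AutoGen.Core;
using Azure.AI.Inference;

await foreach (var update in agent.GenerateStreamingReplyAsync(new[] { userMessage }))
{
    // Each update wraps an Azure.AI.Inference.StreamingChatCompletionsUpdate;
    // print each content delta as it arrives.
    if (update is IMessage<StreamingChatCompletionsUpdate> chunk)
    {
        Console.Write(chunk.Content.ContentUpdate);
    }
}
```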