Class ChatCompletionsClientAgent
- Namespace: AutoGen.AzureAIInference
- Assembly: AutoGen.AzureAIInference.dll
ChatCompletions client agent. This agent is a thin wrapper around Azure.AI.Inference.ChatCompletionsClient to provide a simple interface for chat completions.
ChatCompletionsClientAgent supports the following message types:
- MessageEnvelope<T> where T is Azure.AI.Inference.ChatRequestMessage: chat request message.
ChatCompletionsClientAgent returns the following message types:
- MessageEnvelope<T> where T is Azure.AI.Inference.ChatResponseMessage: chat response message.
- MessageEnvelope<T> where T is Azure.AI.Inference.StreamingChatCompletionsUpdate: streaming chat completions update.
public class ChatCompletionsClientAgent : IStreamingAgent, IAgent, IAgentMetaInformation
- Inheritance
- object → ChatCompletionsClientAgent
- Implements
- IStreamingAgent, IAgent, IAgentMetaInformation
Constructors
ChatCompletionsClientAgent(ChatCompletionsClient, string, ChatCompletionsOptions, string)
Create a new instance of ChatCompletionsClientAgent.
public ChatCompletionsClientAgent(ChatCompletionsClient chatCompletionsClient, string name, ChatCompletionsOptions options, string systemMessage = "You are a helpful AI assistant")
Parameters
- chatCompletionsClient (ChatCompletionsClient): chat completions client
- name (string): agent name
- options (ChatCompletionsOptions): chat completion options; must not contain any messages
- systemMessage (string): system message
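A minimal construction sketch for this overload. The endpoint URL, environment variable name, and model name below are placeholders, not values from this documentation.

```csharp
using System;
using Azure;
using Azure.AI.Inference;
using AutoGen.AzureAIInference;

// Placeholder endpoint and key; substitute your own Azure AI Inference deployment.
var client = new ChatCompletionsClient(
    new Uri("https://<your-resource>.services.ai.azure.com/models"),
    new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_INFERENCE_KEY")!));

// The options passed to the agent must not contain any messages;
// the agent builds the message list from the conversation it is given.
var options = new ChatCompletionsOptions
{
    Model = "gpt-4o-mini",   // placeholder model name
    Temperature = 0.7f,
    MaxTokens = 1024,
};

var agent = new ChatCompletionsClientAgent(
    chatCompletionsClient: client,
    name: "assistant",
    options: options,
    systemMessage: "You are a helpful AI assistant");
```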
ChatCompletionsClientAgent(ChatCompletionsClient, string, string, string, float, int, int?, ChatCompletionsResponseFormat?, IEnumerable<FunctionDefinition>?)
Create a new instance of ChatCompletionsClientAgent.
public ChatCompletionsClientAgent(ChatCompletionsClient chatCompletionsClient, string name, string modelName, string systemMessage = "You are a helpful AI assistant", float temperature = 0.7f, int maxTokens = 1024, int? seed = null, ChatCompletionsResponseFormat? responseFormat = null, IEnumerable<FunctionDefinition>? functions = null)
Parameters
- chatCompletionsClient (ChatCompletionsClient): chat completions client
- name (string): agent name
- modelName (string): model name, e.g. gpt-3.5-turbo
- systemMessage (string): system message
- temperature (float): sampling temperature
- maxTokens (int): maximum number of tokens to generate
- seed (int?): seed; set it to enable deterministic output
- responseFormat (ChatCompletionsResponseFormat?): response format; set it to Azure.AI.Inference.ChatCompletionsResponseFormatJSON to enable JSON mode
- functions (IEnumerable<FunctionDefinition>?): function definitions available to the model
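A sketch of the convenience overload, reusing the client from the previous example. The model name and seed are placeholders, and the use of a parameterless ChatCompletionsResponseFormatJSON constructor is an assumption about the Azure.AI.Inference version in use.

```csharp
// Convenience overload: model settings are passed directly instead of via ChatCompletionsOptions.
var jsonAgent = new ChatCompletionsClientAgent(
    chatCompletionsClient: client,   // client from the previous example
    name: "json-assistant",
    modelName: "gpt-4o-mini",        // placeholder model name
    systemMessage: "You are a helpful AI assistant. Answer in JSON.",
    temperature: 0f,
    maxTokens: 512,
    seed: 42,                        // fixed seed for deterministic output
    responseFormat: new ChatCompletionsResponseFormatJSON());
```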
Properties
Name
public string Name { get; }
Property Value
- string: the agent name
Methods
GenerateReplyAsync(IEnumerable<IMessage>, GenerateReplyOptions?, CancellationToken)
Generate reply
public Task<IMessage> GenerateReplyAsync(IEnumerable<IMessage> messages, GenerateReplyOptions? options = null, CancellationToken cancellationToken = default)
Parameters
- messages (IEnumerable<IMessage>): conversation history
- options (GenerateReplyOptions?): completion options; if provided, they override the agent's existing options
- cancellationToken (CancellationToken): token to cancel the operation
Returns
- Task<IMessage>: the generated reply message
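A sketch of a non-streaming call. It assumes MessageEnvelope.Create from AutoGen.Core and ChatRequestUserMessage from Azure.AI.Inference; the prompt text is a placeholder.

```csharp
using System;
using AutoGen.Core;
using Azure.AI.Inference;

// Wrap an Azure.AI.Inference request message in a MessageEnvelope,
// which is the message type this agent accepts.
var userMessage = MessageEnvelope.Create(
    new ChatRequestUserMessage("What is the capital of France?"));

IMessage reply = await agent.GenerateReplyAsync(new[] { userMessage });

// The reply is a MessageEnvelope<ChatResponseMessage>; unwrap it to read the text.
if (reply is IMessage<ChatResponseMessage> envelope)
{
    Console.WriteLine(envelope.Content.Content);
}
```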
GenerateStreamingReplyAsync(IEnumerable<IMessage>, GenerateReplyOptions?, CancellationToken)
Generate a streaming reply.
public IAsyncEnumerable<IMessage> GenerateStreamingReplyAsync(IEnumerable<IMessage> messages, GenerateReplyOptions? options = null, CancellationToken cancellationToken = default)
Parameters
- messages (IEnumerable<IMessage>): conversation history
- options (GenerateReplyOptions?): completion options; if provided, they override the agent's existing options
- cancellationToken (CancellationToken): token to cancel the operation
Returns
- IAsyncEnumerable<IMessage>: a stream of reply message updates
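A streaming sketch continuing from the previous example. It assumes the incremental text is exposed via StreamingChatCompletionsUpdate.ContentUpdate, which may differ across Azure.AI.Inference versions.

```csharp
// Stream the reply and print content deltas as they arrive.
await foreach (var update in agent.GenerateStreamingReplyAsync(new[] { userMessage }))
{
    if (update is IMessage<StreamingChatCompletionsUpdate> chunk
        && !string.IsNullOrEmpty(chunk.Content.ContentUpdate))
    {
        Console.Write(chunk.Content.ContentUpdate);
    }
}
```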