This tutorial shows how to generate a response from an IAgent, taking OpenAIChatAgent as an example.
Note
AutoGen.Net provides several agents for connecting to different LLM platforms. Generating a response with any of these agents follows the same pattern as the example shown below.
Note
The complete code example can be found in Chat_With_Agent.cs
Step 1: Install AutoGen
First, install the AutoGen package using the following command:
dotnet add package AutoGen
Step 2: Add Using Statements
using AutoGen.Core;
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
Step 3: Create an OpenAIChatAgent
Note
The RegisterMessageConnector method registers an OpenAIChatRequestMessageConnector middleware which converts OpenAI message types to AutoGen message types. This step is necessary when you want to use AutoGen built-in message types like TextMessage, ImageMessage, etc. For more information, see Built-in-messages
var gpt4o = LLMConfiguration.GetOpenAIGPT4o_mini();
var agent = new OpenAIChatAgent(
chatClient: gpt4o,
name: "agent",
systemMessage: "You are a helpful AI assistant")
.RegisterMessageConnector(); // convert OpenAI message to AutoGen message
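The gpt4o variable above comes from a sample helper (LLMConfiguration.GetOpenAIGPT4o_mini) whose body is not shown here. As a hypothetical sketch, such a helper could construct a ChatClient from the official OpenAI .NET SDK roughly like this (the environment-variable name and error message are assumptions, not from the sample):

```csharp
using OpenAI.Chat;

// Hypothetical sketch of a helper like LLMConfiguration.GetOpenAIGPT4o_mini,
// assuming the official OpenAI .NET SDK's ChatClient type.
static ChatClient GetOpenAIGPT4o_mini()
{
    // Read the API key from the environment (variable name is an assumption).
    var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
        ?? throw new InvalidOperationException("Please set the OPENAI_API_KEY environment variable.");

    // Create a chat client bound to the gpt-4o-mini model.
    return new ChatClient(model: "gpt-4o-mini", apiKey: apiKey);
}
```

The returned ChatClient is what OpenAIChatAgent's chatClient parameter expects.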
Step 4: Generate Response
To generate a response, use one of the overloads of the SendAsync method. The following code shows how to generate a response from a text message:
var reply = await agent.SendAsync("Tell me a joke");
reply.Should().BeOfType<TextMessage>();
if (reply is TextMessage textMessage)
{
Console.WriteLine(textMessage.Content);
}
To generate a response with chat history, pass the chat history to the SendAsync method:
reply = await agent.SendAsync("summarize the conversation", chatHistory: [reply]);
To generate a response as a stream, use the GenerateStreamingReplyAsync method:
var question = new TextMessage(Role.User, "Tell me a long joke");
await foreach (var streamingReply in agent.GenerateStreamingReplyAsync([question]))
{
if (streamingReply is TextMessageUpdate textMessageUpdate)
{
Console.Write(textMessageUpdate.Content); // each update carries only the newly generated chunk
}
}
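Because each TextMessageUpdate contains only the newly generated chunk, you can reconstruct the complete reply by accumulating the chunks, for example with a StringBuilder (a sketch building on the agent and question above, not part of the original sample):

```csharp
using System.Text;

// Accumulate streaming deltas into the full reply text.
var builder = new StringBuilder();
await foreach (var update in agent.GenerateStreamingReplyAsync([question]))
{
    if (update is TextMessageUpdate textUpdate)
    {
        builder.Append(textUpdate.Content);
    }
}
Console.WriteLine(builder.ToString()); // the complete reply, assembled from the chunks
```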