
The following example shows how to connect to a third-party OpenAI-compatible API using OpenAIChatAgent.

Overview

Many LLM applications and platforms, such as LM Studio, Ollama, and Mistral, can spin up a chat server that is compatible with the OpenAI API. This means you can connect to these servers using OpenAIChatAgent.

Note

Some platforms might not support all features of the OpenAI API. For example, Ollama does not support function calls when using its OpenAI-compatible API, according to its documentation (as of 2024/05/07). This means some OpenAI API features might not work as expected when using these platforms with OpenAIChatAgent. Please refer to the platform's documentation for more information.

Prerequisites

  • Install the AutoGen.OpenAI package:
dotnet add package AutoGen.OpenAI --version AUTOGEN_VERSION
  • Spin up a chat server that is compatible with the OpenAI API. The following example uses Ollama as the chat server and llama3 as the LLM model. Pull the model first, then start the server:
ollama pull llama3
ollama serve

Steps

  • Import the required namespaces:
using System.ClientModel;
using AutoGen.Core;
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using OpenAI;
  • Create a CustomHttpClientHandler class.

The CustomHttpClientHandler class customizes the underlying HttpClientHandler. In this example, we override the SendAsync method to redirect every request to the local Ollama server, which is running on http://localhost:11434.
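A minimal sketch of such a handler might look like the following (the hard-coded base URI matching Ollama's default port is an assumption made here for brevity):

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Redirects every outgoing request to the local Ollama server while
// preserving the original path and query (e.g. /v1/chat/completions).
public sealed class CustomHttpClientHandler : HttpClientHandler
{
    private readonly Uri _ollamaUri = new Uri("http://localhost:11434");

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Keep the path and query issued by the OpenAI client; swap only the host.
        request.RequestUri = new Uri(_ollamaUri, request.RequestUri!.PathAndQuery);
        return base.SendAsync(request, cancellationToken);
    }
}
```

To use this handler, wrap it in an HttpClient and supply it through the client options' transport (for example via System.ClientModel's HttpClientPipelineTransport), or skip the handler entirely and set the Endpoint option directly as shown in the next step.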

  • Create an OpenAIChatAgent instance and connect to the third-party API.

Then create an OpenAIClient that points at Ollama's OpenAI-compatible endpoint. The simplest way, shown below, is to set Endpoint in OpenAIClientOptions. Alternatively, you can customize the transport behavior of OpenAIClient by passing in a transport built around the CustomHttpClientHandler we just created, which redirects all OpenAI chat requests to the local Ollama server.

// api-key is not required for local server
// so you can use any string here
var openAIClient = new OpenAIClient(new ApiKeyCredential("api-key"), new OpenAIClientOptions
{
    Endpoint = new Uri("http://localhost:11434/v1/"), // remember to add /v1/ at the end to reach the Ollama OpenAI-compatible server
});
var model = "llama3";

var agent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient(model),
    name: "assistant",
    systemMessage: "You are a helpful assistant designed to output JSON.",
    seed: 0)
    .RegisterMessageConnector()
    .RegisterPrintMessage();
  • Chat with the OpenAIChatAgent. Finally, start chatting with the agent. In this example, we send a coding question to the agent and get the response.
await agent.SendAsync("Can you write a piece of C# code to calculate the 100th Fibonacci number?");
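Because the agent is registered with RegisterPrintMessage, the reply is printed to the console automatically. If you want to inspect the reply programmatically instead, SendAsync returns the reply message; a hedged sketch (GetContent() is assumed here to be the AutoGen.Core extension for extracting text content):

```csharp
// SendAsync returns the agent's reply as an IMessage.
var reply = await agent.SendAsync("Can you write a piece of C# code to calculate the 100th Fibonacci number?");

// GetContent() extracts the text content of the message, if any.
Console.WriteLine(reply.GetContent());
```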

Sample Output

The following is the sample output of the code snippet above:

[sample output image]