This example shows how to use OllamaAgent to connect to an Ollama server and chat with the llama3 model.

To run this example, you need an Ollama server running locally with the llama3:latest model installed. For how to set up an Ollama server, please refer to Ollama.
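Assuming you have the Ollama CLI installed, starting the server and pulling the model typically looks like this (the default listen address matches the HttpClient base address used later in this example):

```shell
# start the Ollama server (listens on http://localhost:11434 by default)
ollama serve

# in another terminal, pull the model used in this example
ollama pull llama3:latest
```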

Note

You can find the complete sample code here

Step 1: Install AutoGen.Ollama

First, install the AutoGen.Ollama package using the following command:

dotnet add package AutoGen.Ollama

For instructions on installing from the nightly build, please refer to Installation.

Step 2: Add using statements

using AutoGen.Core;
using AutoGen.Ollama.Extension;

Step 3: Create an OllamaAgent and chat with it

In this step, we create an OllamaAgent and connect it to the Ollama server.

using var httpClient = new HttpClient()
{
    BaseAddress = new Uri("http://localhost:11434"),
};

var ollamaAgent = new OllamaAgent(
    httpClient: httpClient,
    name: "ollama",
    modelName: "llama3:latest",
    systemMessage: "You are a helpful AI assistant")
    .RegisterMessageConnector()
    .RegisterPrintMessage();

var reply = await ollamaAgent.SendAsync("Can you write a piece of C# code to calculate 100th of fibonacci?");
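Beyond single-turn `SendAsync`, OllamaAgent also supports streaming replies via the IStreamingAgent interface. The sketch below is a minimal, hedged example assuming the agent was created as shown above and that the message connector is registered; the exact update types may vary by AutoGen version.

```csharp
// Hedged sketch: stream tokens from the agent as they arrive instead of
// waiting for the full reply. Assumes `ollamaAgent` was built as above.
var question = new TextMessage(Role.User, "Explain the fibonacci sequence in one sentence.");

await foreach (var update in ollamaAgent.GenerateStreamingReplyAsync(new[] { question }))
{
    // TextMessageUpdate carries an incremental chunk of the reply text.
    if (update is TextMessageUpdate textUpdate)
    {
        Console.Write(textUpdate.Content);
    }
}
```

Streaming is useful for interactive scenarios where you want to display partial output while the model is still generating.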