This example shows how to use function calls with local LLM models, using Ollama as the local model provider and the LiteLLM proxy server to provide an OpenAI-compatible API interface.

To run this example, the following prerequisites are required:

  • Install Ollama and LiteLLM on your local machine.
  • A local model that supports function calling. In this example, dolphincoder:latest is used.

Install Ollama and pull the dolphincoder:latest model

First, install Ollama by following the instructions on the Ollama website.

After installing Ollama, pull the dolphincoder:latest model by running the following command:

ollama pull dolphincoder:latest
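You can optionally confirm the model is available locally by listing the pulled models:

ollama list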

Install LiteLLM and start the proxy server

You can install LiteLLM by following the instructions on the LiteLLM website, or simply via pip:

pip install 'litellm[proxy]'

Then, start the proxy server by running the following command:

litellm --model ollama_chat/dolphincoder --port 4000

This will start an OpenAI-compatible proxy server at http://localhost:4000. You can verify that the server is running by looking for the following output in the terminal:

#------------------------------------------------------------#
#                                                            #
#         'The worst thing about this product is...'          #
#        https://github.com/BerriAI/litellm/issues/new        #
#                                                            #
#------------------------------------------------------------#

INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
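As an additional check, since the proxy speaks the OpenAI API, you can query its models endpoint (assuming curl is available; the exact response shape may vary across LiteLLM versions):

curl http://localhost:4000/v1/models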

Install AutoGen and AutoGen.SourceGenerator

In your project, install the AutoGen and AutoGen.SourceGenerator packages using the following commands:

dotnet add package AutoGen
dotnet add package AutoGen.SourceGenerator

The AutoGen.SourceGenerator package automatically generates type-safe FunctionContract definitions so you don't have to define them manually. For more information, please check out Create type-safe function.

Also, in your project file, enable structured XML documentation support by setting the GenerateDocumentationFile property to true. The source generator reads the XML documentation comments on your methods to populate the descriptions in the generated FunctionContract:

<PropertyGroup>
    <!-- This enables structural xml document support -->
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>

Define a GetWeatherAsync function and create FunctionCallMiddleware

Create a public partial class to host the methods you want to use in AutoGen agents. Each method must be a public instance method with a return type of Task<string>. Once the methods are defined, mark them with the AutoGen.Core.FunctionAttribute attribute.

public partial class Function
{
    /// <summary>
    /// Get the weather report for a given city.
    /// </summary>
    /// <param name="city">The city to get the weather report for.</param>
    [Function]
    public async Task<string> GetWeatherAsync(string city)
    {
        return await Task.FromResult($"The weather in {city} is 72 degrees and sunny.");
    }
}
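For orientation only (do not add this to your project), the source generator emits companion members on the partial class roughly shaped like the sketch below: a FunctionContract describing GetWeatherAsync, and a wrapper that accepts the model's JSON arguments string and invokes the method. These are the members used in the next snippet.

public partial class Function
{
    // Schema (name, description, parameters) handed to the model. (generated)
    public FunctionContract GetWeatherAsyncFunctionContract => throw new NotImplementedException();

    // Parses the JSON arguments of a tool call and invokes GetWeatherAsync. (generated)
    public Task<string> GetWeatherAsyncWrapper(string arguments) => throw new NotImplementedException();
}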

Then create a FunctionCallMiddleware and add the GetWeatherAsync function to it. The middleware passes the FunctionContract to the agent when generating a response, and invokes the corresponding function when it receives a ToolCallMessage.

var functions = new Function();
var functionMiddleware = new FunctionCallMiddleware(
    functions: [functions.GetWeatherAsyncFunctionContract],
    functionMap: new Dictionary<string, Func<string, Task<string>>>
    {
        { functions.GetWeatherAsyncFunctionContract.Name!, functions.GetWeatherAsyncWrapper },
    });
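Because the wrapper has exactly the Func<string, Task<string>> shape the functionMap expects, you can optionally invoke it directly to verify the plumbing before involving a model. The JSON payload below mimics what the model would produce (a quick sanity-check sketch):

// Invoke the generated wrapper with a JSON arguments payload,
// just as the middleware does when handling a tool call.
var result = await functions.GetWeatherAsyncWrapper("""{"city": "new york"}""");
Console.WriteLine(result); // The weather in new york is 72 degrees and sunny.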

Create an OpenAIChatAgent with the GetWeatherAsync tool and chat with it

Because the LiteLLM proxy server is OpenAI-compatible, we can use OpenAIChatAgent to connect to it as a third-party OpenAI API provider. The agent is also registered with the FunctionCallMiddleware containing the GetWeatherAsync tool, so it can call that tool when generating a response.
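The snippet below assumes the following using directives from the OpenAI .NET SDK and the AutoGen packages (names may shift slightly between versions):

using System.ClientModel;
using AutoGen.Core;
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using OpenAI;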

var liteLLMUrl = "http://localhost:4000";

// api-key is not required for local server
// so you can use any string here
var openAIClient = new OpenAIClient(new ApiKeyCredential("api-key"), new OpenAIClientOptions
{
    Endpoint = new Uri(liteLLMUrl),
});

var agent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient("dolphincoder:latest"),
    name: "assistant",
    systemMessage: "You are a helpful AI assistant")
    .RegisterMessageConnector()
    .RegisterMiddleware(functionMiddleware)
    .RegisterPrintMessage();

var reply = await agent.SendAsync("what's the weather in new york");

The reply from the agent will be similar to the following:

AggregateMessage from assistant
--------------------
ToolCallMessage:
ToolCallMessage from assistant
--------------------
- GetWeatherAsync: {"city": "new york"}
--------------------

ToolCallResultMessage:
ToolCallResultMessage from assistant
--------------------
- GetWeatherAsync: The weather in new york is 72 degrees and sunny.
--------------------
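The reply is an AggregateMessage bundling the tool call and its result. If you need the tool output as plain text, you can pattern-match on the reply (a sketch based on the message types shown above):

// Message1 holds the tool call; Message2 holds the tool call result.
if (reply is AggregateMessage<ToolCallMessage, ToolCallResultMessage> aggregate)
{
    Console.WriteLine(aggregate.Message2.GetContent());
}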