Models#
In many cases, agents need access to LLM model services such as OpenAI, Azure OpenAI, or local models. Since there are many different providers with different APIs, `autogen-core` implements a protocol for model clients and `autogen-ext` implements a set of model clients for popular model services. AgentChat can use these model clients to interact with model services.
OpenAI#
To access OpenAI models, install the `openai` extension, which allows you to use the OpenAIChatCompletionClient.
```shell
pip install 'autogen-ext[openai]==0.4.0.dev8'
```
You will also need to obtain an API key from OpenAI.
```python
from autogen_ext.models import OpenAIChatCompletionClient

openai_model_client = OpenAIChatCompletionClient(
    model="gpt-4o-2024-08-06",
    # api_key="sk-...", # Optional if you have an OPENAI_API_KEY environment variable set.
)
```
To test the model client, you can use the following code:

```python
from autogen_core.components.models import UserMessage

result = await openai_model_client.create([UserMessage(content="What is the capital of France?", source="user")])
print(result)
```

```
CreateResult(finish_reason='stop', content='The capital of France is Paris.', usage=RequestUsage(prompt_tokens=15, completion_tokens=7), cached=False, logprobs=None)
```
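The client also supports streaming responses. As a sketch (assuming the `openai_model_client` created above, an async context, and a valid API key), the `create_stream` method yields text chunks as they arrive, followed by a final result object:

```python
from autogen_core.components.models import UserMessage

# Sketch: stream the response chunk by chunk instead of waiting for the
# full completion. Intermediate yields are strings; the final yield is the
# complete result object, so we filter on type here.
async for chunk in openai_model_client.create_stream(
    [UserMessage(content="What is the capital of France?", source="user")]
):
    if isinstance(chunk, str):  # intermediate text chunk
        print(chunk, end="", flush=True)
```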
Note
You can use this client with models hosted on OpenAI-compatible endpoints; however, we have not tested this functionality.
See OpenAIChatCompletionClient for more information.
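As a sketch, pointing the client at an OpenAI-compatible endpoint typically means overriding the base URL and, for non-OpenAI models, declaring the model's capabilities. The endpoint URL and model name below are hypothetical, and the `base_url` and `model_capabilities` parameter names are assumptions; check the OpenAIChatCompletionClient reference for the exact signature.

```python
from autogen_ext.models import OpenAIChatCompletionClient

# Hypothetical example: a local server exposing an OpenAI-compatible API.
# The endpoint, model name, and capability flags are placeholders.
custom_model_client = OpenAIChatCompletionClient(
    model="my-local-model",
    base_url="http://localhost:4000/v1",
    api_key="placeholder",
    model_capabilities={
        "vision": False,
        "function_calling": False,
        "json_output": False,
    },
)
```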
Azure OpenAI#
Similarly, install the `azure` and `openai` extensions to use the AzureOpenAIChatCompletionClient.
```shell
pip install 'autogen-ext[openai,azure]==0.4.0.dev8'
```
To use the client, you need to provide your deployment ID, Azure Cognitive Services endpoint, API version, and model capabilities. For authentication, you can provide either an API key or an Azure Active Directory (AAD) token credential.
The following code snippet shows how to use AAD authentication. The identity used must be assigned the Cognitive Services OpenAI User role.
```python
from autogen_ext.models import AzureOpenAIChatCompletionClient
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# Create the token provider
token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")

az_model_client = AzureOpenAIChatCompletionClient(
    azure_deployment="{your-azure-deployment}",
    model="{model-name, such as gpt-4o}",
    api_version="2024-06-01",
    azure_endpoint="https://{your-custom-endpoint}.openai.azure.com/",
    azure_ad_token_provider=token_provider,  # Optional if you choose key-based authentication.
    # api_key="sk-...", # For key-based authentication.
)
```
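Like the OpenAI client, the Azure client can be exercised with a quick `create` call. This sketch assumes the `az_model_client` configured above, a reachable deployment, and an identity with the required role assignment:

```python
from autogen_core.components.models import UserMessage

# Send a single user message and print the model's text reply.
result = await az_model_client.create(
    [UserMessage(content="What is the capital of France?", source="user")]
)
print(result.content)
```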
See the AzureOpenAIChatCompletionClient documentation for how to use the Azure client directly and for more information.
Local Models#
We are working on it. Stay tuned!