autogen_ext.agents.azure#

class AzureAIAgent(name: str, description: str, project_client: AIProjectClient, deployment_name: str, instructions: str, tools: Iterable[Literal['file_search', 'code_interpreter', 'bing_grounding', 'azure_ai_search', 'azure_function', 'sharepoint_grounding'] | BingGroundingToolDefinition | CodeInterpreterToolDefinition | SharepointToolDefinition | AzureAISearchToolDefinition | FileSearchToolDefinition | AzureFunctionToolDefinition | Tool | Callable[[...], Any] | Callable[[...], Awaitable[Any]]] | None = None, agent_id: str | None = None, thread_id: str | None = None, metadata: Dict[str, str] | None = None, response_format: _types.AgentsApiResponseFormatOption | None = None, temperature: float | None = None, tool_resources: models.ToolResources | None = None, top_p: float | None = None)[source]#

Bases: BaseChatAgent

Azure AI Assistant agent for AutoGen.

Installation:

pip install "autogen-ext[azure]"  # For Azure AI Foundry Agent Service

This agent leverages the Azure AI Assistant API to create AI assistants with capabilities like:

  • Code interpretation and execution

  • Grounding with Bing search

  • File handling and search

  • Custom function calling

  • Multi-turn conversations

The agent integrates with AutoGen’s messaging system, providing a seamless way to use Azure AI capabilities within the AutoGen framework. It supports tools like code interpreter, file search, and various grounding mechanisms.

Agent name must be a valid Python identifier (see the check sketched after this list):
  1. It must start with a letter (A-Z, a-z) or an underscore (_).

  2. It can only contain letters, digits (0-9), or underscores.

  3. It cannot be a Python keyword.

  4. It cannot contain spaces or special characters.

  5. It cannot start with a digit.
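
These rules match Python's built-in identifier rules, so a candidate name can be checked up front with str.isidentifier() and the keyword module. The following is a minimal sketch of such a check; the constructor performs its own validation, and the exact exception it raises is not shown here:

import keyword


def is_valid_agent_name(name: str) -> bool:
    # isidentifier() covers rules 1, 2, 4 and 5; iskeyword() covers rule 3.
    return name.isidentifier() and not keyword.iskeyword(name)


assert is_valid_agent_name("bing_agent")  # letters, digits and underscores are fine
assert not is_valid_agent_name("2nd_agent")  # starts with a digit
assert not is_valid_agent_name("class")  # Python keyword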

See the following guide for how to create a new secured agent with a user-managed identity: https://learn.microsoft.com/en-us/azure/ai-services/agents/how-to/virtual-networks

Examples

Use the AzureAIAgent to create an agent grounded with Bing:

import asyncio
import os

from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.agents.azure import AzureAIAgent
from azure.ai.projects.aio import AIProjectClient
from azure.identity.aio import DefaultAzureCredential
import azure.ai.projects.models as models
import dotenv


async def bing_example():
    credential = DefaultAzureCredential()

    async with AIProjectClient.from_connection_string(  # type: ignore
        credential=credential, conn_str=os.getenv("AI_PROJECT_CONNECTION_STRING", "")
    ) as project_client:
        conn = await project_client.connections.get(connection_name=os.getenv("BING_CONNECTION_NAME", ""))

        bing_tool = models.BingGroundingTool(conn.id)
        agent_with_bing_grounding = AzureAIAgent(
            name="bing_agent",
            description="An AI assistant with Bing grounding",
            project_client=project_client,
            deployment_name="gpt-4o",
            instructions="You are a helpful assistant.",
            tools=bing_tool.definitions,
            metadata={"source": "AzureAIAgent"},
        )

        result = await agent_with_bing_grounding.on_messages(
            messages=[TextMessage(content="What is Microsoft's annual leave policy?", source="user")],
            cancellation_token=CancellationToken(),
            message_limit=5,
        )
        print(result)


if __name__ == "__main__":
    dotenv.load_dotenv()
    asyncio.run(bing_example())

Use the AzureAIAgent to create an agent with file search capability:

import asyncio
import os
import tempfile
import urllib.request

import dotenv
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.agents.azure import AzureAIAgent
from azure.ai.projects.aio import AIProjectClient
from azure.identity.aio import DefaultAzureCredential


async def file_search_example():
    # Download README.md from GitHub
    readme_url = "https://raw.githubusercontent.com/microsoft/autogen/refs/heads/main/README.md"
    temp_file = None

    try:
        # Create a temporary file to store the downloaded README
        temp_file = tempfile.NamedTemporaryFile(delete=False, suffix=".md")
        urllib.request.urlretrieve(readme_url, temp_file.name)
        print(f"Downloaded README.md to {temp_file.name}")

        credential = DefaultAzureCredential()
        async with AIProjectClient.from_connection_string(  # type: ignore
            credential=credential, conn_str=os.getenv("AI_PROJECT_CONNECTION_STRING", "")
        ) as project_client:
            agent_with_file_search = AzureAIAgent(
                name="file_search_agent",
                description="An AI assistant with file search capabilities",
                project_client=project_client,
                deployment_name="gpt-4o",
                instructions="You are a helpful assistant.",
                tools=["file_search"],
                metadata={"source": "AzureAIAgent"},
            )

            ct: CancellationToken = CancellationToken()
            # Use the downloaded README file for file search
            await agent_with_file_search.on_upload_for_file_search(
                file_paths=[temp_file.name],
                vector_store_name="file_upload_index",
                vector_store_metadata={"source": "AzureAIAgent"},
                cancellation_token=ct,
            )
            result = await agent_with_file_search.on_messages(
                messages=[
                    TextMessage(content="Hello, what is AutoGen and what capabilities does it have?", source="user")
                ],
                cancellation_token=ct,
                message_limit=5,
            )
            print(result)
    finally:
        # Clean up the temporary file
        if temp_file and os.path.exists(temp_file.name):
            os.unlink(temp_file.name)
            print(f"Removed temporary file {temp_file.name}")


if __name__ == "__main__":
    dotenv.load_dotenv()
    asyncio.run(file_search_example())

Use the AzureAIAgent to create an agent with code interpreter capability:

import asyncio
import os

import dotenv
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.agents.azure import AzureAIAgent
from azure.ai.projects.aio import AIProjectClient
from azure.identity.aio import DefaultAzureCredential


async def code_interpreter_example():
    credential = DefaultAzureCredential()
    async with AIProjectClient.from_connection_string(  # type: ignore
        credential=credential, conn_str=os.getenv("AI_PROJECT_CONNECTION_STRING", "")
    ) as project_client:
        agent_with_code_interpreter = AzureAIAgent(
            name="code_interpreter_agent",
            description="An AI assistant with code interpreter capabilities",
            project_client=project_client,
            deployment_name="gpt-4o",
            instructions="You are a helpful assistant.",
            tools=["code_interpreter"],
            metadata={"source": "AzureAIAgent"},
        )

        await agent_with_code_interpreter.on_upload_for_code_interpreter(
            file_paths="/workspaces/autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/data/nifty_500_quarterly_results.csv",
            cancellation_token=CancellationToken(),
        )

        result = await agent_with_code_interpreter.on_messages(
            messages=[
                TextMessage(
                    content="Aggregate the number of stocks per industry and give me a markdown table as a result?",
                    source="user",
                )
            ],
            cancellation_token=CancellationToken(),
        )

        print(result)


if __name__ == "__main__":
    dotenv.load_dotenv()
    asyncio.run(code_interpreter_example())

property agent_id: str#
property deployment_name: str#
property description: str#

The description of the agent. This is used by the team to make decisions about which agents to use. The description should describe the agent’s capabilities and how to interact with it.

async handle_text_message(content: str, cancellation_token: CancellationToken | None = None) None[source]#

Handle a text message by adding it to the conversation thread.

Parameters:
  • content (str) – The text content of the message

  • cancellation_token (CancellationToken) – Token for cancellation handling

Returns:

None
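
For example, a message can be appended to the conversation thread without triggering a run. This is a minimal sketch that assumes an agent instance created as in the examples above:

from autogen_core import CancellationToken

# Inside an async function, with `agent` an AzureAIAgent created as in the examples above.
await agent.handle_text_message(
    content="Please remember that my preferred language is Python.",
    cancellation_token=CancellationToken(),
)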

property instructions: str#
async load_state(state: Mapping[str, Any]) None[source]#

Load a previously saved state into this agent.

This method deserializes and restores a previously saved agent state, setting up the agent to continue a previous conversation or session.

Parameters:

state (Mapping[str, Any]) – The previously saved state dictionary
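
For example, the state returned by save_state() can be persisted and later restored to continue a conversation. This is a minimal sketch; the file name is an illustrative assumption, and it assumes the state contents are JSON-serializable:

import json

# Inside an async function, with `agent` an AzureAIAgent created as in the examples above.
state = await agent.save_state()
with open("azure_agent_state.json", "w") as f:
    json.dump(dict(state), f)

# Later (for example in a new process), restore the saved state into the agent.
with open("azure_agent_state.json") as f:
    saved_state = json.load(f)
await agent.load_state(saved_state)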

async on_messages(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken | None = None, message_limit: int = 1) Response[source]#

Process incoming messages and return a response from the Azure AI agent.

This method is the primary entry point for interaction with the agent. It delegates to on_messages_stream and returns the final response.

Parameters:
  • messages (Sequence[BaseChatMessage]) – The messages to process

  • cancellation_token (CancellationToken) – Token for cancellation handling

  • message_limit (int, optional) – Maximum number of messages to retrieve from the thread

Returns:

Response – The agent’s response, including the chat message and any inner events

Raises:

AssertionError – If the stream doesn’t return a final result

async on_messages_stream(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken | None = None, message_limit: int = 1, sleep_interval: float = 0.5) AsyncGenerator[Annotated[ToolCallRequestEvent | ToolCallExecutionEvent | MemoryQueryEvent | UserInputRequestedEvent | ModelClientStreamingChunkEvent | ThoughtEvent | SelectSpeakerEvent | CodeGenerationEvent | CodeExecutionEvent, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Response, None][source]#

Process incoming messages and yield streaming responses from the Azure AI agent.

This method handles the complete interaction flow with the Azure AI agent:
  1. Processing input messages

  2. Creating and monitoring a run

  3. Handling tool calls and their results

  4. Retrieving and returning the agent’s final response

The method yields events during processing (like tool calls) and finally yields the complete Response with the agent’s message.

Parameters:
  • messages (Sequence[BaseChatMessage]) – The messages to process

  • cancellation_token (CancellationToken) – Token for cancellation handling

  • message_limit (int, optional) – Maximum number of messages to retrieve from the thread

  • sleep_interval (float, optional) – Time to sleep between polling for run status

Yields:

BaseAgentEvent | BaseChatMessage | Response – Events during processing, followed by the final response

Raises:

ValueError – If the run fails or no message is received from the assistant
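
For example, intermediate events such as tool calls can be observed while waiting for the final Response. This is a minimal sketch that assumes an agent instance created as in the examples above:

from autogen_agentchat.base import Response
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken

# Inside an async function, with `agent` an AzureAIAgent created as in the examples above.
async for item in agent.on_messages_stream(
    messages=[TextMessage(content="Summarize the uploaded file.", source="user")],
    cancellation_token=CancellationToken(),
):
    if isinstance(item, Response):
        print("Final response:", item.chat_message)
    else:
        print("Event:", item)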

async on_reset(cancellation_token: CancellationToken) None[source]#

Reset the agent’s conversation by creating a new thread.

This method allows for resetting a conversation without losing the agent definition or capabilities. It creates a new thread for fresh conversations.

Note: the Azure AI Agent API does not currently support deleting messages, so a new thread is created instead.

Parameters:

cancellation_token (CancellationToken) – Token for cancellation handling
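
For example, the conversation can be cleared between unrelated tasks while keeping the same agent definition (a minimal sketch, assuming an agent instance created as in the examples above):

from autogen_core import CancellationToken

# Inside an async function, with `agent` an AzureAIAgent created as in the examples above.
await agent.on_reset(cancellation_token=CancellationToken())
# Subsequent on_messages calls now run against a fresh thread.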

async on_upload_for_code_interpreter(file_paths: str | Iterable[str], cancellation_token: CancellationToken | None = None, sleep_interval: float = 0.5) None[source]#

Upload files to be used with the code interpreter tool.

This method uploads files for the agent’s code interpreter tool and updates the thread’s tool resources to include these files.

Parameters:
  • file_paths (str | Iterable[str]) – Path(s) to file(s) to upload

  • cancellation_token (Optional[CancellationToken]) – Token for cancellation handling

  • sleep_interval (float) – Time to sleep between polling for file status

Raises:

ValueError – If file upload fails or the agent doesn’t have code interpreter capability

async on_upload_for_file_search(file_paths: str | Iterable[str], cancellation_token: CancellationToken | None = None, vector_store_name: str | None = None, data_sources: List[models.VectorStoreDataSource] | None = None, expires_after: models.VectorStoreExpirationPolicy | None = None, chunking_strategy: models.VectorStoreChunkingStrategyRequest | None = None, vector_store_metadata: Dict[str, str] | None = None, vector_store_polling_sleep_interval: float = 0.5) None[source]#

Upload files to be used with the file search tool.

This method handles uploading files for the file search capability, creating a vector store if necessary, and updating the agent’s configuration to use the vector store.

Parameters:
  • file_paths (str | Iterable[str]) – Path(s) to file(s) to upload

  • cancellation_token (CancellationToken) – Token for cancellation handling

  • vector_store_name (Optional[str]) – Name to assign to the vector store if creating a new one

  • data_sources (Optional[List[models.VectorStoreDataSource]]) – Additional data sources for the vector store

  • expires_after (Optional[models.VectorStoreExpirationPolicy]) – Expiration policy for vector store content

  • chunking_strategy (Optional[models.VectorStoreChunkingStrategyRequest]) – Strategy for chunking file content

  • vector_store_metadata (Optional[Dict[str, str]]) – Additional metadata for the vector store

  • vector_store_polling_sleep_interval (float) – Time to sleep between polling for vector store status

Raises:

ValueError – If file search is not enabled for this agent or file upload fails

property produced_message_types: Sequence[type[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]]]#

The types of messages that the assistant agent produces.

async save_state() Mapping[str, Any][source]#

Save the current state of the agent for future restoration.

This method serializes the agent’s state including IDs for the agent, thread, messages, and associated resources like vector stores and uploaded files.

Returns:

Mapping[str, Any] – A dictionary containing the serialized state data

property thread_id: str#
property tools: List[ToolDefinition]#

Get the list of tools available to the agent.

Returns:

List[models.ToolDefinition] – The list of tool definitions.