autogen_ext.agents.openai#
- class OpenAIAgent(name: str, description: str, client: AsyncOpenAI | AsyncAzureOpenAI, model: str, instructions: str, tools: List[Tool] | None = None, temperature: float | None = 1, max_output_tokens: int | None = None, json_mode: bool = False, store: bool = True, truncation: str = 'disabled')[source]#
Bases: BaseChatAgent, Component[OpenAIAgentConfig]

An agent implementation that uses the OpenAI Responses API to generate responses.
Installation:
pip install "autogen-ext[openai]" # pip install "autogen-ext[openai,azure]" # For Azure OpenAI Assistant
This agent leverages the Responses API to generate responses with capabilities like:
Custom function calling
Multi-turn conversations
Example
import asyncio

from openai import AsyncOpenAI

from autogen_core import CancellationToken
from autogen_ext.agents.openai import OpenAIAgent
from autogen_agentchat.messages import TextMessage


async def example():
    cancellation_token = CancellationToken()
    client = AsyncOpenAI()
    agent = OpenAIAgent(
        name="Simple Agent",
        description="A simple OpenAI agent using the Responses API",
        client=client,
        model="gpt-4.1",
        instructions="You are a helpful assistant.",
    )
    response = await agent.on_messages([TextMessage(source="user", content="Hello!")], cancellation_token)
    print(response)


asyncio.run(example())
TODO: Add support for advanced features (vector store, multimodal, etc.) in future PRs.
- component_config_schema#
alias of OpenAIAgentConfig
- component_provider_override: ClassVar[str | None] = 'autogen_ext.agents.openai.OpenAIAgent'#
Override the provider string for the component. This can be used to prevent internal module names from appearing in the provider string.
- async delete_assistant(assistant_id: str) Dict[str, Any] [source]#
Delete an assistant by its ID using the OpenAI API.
- Parameters:
assistant_id (str) – The ID of the assistant to delete.
- Returns:
Dict[str, Any] – The deletion status object (e.g., {"id": …, "object": "assistant.deleted", "deleted": true}).
Example
import asyncio
from typing import Dict, Any

from openai import AsyncOpenAI

from autogen_ext.agents.openai import OpenAIAgent


async def example() -> None:
    client = AsyncOpenAI()
    agent = OpenAIAgent(
        name="test_agent",
        description="Test agent",
        client=client,
        model="gpt-4",
        instructions="You are a helpful assistant.",
    )
    result: Dict[str, Any] = await agent.delete_assistant("asst_abc123")
    print(result)


asyncio.run(example())
- async list_assistants(after: str | None = None, before: str | None = None, limit: int | None = 20, order: str | None = 'desc') Dict[str, Any] [source]#
List all assistants using the OpenAI API.
- Parameters:
after (Optional[str]) – Cursor for pagination (fetch after this assistant ID).
before (Optional[str]) – Cursor for pagination (fetch before this assistant ID).
limit (Optional[int]) – Number of assistants to return (1-100, default 20).
order (Optional[str]) – ‘asc’ or ‘desc’ by created_at (default ‘desc’).
- Returns:
Dict[str, Any] – The OpenAI API response containing:
- object: 'list'
- data: List of assistant objects
- first_id: str
- last_id: str
- has_more: bool
Example
import asyncio
from typing import Dict, Any

from openai import AsyncOpenAI

from autogen_ext.agents.openai import OpenAIAgent


async def example() -> None:
    client = AsyncOpenAI()
    agent = OpenAIAgent(
        name="test_agent",
        description="Test agent",
        client=client,
        model="gpt-4",
        instructions="You are a helpful assistant.",
    )
    assistants: Dict[str, Any] = await agent.list_assistants(limit=5)
    print(assistants)


asyncio.run(example())
- async load_state(state: Mapping[str, Any]) None [source]#
Restore agent from saved state. Default implementation for stateless agents.
- async modify_assistant(assistant_id: str, name: str | None = None, description: str | None = None, instructions: str | None = None, metadata: Dict[str, Any] | None = None, model: str | None = None, reasoning_effort: str | None = None, response_format: str | None = None, temperature: float | None = None, tool_resources: Dict[str, Any] | None = None, tools: List[Any] | None = None, top_p: float | None = None, **kwargs: Any) Dict[str, Any] [source]#
Modify (update) an assistant by its ID using the OpenAI API.
- Parameters:
assistant_id (str) – The ID of the assistant to update.
name (Optional[str]) – New name for the assistant.
description (Optional[str]) – New description.
instructions (Optional[str]) – New instructions.
metadata (Optional[dict]) – New metadata.
model (Optional[str]) – New model.
reasoning_effort (Optional[str]) – New reasoning effort.
response_format (Optional[str]) – New response format.
temperature (Optional[float]) – New temperature.
tool_resources (Optional[dict]) – New tool resources.
tools (Optional[list]) – New tools.
top_p (Optional[float]) – New top_p value.
**kwargs – Additional keyword arguments.
- Returns:
Dict[str, Any] – The updated assistant object.
Example
import asyncio
from typing import Dict, Any

from openai import AsyncOpenAI

from autogen_ext.agents.openai import OpenAIAgent


async def example() -> None:
    client = AsyncOpenAI()
    agent = OpenAIAgent(
        name="test_agent",
        description="Test agent",
        client=client,
        model="gpt-4",
        instructions="You are a helpful assistant.",
    )
    updated: Dict[str, Any] = await agent.modify_assistant(
        assistant_id="asst_123",
        instructions="You are an HR bot, and you have access to files to answer employee questions about company policies. Always respond with info from either of the files.",
        tools=[{"type": "file_search"}],
        tool_resources={"file_search": {"vector_store_ids": []}},
    )
    print(updated)


asyncio.run(example())
- async on_messages(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) Response [source]#
Handles incoming messages and returns a response.
Note
Agents are stateful and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.
- async on_messages_stream(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) AsyncGenerator[Annotated[ToolCallRequestEvent | ToolCallExecutionEvent | MemoryQueryEvent | UserInputRequestedEvent | ModelClientStreamingChunkEvent | ThoughtEvent | SelectSpeakerEvent | CodeGenerationEvent | CodeExecutionEvent, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage | Response, None] [source]#
Handles incoming messages and returns a stream of messages, where the final item is the response. The base implementation in BaseChatAgent simply calls on_messages() and yields the messages in the response.
Note
Agents are stateful and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.
- async on_reset(cancellation_token: CancellationToken) None [source]#
Resets the agent to its initialization state.
- property produced_message_types: Sequence[Type[TextMessage] | Type[MultiModalMessage] | Type[StopMessage] | Type[ToolCallSummaryMessage] | Type[HandoffMessage]]#
Return the types of messages that this agent can produce.
- async retrieve_assistant(assistant_id: str) Dict[str, Any] [source]#
Retrieve a single assistant by its ID using the OpenAI API.
- Parameters:
assistant_id (str) – The ID of the assistant to retrieve.
- Returns:
Dict[str, Any] – The assistant object.
Example
import asyncio
from typing import Dict, Any

from openai import AsyncOpenAI

from autogen_ext.agents.openai import OpenAIAgent


async def example() -> None:
    client = AsyncOpenAI()
    agent = OpenAIAgent(
        name="test_agent",
        description="Test agent",
        client=client,
        model="gpt-4",
        instructions="You are a helpful assistant.",
    )
    assistant: Dict[str, Any] = await agent.retrieve_assistant("asst_abc123")
    print(assistant)


asyncio.run(example())
- class OpenAIAssistantAgent(name: str, description: str, client: AsyncOpenAI | AsyncAzureOpenAI, model: str, instructions: str, tools: Iterable[Literal['code_interpreter', 'file_search'] | Tool | Callable[[...], Any] | Callable[[...], Awaitable[Any]]] | None = None, assistant_id: str | None = None, thread_id: str | None = None, metadata: Dict[str, str] | None = None, response_format: Literal['auto'] | ResponseFormatText | ResponseFormatJSONObject | ResponseFormatJSONSchema | None = None, temperature: float | None = None, tool_resources: ToolResources | None = None, top_p: float | None = None)[source]#
Bases: BaseChatAgent

An agent implementation that uses the OpenAI Assistants API to generate responses.
Installation:
pip install "autogen-ext[openai]" # pip install "autogen-ext[openai,azure]" # For Azure OpenAI Assistant
This agent leverages the Assistants API to create AI assistants with capabilities like:
Code interpretation and execution
File handling and search
Custom function calling
Multi-turn conversations
The agent maintains a thread of conversation and can use various tools, including:
Code interpreter: For executing code and working with files
File search: For searching through uploaded documents
Custom functions: For extending capabilities with user-defined tools
Key Features:
Supports multiple file formats including code, documents, images
Can handle up to 128 tools per assistant
Maintains conversation context in threads
Supports file uploads for code interpreter and search
Vector store integration for efficient file search
Automatic file parsing and embedding
You can use an existing thread or assistant by providing the thread_id or assistant_id parameters.
Examples
Use the assistant to analyze data in a CSV file:
import asyncio

from openai import AsyncOpenAI

from autogen_core import CancellationToken
from autogen_ext.agents.openai import OpenAIAssistantAgent
from autogen_agentchat.messages import TextMessage


async def example():
    cancellation_token = CancellationToken()

    # Create an OpenAI client
    client = AsyncOpenAI(api_key="your-api-key", base_url="your-base-url")

    # Create an assistant with code interpreter
    assistant = OpenAIAssistantAgent(
        name="Python Helper",
        description="Helps with Python programming",
        client=client,
        model="gpt-4",
        instructions="You are a helpful Python programming assistant.",
        tools=["code_interpreter"],
    )

    # Upload files for the assistant to use
    await assistant.on_upload_for_code_interpreter("data.csv", cancellation_token)

    # Get response from the assistant
    response = await assistant.on_messages(
        [TextMessage(source="user", content="Analyze the data in data.csv")], cancellation_token
    )
    print(response)

    # Clean up resources
    await assistant.delete_uploaded_files(cancellation_token)
    await assistant.delete_assistant(cancellation_token)


asyncio.run(example())
Use Azure OpenAI Assistant with AAD authentication:
import asyncio

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AsyncAzureOpenAI

from autogen_core import CancellationToken
from autogen_ext.agents.openai import OpenAIAssistantAgent
from autogen_agentchat.messages import TextMessage


async def example():
    cancellation_token = CancellationToken()

    # Create an Azure OpenAI client
    token_provider = get_bearer_token_provider(
        DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
    )
    client = AsyncAzureOpenAI(
        azure_deployment="YOUR_AZURE_DEPLOYMENT",
        api_version="YOUR_API_VERSION",
        azure_endpoint="YOUR_AZURE_ENDPOINT",
        azure_ad_token_provider=token_provider,
    )

    # Create an assistant with code interpreter
    assistant = OpenAIAssistantAgent(
        name="Python Helper",
        description="Helps with Python programming",
        client=client,
        model="gpt-4o",
        instructions="You are a helpful Python programming assistant.",
        tools=["code_interpreter"],
    )

    # Get response from the assistant
    response = await assistant.on_messages([TextMessage(source="user", content="Hello.")], cancellation_token)
    print(response)

    # Clean up resources
    await assistant.delete_assistant(cancellation_token)


asyncio.run(example())
- Parameters:
name (str) – Name of the assistant
description (str) – Description of the assistant’s purpose
client (AsyncOpenAI | AsyncAzureOpenAI) – OpenAI client or Azure OpenAI client instance
model (str) – Model to use (e.g. “gpt-4”)
instructions (str) – System instructions for the assistant
tools (Optional[Iterable[Union[Literal["code_interpreter", "file_search"], Tool | Callable[..., Any] | Callable[..., Awaitable[Any]]]]]) – Tools the assistant can use
assistant_id (Optional[str]) – ID of existing assistant to use
thread_id (Optional[str]) – ID of existing thread to use
metadata (Optional[Dict[str, str]]) – Additional metadata for the assistant.
response_format (Optional[AssistantResponseFormatOptionParam]) – Response format settings
temperature (Optional[float]) – Temperature for response generation
tool_resources (Optional[ToolResources]) – Additional tool configuration
top_p (Optional[float]) – Top p sampling parameter
- async delete_assistant(cancellation_token: CancellationToken) None [source]#
Delete the assistant if it was created by this instance.
- async delete_uploaded_files(cancellation_token: CancellationToken) None [source]#
Delete all files that were uploaded by this agent instance.
- async delete_vector_store(cancellation_token: CancellationToken) None [source]#
Delete the vector store if it was created by this instance.
- async handle_incoming_message(message: BaseChatMessage, cancellation_token: CancellationToken) None [source]#
Handle regular text messages by adding them to the thread.
- async load_state(state: Mapping[str, Any]) None [source]#
Restore agent from saved state. Default implementation for stateless agents.
- property messages: AsyncMessages#
- async on_messages(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) Response [source]#
Handle incoming messages and return a response.
- async on_messages_stream(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) AsyncGenerator[BaseAgentEvent | BaseChatMessage | Response, None] [source]#
Handle incoming messages and return a response.
- async on_reset(cancellation_token: CancellationToken) None [source]#
Handle reset command by deleting new messages and runs since initialization.
- async on_upload_for_code_interpreter(file_paths: str | Iterable[str], cancellation_token: CancellationToken) None [source]#
Handle file uploads for the code interpreter.
- async on_upload_for_file_search(file_paths: str | Iterable[str], cancellation_token: CancellationToken) None [source]#
Handle file uploads for file search.
- property produced_message_types: Sequence[type[BaseChatMessage]]#
The types of messages that the assistant agent produces.
- property runs: AsyncRuns#
- async save_state() Mapping[str, Any] [source]#
Export state. Default implementation for stateless agents.
- property threads: AsyncThreads#