autogen_agentchat.agents#

This module initializes various pre-defined agents provided by the package. BaseChatAgent is the base class for all agents in AgentChat.

class AssistantAgent(name: str, model_client: ChatCompletionClient, *, tools: List[BaseTool[Any, Any] | Callable[[...], Any] | Callable[[...], Awaitable[Any]]] | None = None, handoffs: List[Handoff | str] | None = None, model_context: ChatCompletionContext | None = None, description: str = 'An agent that provides assistance with ability to use tools.', system_message: str | None = 'You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.', model_client_stream: bool = False, reflect_on_tool_use: bool = False, tool_call_summary_format: str = '{result}', memory: Sequence[Memory] | None = None)[source]#

Bases: BaseChatAgent, Component[AssistantAgentConfig]

An agent that provides assistance with tool use.

The on_messages() method returns a Response in which chat_message is the final response message.

The on_messages_stream() method creates an async generator that yields the inner messages as they are created, with the Response object as the last item before the generator closes.

Attention

The caller must only pass the new messages to the agent on each call to the on_messages() or on_messages_stream() method. The agent maintains its state between calls to these methods. Do not pass the entire conversation history to the agent on each call.

Warning

The assistant agent is not thread-safe or coroutine-safe. It should not be shared between multiple tasks or coroutines, and its methods should not be called concurrently.

The following diagram shows how the assistant agent works:

[Diagram: assistant agent workflow (assistant-agent.svg)]

Tool call behavior:

  • If the model returns no tool call, then the response is immediately returned as a TextMessage in chat_message.

  • When the model returns tool calls, they will be executed right away:
    • When reflect_on_tool_use is False (default), the tool call results are returned as a ToolCallSummaryMessage in chat_message. tool_call_summary_format can be used to customize the tool call summary.

    • When reflect_on_tool_use is True, another model inference is made using the tool calls and results, and the text response is returned as a TextMessage in chat_message.

  • If the model returns multiple tool calls, they will be executed concurrently. To disable parallel tool calls, configure the model client; for example, set parallel_tool_calls=False for OpenAIChatCompletionClient and AzureOpenAIChatCompletionClient.

Tip

By default, the tool call results are returned as the response when tool calls are made, so pay attention to the formatting of the tools' return values, especially if another agent expects them in a specific format. Use tool_call_summary_format to customize the tool call summary if needed, as in the sketch below.
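
For illustration, here is a minimal sketch of customizing the tool call summary. The get_current_time tool is a stand-in; any registered tool behaves the same way.

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def get_current_time() -> str:
    """Return the current time (stubbed for the example)."""
    return "The current time is 12:00 PM."


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        tools=[get_current_time],
        # Each tool call result is rendered as "tool_name: result" in the
        # ToolCallSummaryMessage returned in chat_message.
        tool_call_summary_format="{tool_name}: {result}",
    )
    response = await agent.on_messages(
        [TextMessage(content="What is the current time?", source="user")], CancellationToken()
    )
    print(response.chat_message.content)  # type: ignore


asyncio.run(main())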

Handoff behavior:

  • If a handoff is triggered, a HandoffMessage will be returned in chat_message.

  • If there are tool calls, they will also be executed right away before returning the handoff.

  • The tool calls and results are passed to the target agent through context.

Note

If multiple handoffs are detected, only the first handoff is executed. To avoid this, disable parallel tool calls in the model client configuration.
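
A minimal sketch of declaring a handoff target follows; the "planner" agent name is hypothetical, and the handoff only takes effect when the agent runs inside a Swarm team.

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o")

# When the model decides to hand off, a HandoffMessage targeting "planner"
# is returned in chat_message; any accompanying tool calls are executed first.
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    handoffs=["planner"],
)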

Limit context size sent to the model:

You can limit the number of messages sent to the model by setting the model_context parameter to a BufferedChatCompletionContext. This will limit the number of recent messages sent to the model and can be useful when the model has a limit on the number of tokens it can process. You can also create your own model context by subclassing ChatCompletionContext.

Streaming mode:

The assistant agent can be used in streaming mode by setting model_client_stream=True. In this mode, the on_messages_stream() and BaseChatAgent.run_stream() methods will also yield ModelClientStreamingChunkEvent messages as the model client produces chunks of response. The chunk messages will not be included in the final response’s inner messages.

Parameters:
  • name (str) – The name of the agent.

  • model_client (ChatCompletionClient) – The model client to use for inference.

  • tools (List[BaseTool[Any, Any] | Callable[..., Any] | Callable[..., Awaitable[Any]]] | None, optional) – The tools to register with the agent.

  • handoffs (List[HandoffBase | str] | None, optional) – The handoff configurations for the agent, allowing it to transfer to other agents by responding with a HandoffMessage. The transfer is only executed when the team is a Swarm team. If a handoff is a string, it should represent the target agent's name.

  • model_context (ChatCompletionContext | None, optional) – The model context for storing and retrieving LLMMessage. It can be preloaded with initial messages. The initial messages will be cleared when the agent is reset.

  • description (str, optional) – The description of the agent.

  • system_message (str, optional) – The system message for the model. If provided, it will be prepended to the messages in the model context when making an inference. Set to None to disable.

  • model_client_stream (bool, optional) – If True, the model client will be used in streaming mode. on_messages_stream() and BaseChatAgent.run_stream() methods will also yield ModelClientStreamingChunkEvent messages as the model client produces chunks of response. Defaults to False.

  • reflect_on_tool_use (bool, optional) – If True, the agent will make another model inference using the tool call and result to generate a response. If False, the tool call result will be returned as the response. Defaults to False.

  • tool_call_summary_format (str, optional) – The format string used to create a tool call summary for every tool call result. Defaults to "{result}". When reflect_on_tool_use is False, a concatenation of all the tool call summaries, separated by a newline character ('\n'), will be returned as the response. Available variables: {tool_name}, {arguments}, {result}. For example, "{tool_name}: {result}" will create a summary like "tool_name: result".

  • memory (Sequence[Memory] | None, optional) – The memory store to use for the agent. Defaults to None.

Raises:
  • ValueError – If tool names are not unique.

  • ValueError – If handoff names are not unique.

  • ValueError – If handoff names are not unique from tool names.

  • ValueError – If the maximum number of tool iterations is less than 1.

Examples

Example 1: basic agent

The following example demonstrates how to create an assistant agent with a model client and generate a response to a simple task.

import asyncio
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key = "your_openai_api_key"
    )
    agent = AssistantAgent(name="assistant", model_client=model_client)

    response = await agent.on_messages(
        [TextMessage(content="What is the capital of France?", source="user")], CancellationToken()
    )
    print(response)


asyncio.run(main())

Example 2: model client token streaming

This example demonstrates how to create an assistant agent with a model client and generate a token stream by setting model_client_stream=True.

import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key = "your_openai_api_key"
    )
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        model_client_stream=True,
    )

    stream = agent.on_messages_stream(
        [TextMessage(content="Name two cities in North America.", source="user")], CancellationToken()
    )
    async for message in stream:
        print(message)


asyncio.run(main())
source='assistant' models_usage=None content='Two' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' cities' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' North' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' America' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' are' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' New' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' York' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' City' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' the' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' United' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' States' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' and' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' Toronto' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' Canada' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='.' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' TERMIN' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='ATE' type='ModelClientStreamingChunkEvent'
Response(chat_message=TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0), content='Two cities in North America are New York City in the United States and Toronto in Canada. TERMINATE', type='TextMessage'), inner_messages=[])

Example 3: agent with tools

The following example demonstrates how to create an assistant agent with a model client and a tool, generate a stream of messages for a task, and print the messages to the console using Console.

The tool is a simple function that returns the current time. Under the hood, the function is wrapped in a FunctionTool and used with the agent’s model client. The doc string of the function is used as the tool description, the function name is used as the tool name, and the function signature including the type hints is used as the tool arguments.

import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console
from autogen_core import CancellationToken


async def get_current_time() -> str:
    return "The current time is 12:00 PM."


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key = "your_openai_api_key"
    )
    agent = AssistantAgent(name="assistant", model_client=model_client, tools=[get_current_time])

    await Console(
        agent.on_messages_stream(
            [TextMessage(content="What is the current time?", source="user")], CancellationToken()
        )
    )


asyncio.run(main())

Example 4: agent with structured output and tool

The following example demonstrates how to create an assistant agent with a model client configured to use structured output and a tool. Note that you need to use FunctionTool to create the tool, and strict=True is required for structured output mode. Because the model is configured to use structured output, the reflection response will be a JSON-formatted string.

import asyncio
from typing import Literal

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console
from autogen_core import CancellationToken
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel


# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Define the function to be called as a tool.
def sentiment_analysis(text: str) -> str:
    """Given a text, return the sentiment."""
    return "happy" if "happy" in text else "sad" if "sad" in text else "neutral"


# Create a FunctionTool instance with `strict=True`,
# which is required for structured output mode.
tool = FunctionTool(sentiment_analysis, description="Sentiment Analysis", strict=True)

# Create an OpenAIChatCompletionClient instance that uses the structured output format.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    response_format=AgentResponse,  # type: ignore
)

# Create an AssistantAgent instance that uses the tool and model client.
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[tool],
    system_message="Use the tool to analyze sentiment.",
    reflect_on_tool_use=True,  # Use reflection to have the agent generate a formatted response.
)


async def main() -> None:
    stream = agent.on_messages_stream([TextMessage(content="I am happy today!", source="user")], CancellationToken())
    await Console(stream)


asyncio.run(main())
---------- assistant ----------
[FunctionCall(id='call_tIZjAVyKEDuijbBwLY6RHV2p', arguments='{"text":"I am happy today!"}', name='sentiment_analysis')]
---------- assistant ----------
[FunctionExecutionResult(content='happy', call_id='call_tIZjAVyKEDuijbBwLY6RHV2p', is_error=False)]
---------- assistant ----------
{"thoughts":"The user expresses a clear positive emotion by stating they are happy today, suggesting an upbeat mood.","response":"happy"}

Example 5: agent with bounded model context

The following example shows how to use a BufferedChatCompletionContext that only keeps the last 2 messages (1 user + 1 assistant). Bounded model context is useful when the model has a limit on the number of tokens it can process.

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_core.model_context import BufferedChatCompletionContext
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Create a model client.
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
        # api_key = "your_openai_api_key"
    )

    # Create a model context that only keeps the last 2 messages (1 user + 1 assistant).
    model_context = BufferedChatCompletionContext(buffer_size=2)

    # Create an AssistantAgent instance with the model client and context.
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        model_context=model_context,
        system_message="You are a helpful assistant.",
    )

    response = await agent.on_messages(
        [TextMessage(content="Name two cities in North America.", source="user")], CancellationToken()
    )
    print(response.chat_message.content)  # type: ignore

    response = await agent.on_messages(
        [TextMessage(content="My favorite color is blue.", source="user")], CancellationToken()
    )
    print(response.chat_message.content)  # type: ignore

    response = await agent.on_messages(
        [TextMessage(content="Did I ask you any question?", source="user")], CancellationToken()
    )
    print(response.chat_message.content)  # type: ignore


asyncio.run(main())
Two cities in North America are New York City and Toronto.
That's great! Blue is often associated with calmness and serenity. Do you have a specific shade of blue that you like, or any particular reason why it's your favorite?
No, you didn't ask a question. I apologize for any misunderstanding. If you have something specific you'd like to discuss or ask, feel free to let me know!

Example 6: agent with memory

The following example shows how to use a list-based memory with the assistant agent. The memory is preloaded with some initial content. Under the hood, the memory is used to update the model context before making an inference, using the update_context() method.

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_core.memory import ListMemory, MemoryContent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Create a model client.
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
        # api_key = "your_openai_api_key"
    )

    # Create a list-based memory with some initial content.
    memory = ListMemory()
    await memory.add(MemoryContent(content="User likes pizza.", mime_type="text/plain"))
    await memory.add(MemoryContent(content="User dislikes cheese.", mime_type="text/plain"))

    # Create an AssistantAgent instance with the model client and memory.
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        memory=[memory],
        system_message="You are a helpful assistant.",
    )

    response = await agent.on_messages(
        [TextMessage(content="One idea for a dinner.", source="user")], CancellationToken()
    )
    print(response.chat_message.content)  # type: ignore


asyncio.run(main())
How about making a delicious pizza without cheese? You can create a flavorful veggie pizza with a variety of toppings. Here's a quick idea:

**Veggie Tomato Sauce Pizza**
- Start with a pizza crust (store-bought or homemade).
- Spread a layer of marinara or tomato sauce evenly over the crust.
- Top with your favorite vegetables like bell peppers, mushrooms, onions, olives, and spinach.
- Add some protein if you’d like, such as grilled chicken or pepperoni (ensure it's cheese-free).
- Sprinkle with herbs like oregano and basil, and maybe a drizzle of olive oil.
- Bake according to the crust instructions until the edges are golden and the veggies are cooked.

Serve it with a side salad or some garlic bread to complete the meal! Enjoy your dinner!

Example 7: agent with `o1-mini`

The following example shows how to use the o1-mini model with the assistant agent.

import asyncio
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="o1-mini",
        # api_key = "your_openai_api_key"
    )
    # The system message is not supported by the o1 series model.
    agent = AssistantAgent(name="assistant", model_client=model_client, system_message=None)

    response = await agent.on_messages(
        [TextMessage(content="What is the capital of France?", source="user")], CancellationToken()
    )
    print(response)


asyncio.run(main())

Note

The o1-preview and o1-mini models do not support system messages or function calling, so system_message should be set to None and tools and handoffs should not be set. See the o1 beta limitations for more details.

Example 8: agent using a reasoning model with a custom model context

The following example shows how to use a reasoning model (DeepSeek R1) with the assistant agent. The model context is used to filter out the thought field from the assistant message.

import asyncio
from typing import List

from autogen_agentchat.agents import AssistantAgent
from autogen_core.model_context import UnboundedChatCompletionContext
from autogen_core.models import AssistantMessage, LLMMessage, ModelFamily
from autogen_ext.models.ollama import OllamaChatCompletionClient


class ReasoningModelContext(UnboundedChatCompletionContext):
    """A model context for reasoning models."""

    async def get_messages(self) -> List[LLMMessage]:
        messages = await super().get_messages()
        # Filter out thought field from AssistantMessage.
        messages_out: List[LLMMessage] = []
        for message in messages:
            if isinstance(message, AssistantMessage):
                message.thought = None
            messages_out.append(message)
        return messages_out


# Create an instance of the model client for DeepSeek R1 hosted locally on Ollama.
model_client = OllamaChatCompletionClient(
    model="deepseek-r1:8b",
    model_info={
        "vision": False,
        "function_calling": False,
        "json_output": False,
        "family": ModelFamily.R1,
    },
)

agent = AssistantAgent(
    "reasoning_agent",
    model_client=model_client,
    model_context=ReasoningModelContext(),  # Use the custom model context.
)


async def run_reasoning_agent() -> None:
    result = await agent.run(task="What is the capital of France?")
    print(result)


asyncio.run(run_reasoning_agent())
component_config_schema#

alias of AssistantAgentConfig

component_provider_override: ClassVar[str | None] = 'autogen_agentchat.agents.AssistantAgent'#

Override the provider string for the component. This should be used to prevent internal module names being a part of the module name.

async load_state(state: Mapping[str, Any]) None[source]#

Load the state of the assistant agent.

async on_messages(messages: Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], cancellation_token: CancellationToken) Response[source]#

Handles incoming messages and returns a response.

Note

Agents are stateful and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

async on_messages_stream(messages: Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], cancellation_token: CancellationToken) AsyncGenerator[Annotated[ToolCallRequestEvent | ToolCallExecutionEvent | MemoryQueryEvent | UserInputRequestedEvent | ModelClientStreamingChunkEvent | ThoughtEvent, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Response, None][source]#

Process the incoming messages with the assistant agent and yield events/responses as they happen.

async on_reset(cancellation_token: CancellationToken) None[source]#

Reset the assistant agent to its initialization state.

property produced_message_types: Sequence[type[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]]]#

The types of messages that the agent produces in the Response.chat_message field. They must be ChatMessage types.

async save_state() Mapping[str, Any][source]#

Save the current state of the assistant agent.
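
A sketch of persisting and restoring agent state, assuming the returned state mapping is JSON-serializable:

import asyncio
import json

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="assistant", model_client=model_client)

    # Give the agent some conversation state.
    await agent.run(task="What is the capital of France?")

    # Save the state and serialize it (assumes the mapping is JSON-serializable).
    state = await agent.save_state()
    serialized = json.dumps(state)

    # Restore the state into a fresh agent instance.
    restored_agent = AssistantAgent(name="assistant", model_client=model_client)
    await restored_agent.load_state(json.loads(serialized))


asyncio.run(main())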

class BaseChatAgent(name: str, description: str)[source]#

Bases: ChatAgent, ABC, ComponentBase[BaseModel]

Base class for a chat agent.

This abstract class provides a base implementation for a ChatAgent. To create a new chat agent, subclass this class and implement the on_messages(), on_reset(), and produced_message_types. If streaming is required, also implement the on_messages_stream() method.

An agent is considered stateful and maintains its state between calls to the on_messages() or on_messages_stream() methods. The agent should store its state in the agent instance. The agent should also implement the on_reset() method to reset the agent to its initialization state.

Note

The caller should only pass the new messages to the agent on each call to the on_messages() or on_messages_stream() method. Do not pass the entire conversation history to the agent on each call. This design principle must be followed when creating a new agent.

async close() None[source]#

Called when the runtime is closed.

component_type: ClassVar[ComponentType] = 'agent'#

The logical type of the component.

property description: str#

The description of the agent. This is used by the team to make decisions about which agents to use. The description should describe the agent's capabilities and how to interact with it.

async load_state(state: Mapping[str, Any]) None[source]#

Restore agent from saved state. Default implementation for stateless agents.

property name: str#

The name of the agent. This is used by the team to uniquely identify the agent. It should be unique within the team.

abstract async on_messages(messages: Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], cancellation_token: CancellationToken) Response[source]#

Handles incoming messages and returns a response.

Note

Agents are stateful and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

async on_messages_stream(messages: Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], cancellation_token: CancellationToken) AsyncGenerator[Annotated[ToolCallRequestEvent | ToolCallExecutionEvent | MemoryQueryEvent | UserInputRequestedEvent | ModelClientStreamingChunkEvent | ThoughtEvent, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Response, None][source]#

Handles incoming messages and returns a stream of messages, with the final item being the response. The base implementation in BaseChatAgent simply calls on_messages() and yields the messages in the response.

Note

Agents are stateful and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

abstract async on_reset(cancellation_token: CancellationToken) None[source]#

Resets the agent to its initialization state.

abstract property produced_message_types: Sequence[type[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]]]#

The types of messages that the agent produces in the Response.chat_message field. They must be ChatMessage types.

async run(*, task: str | Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]] | None = None, cancellation_token: CancellationToken | None = None) TaskResult[source]#

Run the agent with the given task and return the result.

async run_stream(*, task: str | Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]] | None = None, cancellation_token: CancellationToken | None = None) AsyncGenerator[Annotated[ToolCallRequestEvent | ToolCallExecutionEvent | MemoryQueryEvent | UserInputRequestedEvent | ModelClientStreamingChunkEvent | ThoughtEvent, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | TaskResult, None][source]#

Run the agent with the given task and return a stream of messages and the final task result as the last item in the stream.
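
Since BaseChatAgent is abstract, the following sketch uses a concrete subclass (AssistantAgent) to illustrate both methods:

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="assistant", model_client=model_client)

    # run() returns a TaskResult whose messages end with the final response.
    result = await agent.run(task="What is the capital of France?")
    print(result.messages[-1].content)  # type: ignore

    # run_stream() yields messages as they are produced; Console renders them.
    await Console(agent.run_stream(task="Name one city in Canada."))


asyncio.run(main())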

async save_state() Mapping[str, Any][source]#

Export state. Default implementation for stateless agents.

class CodeExecutorAgent(name: str, code_executor: CodeExecutor, *, description: str = 'A computer terminal that performs no other action than running Python scripts (provided to it quoted in ```python code blocks), or sh shell scripts (provided to it quoted in ```sh code blocks).', sources: Sequence[str] | None = None)[source]#

Bases: BaseChatAgent, Component[CodeExecutorAgentConfig]

An agent that extracts and executes code snippets found in received messages and returns the output.

It is typically used within a team with another agent that generates code snippets to be executed.

Parameters:
  • name – The name of the agent.

  • code_executor – The CodeExecutor responsible for executing code received in messages (DockerCommandLineCodeExecutor is recommended; see the example below).

  • description (optional) – The description of the agent.

  • sources (optional) – Check only messages from the specified agents for the code to execute.

Note

It is recommended that the CodeExecutorAgent agent uses a Docker container to execute code. This ensures that model-generated code is executed in an isolated environment. To use Docker, your environment must have Docker installed and running. Follow the installation instructions for Docker.

Note

The code executor only processes code that is properly formatted in markdown code blocks using triple backticks. For example:

```python
print("Hello World")
```

# or

```sh
echo "Hello World"
```

In this example, we show how to set up a CodeExecutorAgent agent that uses the DockerCommandLineCodeExecutor to execute code snippets in a Docker container. The work_dir parameter indicates where all executed files are first saved locally before being executed in the Docker container.

import asyncio
from autogen_agentchat.agents import CodeExecutorAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor
from autogen_core import CancellationToken


async def run_code_executor_agent() -> None:
    # Create a code executor agent that uses a Docker container to execute code.
    code_executor = DockerCommandLineCodeExecutor(work_dir="coding")
    await code_executor.start()
    code_executor_agent = CodeExecutorAgent("code_executor", code_executor=code_executor)

    # Run the agent with a given code snippet.
    task = TextMessage(
        content='''Here is some code
```python
print('Hello world')
```
''',
        source="user",
    )
    response = await code_executor_agent.on_messages([task], CancellationToken())
    print(response.chat_message)

    # Stop the code executor.
    await code_executor.stop()


asyncio.run(run_code_executor_agent())
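
If the executor should only act on code from particular agents, the sources parameter restricts it. A brief sketch (the "coder" agent name is hypothetical, and the executor still needs to be started before use):

from autogen_agentchat.agents import CodeExecutorAgent
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor

code_executor = DockerCommandLineCodeExecutor(work_dir="coding")

# Only code blocks found in messages from the "coder" agent are executed;
# code in messages from other agents is ignored.
code_executor_agent = CodeExecutorAgent(
    "code_executor",
    code_executor=code_executor,
    sources=["coder"],
)
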
classmethod _from_config(config: CodeExecutorAgentConfig) Self[source]#

Create a new instance of the component from a configuration object.

Parameters:

config (T) – The configuration object.

Returns:

Self – The new instance of the component.

_to_config() CodeExecutorAgentConfig[source]#

Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.

Returns:

T – The configuration of the component.

component_config_schema#

alias of CodeExecutorAgentConfig

component_provider_override: ClassVar[str | None] = 'autogen_agentchat.agents.CodeExecutorAgent'#

Override the provider string for the component. This should be used to prevent internal module names being a part of the module name.

async on_messages(messages: Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], cancellation_token: CancellationToken) Response[source]#

Handles incoming messages and returns a response.

Note

Agents are stateful and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

async on_reset(cancellation_token: CancellationToken) None[source]#

This is a no-op, as the code executor agent has no mutable state.

property produced_message_types: Sequence[type[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]]]#

The types of messages that the code executor agent produces.

class SocietyOfMindAgent(name: str, team: Team, model_client: ChatCompletionClient, *, description: str = DEFAULT_DESCRIPTION, instruction: str = DEFAULT_INSTRUCTION, response_prompt: str = DEFAULT_RESPONSE_PROMPT)[source]#

Bases: BaseChatAgent, Component[SocietyOfMindAgentConfig]

An agent that uses an inner team of agents to generate responses.

Each time the agent’s on_messages() or on_messages_stream() method is called, it runs the inner team of agents and then uses the model client to generate a response based on the inner team’s messages. Once the response is generated, the agent resets the inner team by calling Team.reset().

Parameters:
  • name (str) – The name of the agent.

  • team (Team) – The team of agents to use.

  • model_client (ChatCompletionClient) – The model client to use for preparing responses.

  • description (str, optional) – The description of the agent.

  • instruction (str, optional) – The instruction to use when generating a response using the inner team’s messages. Defaults to DEFAULT_INSTRUCTION. It assumes the role of ‘system’.

  • response_prompt (str, optional) – The response prompt to use when generating a response using the inner team’s messages. Defaults to DEFAULT_RESPONSE_PROMPT. It assumes the role of ‘system’.

Example:

import asyncio
from autogen_agentchat.ui import Console
from autogen_agentchat.agents import AssistantAgent, SocietyOfMindAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    agent1 = AssistantAgent("assistant1", model_client=model_client, system_message="You are a writer, write well.")
    agent2 = AssistantAgent(
        "assistant2",
        model_client=model_client,
        system_message="You are an editor, provide critical feedback. Respond with 'APPROVE' if the text addresses all feedbacks.",
    )
    inner_termination = TextMentionTermination("APPROVE")
    inner_team = RoundRobinGroupChat([agent1, agent2], termination_condition=inner_termination)

    society_of_mind_agent = SocietyOfMindAgent("society_of_mind", team=inner_team, model_client=model_client)

    agent3 = AssistantAgent(
        "assistant3", model_client=model_client, system_message="Translate the text to Spanish."
    )
    team = RoundRobinGroupChat([society_of_mind_agent, agent3], max_turns=2)

    stream = team.run_stream(task="Write a short story with a surprising ending.")
    await Console(stream)


asyncio.run(main())
DEFAULT_DESCRIPTION = 'An agent that uses an inner team of agents to generate responses.'#

The default description for a SocietyOfMindAgent.

Type:

str

DEFAULT_INSTRUCTION = 'Earlier you were asked to fulfill a request. You and your team worked diligently to address that request. Here is a transcript of that conversation:'#

The default instruction to use when generating a response using the inner team’s messages. The instruction will be prepended to the inner team’s messages when generating a response using the model. It assumes the role of ‘system’.

Type:

str

DEFAULT_RESPONSE_PROMPT = 'Output a standalone response to the original request, without mentioning any of the intermediate discussion.'#

The default response prompt to use when generating a response using the inner team’s messages. It assumes the role of ‘system’.

Type:

str

classmethod _from_config(config: SocietyOfMindAgentConfig) Self[source]#

Create a new instance of the component from a configuration object.

Parameters:

config (T) – The configuration object.

Returns:

Self – The new instance of the component.

_to_config() SocietyOfMindAgentConfig[source]#

Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.

Returns:

T – The configuration of the component.

component_config_schema#

alias of SocietyOfMindAgentConfig

component_provider_override: ClassVar[str | None] = 'autogen_agentchat.agents.SocietyOfMindAgent'#

Override the provider string for the component. This should be used to prevent internal module names being a part of the module name.

async load_state(state: Mapping[str, Any]) None[source]#

Restore agent from saved state. Default implementation for stateless agents.

async on_messages(messages: Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], cancellation_token: CancellationToken) Response[source]#

Handles incoming messages and returns a response.

Note

Agents are stateful and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

async on_messages_stream(messages: Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], cancellation_token: CancellationToken) AsyncGenerator[Annotated[ToolCallRequestEvent | ToolCallExecutionEvent | MemoryQueryEvent | UserInputRequestedEvent | ModelClientStreamingChunkEvent | ThoughtEvent, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Response, None][source]#

Handles incoming messages and returns a stream of messages, with the final item being the response. The base implementation in BaseChatAgent simply calls on_messages() and yields the messages in the response.

Note

Agents are stateful and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

async on_reset(cancellation_token: CancellationToken) None[source]#

Resets the agent to its initialization state.

property produced_message_types: Sequence[type[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]]]#

The types of messages that the agent produces in the Response.chat_message field. They must be ChatMessage types.

async save_state() Mapping[str, Any][source]#

Export state. Default implementation for stateless agents.

class UserProxyAgent(name: str, *, description: str = 'A human user', input_func: Callable[[str], str] | Callable[[str, CancellationToken | None], Awaitable[str]] | None = None)[source]#

Bases: BaseChatAgent, Component[UserProxyAgentConfig]

An agent that can represent a human user through an input function.

This agent can be used to represent a human user in a chat system by providing a custom input function.

Note

Using UserProxyAgent puts a running team in a temporary blocked state until the user responds. So it is important to time out the user input function and cancel using the CancellationToken if the user does not respond. The input function should also handle exceptions and return a default response if needed.

For typical use cases that involve slow human responses, it is recommended to use termination conditions such as HandoffTermination or SourceMatchTermination to stop the running team and return the control to the application. You can run the team again with the user input. This way, the state of the team can be saved and restored when the user responds.

See Human-in-the-loop for more information.

Parameters:
  • name (str) – The name of the agent.

  • description (str, optional) – A description of the agent.

  • input_func (Callable[[str], str] | Callable[[str, CancellationToken | None], Awaitable[str]] | None, optional) – A function that takes a prompt and returns a user input string. Both synchronous and asynchronous variants are supported; the asynchronous variant also receives an optional CancellationToken.


Example

Simple usage case:

import asyncio
from autogen_core import CancellationToken
from autogen_agentchat.agents import UserProxyAgent
from autogen_agentchat.messages import TextMessage


async def simple_user_agent():
    agent = UserProxyAgent("user_proxy")
    response = await asyncio.create_task(
        agent.on_messages(
            [TextMessage(content="What is your name? ", source="user")],
            cancellation_token=CancellationToken(),
        )
    )
    print(f"Your name is {response.chat_message.content}")

Example

Cancellable usage case:

import asyncio
from typing import Any
from autogen_core import CancellationToken
from autogen_agentchat.agents import UserProxyAgent
from autogen_agentchat.messages import TextMessage


token = CancellationToken()
agent = UserProxyAgent("user_proxy")


async def timeout(delay: float):
    await asyncio.sleep(delay)


def cancellation_callback(task: asyncio.Task[Any]):
    token.cancel()


async def cancellable_user_agent():
    try:
        timeout_task = asyncio.create_task(timeout(3))
        timeout_task.add_done_callback(cancellation_callback)
        agent_task = asyncio.create_task(
            agent.on_messages(
                [TextMessage(content="What is your name? ", source="user")],
                cancellation_token=token,
            )
        )
        response = await agent_task
        print(f"Your name is {response.chat_message.content}")
    except Exception as e:
        print(f"Exception: {e}")
    except BaseException as e:
        print(f"BaseException: {e}")


asyncio.run(cancellable_user_agent())
class InputRequestContext[source]#

Bases: object

classmethod request_id() str[source]#

classmethod _from_config(config: UserProxyAgentConfig) Self[source]#

Create a new instance of the component from a configuration object.

Parameters:

config (T) – The configuration object.

Returns:

Self – The new instance of the component.

_to_config() UserProxyAgentConfig[source]#

Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.

Returns:

T – The configuration of the component.

component_config_schema#

alias of UserProxyAgentConfig

component_provider_override: ClassVar[str | None] = 'autogen_agentchat.agents.UserProxyAgent'#

Override the provider string for the component. This should be used to prevent internal module names being a part of the module name.

component_type: ClassVar[ComponentType] = 'agent'#

The logical type of the component.

async on_messages(messages: Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], cancellation_token: CancellationToken) Response[source]#

Handles incoming messages and returns a response.

Note

Agents are stateful and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

async on_messages_stream(messages: Sequence[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], cancellation_token: CancellationToken) AsyncGenerator[Annotated[ToolCallRequestEvent | ToolCallExecutionEvent | MemoryQueryEvent | UserInputRequestedEvent | ModelClientStreamingChunkEvent | ThoughtEvent, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')] | Response, None][source]#

Handle incoming messages by requesting user input.

async on_reset(cancellation_token: CancellationToken | None = None) None[source]#

Reset agent state.

property produced_message_types: Sequence[type[Annotated[TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]]]#

Message types this agent can produce.