autogen_agentchat.agents#

class autogen_agentchat.agents.AssistantAgent(name: str, model_client: ChatCompletionClient, *, tools: List[Tool | Callable[[...], Any] | Callable[[...], Awaitable[Any]]] | None = None, handoffs: List[Handoff | str] | None = None, description: str = 'An agent that provides assistance with ability to use tools.', system_message: str = "You are a helpful AI assistant. Solve tasks using your tools. Reply with 'TERMINATE' when the task has been completed.")[source]#

Bases: BaseChatAgent

An agent that provides assistance with tool use.

It responds with a StopMessage when ‘terminate’ is detected in the response.

Parameters:
  • name (str) – The name of the agent.

  • model_client (ChatCompletionClient) – The model client to use for inference.

  • tools (List[Tool | Callable[..., Any] | Callable[..., Awaitable[Any]]] | None, optional) – The tools to register with the agent.

  • handoffs (List[Handoff | str] | None, optional) – The handoff configurations for the agent, allowing it to transfer to other agents by responding with a HandoffMessage. If a handoff is a string, it should represent the target agent’s name.

  • description (str, optional) – The description of the agent.

  • system_message (str, optional) – The system message for the model.

Raises:
  • ValueError – If tool names are not unique.

  • ValueError – If handoff names are not unique.

  • ValueError – If any handoff name collides with a tool name.

Examples

The following example demonstrates how to create an assistant agent with a model client and generate a response to a simple task.

import asyncio
from autogen_core.base import CancellationToken
from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="assistant", model_client=model_client)

    response = await agent.on_messages(
        [TextMessage(content="What is the capital of France?", source="user")], CancellationToken()
    )
    print(response)


asyncio.run(main())

The following example demonstrates how to create an assistant agent with a model client and a tool, and generate a stream of messages for a task.

import asyncio
from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core.base import CancellationToken


async def get_current_time() -> str:
    return "The current time is 12:00 PM."


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="assistant", model_client=model_client, tools=[get_current_time])

    stream = agent.on_messages_stream(
        [TextMessage(content="What is the current time?", source="user")], CancellationToken()
    )

    async for message in stream:
        print(message)


asyncio.run(main())
async on_messages(messages: Sequence[TextMessage | MultiModalMessage | StopMessage | HandoffMessage], cancellation_token: CancellationToken) → Response[source]#

Handles incoming messages and returns a response.

async on_messages_stream(messages: Sequence[TextMessage | MultiModalMessage | StopMessage | HandoffMessage], cancellation_token: CancellationToken) → AsyncGenerator[TextMessage | MultiModalMessage | StopMessage | HandoffMessage | ToolCallMessage | ToolCallResultMessage | Response, None][source]#

Handles incoming messages and returns a stream of messages, with the final item being the response. The base implementation in BaseChatAgent simply calls on_messages() and yields the messages in the response.

async on_reset(cancellation_token: CancellationToken) → None[source]#

Reset the assistant agent to its initialization state.

property produced_message_types: List[type[TextMessage | MultiModalMessage | StopMessage | HandoffMessage]]#

The types of messages that the assistant agent produces.

class autogen_agentchat.agents.BaseChatAgent(name: str, description: str)[source]#

Bases: ChatAgent, ABC

Base class for a chat agent.

property description: str#

The description of the agent. This is used by the team to make decisions about which agents to use. It should describe the agent's capabilities and how to interact with it.

property name: str#

The name of the agent. This is used by the team to uniquely identify the agent; it must be unique within the team.

abstract async on_messages(messages: Sequence[TextMessage | MultiModalMessage | StopMessage | HandoffMessage], cancellation_token: CancellationToken) → Response[source]#

Handles incoming messages and returns a response.

async on_messages_stream(messages: Sequence[TextMessage | MultiModalMessage | StopMessage | HandoffMessage], cancellation_token: CancellationToken) → AsyncGenerator[TextMessage | MultiModalMessage | StopMessage | HandoffMessage | ToolCallMessage | ToolCallResultMessage | Response, None][source]#

Handles incoming messages and returns a stream of messages, with the final item being the response. The base implementation in BaseChatAgent simply calls on_messages() and yields the messages in the response.

abstract async on_reset(cancellation_token: CancellationToken) → None[source]#

Resets the agent to its initialization state.

abstract property produced_message_types: List[type[TextMessage | MultiModalMessage | StopMessage | HandoffMessage]]#

The types of messages that the agent produces.

async run(*, task: str | TextMessage | MultiModalMessage | None = None, cancellation_token: CancellationToken | None = None) → TaskResult[source]#

Run the agent with the given task and return the result.

async run_stream(*, task: str | TextMessage | MultiModalMessage | None = None, cancellation_token: CancellationToken | None = None) → AsyncGenerator[TextMessage | MultiModalMessage | StopMessage | HandoffMessage | ToolCallMessage | ToolCallResultMessage | TaskResult, None][source]#

Run the agent with the given task and return a stream of messages and the final task result as the last item in the stream.
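The streaming contract above (intermediate messages first, the final result as the last item) can be sketched with plain asyncio, independent of autogen; the message shapes below are placeholders for illustration only:

```python
import asyncio
from typing import Any, AsyncGenerator, List


async def fake_run_stream() -> AsyncGenerator[Any, None]:
    # Intermediate items, analogous to TextMessage / ToolCallMessage.
    yield {"type": "TextMessage", "content": "thinking..."}
    yield {"type": "ToolCallMessage", "content": "get_current_time()"}
    # The final item is the overall result, analogous to TaskResult.
    yield {"type": "TaskResult", "content": "The current time is 12:00 PM."}


async def collect() -> List[Any]:
    items = []
    async for item in fake_run_stream():
        items.append(item)
    return items


items = asyncio.run(collect())
```

A consumer can therefore treat everything before the last item as progress updates and the last item as the authoritative result.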

class autogen_agentchat.agents.CodeExecutorAgent(name: str, code_executor: CodeExecutor, *, description: str = 'A computer terminal that performs no other action than running Python scripts (provided to it quoted in ```python code blocks), or sh shell scripts (provided to it quoted in ```sh code blocks).')[source]#

Bases: BaseChatAgent

An agent that executes code snippets and reports the results.
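As a rough illustration of the input this agent consumes, the sketch below extracts fenced python and sh blocks from a message. This is an assumption-laden sketch of the general technique, not the library's actual parser:

```python
import re
from typing import List, Tuple

FENCE = "`" * 3  # a markdown code fence: three backticks

# Match fenced blocks such as a python or sh code block (illustrative pattern).
CODE_BLOCK = re.compile(FENCE + r"(python|sh)\n(.*?)" + FENCE, re.DOTALL)


def extract_code_blocks(message: str) -> List[Tuple[str, str]]:
    """Return (language, code) pairs found in a markdown-style message."""
    return CODE_BLOCK.findall(message)


message = "Run this:\n" + FENCE + "python\nprint('hi')\n" + FENCE + "\n"
blocks = extract_code_blocks(message)
```

Each extracted (language, code) pair would then be handed to the configured CodeExecutor for execution.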

async on_messages(messages: Sequence[TextMessage | MultiModalMessage | StopMessage | HandoffMessage], cancellation_token: CancellationToken) → Response[source]#

Handles incoming messages and returns a response.

async on_reset(cancellation_token: CancellationToken) → None[source]#

This is a no-op, as the code executor agent has no mutable state.

property produced_message_types: List[type[TextMessage | MultiModalMessage | StopMessage | HandoffMessage]]#

The types of messages that the code executor agent produces.

class autogen_agentchat.agents.CodingAssistantAgent(name: str, model_client: ChatCompletionClient, *, description: str = 'A helpful and general-purpose AI assistant that has strong language skills, Python skills, and Linux command line skills.', system_message: str = 'You are a helpful AI assistant.\nSolve tasks using your coding and language skills.\nIn the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute.\n    1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself.\n    2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly.\nSolve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.\nWhen using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can\'t modify your code. So do not suggest incomplete code which requires users to modify. Don\'t use a code block if it\'s not intended to be executed by the user.\nIf you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line. Don\'t include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use \'print\' function for the output when relevant. Check the execution result returned by the user.\nIf the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can\'t be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.\nWhen you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.\nReply "TERMINATE" in the end when code has been executed and task is complete.')[source]#

Bases: AssistantAgent

[DEPRECATED] An agent that provides coding assistance using an LLM model client.

It responds with a StopMessage when ‘terminate’ is detected in the response.

pydantic model autogen_agentchat.agents.Handoff[source]#

Bases: BaseModel

Handoff configuration for AssistantAgent.

Show JSON schema
{
   "title": "Handoff",
   "description": "Handoff configuration for :class:`AssistantAgent`.",
   "type": "object",
   "properties": {
      "target": {
         "title": "Target",
         "type": "string"
      },
      "description": {
         "default": null,
         "title": "Description",
         "type": "string"
      },
      "name": {
         "default": null,
         "title": "Name",
         "type": "string"
      },
      "message": {
         "default": null,
         "title": "Message",
         "type": "string"
      }
   },
   "required": [
      "target"
   ]
}

Fields:
  • description (str)

  • message (str)

  • name (str)

  • target (str)

Validators:
  • set_defaults » all fields

field description: str = None#

The description of the handoff such as the condition under which it should happen and the target agent’s ability. If not provided, it is generated from the target agent’s name.

Validated by:
  • set_defaults

field message: str = None#

The message to the target agent. If not provided, it is generated from the target agent’s name.

Validated by:
  • set_defaults

field name: str = None#

The name of this handoff configuration. If not provided, it is generated from the target agent’s name.

Validated by:
  • set_defaults

field target: str [Required]#

The name of the target agent to handoff to.

Validated by:
  • set_defaults

validator set_defaults  »  all fields[source]#

property handoff_tool: Tool#

Create a handoff tool from this handoff configuration.
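The default-filling behavior of set_defaults can be sketched in plain Python. The exact default strings below are assumptions for illustration; the library may generate different text:

```python
from typing import Optional


def handoff_defaults(
    target: str,
    name: Optional[str] = None,
    description: Optional[str] = None,
    message: Optional[str] = None,
) -> dict:
    """Fill unset handoff fields from the target agent's name (illustrative only)."""
    # Hypothetical defaults derived from the target name, mirroring set_defaults.
    name = name or f"transfer_to_{target}"
    description = description or f"Handoff to {target}."
    message = message or f"Transferred to {target}, adopting the role of {target} immediately."
    return {"target": target, "name": name, "description": description, "message": message}


h = handoff_defaults("agent2")
```

This is why only target is required in the JSON schema above: every other field can be derived from it.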

class autogen_agentchat.agents.SocietyOfMindAgent(name: str, team: Team, model_client: ChatCompletionClient, *, description: str = 'An agent that uses an inner team of agents to generate responses.', task_prompt: str = '{transcript}\nContinue.', response_prompt: str = 'Here is a transcript of conversation so far:\n{transcript}\n\\Provide a response to the original request.')[source]#

Bases: BaseChatAgent

An agent that uses an inner team of agents to generate responses.

Each time the agent’s on_messages() or on_messages_stream() method is called, it runs the inner team of agents and then uses the model client to generate a response based on the inner team’s messages. Once the response is generated, the agent resets the inner team by calling Team.reset().

Parameters:
  • name (str) – The name of the agent.

  • team (Team) – The team of agents to use.

  • model_client (ChatCompletionClient) – The model client to use for preparing responses.

  • description (str, optional) – The description of the agent.

Example:

import asyncio
from autogen_agentchat.agents import AssistantAgent, SocietyOfMindAgent
from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.task import MaxMessageTermination


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    agent1 = AssistantAgent("assistant1", model_client=model_client, system_message="You are a helpful assistant.")
    agent2 = AssistantAgent("assistant2", model_client=model_client, system_message="You are a helpful assistant.")
    inner_termination = MaxMessageTermination(3)
    inner_team = RoundRobinGroupChat([agent1, agent2], termination_condition=inner_termination)

    society_of_mind_agent = SocietyOfMindAgent("society_of_mind", team=inner_team, model_client=model_client)

    agent3 = AssistantAgent("assistant3", model_client=model_client, system_message="You are a helpful assistant.")
    agent4 = AssistantAgent("assistant4", model_client=model_client, system_message="You are a helpful assistant.")
    outer_termination = MaxMessageTermination(10)
    team = RoundRobinGroupChat([society_of_mind_agent, agent3, agent4], termination_condition=outer_termination)

    stream = team.run_stream(task="Tell me a one-liner joke.")
    async for message in stream:
        print(message)


asyncio.run(main())
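The control flow described above (run the inner team, summarize its transcript with the model client, then reset the team) can be sketched with stub callables in plain Python; this is an illustrative flow, not the library's implementation:

```python
import asyncio
from typing import Awaitable, Callable, List


async def society_of_mind_respond(
    task: str,
    run_team: Callable[[str], Awaitable[List[str]]],
    summarize: Callable[[List[str]], Awaitable[str]],
    reset_team: Callable[[], Awaitable[None]],
) -> str:
    """Run the inner team, summarize its transcript, then reset it (illustrative)."""
    inner_messages = await run_team(task)        # inner team produces a transcript
    response = await summarize(inner_messages)   # model client prepares the final reply
    await reset_team()                           # analogous to Team.reset()
    return response


# Stubs standing in for a real team and model client.
async def run_team(task: str) -> List[str]:
    return [task, "draft answer"]


async def summarize(messages: List[str]) -> str:
    return "final: " + messages[-1]


async def reset_team() -> None:
    pass


answer = asyncio.run(society_of_mind_respond("joke please", run_team, summarize, reset_team))
```

Because the inner team is reset after every call, each on_messages() invocation starts the inner conversation fresh.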
async on_messages(messages: Sequence[TextMessage | MultiModalMessage | StopMessage | HandoffMessage], cancellation_token: CancellationToken) → Response[source]#

Handles incoming messages and returns a response.

async on_messages_stream(messages: Sequence[TextMessage | MultiModalMessage | StopMessage | HandoffMessage], cancellation_token: CancellationToken) → AsyncGenerator[TextMessage | MultiModalMessage | StopMessage | HandoffMessage | ToolCallMessage | ToolCallResultMessage | Response, None][source]#

Handles incoming messages and returns a stream of messages, with the final item being the response. The base implementation in BaseChatAgent simply calls on_messages() and yields the messages in the response.

async on_reset(cancellation_token: CancellationToken) → None[source]#

Resets the agent to its initialization state.

property produced_message_types: List[type[TextMessage | MultiModalMessage | StopMessage | HandoffMessage]]#

The types of messages that the agent produces.

class autogen_agentchat.agents.ToolUseAssistantAgent(name: str, model_client: ChatCompletionClient, registered_tools: List[Tool | Callable[[...], Any] | Callable[[...], Awaitable[Any]]], *, description: str = 'An agent that provides assistance with ability to use tools.', system_message: str = "You are a helpful AI assistant. Solve tasks using your tools. Reply with 'TERMINATE' when the task has been completed.")[source]#

Bases: AssistantAgent

[DEPRECATED] An agent that provides assistance with tool use.

It responds with a StopMessage when ‘terminate’ is detected in the response.

Parameters:
  • name (str) – The name of the agent.

  • model_client (ChatCompletionClient) – The model client to use for inference.

  • registered_tools (List[Tool | Callable[..., Any] | Callable[..., Awaitable[Any]]]) – The tools to register with the agent.

  • description (str, optional) – The description of the agent.

  • system_message (str, optional) – The system message for the model.