autogen_agentchat.teams#
- class autogen_agentchat.teams.BaseGroupChat(participants: List[ChatAgent], group_chat_manager_class: type[BaseGroupChatManager], termination_condition: TerminationCondition | None = None)[source]#
The base class for group chat teams.
To implement a group chat team, first create a subclass of BaseGroupChatManager, and then create a subclass of BaseGroupChat that uses the group chat manager.
- async reset() → None [source]#
Reset the team and its participants to their initial state.
The team must be stopped before it can be reset.
- Raises:
RuntimeError – If the team has not been initialized or is currently running.
Example using the RoundRobinGroupChat team:

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.task import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent1 = AssistantAgent("Assistant1", model_client=model_client)
    agent2 = AssistantAgent("Assistant2", model_client=model_client)
    termination = MaxMessageTermination(3)
    team = RoundRobinGroupChat([agent1, agent2], termination_condition=termination)
    stream = team.run_stream(task="Count from 1 to 10, respond one at a time.")
    async for message in stream:
        print(message)
    # Reset the team.
    await team.reset()
    stream = team.run_stream(task="Count from 1 to 10, respond one at a time.")
    async for message in stream:
        print(message)


asyncio.run(main())
```
- async run(*, task: str | TextMessage | MultiModalMessage | None = None, cancellation_token: CancellationToken | None = None) → TaskResult [source]#
Run the team and return the result. The base implementation uses run_stream() to run the team and then returns the final result. Once the team is stopped, the termination condition is reset.
Example using the RoundRobinGroupChat team:

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.task import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent1 = AssistantAgent("Assistant1", model_client=model_client)
    agent2 = AssistantAgent("Assistant2", model_client=model_client)
    termination = MaxMessageTermination(3)
    team = RoundRobinGroupChat([agent1, agent2], termination_condition=termination)
    result = await team.run(task="Count from 1 to 10, respond one at a time.")
    print(result)
    # Run the team again without a task to continue the previous task.
    result = await team.run()
    print(result)


asyncio.run(main())
```
- async run_stream(*, task: str | TextMessage | MultiModalMessage | None = None, cancellation_token: CancellationToken | None = None) → AsyncGenerator[TextMessage | MultiModalMessage | StopMessage | HandoffMessage | ToolCallMessage | ToolCallResultMessage | TaskResult, None] [source]#
Run the team and produce a stream of messages, with the final result of type TaskResult as the last item in the stream. Once the team is stopped, the termination condition is reset.
Example using the RoundRobinGroupChat team:

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.task import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent1 = AssistantAgent("Assistant1", model_client=model_client)
    agent2 = AssistantAgent("Assistant2", model_client=model_client)
    termination = MaxMessageTermination(3)
    team = RoundRobinGroupChat([agent1, agent2], termination_condition=termination)
    stream = team.run_stream(task="Count from 1 to 10, respond one at a time.")
    async for message in stream:
        print(message)
    # Run the team again without a task to continue the previous task.
    stream = team.run_stream()
    async for message in stream:
        print(message)


asyncio.run(main())
```
- class autogen_agentchat.teams.RoundRobinGroupChat(participants: List[ChatAgent], termination_condition: TerminationCondition | None = None)[source]#
Bases: BaseGroupChat
A team that runs a group chat with participants taking turns in a round-robin fashion to publish a message to all.
If a single participant is in the team, the participant will be the only speaker.
- Parameters:
participants (List[BaseChatAgent]) – The participants in the group chat.
termination_condition (TerminationCondition, optional) – The termination condition for the group chat. Defaults to None. Without a termination condition, the group chat will run indefinitely.
- Raises:
ValueError – If no participants are provided or if participant names are not unique.
Examples:
A team with one participant with tools:
```python
import asyncio

from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.task import TextMentionTermination


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    async def get_weather(location: str) -> str:
        return f"The weather in {location} is sunny."

    assistant = AssistantAgent(
        "Assistant",
        model_client=model_client,
        tools=[get_weather],
    )
    termination = TextMentionTermination("TERMINATE")
    team = RoundRobinGroupChat([assistant], termination_condition=termination)
    stream = team.run_stream("What's the weather in New York?")
    async for message in stream:
        print(message)


asyncio.run(main())
```
A team with multiple participants:
```python
import asyncio

from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.task import TextMentionTermination


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent1 = AssistantAgent("Assistant1", model_client=model_client)
    agent2 = AssistantAgent("Assistant2", model_client=model_client)
    termination = TextMentionTermination("TERMINATE")
    team = RoundRobinGroupChat([agent1, agent2], termination_condition=termination)
    stream = team.run_stream("Tell me some jokes.")
    async for message in stream:
        print(message)


asyncio.run(main())
```
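The round-robin turn order described above can be modeled with a simple cycle in plain Python. This is an illustrative sketch of the selection rule only, not the library's actual implementation:

```python
from itertools import cycle

# Illustrative model of round-robin speaker selection: participants take
# turns in list order, wrapping around. With a single participant, that
# participant is always the speaker. This is a sketch, not library code.
participants = ["Assistant1", "Assistant2"]
turn_order = cycle(participants)

first_six_speakers = [next(turn_order) for _ in range(6)]
print(first_six_speakers)
```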
- class autogen_agentchat.teams.SelectorGroupChat(participants: List[ChatAgent], model_client: ChatCompletionClient, *, termination_condition: TerminationCondition | None = None, selector_prompt: str = 'You are in a role play game. The following roles are available:\n{roles}.\nRead the following conversation. Then select the next role from {participants} to play. Only return the role.\n\n{history}\n\nRead the above conversation. Then select the next role from {participants} to play. Only return the role.\n', allow_repeated_speaker: bool = False, selector_func: Callable[[Sequence[TextMessage | MultiModalMessage | StopMessage | HandoffMessage | ToolCallMessage | ToolCallResultMessage]], str | None] | None = None)[source]#
Bases: BaseGroupChat
A group chat team in which participants take turns publishing a message to all, using a ChatCompletion model to select the next speaker after each message.
- Parameters:
participants (List[ChatAgent]) – The participants in the group chat. Participant names must be unique, and there must be at least two participants.
model_client (ChatCompletionClient) – The ChatCompletion model client used to select the next speaker.
termination_condition (TerminationCondition, optional) – The termination condition for the group chat. Defaults to None. Without a termination condition, the group chat will run indefinitely.
selector_prompt (str, optional) – The prompt template to use for selecting the next speaker. Must contain '{roles}', '{participants}', and '{history}' to be filled in.
allow_repeated_speaker (bool, optional) – Whether to allow the same speaker to be selected consecutively. Defaults to False.
selector_func (Callable[[Sequence[AgentMessage]], str | None], optional) – A custom selector function that takes the conversation history and returns the name of the next speaker. If provided, this function will be used to override the model to select the next speaker. If the function returns None, the model will be used to select the next speaker.
- Raises:
ValueError – If the number of participants is less than two or if the selector prompt is invalid.
Examples:
A team with multiple participants:
```python
import asyncio

from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import SelectorGroupChat
from autogen_agentchat.task import TextMentionTermination


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    async def lookup_hotel(location: str) -> str:
        return f"Here are some hotels in {location}: hotel1, hotel2, hotel3."

    async def lookup_flight(origin: str, destination: str) -> str:
        return f"Here are some flights from {origin} to {destination}: flight1, flight2, flight3."

    async def book_trip() -> str:
        return "Your trip is booked!"

    travel_advisor = AssistantAgent(
        "Travel_Advisor",
        model_client,
        tools=[book_trip],
        description="Helps with travel planning.",
    )
    hotel_agent = AssistantAgent(
        "Hotel_Agent",
        model_client,
        tools=[lookup_hotel],
        description="Helps with hotel booking.",
    )
    flight_agent = AssistantAgent(
        "Flight_Agent",
        model_client,
        tools=[lookup_flight],
        description="Helps with flight booking.",
    )
    termination = TextMentionTermination("TERMINATE")
    team = SelectorGroupChat(
        [travel_advisor, hotel_agent, flight_agent],
        model_client=model_client,
        termination_condition=termination,
    )
    stream = team.run_stream("Book a 3-day trip to new york.")
    async for message in stream:
        print(message)


asyncio.run(main())
```
A team with a custom selector function:
```python
import asyncio

from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import SelectorGroupChat
from autogen_agentchat.task import TextMentionTermination


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    def check_calculation(x: int, y: int, answer: int) -> str:
        if x + y == answer:
            return "Correct!"
        else:
            return "Incorrect!"

    agent1 = AssistantAgent(
        "Agent1",
        model_client,
        description="For calculation",
        system_message="Calculate the sum of two numbers",
    )
    agent2 = AssistantAgent(
        "Agent2",
        model_client,
        tools=[check_calculation],
        description="For checking calculation",
        system_message="Check the answer and respond with 'Correct!' or 'Incorrect!'",
    )

    def selector_func(messages):
        if len(messages) == 1 or messages[-1].content == "Incorrect!":
            return "Agent1"
        if messages[-1].source == "Agent1":
            return "Agent2"
        return None

    termination = TextMentionTermination("Correct!")
    team = SelectorGroupChat(
        [agent1, agent2],
        model_client=model_client,
        selector_func=selector_func,
        termination_condition=termination,
    )
    stream = team.run_stream("What is 1 + 1?")
    async for message in stream:
        print(message)


asyncio.run(main())
```
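The selector_prompt template is filled in with the roles, participant names, and conversation history before the model is asked to pick a speaker. The sketch below shows only the template substitution step; the exact formatting of the roles and history values here is an assumption for illustration, not SelectorGroupChat's internal rendering:

```python
# Illustrative sketch of filling in a selector prompt template. The template
# must contain '{roles}', '{participants}', and '{history}'. The role and
# history formatting below is an assumption, not the library's internals.
selector_prompt = (
    "You are in a role play game. The following roles are available:\n"
    "{roles}.\n"
    "Read the following conversation. Then select the next role from "
    "{participants} to play. Only return the role.\n\n"
    "{history}\n\n"
    "Read the above conversation. Then select the next role from "
    "{participants} to play. Only return the role.\n"
)

roles = "Agent1: For calculation\nAgent2: For checking calculation"
participants = ["Agent1", "Agent2"]
history = "user: What is 1 + 1?\nAgent1: 1 + 1 = 2"

prompt = selector_prompt.format(roles=roles, participants=str(participants), history=history)
print(prompt)
```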
- class autogen_agentchat.teams.Swarm(participants: List[ChatAgent], termination_condition: TerminationCondition | None = None)[source]#
Bases: BaseGroupChat
A group chat team that selects the next speaker based on handoff messages only.
The first participant in the list of participants is the initial speaker. The next speaker is selected based on the HandoffMessage sent by the current speaker. If no handoff message is sent, the current speaker continues to be the speaker.
- Parameters:
participants (List[ChatAgent]) – The agents participating in the group chat. The first agent in the list is the initial speaker.
termination_condition (TerminationCondition, optional) – The termination condition for the group chat. Defaults to None. Without a termination condition, the group chat will run indefinitely.
Examples:
```python
import asyncio

from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import Swarm
from autogen_agentchat.task import MaxMessageTermination


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent1 = AssistantAgent(
        "Alice",
        model_client=model_client,
        handoffs=["Bob"],
        system_message="You are Alice and you only answer questions about yourself.",
    )
    agent2 = AssistantAgent(
        "Bob",
        model_client=model_client,
        system_message="You are Bob and your birthday is on 1st January.",
    )
    termination = MaxMessageTermination(3)
    team = Swarm([agent1, agent2], termination_condition=termination)
    stream = team.run_stream("What is bob's birthday?")
    async for message in stream:
        print(message)


asyncio.run(main())
```
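The handoff-based selection rule described above can be sketched in plain Python. This is a simplified model for illustration only, not Swarm's actual implementation:

```python
# Simplified sketch of Swarm's speaker-selection rule: the next speaker is
# the target of the latest handoff message, if any; otherwise the current
# speaker keeps the floor. Illustration only, not library code.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Message:
    source: str
    target: Optional[str] = None  # set only for handoff messages


def next_speaker(current: str, last_message: Message) -> str:
    # If the current speaker handed off, switch to the handoff target.
    if last_message.target is not None:
        return last_message.target
    # No handoff: the current speaker continues.
    return current


speaker = "Alice"  # the first participant is the initial speaker
speaker = next_speaker(speaker, Message(source="Alice", target="Bob"))  # handoff to Bob
speaker = next_speaker(speaker, Message(source="Bob"))  # no handoff, Bob continues
print(speaker)
```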