# Quick Start
The AgentChat API, introduced in AutoGen v0.4x, offers a similar level of abstraction to the default agent classes in AutoGen v0.2x.
## Installation
Install the `autogen-agentchat` package using pip:

```shell
pip install autogen-agentchat==0.4.0dev0
```
> **Note:** For further installation instructions, please refer to the package information.
## Creating a Simple Agent Team
The following example illustrates creating a simple agent team with two agents that interact to solve a task:

- `CodingAssistantAgent`, which generates responses using an LLM.
- `CodeExecutorAgent`, which executes code snippets and returns the output.
Because the `CodeExecutorAgent` uses a Docker command-line code executor to run code snippets, you need to have Docker installed and running on your machine.
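Before running the example, it can help to confirm that the Docker CLI is installed and the daemon is responding. The sketch below is a minimal, hypothetical helper (not part of AutoGen) that checks both:

```python
import shutil
import subprocess


def docker_available() -> bool:
    """Return True if the Docker CLI is on PATH and the daemon responds."""
    # shutil.which returns None when the `docker` binary is not installed.
    if shutil.which("docker") is None:
        return False
    try:
        # `docker info` talks to the daemon; a nonzero exit means it is not running.
        subprocess.run(["docker", "info"], capture_output=True, check=True, timeout=10)
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return False
```

If this returns `False`, start Docker Desktop (or the `docker` service on Linux) before continuing.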
The task is to “Create a plot of NVIDIA and TESLA stock returns YTD from 2024-01-01 and save it to ‘nvidia_tesla_2024_ytd.png’.”
```python
import asyncio
import logging

from autogen_agentchat import EVENT_LOGGER_NAME
from autogen_agentchat.agents import CodeExecutorAgent, CodingAssistantAgent
from autogen_agentchat.logging import ConsoleLogHandler
from autogen_agentchat.teams import RoundRobinGroupChat, StopMessageTermination
from autogen_core.components.models import OpenAIChatCompletionClient
from autogen_ext.code_executor.docker_executor import DockerCommandLineCodeExecutor

# Log agent events to the console.
logger = logging.getLogger(EVENT_LOGGER_NAME)
logger.addHandler(ConsoleLogHandler())
logger.setLevel(logging.INFO)


async def main() -> None:
    async with DockerCommandLineCodeExecutor(work_dir="coding") as code_executor:
        code_executor_agent = CodeExecutorAgent("code_executor", code_executor=code_executor)
        coding_assistant_agent = CodingAssistantAgent(
            "coding_assistant", model_client=OpenAIChatCompletionClient(model="gpt-4o", api_key="YOUR_API_KEY")
        )
        group_chat = RoundRobinGroupChat([coding_assistant_agent, code_executor_agent])
        result = await group_chat.run(
            task="Create a plot of NVIDIA and TESLA stock returns YTD from 2024-01-01 and save it to 'nvidia_tesla_2024_ytd.png'.",
            termination_condition=StopMessageTermination(),
        )


asyncio.run(main())
```
The equivalent task in v0.2x looks like this:

```python
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import DockerCommandLineCodeExecutor

llm_config = {"model": "gpt-4o", "api_type": "openai", "api_key": "YOUR_API_KEY"}

code_executor = DockerCommandLineCodeExecutor(work_dir="coding")
assistant = AssistantAgent("assistant", llm_config=llm_config)
code_executor_agent = UserProxyAgent(
    "code_executor_agent",
    code_execution_config={"executor": code_executor},
)
result = code_executor_agent.initiate_chat(
    assistant,
    message="Create a plot of NVIDIA and TESLA stock returns YTD from 2024-01-01 and save it to 'nvidia_tesla_2024_ytd.png'.",
)
code_executor.stop()
```
> **Tip:** AgentChat in v0.4x provides similar abstractions to the default agents in v0.2x. The `CodingAssistantAgent` and `CodeExecutorAgent` in v0.4x are equivalent to the `AssistantAgent` and `UserProxyAgent` with code execution in v0.2x.
If you are exploring migrating your code from AutoGen v0.2x to v0.4x, the following are some key differences to consider:

- In v0.4x, agent interactions are managed by teams (e.g., `RoundRobinGroupChat`), replacing direct chat initiation.
- v0.4x uses async/await syntax for improved performance and scalability.
- Configuration in v0.4x is more modular, with separate components for code execution and LLM clients.
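To illustrate the async/await point, the sketch below uses plain `asyncio` to run two independent steps concurrently rather than one after the other. The `run_step` coroutine is a hypothetical stand-in, not part of the AgentChat API:

```python
import asyncio


async def run_step(name: str) -> str:
    # Simulate a non-blocking unit of work (e.g., an agent awaiting a model
    # response); other coroutines can make progress during the sleep.
    await asyncio.sleep(0.01)
    return f"{name} done"


async def main() -> list:
    # Both steps run concurrently; total wall time is ~0.01s, not ~0.02s.
    return await asyncio.gather(run_step("team_a"), run_step("team_b"))


results = asyncio.run(main())
```

This is the same pattern behind `await group_chat.run(...)` above: because team runs are coroutines, multiple teams or tasks can be awaited concurrently from a single event loop.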