{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Agents\n", "\n", "AutoGen AgentChat provides a set of preset Agents, each with variations in how an agent might respond to messages.\n", "All agents share the following attributes and methods:\n", "\n", "- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.name`: The unique name of the agent.\n", "- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.description`: The description of the agent in text.\n", "- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages`: Send the agent a sequence of {py:class}`~autogen_agentchat.messages.ChatMessage` get a {py:class}`~autogen_agentchat.base.Response`. **It is important to note that agents are expected to be stateful and this method is expected to be called with new messages, not the complete history**.\n", "- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`: Same as {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` but returns an iterator of {py:class}`~autogen_agentchat.messages.AgentEvent` or {py:class}`~autogen_agentchat.messages.ChatMessage` followed by a {py:class}`~autogen_agentchat.base.Response` as the last item.\n", "- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_reset`: Reset the agent to its initial state.\n", "- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run` and {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run_stream`: convenience methods that call {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` and {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream` respectively but offer the same interface as [Teams](./teams.ipynb).\n", "\n", "See {py:mod}`autogen_agentchat.messages` for more information on AgentChat message types.\n", "\n", "\n", "## Assistant Agent\n", "\n", "{py:class}`~autogen_agentchat.agents.AssistantAgent` is a built-in agent that\n", "uses a language model and has the ability to use tools." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from autogen_agentchat.agents import AssistantAgent\n", "from autogen_agentchat.messages import TextMessage\n", "from autogen_agentchat.ui import Console\n", "from autogen_core import CancellationToken\n", "from autogen_ext.models.openai import OpenAIChatCompletionClient" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# Define a tool that searches the web for information.\n", "async def web_search(query: str) -> str:\n", " \"\"\"Find information on the web\"\"\"\n", " return \"AutoGen is a programming framework for building multi-agent applications.\"\n", "\n", "\n", "# Create an agent that uses the OpenAI GPT-4o model.\n", "model_client = OpenAIChatCompletionClient(\n", " model=\"gpt-4o\",\n", " # api_key=\"YOUR_API_KEY\",\n", ")\n", "agent = AssistantAgent(\n", " name=\"assistant\",\n", " model_client=model_client,\n", " tools=[web_search],\n", " system_message=\"Use tools to solve tasks.\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Getting Responses\n", "\n", "We can use the {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages` method to get the agent response to a given message.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ToolCallRequestEvent(source='assistant', models_usage=RequestUsage(prompt_tokens=61, completion_tokens=15), content=[FunctionCall(id='call_hqVC7UJUPhKaiJwgVKkg66ak', arguments='{\"query\":\"AutoGen\"}', name='web_search')]), ToolCallExecutionEvent(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_hqVC7UJUPhKaiJwgVKkg66ak')])]\n", "source='assistant' models_usage=RequestUsage(prompt_tokens=92, completion_tokens=14) content='AutoGen is a programming framework designed for building multi-agent applications.'\n" ] } ], "source": [ "async def assistant_run() -> None:\n", " response = await agent.on_messages(\n", " [TextMessage(content=\"Find information on AutoGen\", source=\"user\")],\n", " cancellation_token=CancellationToken(),\n", " )\n", " print(response.inner_messages)\n", " print(response.chat_message)\n", "\n", "\n", "# Use asyncio.run(assistant_run()) when running in a script.\n", "await assistant_run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The call to the {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages` method\n", "returns a {py:class}`~autogen_agentchat.base.Response`\n", "that contains the agent's final response in the {py:attr}`~autogen_agentchat.base.Response.chat_message` attribute,\n", "as well as a list of inner messages in the {py:attr}`~autogen_agentchat.base.Response.inner_messages` attribute,\n", "which stores the agent's \"thought process\" that led to the final response.\n", "\n", "```{note}\n", "It is important to note that {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages`\n", "will update the internal state of the agent -- it will add the messages to the agent's\n", "history. 
So you should call this method with new messages.\n", "**You should not repeatedly call this method with the same messages or the complete history.**\n", "```\n", "\n", "```{note}\n", "Unlike in v0.2 AgentChat, the tools are executed by the same agent directly within\n", "the same call to {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages`.\n", "By default, the agent will return the result of the tool call as the final response.\n", "```\n", "\n", "You can also call the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run` method, which is a convenience method that calls {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages`. \n", "It follows the same interface as [Teams](./teams.ipynb) and returns a {py:class}`~autogen_agentchat.base.TaskResult` object." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Streaming Messages\n", "\n", "We can also stream each message as it is generated by the agent by using the\n", "{py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages_stream` method,\n", "and use {py:class}`~autogen_agentchat.ui.Console` to print the messages\n", "as they appear to the console." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "---------- assistant ----------\n", "[FunctionCall(id='call_fSp5iTGVm2FKw5NIvfECSqNd', arguments='{\"query\":\"AutoGen information\"}', name='web_search')]\n", "[Prompt tokens: 61, Completion tokens: 16]\n", "---------- assistant ----------\n", "[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_fSp5iTGVm2FKw5NIvfECSqNd')]\n", "---------- assistant ----------\n", "AutoGen is a programming framework designed for building multi-agent applications. 
If you need more detailed information or specific aspects about AutoGen, feel free to ask!\n", "[Prompt tokens: 93, Completion tokens: 32]\n", "---------- Summary ----------\n", "Number of inner messages: 2\n", "Total prompt tokens: 154\n", "Total completion tokens: 48\n", "Duration: 4.30 seconds\n" ] } ], "source": [ "async def assistant_run_stream() -> None:\n", " # Option 1: read each message from the stream (as shown in the previous example).\n", " # async for message in agent.on_messages_stream(\n", " # [TextMessage(content=\"Find information on AutoGen\", source=\"user\")],\n", " # cancellation_token=CancellationToken(),\n", " # ):\n", " # print(message)\n", "\n", " # Option 2: use Console to print all messages as they appear.\n", " await Console(\n", " agent.on_messages_stream(\n", " [TextMessage(content=\"Find information on AutoGen\", source=\"user\")],\n", " cancellation_token=CancellationToken(),\n", " )\n", " )\n", "\n", "\n", "# Use asyncio.run(assistant_run_stream()) when running in a script.\n", "await assistant_run_stream()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages_stream` method\n", "returns an asynchronous generator that yields each inner message generated by the agent,\n", "with the final item being the response message in the {py:attr}`~autogen_agentchat.base.Response.chat_message` attribute.\n", "\n", "From the messages, you can observe that the assistant agent utilized the `web_search` tool to\n", "gather information and responded based on the search results.\n", "\n", "You can also use {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run_stream` to get the same streaming behavior as {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`. It follows the same interface as [Teams](./teams.ipynb)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using Tools\n", "\n", "Large Language Models (LLMs) are typically limited to generating text or code responses. \n", "However, many complex tasks benefit from the ability to use external tools that perform specific actions,\n", "such as fetching data from APIs or databases.\n", "\n", "To address this limitation, modern LLMs can now accept a list of available tool schemas \n", "(descriptions of tools and their arguments) and generate a tool call message. 
\n", "This capability is known as **Tool Calling** or **Function Calling** and \n", "is becoming a popular pattern in building intelligent agent-based applications.\n", "Refer to the documentation from [OpenAI](https://platform.openai.com/docs/guides/function-calling) \n", "and [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/tool-use) for more information about tool calling in LLMs.\n", "\n", "In AgentChat, the {py:class}`~autogen_agentchat.agents.AssistantAgent` can use tools to perform specific actions.\n", "The `web_search` tool is one such tool that allows the assistant agent to search the web for information.\n", "A custom tool can be a Python function or a subclass of the {py:class}`~autogen_core.tools.BaseTool`.\n", "\n", "By default, when {py:class}`~autogen_agentchat.agents.AssistantAgent` executes a tool,\n", "it will return the tool's output as a string in {py:class}`~autogen_agentchat.messages.ToolCallSummaryMessage` in its response.\n", "If your tool does not return a well-formed string in natural language, you\n", "can add a reflection step to have the model summarize the tool's output,\n", "by setting the `reflect_on_tool_use=True` parameter in the {py:class}`~autogen_agentchat.agents.AssistantAgent` constructor." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Langchain Tools\n", "\n", "In addition to custom tools, you can also use tools from the Langchain library\n", "by wrapping them in {py:class}`~autogen_ext.tools.langchain.LangChainToolAdapter`." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "---------- assistant ----------\n", "[FunctionCall(id='call_BEYRkf53nBS1G2uG60wHP0zf', arguments='{\"query\":\"df[\\'Age\\'].mean()\"}', name='python_repl_ast')]\n", "[Prompt tokens: 111, Completion tokens: 22]\n", "---------- assistant ----------\n", "[FunctionExecutionResult(content='29.69911764705882', call_id='call_BEYRkf53nBS1G2uG60wHP0zf')]\n", "---------- assistant ----------\n", "29.69911764705882\n", "---------- Summary ----------\n", "Number of inner messages: 2\n", "Total prompt tokens: 111\n", "Total completion tokens: 22\n", "Duration: 0.62 seconds\n" ] }, { "data": { "text/plain": [ "Response(chat_message=ToolCallSummaryMessage(source='assistant', models_usage=None, content='29.69911764705882', type='ToolCallSummaryMessage'), inner_messages=[ToolCallRequestEvent(source='assistant', models_usage=RequestUsage(prompt_tokens=111, completion_tokens=22), content=[FunctionCall(id='call_BEYRkf53nBS1G2uG60wHP0zf', arguments='{\"query\":\"df[\\'Age\\'].mean()\"}', name='python_repl_ast')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='29.69911764705882', call_id='call_BEYRkf53nBS1G2uG60wHP0zf')], type='ToolCallExecutionEvent')])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import pandas as pd\n", "from autogen_ext.tools.langchain import LangChainToolAdapter\n", "from langchain_experimental.tools.python.tool import PythonAstREPLTool\n", "\n", "df = pd.read_csv(\"https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv\")\n", "tool = LangChainToolAdapter(PythonAstREPLTool(locals={\"df\": df}))\n", "model_client = OpenAIChatCompletionClient(model=\"gpt-4o\")\n", "agent = AssistantAgent(\n", " \"assistant\", tools=[tool], model_client=model_client, system_message=\"Use the `df` variable to access the 
dataset.\"\n", ")\n", "await Console(\n", " agent.on_messages_stream(\n", " [TextMessage(content=\"What's the average age of the passengers?\", source=\"user\")], CancellationToken()\n", " )\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Parallel Tool Calls\n", "\n", "Some models support parallel tool calls, which can be useful for tasks that require multiple tools to be called simultaneously.\n", "By default, if the model client produces multiple tool calls, {py:class}`~autogen_agentchat.agents.AssistantAgent`\n", "will call the tools in parallel.\n", "\n", "You may want to disable parallel tool calls when the tools have side effects that may interfere with each other, or,\n", "when agent behavior needs to be consistent across different models.\n", "This should be done at the model client level.\n", "\n", "For {py:class}`~autogen_ext.models.openai.OpenAIChatCompletionClient` and {py:class}`~autogen_ext.models.openai.AzureOpenAIChatCompletionClient`,\n", "set `parallel_tool_calls=False` to disable parallel tool calls." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_client_no_parallel_tool_call = OpenAIChatCompletionClient(\n", " model=\"gpt-4o\",\n", " parallel_tool_calls=False, # type: ignore\n", ")\n", "agent_no_parallel_tool_call = AssistantAgent(\n", " name=\"assistant\",\n", " model_client=model_client_no_parallel_tool_call,\n", " tools=[web_search],\n", " system_message=\"Use tools to solve tasks.\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Structured Output\n", "\n", "Structured output allows models to return structured JSON text with pre-defined schema\n", "provided by the application. Different from JSON-mode, the schema can be provided\n", "as a [Pydantic BaseModel](https://docs.pydantic.dev/latest/concepts/models/)\n", "class, which can also be used to validate the output. \n", "\n", "```{note}\n", "Structured output is only available for models that support it. It also\n", "requires the model client to support structured output as well.\n", "Currently, the {py:class}`~autogen_ext.models.openai.OpenAIChatCompletionClient`\n", "and {py:class}`~autogen_ext.models.openai.AzureOpenAIChatCompletionClient`\n", "support structured output.\n", "```\n", "\n", "Structured output is also useful for incorporating Chain-of-Thought\n", "reasoning in the agent's responses.\n", "See the example below for how to use structured output with the assistant agent." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "---------- user ----------\n", "I am happy.\n", "---------- assistant ----------\n", "{\"thoughts\":\"The user explicitly states that they are happy.\",\"response\":\"happy\"}\n" ] }, { "data": { "text/plain": [ "TaskResult(messages=[TextMessage(source='user', models_usage=None, content='I am happy.', type='TextMessage'), TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=89, completion_tokens=18), content='{\"thoughts\":\"The user explicitly states that they are happy.\",\"response\":\"happy\"}', type='TextMessage')], stop_reason=None)" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Literal\n", "\n", "from pydantic import BaseModel\n", "\n", "\n", "# The response format for the agent as a Pydantic base model.\n", "class AgentResponse(BaseModel):\n", " thoughts: str\n", " response: Literal[\"happy\", \"sad\", \"neutral\"]\n", "\n", "\n", "# Create an agent that uses the OpenAI GPT-4o model with the custom response format.\n", "model_client = OpenAIChatCompletionClient(\n", " model=\"gpt-4o\",\n", " response_format=AgentResponse, # type: ignore\n", ")\n", "agent = AssistantAgent(\n", " \"assistant\",\n", " model_client=model_client,\n", " system_message=\"Categorize the input as happy, sad, or neutral following the JSON format.\",\n", ")\n", "\n", "await Console(agent.run_stream(task=\"I am happy.\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Streaming Tokens\n", "\n", "You can stream the tokens generated by the model client by setting `model_client_stream=True`.\n", "This will cause the agent to yield {py:class}`~autogen_agentchat.messages.ModelClientStreamingChunkEvent` messages\n", "in {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream` and {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run_stream`.\n", "\n", "The underlying model API must support streaming tokens for this to work.\n", "Please check with your model provider to see if this is supported." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "source='assistant' models_usage=None content='Two' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' cities' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' South' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' America' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' are' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' Buenos' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' Aires' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' Argentina' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' and' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' São' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' Paulo' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' Brazil' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content='.' type='ModelClientStreamingChunkEvent'\n", "Response(chat_message=TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0), content='Two cities in South America are Buenos Aires in Argentina and São Paulo in Brazil.', type='TextMessage'), inner_messages=[])\n" ] } ], "source": [ "model_client = OpenAIChatCompletionClient(model=\"gpt-4o\")\n", "\n", "streaming_assistant = AssistantAgent(\n", " name=\"assistant\",\n", " model_client=model_client,\n", " system_message=\"You are a helpful assistant.\",\n", " model_client_stream=True, # Enable streaming tokens.\n", ")\n", "\n", "# Use an async function and asyncio.run() in a script.\n", "async for message in streaming_assistant.on_messages_stream( # type: ignore\n", " [TextMessage(content=\"Name two cities in South America\", source=\"user\")],\n", " cancellation_token=CancellationToken(),\n", "):\n", " print(message)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see the streaming chunks in the output above.\n", "The chunks are generated by the model client and are yielded by the agent as they are received.\n", "The final response, the concatenation of all the chunks, is yielded right after the last chunk.\n", "\n", "Similarly, {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run_stream` will also yield the same streaming chunks,\n", "followed by a full text message right after the last chunk." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "source='user' models_usage=None content='Name two cities in North America.' 
type='TextMessage'\n", "source='assistant' models_usage=None content='Two' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' cities' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' North' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' America' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' are' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' New' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' York' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' City' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' the' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' United' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' States' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' and' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' Toronto' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content=' Canada' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=None content='.' type='ModelClientStreamingChunkEvent'\n", "source='assistant' models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0) content='Two cities in North America are New York City in the United States and Toronto in Canada.' type='TextMessage'\n", "TaskResult(messages=[TextMessage(source='user', models_usage=None, content='Name two cities in North America.', type='TextMessage'), TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0), content='Two cities in North America are New York City in the United States and Toronto in Canada.', type='TextMessage')], stop_reason=None)\n" ] } ], "source": [ "async for message in streaming_assistant.run_stream(task=\"Name two cities in North America.\"): # type: ignore\n", " print(message)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using Model Context\n", "\n", "{py:class}`~autogen_agentchat.agents.AssistantAgent` has a `model_context`\n", "parameter that can be used to pass in a {py:class}`~autogen_core.model_context.ChatCompletionContext`\n", "object. This allows the agent to use different model contexts, such as\n", "{py:class}`~autogen_core.model_context.BufferedChatCompletionContext` to\n", "limit the context sent to the model.\n", "\n", "By default, {py:class}`~autogen_agentchat.agents.AssistantAgent` uses\n", "the {py:class}`~autogen_core.model_context.UnboundedChatCompletionContext`\n", "which sends the full conversation history to the model. To limit the context\n", "to the last `n` messages, you can use the {py:class}`~autogen_core.model_context.BufferedChatCompletionContext`." 
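, "\n", "To see the buffering behavior in isolation, the next cell exercises {py:class}`~autogen_core.model_context.BufferedChatCompletionContext` directly, outside of an agent (the buffer size and messages are illustrative):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from autogen_core.model_context import BufferedChatCompletionContext\n", "from autogen_core.models import AssistantMessage, UserMessage\n", "\n", "# A buffered context that keeps only the last 2 messages.\n", "context = BufferedChatCompletionContext(buffer_size=2)\n", "await context.add_message(UserMessage(content=\"Hello!\", source=\"user\"))\n", "await context.add_message(AssistantMessage(content=\"Hi, how can I help?\", source=\"assistant\"))\n", "await context.add_message(UserMessage(content=\"Name two cities.\", source=\"user\"))\n", "\n", "# Only the last 2 messages are returned for sending to the model.\n", "print(await context.get_messages())"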
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from autogen_core.model_context import BufferedChatCompletionContext\n", "\n", "# Create an agent that uses only the last 5 messages in the context to generate responses.\n", "agent = AssistantAgent(\n", " name=\"assistant\",\n", " model_client=model_client,\n", " tools=[web_search],\n", " system_message=\"Use tools to solve tasks.\",\n", " model_context=BufferedChatCompletionContext(buffer_size=5), # Only use the last 5 messages in the context.\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Other Preset Agents\n", "\n", "The following preset agents are available:\n", "\n", "- {py:class}`~autogen_agentchat.agents.UserProxyAgent`: An agent that takes user input returns it as responses.\n", "- {py:class}`~autogen_agentchat.agents.CodeExecutorAgent`: An agent that can execute code.\n", "- {py:class}`~autogen_ext.agents.openai.OpenAIAssistantAgent`: An agent that is backed by an OpenAI Assistant, with ability to use custom tools.\n", "- {py:class}`~autogen_ext.agents.web_surfer.MultimodalWebSurfer`: A multi-modal agent that can search the web and visit web pages for information.\n", "- {py:class}`~autogen_ext.agents.file_surfer.FileSurfer`: An agent that can search and browse local files for information.\n", "- {py:class}`~autogen_ext.agents.video_surfer.VideoSurfer`: An agent that can watch videos for information." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Next Step\n", "\n", "Having explored the usage of the {py:class}`~autogen_agentchat.agents.AssistantAgent`, we can now proceed to the next section to learn about the teams feature in AgentChat.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<!-- ## CodingAssistantAgent\n", "\n", "Generates responses (text and code) using an LLM upon receipt of a message. It takes a `system_message` argument that defines or sets the tone for how the agent's LLM should respond. \n", "\n", "```python\n", "\n", "writing_assistant_agent = CodingAssistantAgent(\n", " name=\"writing_assistant_agent\",\n", " system_message=\"You are a helpful assistant that solve tasks by generating text responses and code.\",\n", " model_client=model_client,\n", ")\n", "`\n", "\n", "We can explore or test the behavior of the agent by sending a message to it using the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method. \n", "\n", "```python\n", "result = await writing_assistant_agent.on_messages(\n", " messages=[\n", " TextMessage(content=\"What is the weather right now in France?\", source=\"user\"),\n", " ],\n", " cancellation_token=CancellationToken(),\n", ")\n", "print(result) -->" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.7" } }, "nbformat": 4, "nbformat_minor": 2 }