{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Managing State \n", "\n", "So far, we have discussed how to build components in a multi-agent application - agents, teams, termination conditions. In many cases, it is useful to save the state of these components to disk and load them back later. This is particularly useful in a web application where stateless endpoints respond to requests and need to load the state of the application from persistent storage.\n", "\n", "In this notebook, we will discuss how to save and load the state of agents, teams, and termination conditions. \n", " \n", "\n", "## Saving and Loading Agents\n", "\n", "We can get the state of an agent by calling {py:meth}`~autogen_agentchat.agents.AssistantAgent.save_state` method on \n", "an {py:class}`~autogen_agentchat.agents.AssistantAgent`. " ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "In Tanganyika's embrace so wide and deep, \n", "Ancient waters cradle secrets they keep, \n", "Echoes of time where horizons sleep. \n" ] } ], "source": [ "from autogen_agentchat.agents import AssistantAgent\n", "from autogen_agentchat.conditions import MaxMessageTermination\n", "from autogen_agentchat.messages import TextMessage\n", "from autogen_agentchat.teams import RoundRobinGroupChat\n", "from autogen_agentchat.ui import Console\n", "from autogen_core import CancellationToken\n", "from autogen_ext.models.openai import OpenAIChatCompletionClient\n", "\n", "assistant_agent = AssistantAgent(\n", " name=\"assistant_agent\",\n", " system_message=\"You are a helpful assistant\",\n", " model_client=OpenAIChatCompletionClient(\n", " model=\"gpt-4o-2024-08-06\",\n", " # api_key=\"YOUR_API_KEY\",\n", " ),\n", ")\n", "\n", "# Use asyncio.run(...) when running in a script.\n", "response = await assistant_agent.on_messages(\n", " [TextMessage(content=\"Write a 3 line poem on lake tangayika\", source=\"user\")], CancellationToken()\n", ")\n", "print(response.chat_message.content)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'type': 'AssistantAgentState', 'version': '1.0.0', 'llm_messages': [{'content': 'Write a 3 line poem on lake tangayika', 'source': 'user', 'type': 'UserMessage'}, {'content': \"In Tanganyika's embrace so wide and deep, \\nAncient waters cradle secrets they keep, \\nEchoes of time where horizons sleep. \", 'source': 'assistant_agent', 'type': 'AssistantMessage'}]}\n" ] } ], "source": [ "agent_state = await assistant_agent.save_state()\n", "print(agent_state)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The last line of the poem was: \"Echoes of time where horizons sleep.\"\n" ] } ], "source": [ "new_assistant_agent = AssistantAgent(\n", " name=\"assistant_agent\",\n", " system_message=\"You are a helpful assistant\",\n", " model_client=OpenAIChatCompletionClient(\n", " model=\"gpt-4o-2024-08-06\",\n", " ),\n", ")\n", "await new_assistant_agent.load_state(agent_state)\n", "\n", "# Use asyncio.run(...) 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Saving and Loading Teams \n", "\n", "We can get the state of a team by calling the `save_state` method on the team and load it back by calling the `load_state` method. \n", "\n", "When we call `save_state` on a team, it saves the state of all the agents in the team.\n", "\n", "We will begin by creating a simple {py:class}`~autogen_agentchat.teams.RoundRobinGroupChat` team with a single agent and asking it to write a poem. " ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "---------- user ----------\n", "Write a beautiful poem 3-line about lake tangayika\n", "---------- assistant_agent ----------\n", "In Tanganyika's gleam, beneath the azure skies, \n", "Whispers of ancient waters, in tranquil guise, \n", "Nature's mirror, where dreams and serenity lie.\n", "[Prompt tokens: 29, Completion tokens: 34]\n", "---------- Summary ----------\n", "Number of messages: 2\n", "Finish reason: Maximum number of messages 2 reached, current message count: 2\n", "Total prompt tokens: 29\n", "Total completion tokens: 34\n", "Duration: 0.71 seconds\n" ] } ], "source": [ "# Define a team.\n", "assistant_agent = AssistantAgent(\n", "    name=\"assistant_agent\",\n", "    system_message=\"You are a helpful assistant\",\n", "    model_client=OpenAIChatCompletionClient(\n", "        model=\"gpt-4o-2024-08-06\",\n", "    ),\n", ")\n", "agent_team = RoundRobinGroupChat([assistant_agent], termination_condition=MaxMessageTermination(max_messages=2))\n", "\n", "# Run the team and stream messages to the console.\n", "stream = agent_team.run_stream(task=\"Write a beautiful poem 3-line about lake tangayika\")\n", "\n", "# Use asyncio.run(...) when running in a script.\n", "await Console(stream)\n", "\n", "# Save the state of the agent team.\n", "team_state = await agent_team.save_state()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we reset the team (simulating a fresh instantiation of the team) and ask the question `What was the last line of the poem you wrote?`, we see that the team is unable to answer, as it has no reference to the previous run." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "---------- user ----------\n", "What was the last line of the poem you wrote?\n", "---------- assistant_agent ----------\n", "I'm sorry, but I am unable to recall or access previous interactions, including any specific poem I may have composed in our past conversations. 
If you like, I can write a new poem for you.\n", "[Prompt tokens: 28, Completion tokens: 40]\n", "---------- Summary ----------\n", "Number of messages: 2\n", "Finish reason: Maximum number of messages 2 reached, current message count: 2\n", "Total prompt tokens: 28\n", "Total completion tokens: 40\n", "Duration: 0.70 seconds\n" ] }, { "data": { "text/plain": [ "TaskResult(messages=[TextMessage(source='user', models_usage=None, content='What was the last line of the poem you wrote?', type='TextMessage'), TextMessage(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=28, completion_tokens=40), content=\"I'm sorry, but I am unable to recall or access previous interactions, including any specific poem I may have composed in our past conversations. If you like, I can write a new poem for you.\", type='TextMessage')], stop_reason='Maximum number of messages 2 reached, current message count: 2')" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await agent_team.reset()\n", "stream = agent_team.run_stream(task=\"What was the last line of the poem you wrote?\")\n", "await Console(stream)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we load the state of the team and ask the same question. We see that the team is able to accurately return the last line of the poem it wrote." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'type': 'TeamState', 'version': '1.0.0', 'agent_states': {'group_chat_manager/a55364ad-86fd-46ab-9449-dcb5260b1e06': {'type': 'RoundRobinManagerState', 'version': '1.0.0', 'message_thread': [{'source': 'user', 'models_usage': None, 'content': 'Write a beautiful poem 3-line about lake tangayika', 'type': 'TextMessage'}, {'source': 'assistant_agent', 'models_usage': {'prompt_tokens': 29, 'completion_tokens': 34}, 'content': \"In Tanganyika's gleam, beneath the azure skies, \\nWhispers of ancient waters, in tranquil guise, \\nNature's mirror, where dreams and serenity lie.\", 'type': 'TextMessage'}], 'current_turn': 0, 'next_speaker_index': 0}, 'collect_output_messages/a55364ad-86fd-46ab-9449-dcb5260b1e06': {}, 'assistant_agent/a55364ad-86fd-46ab-9449-dcb5260b1e06': {'type': 'ChatAgentContainerState', 'version': '1.0.0', 'agent_state': {'type': 'AssistantAgentState', 'version': '1.0.0', 'llm_messages': [{'content': 'Write a beautiful poem 3-line about lake tangayika', 'source': 'user', 'type': 'UserMessage'}, {'content': \"In Tanganyika's gleam, beneath the azure skies, \\nWhispers of ancient waters, in tranquil guise, \\nNature's mirror, where dreams and serenity lie.\", 'source': 'assistant_agent', 'type': 'AssistantMessage'}]}, 'message_buffer': []}}, 'team_id': 'a55364ad-86fd-46ab-9449-dcb5260b1e06'}\n", "---------- user ----------\n", "What was the last line of the poem you wrote?\n", "---------- assistant_agent ----------\n", "The last line of the poem I wrote is: \n", "\"Nature's mirror, where dreams and serenity lie.\"\n", "[Prompt tokens: 86, Completion tokens: 22]\n", "---------- Summary ----------\n", "Number of messages: 2\n", "Finish reason: Maximum number of messages 2 reached, current message count: 2\n", "Total prompt tokens: 86\n", "Total completion tokens: 22\n", "Duration: 0.96 seconds\n" ] }, { "data": { "text/plain": [ "TaskResult(messages=[TextMessage(source='user', models_usage=None, content='What was the last line of the poem you wrote?', type='TextMessage'), 
TextMessage(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=86, completion_tokens=22), content='The last line of the poem I wrote is: \\n\"Nature\\'s mirror, where dreams and serenity lie.\"', type='TextMessage')], stop_reason='Maximum number of messages 2 reached, current message count: 2')" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(team_state)\n", "\n", "# Load team state.\n", "await agent_team.load_state(team_state)\n", "stream = agent_team.run_stream(task=\"What was the last line of the poem you wrote?\")\n", "await Console(stream)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Persisting State (File or Database)\n", "\n", "In many cases, we may want to persist the state of the team to disk (or a database) and load it back later. The state is a dictionary that can be serialized to a file or written to a database." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "---------- user ----------\n", "What was the last line of the poem you wrote?\n", "---------- assistant_agent ----------\n", "The last line of the poem I wrote is: \n", "\"Nature's mirror, where dreams and serenity lie.\"\n", "[Prompt tokens: 86, Completion tokens: 22]\n", "---------- Summary ----------\n", "Number of messages: 2\n", "Finish reason: Maximum number of messages 2 reached, current message count: 2\n", "Total prompt tokens: 86\n", "Total completion tokens: 22\n", "Duration: 0.72 seconds\n" ] }, { "data": { "text/plain": [ "TaskResult(messages=[TextMessage(source='user', models_usage=None, content='What was the last line of the poem you wrote?', type='TextMessage'), TextMessage(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=86, completion_tokens=22), content='The last line of the poem I wrote is: \\n\"Nature\\'s mirror, where dreams and serenity lie.\"', type='TextMessage')], stop_reason='Maximum number of messages 2 reached, current message count: 2')" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import json\n", "\n", "# Save the team state to disk.\n", "with open(\"coding/team_state.json\", \"w\") as f:\n", "    json.dump(team_state, f)\n", "\n", "# Load the team state from disk.\n", "with open(\"coding/team_state.json\", \"r\") as f:\n", "    team_state = json.load(f)\n", "\n", "new_agent_team = RoundRobinGroupChat([assistant_agent], termination_condition=MaxMessageTermination(max_messages=2))\n", "await new_agent_team.load_state(team_state)\n", "stream = new_agent_team.run_stream(task=\"What was the last line of the poem you wrote?\")\n", "await Console(stream)" ] }
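, { "cell_type": "markdown", "metadata": {}, "source": [ "Because the saved state is a plain dictionary, the same pattern works with a database: serialize the dictionary (for example with `json.dumps`) and store it in whatever datastore you already use. Below is a minimal sketch using the standard-library `sqlite3` module; the database file name, table name, and row key are illustrative choices, not part of the AgentChat API.\n", "\n", "```python\n", "import json\n", "import sqlite3\n", "\n", "# Store the serialized team state in a SQLite table (illustrative schema).\n", "conn = sqlite3.connect(\"team_state.db\")\n", "conn.execute(\"CREATE TABLE IF NOT EXISTS team_state (id TEXT PRIMARY KEY, state TEXT)\")\n", "conn.execute(\n", "    \"INSERT OR REPLACE INTO team_state (id, state) VALUES (?, ?)\",\n", "    (\"default_team\", json.dumps(team_state)),\n", ")\n", "conn.commit()\n", "\n", "# Later, for example in another process: read the row back and restore the team.\n", "row = conn.execute(\"SELECT state FROM team_state WHERE id = ?\", (\"default_team\",)).fetchone()\n", "restored_state = json.loads(row[0])\n", "conn.close()\n", "\n", "restored_team = RoundRobinGroupChat([assistant_agent], termination_condition=MaxMessageTermination(max_messages=2))\n", "await restored_team.load_state(restored_state)\n", "```\n", "\n", "Any other database works the same way, as long as the serialized state can be stored and retrieved as a string or JSON blob." ] } ], "metadata": { "kernelspec": { "display_name": "agnext", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.9" } }, "nbformat": 4, "nbformat_minor": 2 }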