{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# User Approval for Tool Execution using Intervention Handler\n", "\n", "This cookbook shows how to intercept the tool execution using\n", "an intervention hanlder, and prompt the user for permission to execute the tool." ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "from dataclasses import dataclass\n", "from typing import Any, List\n", "\n", "from autogen_core.application import SingleThreadedAgentRuntime\n", "from autogen_core.base import AgentId, AgentType, MessageContext\n", "from autogen_core.base.intervention import DefaultInterventionHandler, DropMessage\n", "from autogen_core.components import FunctionCall, RoutedAgent, message_handler\n", "from autogen_core.components.models import (\n", " ChatCompletionClient,\n", " LLMMessage,\n", " SystemMessage,\n", " UserMessage,\n", ")\n", "from autogen_core.components.tool_agent import ToolAgent, ToolException, tool_agent_caller_loop\n", "from autogen_core.components.tools import PythonCodeExecutionTool, ToolSchema\n", "from autogen_ext.code_executors import DockerCommandLineCodeExecutor\n", "from autogen_ext.models import OpenAIChatCompletionClient" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's define a simple message type that carries a string content." ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "@dataclass\n", "class Message:\n", " content: str" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a simple tool use agent that is capable of using tools through a\n", "{py:class}`~autogen_core.components.tool_agent.ToolAgent`." ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "class ToolUseAgent(RoutedAgent):\n", " \"\"\"An agent that uses tools to perform tasks. It executes the tools\n", " by itself by sending the tool execution task to a ToolAgent.\"\"\"\n", "\n", " def __init__(\n", " self,\n", " description: str,\n", " system_messages: List[SystemMessage],\n", " model_client: ChatCompletionClient,\n", " tool_schema: List[ToolSchema],\n", " tool_agent_type: AgentType,\n", " ) -> None:\n", " super().__init__(description)\n", " self._model_client = model_client\n", " self._system_messages = system_messages\n", " self._tool_schema = tool_schema\n", " self._tool_agent_id = AgentId(type=tool_agent_type, key=self.id.key)\n", "\n", " @message_handler\n", " async def handle_user_message(self, message: Message, ctx: MessageContext) -> Message:\n", " \"\"\"Handle a user message, execute the model and tools, and returns the response.\"\"\"\n", " session: List[LLMMessage] = [UserMessage(content=message.content, source=\"User\")]\n", " # Use the tool agent to execute the tools, and get the output messages.\n", " output_messages = await tool_agent_caller_loop(\n", " self,\n", " tool_agent_id=self._tool_agent_id,\n", " model_client=self._model_client,\n", " input_messages=session,\n", " tool_schema=self._tool_schema,\n", " cancellation_token=ctx.cancellation_token,\n", " )\n", " # Extract the final response from the output messages.\n", " final_response = output_messages[-1].content\n", " assert isinstance(final_response, str)\n", " return Message(content=final_response)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The tool use agent sends tool call requests to the tool agent to execute tools,\n", "so we can intercept the messages sent by the tool use agent to the tool agent\n", "to prompt the user for permission to execute the tool.\n", "\n", "Let's create an intervention handler that intercepts the messages and prompts\n", "user for before allowing the tool execution." ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [], "source": [ "class ToolInterventionHandler(DefaultInterventionHandler):\n", " async def on_send(self, message: Any, *, sender: AgentId | None, recipient: AgentId) -> Any | type[DropMessage]:\n", " if isinstance(message, FunctionCall):\n", " # Request user prompt for tool execution.\n", " user_input = input(\n", " f\"Function call: {message.name}\\nArguments: {message.arguments}\\nDo you want to execute the tool? (y/n): \"\n", " )\n", " if user_input.strip().lower() != \"y\":\n", " raise ToolException(content=\"User denied tool execution.\", call_id=message.id)\n", " return message" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can create a runtime with the intervention handler registered." ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "# Create the runtime with the intervention handler.\n", "runtime = SingleThreadedAgentRuntime(intervention_handlers=[ToolInterventionHandler()])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, we will use a tool for Python code execution.\n", "First, we create a Docker-based command-line code executor\n", "using {py:class}`~autogen_core.components.code_executor.docker_executorCommandLineCodeExecutor`,\n", "and then use it to instantiate a built-in Python code execution tool\n", "{py:class}`~autogen_core.components.tools.PythonCodeExecutionTool`\n", "that runs code in a Docker container." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "# Create the docker executor for the Python code execution tool.\n", "docker_executor = DockerCommandLineCodeExecutor()\n", "\n", "# Create the Python code execution tool.\n", "python_tool = PythonCodeExecutionTool(executor=docker_executor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Register the agents with tools and tool schema." ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AgentType(type='tool_enabled_agent')" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Register agents.\n", "tool_agent_type = await ToolAgent.register(\n", " runtime,\n", " \"tool_executor_agent\",\n", " lambda: ToolAgent(\n", " description=\"Tool Executor Agent\",\n", " tools=[python_tool],\n", " ),\n", ")\n", "await ToolUseAgent.register(\n", " runtime,\n", " \"tool_enabled_agent\",\n", " lambda: ToolUseAgent(\n", " description=\"Tool Use Agent\",\n", " system_messages=[SystemMessage(\"You are a helpful AI Assistant. Use your tools to solve problems.\")],\n", " model_client=OpenAIChatCompletionClient(model=\"gpt-4o-mini\"),\n", " tool_schema=[python_tool.schema],\n", " tool_agent_type=tool_agent_type,\n", " ),\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the agents by starting the runtime and sending a message to the tool use agent.\n", "The intervention handler will prompt you for permission to execute the tool." ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The output of the code is: **Hello, World!**\n" ] } ], "source": [ "# Start the runtime and the docker executor.\n", "await docker_executor.start()\n", "runtime.start()\n", "\n", "# Send a task to the tool user.\n", "response = await runtime.send_message(\n", " Message(\"Run the following Python code: print('Hello, World!')\"), AgentId(\"tool_enabled_agent\", \"default\")\n", ")\n", "print(response.content)\n", "\n", "# Stop the runtime and the docker executor.\n", "await runtime.stop()\n", "await docker_executor.stop()" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.9" } }, "nbformat": 4, "nbformat_minor": 2 }