Tracing with AutoGen#

AutoGen offers conversable agents powered by LLMs, tools, or humans, which can perform tasks collectively via automated chat. The framework allows tool use and human participation through multi-agent conversation; see the AutoGen documentation for details about this feature.

This notebook is adapted from the AutoGen agent chat example.

Learning Objectives - Upon completing this tutorial, you should be able to:

  • Trace LLM (OpenAI) calls and visualize the trace of your application.

Requirements#

AutoGen requires Python>=3.8. To run this notebook example, install the required dependencies:

%%capture --no-stderr
%pip install -r ./requirements.txt
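If you don't have the requirements file at hand, installing the two core packages directly should also work (an assumption; the pinned requirements.txt may list additional dependencies):

%%capture --no-stderr
%pip install promptflow pyautogen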

Set your API endpoint#

You can create the config file named OAI_CONFIG_LIST.json from the example file OAI_CONFIG_LIST.json.example.
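For reference, the file holds a JSON list of model configurations. A minimal sketch, assuming a single OpenAI model and a placeholder API key:

[
    {
        "model": "gpt-3.5-turbo",
        "api_key": "<your-openai-api-key>"
    }
]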

The code below uses the config_list_from_json function to load a list of configurations from an environment variable or a JSON file.

import autogen

# please ensure you have a json config file
env_or_file = "OAI_CONFIG_LIST.json"

# filter the configs by model (you can filter by other keys as well); only models matching the filter condition are kept in the list

# gpt4
# config_list = autogen.config_list_from_json(
#     env_or_file,
#     filter_dict={
#         "model": ["gpt-4", "gpt-4-0314", "gpt4", "gpt-4-32k", "gpt-4-32k-0314", "gpt-4-32k-v0314"],
#     },
# )

# gpt35
config_list = autogen.config_list_from_json(
    env_or_file,
    filter_dict={
        "model": {
            "gpt-35-turbo",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
        },
    },
)
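config_list_from_json can also read the same JSON from an environment variable instead of a file. A minimal sketch, assuming the variable holds the JSON list itself (placeholder key for illustration):

import json
import os

import autogen

# put the JSON list into an environment variable
os.environ["OAI_CONFIG_LIST"] = json.dumps(
    [{"model": "gpt-3.5-turbo", "api_key": "<your-openai-api-key>"}]
)

# when the name refers to an environment variable, its value is parsed as JSON
config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")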

Construct agents#

import os

# disable Docker-based execution of generated code for this notebook
os.environ["AUTOGEN_USE_DOCKER"] = "False"

# cache_seed enables caching of LLM responses for reproducibility
llm_config = {"config_list": config_list, "cache_seed": 42}
user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    code_execution_config={
        "last_n_messages": 2,
        "work_dir": "groupchat",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    human_input_mode="TERMINATE",
)
coder = autogen.AssistantAgent(
    name="Coder",
    llm_config=llm_config,
)
pm = autogen.AssistantAgent(
    name="Product_manager",
    system_message="Creative in software product ideas.",
    llm_config=llm_config,
)
# put the three agents into a group chat, capped at 12 rounds
groupchat = autogen.GroupChat(agents=[user_proxy, coder, pm], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

Start chat with promptflow trace#

from promptflow.tracing import start_trace

# start a trace session, and print a url for user to check trace
# traces will be collected into below collection name
start_trace(collection="autogen-groupchat")

Open the URL printed in the start_trace output; when you run the code below, you will see new traces appear in the UI.

from opentelemetry import trace
import json


tracer = trace.get_tracer("my_tracer")
# Create a root span
with tracer.start_as_current_span("autogen") as span:
    message = "Find the latest paper about gpt-4 on arxiv and find its potential applications in software."
    user_proxy.initiate_chat(
        manager,
        message=message,
        clear_history=True,
    )
    span.set_attribute("custom", "custom attribute value")
    # it is recommended to store inputs and outputs as events
    span.add_event(
        "promptflow.function.inputs", {"payload": json.dumps(dict(message=message))}
    )
    span.add_event(
        "promptflow.function.output", {"payload": json.dumps(user_proxy.last_message())}
    )
# type "exit" when prompted for human input to terminate the chat

Next steps#

By now you've successfully traced LLM calls in your app using prompt flow.

You can check out more examples:

  • Trace your flow: use the promptflow @trace decorator to structure the tracing of your app and evaluate it with a batch run; a minimal sketch follows below.
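A minimal sketch of the @trace decorator from promptflow.tracing (my_task is a hypothetical function used only for illustration):

from promptflow.tracing import start_trace, trace


@trace
def my_task(text: str) -> str:
    # any function body; promptflow records its inputs and outputs as a span
    return text.upper()


start_trace(collection="trace-your-flow")
my_task("hello promptflow")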