Tracing with Custom OpenTelemetry Collector#
In certain scenarios you might want to use your own OpenTelemetry Collector and keep your dependencies minimal.
In that case you can avoid the dependency on promptflow-devkit, which provides the default collector from promptflow, and depend only on promptflow-tracing.
Learning Objectives - Upon completing this tutorial, you should be able to:
Trace LLM (OpenAI) calls using a custom OpenTelemetry Collector.
0. Install dependent packages#
%%capture --no-stderr
%pip install -r ./requirements.txt
1. Set up an OpenTelemetry collector#
Implement a simple collector that prints the traces to stdout.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import (
    ExportTraceServiceRequest,
)


class OTLPCollector(BaseHTTPRequestHandler):
    def do_POST(self):
        # read the OTLP/HTTP protobuf payload from the request body
        content_length = int(self.headers["Content-Length"])
        post_data = self.rfile.read(content_length)

        # parse the payload into an ExportTraceServiceRequest and print it
        traces_request = ExportTraceServiceRequest()
        traces_request.ParseFromString(post_data)
        print("Received a POST request with data:")
        print(traces_request)

        # acknowledge receipt to the exporter
        self.send_response(200, "Traces received")
        self.end_headers()
        self.wfile.write(b"Data received and printed to stdout.\n")


def run_server(port: int):
    server_address = ("", port)
    httpd = HTTPServer(server_address, OTLPCollector)
    httpd.serve_forever()


def start_server(port: int):
    # run the collector on a daemon thread so the notebook stays responsive
    server_thread = threading.Thread(target=run_server, args=(port,))
    server_thread.daemon = True
    server_thread.start()
    print(f"Server started on port {port}. Access http://localhost:{port}/")
    return server_thread
# invoke the collector service, serving on the default OTLP/HTTP port
start_server(port=4318)
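Optionally, verify that the collector is reachable before wiring up the exporter. The snippet below is a minimal sketch (not part of this sample) that posts an empty ExportTraceServiceRequest to the collector's OTLP/HTTP traces endpoint using only the standard library:
# sanity check: send an empty OTLP trace export request to the collector
import urllib.request

from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import (
    ExportTraceServiceRequest,
)

empty_request = ExportTraceServiceRequest().SerializeToString()
req = urllib.request.Request(
    "http://localhost:4318/v1/traces",
    data=empty_request,
    headers={"Content-Type": "application/x-protobuf"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    # expect HTTP 200 and the acknowledgement written by the handler
    print(resp.status, resp.read().decode())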
2. Trace your application with promptflow tracing#
Assume we already have a Python function that calls the OpenAI API.
from llm import my_llm_tool
deployment_name = "gpt-35-turbo-16k"
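The llm module ships alongside this notebook. As a rough sketch of what my_llm_tool could look like (the environment variable names, client, and API version below are assumptions, not the sample's exact code), it wraps an OpenAI chat completion call in a traced function:
# a minimal sketch of llm.py -- details are assumptions, not the sample's exact code
import os

from openai import AzureOpenAI
from promptflow.tracing import trace


@trace
def my_llm_tool(prompt: str, deployment_name: str) -> str:
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )
    response = client.chat.completions.create(
        model=deployment_name,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content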
Call start_trace(), and configure the OTLP exporter to point to the above collector.
from promptflow.tracing import start_trace

# enable promptflow tracing for this application
start_trace()

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# export spans to the collector via OTLP over HTTP, batched in the background
tracer_provider = trace.get_tracer_provider()
otlp_span_exporter = OTLPSpanExporter()
tracer_provider.add_span_processor(BatchSpanProcessor(otlp_span_exporter))
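With no arguments, OTLPSpanExporter sends spans to the default OTLP/HTTP endpoint http://localhost:4318/v1/traces, which is the collector started above. If your collector listens on a different host or port, pass the endpoint explicitly; the URL below is only a placeholder:
# alternative: point the exporter at a non-default collector endpoint (placeholder URL)
otlp_span_exporter = OTLPSpanExporter(endpoint="http://my-collector-host:4318/v1/traces")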
View the traces printed to stdout by the collector.
result = my_llm_tool(
prompt="Write a simple Hello, world! python program that displays the greeting message. Output code only.",
deployment_name=deployment_name,
)
result
# view the traces under this cell
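BatchSpanProcessor exports spans on a background schedule, so the collector may print them with a short delay. To see them immediately, you can force a flush on the tracer provider configured above:
# optional: force any buffered spans to be exported to the collector right away
tracer_provider.force_flush()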