Tracing with LangChain apps#
The tracing capability provided by Prompt flow is built on top of OpenTelemetry, which gives you complete observability over your LLM applications. A rich set of instrumentation packages is already available in the OpenTelemetry ecosystem.
In this example, we will demonstrate how to use the opentelemetry-instrumentation-langchain package provided by Traceloop to instrument LangChain apps.
Learning Objectives - Upon completing this tutorial, you should be able to:

Trace LangChain applications and visualize the trace of your application in prompt flow.
Requirements#
To run this notebook example, please install the required dependencies:
%%capture --no-stderr
%pip install -r ./requirements.txt
Start tracing LangChain using promptflow#
Start a trace using promptflow.start_trace, then click the printed URL to view the trace UI.
from promptflow.tracing import start_trace
# start a trace session, and print a url for user to check trace
start_trace()
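If you run several examples, it can help to group their traces. As a minimal sketch, assuming your promptflow version accepts the collection keyword argument, you can pass a collection name instead of the bare call above ("langchain-demo" is a hypothetical name chosen for this notebook):

# assumption: start_trace accepts a `collection` argument to group traces
start_trace(collection="langchain-demo")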
By default, the opentelemetry-instrumentation-langchain instrumentation logs prompts, completions, and embeddings to span attributes. This gives you clear visibility into how your LLM application is working, and makes it easier to debug and evaluate the quality of the outputs.
# enable langchain instrumentation
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

instrumentor = LangchainInstrumentor()
if not instrumentor.is_instrumented_by_opentelemetry:
    instrumentor.instrument()
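If you would rather not record prompt and completion text in span attributes (for example, when traces may contain sensitive data), the Traceloop instrumentation can reportedly be configured through an environment variable. This is a sketch assuming the TRACELOOP_TRACE_CONTENT variable is honored by the package version you installed:

import os

# assumption: the Traceloop instrumentation checks TRACELOOP_TRACE_CONTENT;
# set it to "false" before instrumenting to keep message content out of spans
os.environ["TRACELOOP_TRACE_CONTENT"] = "false"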
Run a simple LangChain chain#
Below is an example targeting an AzureOpenAI resource. Please configure your API_KEY using an .env file; see ../.env.example.
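Based on the environment variables read in the code below, the .env file might look like the following minimal sketch (the placeholder values are hypothetical; consult ../.env.example for the authoritative list):

AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
CHAT_DEPLOYMENT_NAME=<your-chat-deployment-name>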
import os

from langchain.chat_models import AzureChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate
from langchain.chains import LLMChain
from dotenv import load_dotenv

if "AZURE_OPENAI_API_KEY" not in os.environ:
    # load environment variables from .env file
    load_dotenv()

llm = AzureChatOpenAI(
    deployment_name=os.environ["CHAT_DEPLOYMENT_NAME"],
    openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    openai_api_type="azure",
    openai_api_version="2023-07-01-preview",
    temperature=0,
)
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are world class technical documentation writer."),
        ("user", "{input}"),
    ]
)
chain = LLMChain(llm=llm, prompt=prompt, output_key="metrics")
chain({"input": "What is ChatGPT?"})
You should now be able to see traces of the chain in the prompt flow UI. Check the output of the cell that calls start_trace for the trace UI URL.
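If you also want your own functions to show up as parent spans around the LangChain spans, you can wrap them with promptflow's @trace decorator (covered in the next steps below). A minimal sketch, assuming the same chain object from above; ask is a hypothetical wrapper:

from promptflow.tracing import trace

# the @trace decorator records this function call as a span in the trace tree
@trace
def ask(question: str):
    return chain({"input": question})

ask("What is ChatGPT?")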
Next steps#
By now you have successfully traced LLM calls in your app using prompt flow.
You can check out more examples:
Trace your flow: use promptflow @trace to structurally trace your app and evaluate it with a batch run.