LangChain / LangGraph

Integrate Orq.ai with LangChain and LangGraph using OpenTelemetry

Getting Started

LangChain and LangGraph are powerful frameworks for building AI applications with chains, agents, and complex workflows. Integrate with Orq.ai to gain complete observability into your LLM calls, chain execution, tool usage, and agent behavior.

Prerequisites

Before you begin, ensure you have:

  • An Orq.ai account and API key
  • LangChain and/or LangGraph installed in your project
  • Python 3.8+

Install Dependencies

# Core LangChain packages
pip install langchain langchain-openai langgraph

# Core OpenTelemetry packages (required for all integration methods)
pip install opentelemetry-sdk opentelemetry-exporter-otlp

Configure Orq.ai

Set up your environment variables to connect to Orq.ai's OpenTelemetry collector:

Unix/Linux/macOS:

export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.orq.ai/v2/otel"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=langchain-app,service.version=1.0.0"

Windows (PowerShell):

$env:OTEL_EXPORTER_OTLP_ENDPOINT = "https://api.orq.ai/v2/otel"
$env:OTEL_EXPORTER_OTLP_HEADERS = "Authorization=Bearer <ORQ_API_KEY>"
$env:OTEL_RESOURCE_ATTRIBUTES = "service.name=langchain-app,service.version=1.0.0"

Using .env file:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.orq.ai/v2/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer <ORQ_API_KEY>
OTEL_RESOURCE_ATTRIBUTES=service.name=langchain-app,service.version=1.0.0
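
The OpenTelemetry SDK reads these values from the process environment, so a .env file must be loaded before any tracing library is initialized. A minimal sketch using the python-dotenv package (an extra dependency, not something Orq.ai requires):

# pip install python-dotenv
from dotenv import load_dotenv

# Load .env into the process environment before importing or
# initializing any tracing libraries
load_dotenv()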

Integrations

Choose your preferred OpenTelemetry framework for collecting traces:

OpenLit

Best for: Quick setup with automatic instrumentation

pip install openlit
import openlit
import os

# Initialize OpenLit with Orq.ai endpoint
openlit.init(
    otlp_endpoint=os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT"),
    otlp_headers=os.getenv("OTEL_EXPORTER_OTLP_HEADERS")
)

# Your LangChain code works normally - tracing is automatic!
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Translate to French: {text}")
chain = prompt | llm

result = chain.invoke({"text": "Good morning"})
print(result.content)

Logfire

Best for: Rich visualization and Pydantic ecosystem integration

pip install logfire
import logfire
import os

# Configure Logfire to export via the OTLP endpoint set in your
# environment rather than the Logfire cloud backend
logfire.configure(send_to_logfire=False)

# Route LangChain's LangSmith tracing through OpenTelemetry
# (requires the langsmith package)
os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
os.environ["LANGSMITH_TRACING"] = "true"

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Translate to French: {text}")
chain = prompt | llm

result = chain.invoke({"text": "Good morning"})
print(result.content)
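
To attach additional context to a trace, chain calls can be wrapped in a Logfire span; a minimal sketch using Logfire's standard span API:

# Group the chain call under a named span for easier filtering
with logfire.span("translate-request"):
    result = chain.invoke({"text": "Good evening"})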

MLflow

Best for: ML experimentation and model lifecycle management

pip install mlflow
import mlflow

# MLflow Tracing exports traces over OTLP when an OTLP endpoint is
# set in the environment (see "Configure Orq.ai" above), so no
# tracking URI needs to be configured for trace export

# Enable automatic LangChain logging
mlflow.langchain.autolog()

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

with mlflow.start_run():
    llm = ChatOpenAI(model="gpt-4o-mini")
    prompt = ChatPromptTemplate.from_template("Translate to French: {text}")
    chain = prompt | llm

    result = chain.invoke({"text": "Good morning"})
    print(result.content)

OpenInference

Best for: Arize ecosystem integration and multi-language support

pip install openinference-instrumentation-langchain
from openinference.instrumentation.langchain import LangChainInstrumentor
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Initialize OpenTelemetry
tracer_provider = trace_sdk.TracerProvider()
# The exporter picks up OTEL_EXPORTER_OTLP_ENDPOINT and
# OTEL_EXPORTER_OTLP_HEADERS from the environment configured above
tracer_provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter())
)
trace.set_tracer_provider(tracer_provider)

# Instrument LangChain
LangChainInstrumentor().instrument()

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Translate to French: {text}")
chain = prompt | llm

result = chain.invoke({"text": "Good morning"})
print(result.content)

Manual OpenTelemetry

Best for: Full control and custom instrumentation

# No additional packages needed - the core OpenTelemetry packages
# from "Install Dependencies" are sufficient
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
import os

# Initialize OpenTelemetry
resource = Resource.create({
    "service.name": os.getenv("OTEL_SERVICE_NAME", "langchain-app"),
})
provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(OTLPSpanExporter())
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("langchain")

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Translate to French: {text}")
chain = prompt | llm

with tracer.start_as_current_span("langchain.invoke", attributes={"model": "gpt-4o-mini"}):
    result = chain.invoke({"text": "Good morning"})
    print(result.content)
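
BatchSpanProcessor exports spans in the background, so short-lived scripts can exit before the final batch is sent. Flushing the provider before shutdown avoids dropped traces:

# Flush any pending spans before the process exits
provider.force_flush()
provider.shutdown()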

Examples

Basic Chain Example

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Setup your chosen integration above, then use LangChain normally
llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Translate to {language}: {text}")
chain = prompt | llm

result = chain.invoke({
    "language": "Spanish",
    "text": "Hello, how are you today?"
})
print(result.content)

LangGraph Agent Example

from langgraph.graph import StateGraph, END
from typing import TypedDict
from langchain_openai import ChatOpenAI

class AgentState(TypedDict):
    messages: list
    next_action: str

def reasoning_node(state: AgentState):
    llm = ChatOpenAI(model="gpt-4o-mini")
    # Your agent logic here
    return {"messages": state["messages"], "next_action": "complete"}

def action_node(state: AgentState):
    # Execute actions based on reasoning
    return {"messages": state["messages"], "next_action": "end"}

# Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("reasoning", reasoning_node)
workflow.add_node("action", action_node)
workflow.add_edge("reasoning", "action")
workflow.add_edge("action", END)
workflow.set_entry_point("reasoning")

app = workflow.compile()

# Run the agent
result = app.invoke({
    "messages": ["Analyze the latest market trends"],
    "next_action": "start"
})
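
Instead of the fixed edge from "reasoning" to "action" above, the next_action field in the state can drive routing through LangGraph's conditional edges. A hypothetical router (illustrative, not part of the example above):

# Hypothetical router: choose the next node based on agent state
def route(state: AgentState) -> str:
    return "action" if state["next_action"] == "complete" else "end"

workflow.add_conditional_edges("reasoning", route, {"action": "action", "end": END})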

Tool Usage Example

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

@tool
def get_current_weather(location: str) -> str:
    """Get the current weather for a location."""
    # Your weather API logic here
    return f"The weather in {location} is sunny and 75°F"

llm = ChatOpenAI(model="gpt-4o-mini")
tools = [get_current_weather]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad")
])

agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

result = agent_executor.invoke({
    "input": "What's the weather like in San Francisco?"
})
print(result["output"])

Error Handling with Custom Spans

from opentelemetry import trace

tracer = trace.get_tracer("langchain-custom")

with tracer.start_as_current_span("langchain.error_handling") as span:
    try:
        # Your LangChain operation (assumes `prompt` and `llm` are
        # defined as in the examples above)
        chain = prompt | llm
        result = chain.invoke({"text": "Test message"})

        # Add success attributes
        span.set_attribute("operation.success", True)
        span.set_attribute("response.length", len(result.content))

    except Exception as e:
        # Record the exception
        span.record_exception(e)
        span.set_attribute("operation.success", False)
        span.set_attribute("error.type", type(e).__name__)
        raise

Next Steps

✅ Verify traces: Check your Orq.ai dashboard to see incoming LangChain traces
✅ Add custom attributes: Enhance traces with business-specific metadata like user IDs or request types (see the sketch below)
✅ Set up alerts: Configure monitoring for chain failures or performance degradation
✅ Optimize performance: Use trace data to identify slow chains and optimize prompts
✅ Monitor costs: Track token usage and costs across different models and chains
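
For the custom attributes step, business metadata can be attached to whatever span is currently active; the attribute names below are illustrative, not an Orq.ai convention:

from opentelemetry import trace

# Attach business-specific metadata to the active span
span = trace.get_current_span()
span.set_attribute("user.id", "user-123")
span.set_attribute("request.type", "translation")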
