

Install

pip install trulayer langchain

Instrument

import os
import trulayer

trulayer.init(api_key=os.environ["TRULAYER_API_KEY"], project_name="my-app")
trulayer.instrument_langchain()

After instrumentation, every chain, agent, LLM call, and tool invocation is captured as a span in the current trace — no per-callsite code changes required.

Minimal example

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([("user", "{question}")])
chain = prompt | ChatOpenAI(model="gpt-4o-mini")

chain.invoke({"question": "What is the capital of France?"})
# A trace appears in the dashboard with one `chain` span and one `llm` child.

What gets captured

  • chain spans for every Runnable invocation (invoke, batch, stream)
  • llm spans for every model call (OpenAI, Anthropic, Bedrock, etc.) with inputs, outputs, and token counts
  • tool spans for every @tool / StructuredTool invocation with arguments and return value
  • retriever spans for vector-store queries with the retrieved documents attached as output

Nested runnables produce a nested span tree, so the dashboard waterfall mirrors the shape of your chain.
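
To make the nesting concrete, here is a toy model of a span tree (an illustration only, not the trulayer API): each span records its kind and attaches as a child of whichever span is currently open, which is exactly the shape the waterfall view renders.

```python
from contextlib import contextmanager

class Span:
    """Toy span: a kind (chain/llm/tool/retriever), a name, and children."""
    def __init__(self, kind, name):
        self.kind, self.name, self.children = kind, name, []

class Tracer:
    """Toy tracer: keeps a stack so new spans nest under the open one."""
    def __init__(self):
        self.root = Span("trace", "root")
        self._stack = [self.root]

    @contextmanager
    def span(self, kind, name):
        s = Span(kind, name)
        self._stack[-1].children.append(s)  # attach under the open span
        self._stack.append(s)
        try:
            yield s
        finally:
            self._stack.pop()

tracer = Tracer()
# Mirrors `prompt | model` from the minimal example:
# the chain span wraps the llm span that ran inside it.
with tracer.span("chain", "RunnableSequence"):
    with tracer.span("llm", "gpt-4o-mini"):
        pass

chain = tracer.root.children[0]
print(chain.kind, [c.kind for c in chain.children])  # chain ['llm']
```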