TruLayer integrates with the major LLM SDKs and orchestration frameworks via auto-instrumentation. Once instrumented, every call these frameworks make becomes a span in the active trace — no further code changes.

Tier 1 (supported at V1 Phase 1)

| Framework | Language(s) | Helper |
| --- | --- | --- |
| OpenAI | Python, TypeScript | instrument_openai(client) / instrumentOpenAI(client) |
| Anthropic | Python, TypeScript | instrument_anthropic(client) / instrumentAnthropic(client) |
| Vercel AI SDK | TypeScript | instrumentVercelAI() |
| LlamaIndex | Python | instrument_llamaindex() |
| PydanticAI | Python | instrument_pydantic_ai() |

Tier 2 (supported at V1 Phase 2)

| Framework | Language(s) | Helper |
| --- | --- | --- |
| LangChain | Python, TypeScript | instrument_langchain() / instrumentLangChain() |
| CrewAI | Python | instrument_crewai() |
| Mastra | TypeScript | instrumentMastra() |
| DSPy | Python | instrument_dspy() |
| Haystack | Python | instrument_haystack() |
| AutoGen | Python | instrument_autogen() |

Not supported? Use manual instrumentation.

If your framework isn’t listed, you can still get full tracing — wrap calls with trace() and span() manually. See Traces and spans. Or open a feature request — we prioritise based on demand.

How auto-instrumentation works

Each helper monkey-patches the framework’s client methods to emit a span before each call and record the result after. The patch is:
  • Reversible — call the matching uninstrument_*() / uninstrument*() helper to restore the original methods
  • Idempotent — calling instrument_*() twice is a no-op
  • Thread/async-safe — spans attach to the active trace via async-local context
  • Non-blocking — span emission is buffered; no added latency on the hot path

Versioning

Integrations target specific major versions of each framework. Incompatible upgrades are called out in the SDK CHANGELOG. If you depend on an unusual or pinned framework version, run the SDK’s own test suite against it to confirm compatibility.