## Tier 1 (supported at V1 Phase 1)
| Framework | Language(s) | Helper |
|---|---|---|
| OpenAI | Python, TypeScript | instrument_openai(client) / instrumentOpenAI(client) |
| Anthropic | Python, TypeScript | instrument_anthropic(client) / instrumentAnthropic(client) |
| Vercel AI SDK | TypeScript | instrumentVercelAI() |
| LlamaIndex | Python | instrument_llamaindex() |
| PydanticAI | Python | instrument_pydantic_ai() |
## Tier 2 (supported at V1 Phase 2)
| Framework | Language(s) | Helper |
|---|---|---|
| LangChain | Python, TypeScript | instrument_langchain() / instrumentLangChain() |
| CrewAI | Python | instrument_crewai() |
| Mastra | TypeScript | instrumentMastra() |
| DSPy | Python | instrument_dspy() |
| Haystack | Python | instrument_haystack() |
| AutoGen | Python | instrument_autogen() |
## Not supported? Use manual instrumentation

If your framework isn't listed, you can still get full tracing: wrap calls with trace() and span() manually. See Traces and spans.

Or open a feature request; we prioritise based on demand.
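As an illustration of what manual wrapping looks like, here is a minimal, self-contained sketch of a trace()/span() pair built on contextvars. The real SDK's signatures and return types are defined in Traces and spans; everything below (the names `_current`, the dict-based span records) is an assumption for the sake of the example, not the actual API.

```python
import contextlib
import contextvars

# Async-local slot holding the span list of the active trace (assumed design).
_current = contextvars.ContextVar("current_spans", default=None)

@contextlib.contextmanager
def trace(name):
    """Open a trace; spans created inside attach to it."""
    spans = []
    token = _current.set(spans)
    try:
        yield spans
    finally:
        _current.reset(token)

@contextlib.contextmanager
def span(name):
    """Record one unit of work inside the active trace, if any."""
    record = {"name": name}
    spans = _current.get()
    if spans is not None:
        spans.append(record)
    yield record
```

Usage follows the usual context-manager pattern: open a trace around the whole operation, then a span around each call you want recorded.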
## How auto-instrumentation works
Each helper monkey-patches the framework's client methods to emit a span before each call and record the result after. The patch is:
- Reversible: call uninstrument_*() / uninstrumentLangChain() to restore the original methods
- Idempotent: calling instrument_*() twice is a no-op
- Thread/async-safe: spans attach to the active trace via async-local context
- Non-blocking: span emission is buffered, so it adds no latency on the hot path