This guide gets you from zero to a traced LLM call visible in the dashboard in under 10 minutes.

Prerequisites

  • A TruLayer account — sign up free (no credit card required on the Free tier)
  • One of:
    • Python 3.11+ for the Python SDK
    • Node.js 18+ (or Bun, or an Edge runtime) for the TypeScript SDK
  • An LLM provider API key — this quickstart uses OpenAI

1. Create an API key

After signing up, go to Settings → API keys in the dashboard and click Create key. Copy the key — it starts with tl_ and is shown only once.
API keys are sensitive. Store them in environment variables, not in source code. Keys are tenant-scoped — they grant ingest and read access to every project in your organisation.
export TRULAYER_API_KEY=tl_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export OPENAI_API_KEY=sk-...
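Since both keys are read from the environment at runtime, it helps to fail fast at startup if one is missing rather than mid-request. A minimal, stdlib-only sketch (the helper name is illustrative, not part of the SDK):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, or raise a clear startup error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; export it before running (see the commands above)"
        )
    return value

# Usage: check keys once at startup, not on the first LLM call.
# api_key = require_env("TRULAYER_API_KEY")
```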

2. Install the SDK

pip install trulayer openai

3. Send your first trace

The example below makes one OpenAI call and ships the trace (prompt, response, latency, token counts, model) to TruLayer. It uses auto-instrumentation so you don’t have to wrap calls manually.
import os
from openai import OpenAI
import trulayer

trulayer.init(
    api_key=os.environ["TRULAYER_API_KEY"],
    project="quickstart",
)

client = OpenAI()
trulayer.instrument_openai(client)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Give me a one-sentence definition of observability."}],
)

print(response.choices[0].message.content)
The SDK batches traces and needs a moment to ship them before exit. In short-lived scripts, call flush() before exiting (TypeScript SDK) or let the process exit naturally once your calls complete (Python SDK); killing the process abruptly can drop queued traces.
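The note above exists because background batching is easy to forget: traces sit in an in-memory queue until a worker thread ships them, so a script that exits immediately can lose the tail of the queue. A stdlib-only toy of that pattern (an illustration of the idea, not the SDK's actual internals):

```python
import queue
import threading

class TraceBatcher:
    """Toy version of the batch-and-ship pattern described above."""

    def __init__(self) -> None:
        self._queue: queue.Queue = queue.Queue()
        self.shipped: list[dict] = []  # stands in for the network export
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def record(self, span: dict) -> None:
        self._queue.put(span)

    def _run(self) -> None:
        while True:
            span = self._queue.get()
            if span is None:  # sentinel: drain is complete, stop the worker
                break
            self.shipped.append(span)

    def flush(self) -> None:
        """Block until every queued span has been shipped."""
        self._queue.put(None)
        self._worker.join()

batcher = TraceBatcher()
batcher.record({"name": "llm.call", "latency_ms": 420})
batcher.flush()  # without this, a fast-exiting script could drop the span
```

Because the worker is a daemon thread, a process that exits without flush() would terminate it mid-queue, which is exactly the failure mode the note warns about.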

4. View the trace

Open the dashboard and navigate to Traces. Your trace should appear within a few seconds, showing:
  • The input prompt
  • The full response
  • Latency, token counts, and cost
  • The model name
  • Any errors (there shouldn’t be any yet)
Click the trace to see the span waterfall — one span per LLM call, with full request and response captured.

5. Next steps

Add more auto-instrumentation

Patch Anthropic, LangChain, or the Vercel AI SDK with a single function call.

Wrap custom code

Create traces and spans manually for retrieval, tool calls, or your own business logic.
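The exact manual-tracing API is covered on the linked page; conceptually, a span is just a named, timed unit of work with attributes. A stdlib sketch of that idea (the names here are illustrative, not the TruLayer API):

```python
import time
from contextlib import contextmanager

spans: list[dict] = []  # stand-in for the SDK's span buffer

@contextmanager
def span(name: str, **attributes):
    """Record a named, timed unit of work, e.g. retrieval or a tool call."""
    start = time.perf_counter()
    record = {"name": name, **attributes}
    try:
        yield record
    finally:
        record["duration_ms"] = (time.perf_counter() - start) * 1000
        spans.append(record)

with span("retrieval", index="docs"):
    time.sleep(0.01)  # stand-in for a vector-store query

print(spans[0]["name"])  # → retrieval
```

Wrapping retrieval and tool calls this way is what produces the extra rows in the span waterfall alongside the auto-instrumented LLM call.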

Collect feedback

Attach user thumbs-up/down to traces and feed them back into your evals.

Run an evaluation

Score traces for correctness, hallucination, or any custom metric.
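A custom metric is conceptually just a function from a trace to a score. A minimal sketch with a hypothetical trace shape (the field names are assumptions for illustration, not the evaluation API):

```python
def exact_match(trace: dict) -> float:
    """Score 1.0 if the model output equals the expected answer, else 0.0."""
    return 1.0 if trace["output"].strip() == trace["expected"].strip() else 0.0

# Whitespace differences are ignored, so these count as a match.
trace = {"output": "Paris", "expected": "Paris "}
print(exact_match(trace))  # → 1.0
```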
If the trace doesn’t appear, double-check TRULAYER_API_KEY, make sure the process didn’t exit before the background flush completed, and check the status page. Still stuck? Email [email protected].