Feedback

The Feedback page is where end-user reactions to your product — thumbs up / down, free-text notes, inline edits — surface next to the traces that produced them. See the Feedback concept for the underlying data model.

How feedback is captured

Feedback is attached to a trace via the SDK or a direct API call:

import trulayer

# `result` is the object returned by the trace() block that produced
# the response; its trace_id ties this feedback to that trace.
trulayer.get_client().feedback(
    trace_id=result.trace_id,
    label="bad",             # "good" | "bad" | free-form string
    comment="Wrong answer — the capital is Paris, not Lyon.",
    user_id="u_42",
)
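
For clients that can't call the Python SDK, the same event can be sent over HTTP. A minimal sketch using requests, assuming a hypothetical POST https://api.trulayer.ai/v1/feedback endpoint with bearer-token auth; check the API reference for the actual route and payload schema:

import os
import requests

# Endpoint path, auth header, and field names are assumptions,
# not the documented API.
resp = requests.post(
    "https://api.trulayer.ai/v1/feedback",
    headers={"Authorization": f"Bearer {os.environ['TRULAYER_API_KEY']}"},
    json={
        "trace_id": "tr_abc123",
        "label": "bad",
        "comment": "Wrong answer: the capital is Paris, not Lyon.",
        "user_id": "u_42",
    },
    timeout=10,
)
resp.raise_for_status()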
Typical UI patterns:
  • Thumbs-up / thumbs-down buttons under every assistant message → label = "good" | "bad".
  • Free-text follow-up after a negative click → comment.
  • Inline edit — user corrects the response → send the edited text as comment with label = "edit".
The SDK returns the trace id from every trace() block — plumb it through your response payload so the client can reference it when feedback arrives.
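
Putting those pieces together, here is a minimal sketch of that plumbing. It assumes trulayer.trace() is a context manager that yields an object carrying trace_id (the exact SDK surface may differ); run_model and on_feedback_click are hypothetical names for illustration:

import trulayer

def run_model(question: str) -> str:
    ...  # stand-in for your actual generation call (hypothetical)

def answer_question(question: str) -> dict:
    # Wrap generation in a trace() block and capture its trace id.
    with trulayer.trace(name="qa") as t:
        answer = run_model(question)
    # Plumb the trace id through the response payload so the client
    # can echo it back when feedback arrives.
    return {"answer": answer, "trace_id": t.trace_id}

def on_feedback_click(event: dict) -> None:
    # Hypothetical UI handler: map a thumbs click to a label.
    label = "good" if event["thumbs"] == "up" else "bad"
    trulayer.get_client().feedback(
        trace_id=event["trace_id"],    # echoed back from the payload above
        label=label,
        comment=event.get("comment"),  # free-text follow-up, if any
        user_id=event["user_id"],
    )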

Reviewing feedback

The feedback inbox lists every feedback event, newest first. Columns:
  • Timestamp
  • Trace — click to open the trace detail view
  • Label — color-coded (good green, bad red, custom labels neutral)
  • Comment — truncated; hover to see the full text
  • User — user_id if provided, linked to the user timeline view
  • Session — linked to the session replay

Filters

  • Label — narrow to bad, edit, or any custom label.
  • Project / environment / model — inherited from the trace.
  • Has comment — show only feedback with free-text context.
  • User — one user’s entire feedback history.
  • Time range.

Linking feedback to traces

Every feedback row has a View trace button that opens the span waterfall with the feedback pinned to the header. From there you can:
  • Copy the trace id for a bug report.
  • Add the trace to a dataset for an eval regression run (Add to dataset from the trace detail toolbar).
  • Jump to the Evals page to run a correctness evaluator against the trace and compare the score to the user’s label.

Common workflows

  • Weekly quality review. Filter label = bad AND has_comment = true over the last 7 days, triage the top 20, and push each into the correctness dataset (scriptable; see the sketch after this list).
  • Regression signal. When the bad-feedback rate spikes (see the Metrics page), filter feedback to the spike's time window to isolate the root cause.
  • User outreach. Group by user_id to find users with repeated bad feedback — prime candidates for a qualitative follow-up.
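
The weekly review can also be scripted. A sketch under the assumption that the client exposes list_feedback() and add_to_dataset() helpers; both names are hypothetical, and the inbox filters above are the documented path:

import datetime
import trulayer

client = trulayer.get_client()

# Mirror the inbox filters: label = bad, has_comment = true, last 7 days.
since = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=7)
events = client.list_feedback(label="bad", has_comment=True, since=since)

# Triage the 20 most recent and push each trace into the correctness
# dataset used for eval regression runs.
for event in events[:20]:
    client.add_to_dataset(dataset="correctness", trace_id=event.trace_id)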