The Feedback page is where end-user reactions to your product — thumbs up / down, free-text notes, inline edits — surface next to the traces that produced them. See the Feedback concept for the underlying data model.
How feedback is captured
Feedback is attached to a trace via the SDK or a direct API call.

- Thumbs-up / thumbs-down buttons under every assistant message → `label = "good" | "bad"`.
- Free-text follow-up after a negative click → `comment`.
- Inline edit — the user corrects the response → send the edited text as `comment` with `label = "edit"`.

The trace id comes from the `trace()` block — plumb it through your response payload so the client can reference it when feedback arrives.
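As a minimal sketch of the capture step, the snippet below builds a feedback payload for each of the three cases. The endpoint URL, `build_feedback` helper, and field names (`trace_id`, `label`, `comment`) are illustrative assumptions, not a documented SDK surface — check your client library for the real call.

```python
import json

FEEDBACK_URL = "https://api.trulayer.ai/v1/feedback"  # hypothetical endpoint

def build_feedback(trace_id, label, comment=None):
    """Build a feedback payload for a trace (field names are assumptions).

    label: "good", "bad", "edit", or any custom label.
    For inline edits, pass the user's corrected text as `comment`
    with label="edit".
    """
    payload = {"trace_id": trace_id, "label": label}
    if comment is not None:
        payload["comment"] = comment
    return payload

# Thumbs-down with a free-text follow-up:
body = build_feedback("tr_123", "bad", "The answer ignored my date range.")

# Inline edit — the corrected response travels in `comment`:
edit_body = build_feedback("tr_123", "edit", "Corrected response text")

# POST json.dumps(body) to FEEDBACK_URL with your API key.
```

The trace id threaded through your response payload is what ties the click back to the span waterfall.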
Reviewing feedback
The feedback inbox lists every feedback event, newest first. Columns:

- Timestamp
- Trace — click to open the trace detail view
- Label — color-coded (`good` green, `bad` red, custom labels neutral)
- Comment — truncated; hover to see the full text
- User — `user_id` if provided, linked to the user timeline view
- Session — linked to the session replay
Filters
- Label — narrow to `bad`, `edit`, or any custom label.
- Project / environment / model — inherited from the trace.
- Has comment — show only feedback with free-text context.
- User — one user’s entire feedback history.
- Time range.
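The same filters are easy to mirror client-side over an exported list of feedback events. This is a sketch under assumptions — the event shape (`label`, `comment`, `user_id`, `ts`) and the export itself are hypothetical, not a documented format.

```python
from datetime import datetime, timezone

# Hypothetical feedback export rows; field names are assumptions.
events = [
    {"label": "bad", "comment": "Wrong total", "user_id": "u1",
     "ts": datetime(2024, 5, 1, tzinfo=timezone.utc)},
    {"label": "good", "comment": None, "user_id": "u2",
     "ts": datetime(2024, 5, 2, tzinfo=timezone.utc)},
]

def filter_feedback(events, label=None, has_comment=None, user_id=None, since=None):
    """Apply the inbox filters: label, has-comment, user, and time range."""
    out = events
    if label is not None:
        out = [e for e in out if e["label"] == label]
    if has_comment:
        out = [e for e in out if e["comment"]]
    if user_id is not None:
        out = [e for e in out if e["user_id"] == user_id]
    if since is not None:
        out = [e for e in out if e["ts"] >= since]
    return out

bad_with_comment = filter_feedback(events, label="bad", has_comment=True)
```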
Linking feedback to traces
Every feedback row has a View trace button that opens the span waterfall with the feedback pinned to the header. From there you can:

- Copy the trace id for a bug report.
- Add the trace to a dataset for an eval regression run (Add to dataset from the trace detail toolbar).
- Jump to the Evals page to run a correctness evaluator against the trace and compare the score to the user’s label.
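Comparing the evaluator score to the user's label, as in the last step above, can be sketched as a simple disagreement check. The mappings, field names, and 0.5 threshold below are all illustrative assumptions.

```python
# trace_id -> user label, and trace_id -> correctness score (both hypothetical).
feedback = {"tr_1": "bad", "tr_2": "good"}
eval_scores = {"tr_1": 0.9, "tr_2": 0.95}

def disagreements(feedback, eval_scores, threshold=0.5):
    """Trace ids where evaluator and user disagree: evaluator says correct
    (score >= threshold) but the user said "bad", or vice versa."""
    out = []
    for trace_id, label in feedback.items():
        score = eval_scores.get(trace_id)
        if score is None:
            continue  # no eval run for this trace yet
        if (score >= threshold) != (label == "good"):
            out.append(trace_id)
    return out

suspects = disagreements(feedback, eval_scores)
```

Disagreements in either direction are worth a look: they flag traces where either the evaluator or the user model of "correct" is off.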
Common workflows
- Weekly quality review. Filter `label = bad AND has_comment = true` over the last 7 days, triage the top 20, and push each into the correctness dataset.
- Regression signal. When the bad-feedback rate spikes (see the Metrics page), filter feedback by the new time window to find the root issue.
- User outreach. Group by `user_id` to find users with repeated bad feedback — prime candidates for a qualitative follow-up.
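The user-outreach grouping above is a one-liner over an exported feedback list. As before, the event shape and the threshold of two bad labels are assumptions for the sketch.

```python
from collections import Counter

# Hypothetical feedback export rows; field names are assumptions.
events = [
    {"user_id": "u1", "label": "bad"},
    {"user_id": "u1", "label": "bad"},
    {"user_id": "u2", "label": "good"},
    {"user_id": "u2", "label": "bad"},
]

def repeat_offenders(events, min_bad=2):
    """Users with `min_bad` or more "bad" labels — outreach candidates."""
    counts = Counter(e["user_id"] for e in events if e["label"] == "bad")
    return sorted(u for u, n in counts.items() if n >= min_bad)

candidates = repeat_offenders(events)
```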