Added

Tracing

AI workflows can feel like a black box: when something goes wrong, it is hard to know why. Tracing changes that by giving you full visibility into every step of your workflow. Instead of guessing why an LLM output is wrong, you can inspect each step directly, saving time and reducing frustration.

With this release, you can inspect all events that occur within a trace, including:

  • Retrieval – See which knowledge chunks were fetched.
  • Embedding & Reranking – Understand how inputs are processed and prioritized.
  • LLM Calls – Track prompts, responses, and latency.
  • Evaluation & Guardrails – Ensure quality control in real time.
  • Cache Usage – Spot inefficiencies in repeated queries.
  • Fallbacks & Retries – Detect when your system auto-recovers from failures.

This level of observability helps teams debug faster, optimize workflows, and make data-driven improvements.
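Conceptually, a trace can be thought of as an ordered list of typed events. The sketch below is an illustration only; the event names, fields, and values are assumptions for this example, not the platform's actual trace schema or API.

```python
# Hypothetical sketch of a trace as an ordered list of typed events.
# Event types and field names are illustrative assumptions, not the
# platform's actual schema.
trace = [
    {"type": "retrieval", "chunks_fetched": 4, "latency_ms": 32},
    {"type": "embedding", "model": "embed-small", "latency_ms": 11},
    {"type": "rerank", "chunks_kept": 3, "latency_ms": 18},
    {"type": "llm_call", "prompt_tokens": 512, "completion_tokens": 88, "latency_ms": 940},
    {"type": "evaluation", "evaluator": "faithfulness", "score": 0.92},
    {"type": "evaluation", "evaluator": "toxicity", "score": 0.01},
]

# Walking the events in order shows where time is spent and which step
# produced an unexpected result, instead of guessing from the final output.
for event in trace:
    details = {k: v for k, v in event.items() if k != "type"}
    print(f"{event['type']:<12} {details}")

total_latency = sum(e.get("latency_ms", 0) for e in trace)
print(f"total latency: {total_latency} ms")
```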


Example of a trace from a RAG bot with two evaluators

Example of a trace from a RAG bot with one evaluator


Billing impact - Event count

With the introduction of Traces, every event shown in the trace overview counts towards your event total. This has a direct impact on billing.

For example: a chat request with 2 evaluators was historically counted as 1 request, but will now be counted as 3 events (the request plus 2 evaluations).
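As a minimal sketch of the counting rule, assuming each evaluator run is billed as its own event on top of the request itself (function name and parameters are illustrative only):

```python
# Minimal sketch of the new event counting, assuming each evaluator run
# is billed as a separate event in addition to the request itself.
def count_events(requests: int, evaluators_per_request: int) -> int:
    return requests * (1 + evaluators_per_request)

# A chat request with 2 evaluators: previously 1 request, now 3 events.
print(count_events(requests=1, evaluators_per_request=2))  # -> 3
```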