
Analytics Dashboard Deep Dive

Your analytics dashboard is the command center for understanding how your agents perform, where customers struggle, and what drives business results. The dashboard is organized into four tabs — Overview, Conversations, Insights, and Reports — each designed to surface a different layer of intelligence from your agent interactions.


Global Controls

Before diving into individual tabs, note the two controls at the top of the page that apply to everything you see across all four tabs:

  • Date range picker — Constrain all data to a specific window (today, last 7 days, last 30 days, or a custom range).
  • Agent filter — Narrow the dashboard to a single agent or view all agents at once.

These filters persist as you move between tabs, so the same time period and agent selection are always in scope.


Always-Visible Metrics

Four TrendCards sit above the tab bar, permanently visible regardless of which tab you are on:

  • Total Conversations — Number of customer interactions in the selected date range
  • Avg Response Time — How quickly your agents reply to each customer message
  • Success Rate — Percentage of conversations that reached a successful outcome
  • Active Agents / Turns — Active agent count alongside total conversation turns

Directly below the TrendCards you will find two usage progress bars that show how many messages and how much storage your account has consumed relative to your plan limits. These update in real time, giving you early warning as you approach a plan ceiling.
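
As a rough sketch, a usage bar like this can be derived from two numbers: units consumed and the plan limit. The function name, warning threshold, and return shape below are assumptions for illustration, not the product's actual code.

```typescript
// Hypothetical derivation of a usage bar's fill percentage and
// "near limit" warning state from consumed units vs. a plan limit.
type UsageBar = { percent: number; nearLimit: boolean };

function usageBar(consumed: number, planLimit: number, warnAt = 0.8): UsageBar {
  // Cap the displayed fill at 100% even if the account is over quota.
  const percent = Math.min(100, Math.round((consumed / planLimit) * 100));
  return { percent, nearLimit: consumed / planLimit >= warnAt };
}
```

With 8,500 of 10,000 messages consumed, this yields an 85% fill and an active warning, matching the "early warning" behavior described above.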


Tab 1: Overview

The Overview tab gives you a visual narrative of how your agents have been performing across the selected date range. It is always accessible — no subscription gate.

What you see

The OverviewPanel component occupies this tab. It renders:

  • Time-series charts — Line or bar charts plotting conversation volume, response times, and success rates day by day (or hour by hour for short ranges). Use these to spot spikes, drops, and recurring patterns.
  • Agent performance comparisons — Side-by-side metrics for every agent in your account, so you can immediately see which agents are thriving and which need attention.
  • Trend lines — Smoothed overlays on the time-series charts that make long-term direction visible even when day-to-day numbers are noisy.
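
The smoothing behind a trend-line overlay can be illustrated with a simple moving average over the daily series. The window size and function name here are assumptions, not the product's actual implementation:

```typescript
// Trailing moving average: each point is the mean of itself and up to
// (window - 1) preceding points, which damps day-to-day noise.
function movingAverage(series: number[], window = 7): number[] {
  return series.map((_, i) => {
    const start = Math.max(0, i - window + 1);
    const slice = series.slice(start, i + 1);
    return slice.reduce((a, b) => a + b, 0) / slice.length;
  });
}
```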

How to use it

Start here every morning. A quick scan of the trend lines tells you whether performance improved or declined since yesterday. If a specific agent shows a dip in success rate, switch to the Conversations tab (filtered to that agent) to read through recent interactions and understand why.


Tab 2: Conversations

The Conversations tab is where you get closest to the actual customer experience. It gives you a filterable, sortable table of every conversation your agents have had, and lets you open any chat in a detail drawer for a full inspection. It is always accessible — no subscription gate.

Conversation Table

The table displays one row per conversation with the following columns:

  • Title — Auto-generated summary title for the conversation
  • Agent — Which agent handled the chat
  • Turns — Number of message exchanges
  • Emotion — Dominant emotion detected (joy, sadness, anger, fear, surprise)
  • Sentiment — Overall positive / neutral / negative tone
  • Engagement — How actively the customer participated
  • Prediction Score — Model confidence that the conversation achieved its goal
  • Status — Open, resolved, escalated, etc.
  • Actions — Quick-access buttons for that row

A NEW badge appears on the tab label whenever there are conversations you have not yet opened, so unread chats are always easy to spot.

Filtering and search options above the table:

  • Search bar — Full-text search across conversation titles and content.
  • Filter by Agent — Isolate conversations from a single agent.
  • Filter by Client — Track an individual customer's journey.
  • Filter by Emotion — Surface conversations where customers expressed a specific emotion (joy / sadness / anger / fear / surprise).
  • Filter by Date range — Override the global date picker for this table specifically.

You can combine any number of filters for precise slicing.
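
Combining filters behaves like AND logic: a row appears only if it passes every active filter. A minimal sketch, with an invented row shape and field names (not the product's actual schema):

```typescript
// Illustrative row and filter types; each filter is a predicate on a row.
type Row = { agent: string; client: string; emotion: string; title: string };
type Filter = (r: Row) => boolean;

// A row survives only if it satisfies every active filter (AND semantics).
const applyFilters = (rows: Row[], filters: Filter[]): Row[] =>
  rows.filter((r) => filters.every((f) => f(r)));
```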

Chat Detail Drawer

Click any row to open a side panel with two tabs: Conversation and Info.


Conversation Tab

The Conversation tab renders the full message thread using virtualized MessageCards — meaning even very long chats scroll smoothly. Each assistant message includes several layers of detail beneath the response text:

Eval Score Strip

Directly below each assistant message you will see a row of colored badges — the Eval Score Strip. Each badge represents one evaluator and is color-coded by how well that turn performed:

  • Relevance — Did the response actually address the customer's question?
  • Faithfulness — Is the answer grounded in your knowledge base and facts?
  • Toxicity — Did the response contain harmful or inappropriate language?
  • PII Leakage — Did the response expose personally identifiable information?
  • Completeness — Did the response fully answer the question, without omissions?
  • Prompt Injection — Did the response resist any adversarial user input?

Use these scores to instantly identify which specific turns in a conversation were weak, without having to re-read the entire thread.

Pipeline Trace

Below the Eval Score Strip, each assistant message shows an expandable pipeline trace — a step-by-step view of what happened internally before the response was generated. The trace follows the agent's execution pipeline:

PLAN → GROUNDING → EXECUTE → GROUND_CHECK → RESPOND

Expanding a span reveals:

  • Timing — How long that step took (in milliseconds).
  • Inputs — What data entered that stage.
  • Outputs — What was produced and passed to the next stage.
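
A trace like this can be modeled as an ordered list of spans, one per pipeline stage. The field layout below is an assumption for illustration; summing the span timings gives the turn's total pipeline latency:

```typescript
// One span per stage, in execution order. Stage names mirror the
// pipeline shown above; inputs/outputs are left untyped here.
type Span = {
  stage: "PLAN" | "GROUNDING" | "EXECUTE" | "GROUND_CHECK" | "RESPOND";
  durationMs: number;
  inputs: unknown;
  outputs: unknown;
};

// Total turn latency is simply the sum of the per-span timings.
const totalLatency = (trace: Span[]): number =>
  trace.reduce((sum, s) => sum + s.durationMs, 0);
```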

The pipeline trace is invaluable for debugging unexpected responses. If the answer was wrong, the trace shows you exactly where the problem originated — whether the grounding step failed to retrieve the right context, whether the execution step called the wrong tool, or whether the ground-check caught an issue that was then handled downstream.

Annotation Button (Teach Your Agent)

Each assistant message has a thumbs-down icon in the corner. Clicking it opens an annotation modal where you can flag an incorrect response and record what the correct response should have been. See the Annotation / Teach Your Agent section below for the full workflow.

Manual Response Input

At the bottom of the conversation panel there is a text field you can use to send a manual message when AI is disabled for that conversation. This is your direct line to the customer when you take over from the agent.

AI Toggle

A switch at the top of the conversation panel lets you enable or disable AI for this specific conversation. Disabling AI puts the conversation into Manual Mode so your team can respond directly without the agent intervening.


Info Tab

The Info tab loads structured data from the server for the selected conversation and presents it in six sections:

1. Summary — A narrative paragraph summarizing what the conversation was about, what the customer needed, and how the interaction resolved. Useful for managers reviewing conversations without reading every message.

2. Lead Captured — If your agent is configured to collect lead information, this section shows:

  • The custom lead fields defined for your agent.
  • A complete / incomplete badge indicating whether all required fields were captured.
  • A list of any missing fields so you know what information the customer did not provide.

3. Tool Activity — A chronological log of every tool your agent called during the conversation. For each tool execution you can see:

  • Tool name and type.
  • Execution status (success or failure).
  • Duration in milliseconds.
  • Timestamp.
  • Error message if the call failed.

This section is essential for diagnosing agent behavior — it tells you exactly which integrations fired, in what order, and whether they worked.

4. Quality — Per-evaluator pass rate bars for the conversation as a whole, aggregating the per-turn scores from the Eval Score Strip into a single view. Color coding follows a consistent scheme:

  • Green — Pass rate 80% or higher.
  • Yellow — Pass rate 50–79%.
  • Red — Pass rate below 50%.

Evaluators shown: Relevance, Faithfulness, Toxicity, PII Leakage, Completeness, Prompt Injection.
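
The thresholds above translate directly into a tiny mapping function (the function name is hypothetical; the cutoffs are the ones documented here):

```typescript
// Pass-rate color scheme: green at 80%+, yellow at 50-79%, red below 50%.
function passRateColor(rate: number): "green" | "yellow" | "red" {
  if (rate >= 80) return "green";
  if (rate >= 50) return "yellow";
  return "red";
}
```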

5. Annotations — All human annotations that team members have submitted for this conversation, including the reason label (wrong_facts, wrong_tone, off_topic, incomplete, or other) and the expected response text. This gives reviewers a shared record of where the agent fell short.

6. Performance — Aggregate technical metrics for the conversation:

  • Input and output token counts.
  • Average latency per turn.
  • Total number of turns.
  • Count of blocked turns and error turns.
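
These aggregates can be sketched as a fold over per-turn stats. The turn shape below is an assumption for illustration, not the product's schema:

```typescript
// Hypothetical per-turn record and the rollup into the metrics listed above.
type Turn = {
  inputTokens: number;
  outputTokens: number;
  latencyMs: number;
  blocked: boolean;
  error: boolean;
};

function perfSummary(turns: Turn[]) {
  return {
    inputTokens: turns.reduce((s, t) => s + t.inputTokens, 0),
    outputTokens: turns.reduce((s, t) => s + t.outputTokens, 0),
    avgLatencyMs: turns.reduce((s, t) => s + t.latencyMs, 0) / turns.length,
    turns: turns.length,
    blocked: turns.filter((t) => t.blocked).length,
    errors: turns.filter((t) => t.error).length,
  };
}
```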

Annotation / Teach Your Agent

The annotation system is how your team builds a correction dataset over time, turning every mistake into a learning signal. Any team member who reviews a conversation can flag an assistant message by clicking the thumbs-down icon and filling in the modal:

  • Reason — Select from a dropdown: wrong_facts, wrong_tone, off_topic, incomplete, or other.
  • Expected response — A free-text field where you write what the correct response should have been.

Submitted annotations are stored server-side and appear in the Annotations section of the Info tab for that conversation. Over time, the accumulated annotations form a labeled dataset that can be used to fine-tune or guide your agent toward better behavior.
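
Based on the fields described above, a stored annotation record might look roughly like this. The exact schema, including the author and timestamp fields, is an assumption:

```typescript
// Reason labels come from the dropdown documented above.
type AnnotationReason = "wrong_facts" | "wrong_tone" | "off_topic" | "incomplete" | "other";

// Hypothetical stored record for one flagged assistant message.
type Annotation = {
  conversationId: string;
  messageId: string;
  reason: AnnotationReason;
  expectedResponse: string;
  author: string;    // assumed: who submitted the flag
  createdAt: string; // assumed: ISO timestamp
};
```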

For a complete walkthrough of how annotations feed into agent improvement, see the dedicated Continual Learning page.


Tab 3: Insights

The Insights tab surfaces aggregated quality and performance intelligence across all conversations in your selected date range. It requires the AI Assistant feature on your subscription — if your plan does not include it, you will see a lock icon and an upgrade prompt instead of the data.

The tab is divided into two stacked sections:

Quality Evaluation

The QualityTab component shows aggregate quality scores across your entire conversation set for the selected period. For each of the six evaluators (Relevance, Faithfulness, Toxicity, PII Leakage, Completeness, Prompt Injection), you see:

  • An overall pass rate expressed as a percentage.
  • Chart visualizations that show how quality has trended over time.
  • Breakdowns by agent so you can see which agents score highest and lowest on each dimension.

Use this section to identify systemic quality problems. If Faithfulness is consistently low across all agents, your knowledge base may need updating. If Toxicity checks are flagging responses, a specific prompt configuration may be producing harmful outputs.

Performance

The ToolUsagePanel component shows how your agents' operational tools are performing:

  • Tool usage statistics — How often each tool was called and with what success rate.
  • Lead capture counts — How many leads were collected in the period.
  • Appointment bookings — How many bookings were completed through the agent.
  • Email handoff metrics — How often conversations were escalated and what happened afterward.
  • Funnel breakdown — A funnel-style visualization showing how conversations progress from initial contact through to a completed action (lead, booking, or handoff).
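
A funnel breakdown boils down to stage-over-stage conversion rates. As an illustrative sketch with invented stage names:

```typescript
// One count per funnel stage, ordered from first contact to completed action.
type Stage = { name: string; count: number };

function funnelRates(stages: Stage[]): number[] {
  // The first stage is the baseline (100%); each later stage is measured
  // against the stage directly before it.
  return stages.map((s, i) =>
    i === 0 ? 100 : Math.round((s.count / stages[i - 1].count) * 100)
  );
}
```

So if 200 conversations started, 50 produced a lead, and 25 of those led to a booking, the stage conversion rates would be 100%, 25%, and 50%.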

Tab 4: Reports

The Reports tab is your automated reporting engine. It lets you create scheduled reports that get built and delivered to your inbox without any manual work. Full access requires the Scale subscription tier; lower tiers may see a partial experience or an upgrade prompt.

Creating a Report

The EnhancedReportsTab component walks you through a report builder with these options:

  • Schedule — Daily, weekly, or monthly delivery.
  • Template — Choose from the built-in template library:
    • Daily Summary
    • Weekly Business Review
    • Monthly Strategic
    • Agent Performance
    • Customer Journey
  • Recipients — Add one or more email addresses to receive the report automatically.
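
The builder's choices can be pictured as a small report definition. The template names come from the list above; the field names and the validation helper are assumptions for illustration:

```typescript
// Hypothetical shape of a saved report definition.
type Report = {
  schedule: "daily" | "weekly" | "monthly";
  template:
    | "Daily Summary"
    | "Weekly Business Review"
    | "Monthly Strategic"
    | "Agent Performance"
    | "Customer Journey";
  recipients: string[]; // delivery requires at least one address
};

// Assumed validation: a report with no recipients can never be delivered.
function validateReport(r: Report): string[] {
  const errors: string[] = [];
  if (r.recipients.length === 0) errors.push("at least one recipient is required");
  return errors;
}
```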

Managing Reports

Once reports are created, the tab gives you a full management interface:

  • Report library — A list of all your saved reports with their current status and last-run timestamp.
  • Execution history — A log of every time a report was generated, when it was delivered, and whether the delivery succeeded.
  • Download / export — Pull any report as a PDF (formatted for sharing) or CSV (raw data for spreadsheet analysis).

Practical Usage

Set up a Weekly Business Review report on a Monday-morning cadence and add your whole team as recipients. Set up a Daily Summary for yourself so you arrive each morning with yesterday's highlights already in your inbox. Use the Agent Performance template when you are evaluating whether to adjust an agent's configuration or training.


Getting Maximum Value From Your Analytics

Daily Actions

  1. Glance at the TrendCards — The four always-visible cards at the top tell you immediately if anything is off.
  2. Check the Overview tab — Scan trend lines for unexpected dips or spikes.
  3. Open the Conversations tab — Look for the NEW badge and read through any unread chats, paying attention to the Eval Score Strip on assistant messages.
  4. Flag bad responses — Use the annotation button whenever you spot an incorrect answer. Every annotation improves your correction dataset.

Weekly Reviews

  1. Review the Insights tab — Check per-evaluator pass rates and compare agents. Identify which evaluator has the lowest pass rate and investigate the root cause.
  2. Inspect tool activity — Look at the ToolUsagePanel in Insights to see if lead capture or appointment booking rates are healthy.
  3. Confirm reports are running — Switch to the Reports tab and verify that your scheduled reports ran successfully and were delivered.

Monthly Optimization

  1. Export a Monthly Strategic report — Share it with leadership as a record of business impact.
  2. Review your annotations — Look at the Annotations sections across high-traffic conversations to identify patterns in what your agent gets wrong and consider updating your knowledge base or agent instructions accordingly.
  3. Benchmark agents — Use the Overview tab's agent comparison view to see if any agent has fallen behind others and needs configuration work.

Pro Tips

  • Start with Quality in Insights — A low Faithfulness score across the board is often a sign that your knowledge base is outdated. Updating a few key documents can produce an immediate, measurable improvement in pass rates.
  • Use the pipeline trace to debug, not just to monitor — When a customer reports a bad answer, open that conversation, find the message, and expand the trace. The GROUNDING span almost always reveals whether the agent had access to the right information.
  • Combine emotion filtering with Eval Scores — Filter Conversations to show only anger or sadness emotion, then open each chat and look at the Eval Score Strips. Frustrated customers and low-quality responses tend to cluster together and point to specific agent gaps.
  • Keep report recipient lists current — Add new team members to relevant reports as soon as they join. Automated reports are only valuable if the right people are reading them.
  • Annotate consistently — One annotation is a data point. Fifty annotations on the same type of mistake is a pattern your team can act on. Encourage everyone who reviews conversations to annotate, not just team leads.