Test Runs for simulation show the execution results of your multi-turn datasets. Each run provides a complete conversation transcript between the simulated user and your agent, along with evaluation results, scenario details, and performance metrics.

Why Simulation Test Runs Matter

Simulation test runs provide deep insights into conversational agent performance:
| Capability | Benefit |
| --- | --- |
| Conversation Transcripts | See the full multi-turn dialogue to understand how your agent performed |
| Scenario Details | View goal, persona, user data, and fact checker configuration |
| Turn-by-Turn Tracing | Jump directly to execution traces for each conversation turn |
| Evaluation Results | Review turn-level and session-level evaluator scores |
| Exit Reason Tracking | Understand why conversations ended (goal achieved, failed, abandoned, max turns) |
| Aggregated Metrics | Monitor cost, latency, and success rates across simulations |

Test Runs Dashboard

Navigate to Evaluation → Test Runs from the left navigation panel to see simulation test runs. The dashboard shows the following columns:
| Column | Description |
| --- | --- |
| Name | Name of the test run |
| Type | Multi-turn for simulation test runs |
| Started At | Timestamp when the simulation began |
| Status | Current state: Completed, In Progress, or Failed |
| Dataset | The dataset used for this simulation |
The dashboard also provides controls for narrowing down results:
  • Date Range: Filter runs by time period to compare performance over time
  • Search: Find specific test runs by agent or dataset name
  • Sort: Order by date, status, or dataset

Viewing Test Run Details

Click on any simulation test run to access detailed results.

Summary Metrics

The top of the detail view shows aggregated performance data:
| Metric | Description |
| --- | --- |
| Total Items | Number of scenarios run in this test |
| Passed Items | Count of scenarios that achieved their goals |
| Failed Items | Count of scenarios that did not achieve goals |
| Total Cost | Aggregate token/API cost for all scenarios |
| Total Duration | End-to-end time for the simulation run |
| Average Latency | Mean response time across all turns |
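As a rough illustration, the summary metrics above can be derived from per-scenario results. The record fields below (`passed`, `cost`, `latencies`) are assumed shapes for this sketch, not the platform's actual API:

```python
# Illustrative computation of the summary metrics shown in the detail view.
# The per-scenario record shape is an assumption made for this sketch.
from statistics import mean

scenarios = [
    {"passed": True,  "cost": 0.012, "latencies": [1.2, 0.9]},
    {"passed": False, "cost": 0.020, "latencies": [1.5, 2.1, 1.8]},
]

total_items = len(scenarios)
passed_items = sum(s["passed"] for s in scenarios)
failed_items = total_items - passed_items
total_cost = sum(s["cost"] for s in scenarios)

# Average latency is the mean across all turns, not across scenarios.
all_turn_latencies = [t for s in scenarios for t in s["latencies"]]
average_latency = mean(all_turn_latencies)
```

Note that averaging per-turn rather than per-scenario keeps long conversations from being under-weighted.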

Viewing Scenario Details

Click on any test run item to view the detailed scenario results. This opens a modal with three tabs.

Tab 1: Conversation

The Conversation tab shows the full multi-turn dialogue between the simulated user and your agent.
Features:
  • Turn-by-Turn Display: Each conversation turn is clearly separated
  • User Messages: Shows what the simulated user said
  • Agent Responses: Shows what your agent replied
  • Trace Links: Click View Trace on any turn to see detailed execution traces
  • Turn Index: Track which turn number you’re viewing (Turn 1, Turn 2, etc.)
  • Exit Reason: Shows why the conversation ended
Exit Reasons:
| Exit Reason | Description |
| --- | --- |
| Goal Achieved | The scenario objective was successfully completed |
| Goal Failed | The objective could not be achieved |
| Abandoned | The simulated user gave up or stopped engaging |
| Max Turns Reached | Hit the turn limit before goal completion |
Use the View Trace link to debug specific turns where the agent’s response was unexpected or incorrect. Traces show the full LLM call, tool usage, and latency breakdown.

Tab 2: Evaluation Results

The Evaluation Results tab shows scores from all configured evaluators. Each evaluator produces a normalized score between 0 and 1: scores at or above 0.6 pass, and scores below 0.6 fail.
Example Results:
| Evaluator | Score | Pass/Fail |
| --- | --- | --- |
| Goal Fulfillment | 1.0 | Pass |
| Factual Accuracy | 1.0 | Pass |
| Conversation Completeness | 1.0 | Pass |
| Profile Utilization | 0.75 | Pass |
| Guideline Adherence | 0.5 | Fail |
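The Pass/Fail column follows mechanically from the 0.6 threshold. A minimal sketch, using the evaluator names and scores from the example table:

```python
# Apply the 0.6 pass threshold described above to normalized evaluator scores.
PASS_THRESHOLD = 0.6

def grade(scores: dict[str, float]) -> dict[str, str]:
    """Map each normalized score (0-1) to Pass/Fail using the threshold."""
    return {name: ("Pass" if s >= PASS_THRESHOLD else "Fail")
            for name, s in scores.items()}

results = grade({
    "Goal Fulfillment": 1.0,
    "Profile Utilization": 0.75,
    "Guideline Adherence": 0.5,
})
# Guideline Adherence fails because 0.5 < 0.6
```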

Tab 3: Scenario Details

The Scenario Details tab shows the complete configuration used for this simulation.
Scenario Section:
| Field | Value |
| --- | --- |
| Goal | The scenario objective (e.g., “Get a refund for a damaged product”) |
| Max Turns | Maximum turns allowed (e.g., 5) |
| User Persona | The persona used (e.g., Frustrated 😤) |
User Data Section: Shows all context data provided to the simulated user:
```json
{
  "order id": "3",
  "product name": "laptop stand"
}
```
Fact Checker Section: Shows facts the agent needed to communicate:
```json
{
  "item usage": "unused",
  "refund window": "7 days",
  "days since delivery": "28"
}
```
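To make the fact checker's role concrete, here is a deliberately naive illustration: verify that each configured fact value appears somewhere in the agent's replies. The real fact checker is evaluator-driven rather than a literal substring match; the messages below are invented for the sketch.

```python
# Naive illustration of fact checking: confirm each fact value shows up
# verbatim in the agent side of the transcript. This is a sketch only;
# the platform's fact checker does not work by substring matching.
facts = {"refund window": "7 days", "days since delivery": "28"}

agent_messages = [
    "Our refund window is 7 days from delivery.",
    "Since it has been 28 days, the order is outside that window.",
]

transcript = " ".join(agent_messages)
missing = [name for name, value in facts.items() if value not in transcript]
# An empty `missing` list means every fact was mentioned verbatim.
```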
Provider Configuration Section:
| Field | Value |
| --- | --- |
| Provider | The LLM provider used (e.g., openai) |
| Model | The model used for simulation (e.g., gpt-4.1) |
The Scenario Details tab is crucial for understanding the context of each simulation. It shows exactly what data the simulated user had access to and what facts the agent was expected to communicate.

Analyzing Simulation Results

Identifying Patterns

When reviewing simulation test runs, look for:
  • Goal achievement rates: What percentage of simulations achieved their goals?
  • Persona differences: Does your agent perform better with certain personas? Run the same scenarios with all persona types and compare results.
  • Turn efficiency: Are conversations longer than necessary? Compare turn counts for successful vs failed scenarios.
  • Common failure points: Which turns typically cause issues?
  • Fact accuracy: Are specific facts consistently missed?
  • Cost trends: Monitor total cost across test runs and identify scenarios that consume excessive turns.
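Several of these patterns reduce to simple aggregations over run records. A sketch of the first two, goal-achievement rate overall and per persona, where the record fields and values are assumptions for illustration:

```python
# Compute overall and per-persona goal-achievement rates from run records.
# The field names ("persona", "exit_reason") are assumed for this sketch.
from collections import defaultdict

runs = [
    {"persona": "Frustrated", "exit_reason": "goal_achieved"},
    {"persona": "Frustrated", "exit_reason": "max_turns_reached"},
    {"persona": "Polite",     "exit_reason": "goal_achieved"},
]

overall_rate = sum(r["exit_reason"] == "goal_achieved" for r in runs) / len(runs)

# persona -> [achieved, total]
by_persona = defaultdict(lambda: [0, 0])
for r in runs:
    by_persona[r["persona"]][1] += 1
    if r["exit_reason"] == "goal_achieved":
        by_persona[r["persona"]][0] += 1

persona_rates = {p: achieved / total for p, (achieved, total) in by_persona.items()}
```

A large gap between persona rates is the signal to investigate: it suggests the agent handles some user styles markedly worse than others.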

Debugging Failed Simulations

For each failed scenario:
  1. Review the Conversation tab: Identify where the conversation went wrong
  2. Check the Evaluation Results tab: See which evaluators failed and why
  3. Examine the Scenario Details tab: Verify the user data and facts were correct
  4. Click View Trace: Inspect the full execution flow for problematic turns — check LLM inputs, tool calls, and latency breakdowns

Comparing Across Runs

To track improvement or regression:
  1. Run simulations after each agent update
  2. Compare goal achievement rates and evaluator scores across runs
  3. Investigate scenarios that changed from pass to fail
  4. Track turn efficiency and cost trends over time
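Step 3 above, finding scenarios that flipped from pass to fail, is a straightforward diff of two runs. The scenario names and pass/fail maps here are hypothetical:

```python
# Find regressions between two runs: scenarios that passed in the baseline
# but fail in the latest run. Scenario names are invented for this sketch.
baseline = {"refund_damaged_item": True, "order_status": True, "cancel_order": False}
latest   = {"refund_damaged_item": False, "order_status": True, "cancel_order": True}

regressions = sorted(
    name for name, passed in baseline.items()
    if passed and not latest.get(name, False)
)
# `regressions` lists the scenarios to investigate first.
```

Treating a scenario missing from the latest run as a failure (`latest.get(name, False)`) is a conservative choice; dropped scenarios then surface for review instead of silently disappearing.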

Best Practices

  • Test after every agent change: Run simulations when updating your agent to catch regressions early
  • Create baseline runs: Establish performance benchmarks before making changes
  • Always check traces for failures: Don’t just read the conversation — inspect the execution flow, LLM context, and tool calls
  • Review latency: Identify slow turns that might frustrate real users

Related Pages

  • Simulation Overview - Understand the full simulation framework
  • Datasets - Create scenarios that generate test runs
  • Evaluators - Configure scoring logic for simulations
  • Traces - Debug simulation turns with execution traces
Last modified on March 17, 2026