Simulation test runs show the execution results of your multi-turn datasets. Each run provides a complete conversation transcript between the simulated user and your agent, along with evaluation results, scenario details, and performance metrics.

Why Simulation Test Runs Matter

Simulation test runs provide deep insights into conversational agent performance:

| Capability | Benefit |
| --- | --- |
| Conversation Transcripts | See the full multi-turn dialogue to understand how your agent performed |
| Scenario Details | View goal, persona, user data, and fact checker configuration |
| Turn-by-Turn Tracing | Jump directly to execution traces for each conversation turn |
| Evaluation Results | Review turn-level and session-level evaluator scores |
| Exit Reason Tracking | Understand why conversations ended (goal achieved, failed, abandoned, max turns) |
| Aggregated Metrics | Monitor cost, latency, and success rates across simulations |

Test Runs Dashboard

Navigate to Evaluation → Test Runs from the left navigation panel. Filter by the Multi-turn type to see simulation test runs.

| Column | Description |
| --- | --- |
| Name | The agent that was tested in the simulation |
| Type | Multi-turn for simulation test runs |
| Started At | Timestamp when the simulation began |
| Status | Current state: Completed, In Progress, or Failed |
| Dataset | The dataset used for this simulation |

The dashboard supports filtering and sorting:
  • Turn Type: Filter to show only Multi-turn (simulation) test runs
  • Date Range: Filter runs by time period to compare performance over time
  • Search: Find specific test runs by agent or dataset name
  • Sort: Order by date, status, or dataset

Viewing Test Run Details

Click on any simulation test run to access detailed results.

Summary Metrics

The top of the detail view shows aggregated performance data:

| Metric | Description |
| --- | --- |
| Total Items | Number of scenarios run in this test |
| Passed Items | Count of scenarios that achieved their goals |
| Failed Items | Count of scenarios that did not achieve goals |
| Total Cost | Aggregate token/API cost for all scenarios |
| Total Duration | End-to-end time for the simulation run |
| Average Latency | Mean response time across all turns |
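
If you export run items for offline analysis, these aggregates are easy to recompute. The following is a minimal Python sketch; the export format and field names (`passed`, `cost`, `latencies_ms`) are assumptions for illustration, not the platform's schema.

```python
import json
from statistics import mean

# Hypothetical export of run items; field names are assumptions, not the real schema.
items = json.loads("""[
  {"scenario": "refund-damaged-item",  "passed": true,  "cost": 0.042, "latencies_ms": [820, 950, 1100]},
  {"scenario": "refund-out-of-window", "passed": false, "cost": 0.061, "latencies_ms": [900, 1300, 1250, 1400]}
]""")

total = len(items)
passed = sum(1 for i in items if i["passed"])
all_latencies = [ms for i in items for ms in i["latencies_ms"]]

print(f"Total items: {total}")
print(f"Passed items: {passed} ({passed / total:.0%})")
print(f"Total cost: ${sum(i['cost'] for i in items):.3f}")
print(f"Average latency: {mean(all_latencies):.0f} ms")
```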

Viewing Scenario Details

Click on any test run item to view the detailed scenario results. This opens a modal with three tabs.

Tab 1: Conversation

The Conversation tab shows the full multi-turn dialogue between the simulated user and your agent.

Features:
  • Turn-by-Turn Display: Each conversation turn is clearly separated
  • User Messages: Shows what the simulated user said
  • Agent Responses: Shows what your agent replied
  • Trace Links: Click View Trace on any turn to see detailed execution traces
  • Turn Index: Track which turn number you’re viewing (Turn 1, Turn 2, etc.)
  • Exit Reason: Shows why the conversation ended
Exit Reasons:

| Exit Reason | Description |
| --- | --- |
| Goal Achieved | The scenario objective was successfully completed |
| Goal Failed | The objective could not be achieved |
| Abandoned | The simulated user gave up or stopped engaging |
| Max Turns Reached | Hit the turn limit before goal completion |
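
When post-processing exported results, tallying exit reasons is a quick way to see how conversations typically end. A minimal sketch, assuming each exported item carries an `exit_reason` string matching the labels above (the field name and values are assumptions):

```python
from collections import Counter

# Hypothetical per-scenario results; "exit_reason" values mirror the labels above.
results = [
    {"scenario": "refund-damaged-item",  "exit_reason": "goal_achieved"},
    {"scenario": "refund-out-of-window", "exit_reason": "goal_failed"},
    {"scenario": "order-status-lookup",  "exit_reason": "max_turns_reached"},
]

for reason, count in Counter(r["exit_reason"] for r in results).most_common():
    print(f"{reason}: {count}")
```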
Conversation Flow:

```
Turn 1:
User: "Hi, I need help with a refund for order ORD-123456."
Agent: "I'd be happy to help you with that refund. Let me look up your order..."
[View Trace]

Turn 2:
User: "The product arrived damaged, and I'd like a full refund."
Agent: "I'm sorry to hear that. I've located your order for Wireless Headphones..."
[View Trace]

Turn 3:
...
```
Use the View Trace link to debug specific turns where the agent’s response was unexpected or incorrect. Traces show the full LLM call, tool usage, and latency breakdown.

Tab 2: Evaluation Results

The Evaluation Results tab shows scores from all configured evaluators.

Overall conversation scores:

| Evaluator | Score | Pass/Fail |
| --- | --- | --- |
| Goal Fulfillment | 1 | Pass |
| Factual Accuracy | 1 | Pass |
| Conversation Completeness | 1 | Pass |
| Profile Utilization | 0.75 | Pass |
| Guideline Adherence | 0.5 | Fail |
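
Pass/fail is determined by each evaluator's configured pass criteria, not by the raw score alone. Purely as an illustration, the sketch below applies a single assumed threshold of 0.75 to the scores above; real evaluators define their own criteria.

```python
# Illustrative only: assumes every evaluator passes at score >= 0.75.
# Real pass criteria are configured per evaluator.
scores = {
    "Goal Fulfillment": 1.0,
    "Factual Accuracy": 1.0,
    "Conversation Completeness": 1.0,
    "Profile Utilization": 0.75,
    "Guideline Adherence": 0.5,
}

PASS_THRESHOLD = 0.75  # assumed, not a platform default

for evaluator, score in scores.items():
    verdict = "Pass" if score >= PASS_THRESHOLD else "Fail"
    print(f"{evaluator}: {score} -> {verdict}")
```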

Tab 3: Scenario Details

The Scenario Details tab shows the complete configuration used for this simulation.

Scenario Section:

| Field | Value |
| --- | --- |
| Goal | The scenario objective (e.g., “Get a refund for a damaged product”) |
| Max Turns | Maximum turns allowed (e.g., 5) |
| User Persona | The persona used (e.g., Frustrated 😤) |
User Data Section: Shows all context data provided to the simulated user:

```json
{
  "order id": "3",
  "product name": "laptop stand"
}
```
Fact Checker Section: Shows facts the agent needed to communicate:

```json
{
  "item usage": "unused",
  "refund window": "7 days",
  "days since delivery": "28"
}
```
Provider Configuration Section:

| Field | Value |
| --- | --- |
| Provider | The LLM provider used (e.g., openai) |
| Model | The model used for simulation (e.g., gpt-5) |
The Scenario Details tab is crucial for understanding the context of each simulation. It shows exactly what data the simulated user had access to and what facts the agent was expected to communicate.
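
Taken together, the Scenario, User Data, Fact Checker, and Provider Configuration sections describe everything the simulation ran with. As a rough sketch, a single scenario could be represented like this; the structure below is illustrative only, not the platform's dataset schema.

```python
import json

# Illustrative scenario definition combining the sections above.
# This structure is an assumption for discussion, not the platform's dataset schema.
scenario = {
    "goal": "Get a refund for a damaged product",
    "max_turns": 5,
    "user_persona": "Frustrated",
    "user_data": {
        "order id": "3",
        "product name": "laptop stand",
    },
    "fact_checker": {
        "item usage": "unused",
        "refund window": "7 days",
        "days since delivery": "28",
    },
    "provider": {"provider": "openai", "model": "gpt-5"},
}

print(json.dumps(scenario, indent=2))
```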

Analyzing Simulation Results

Identifying Patterns

When reviewing simulation test runs, look for the following (a small analysis sketch follows the list):
  • Goal achievement rates: What percentage of simulations achieved their goals?
  • Persona differences: Does your agent perform better with certain personas?
  • Turn efficiency: Are conversations longer than necessary?
  • Common failure points: Which turns typically cause issues?
  • Fact accuracy: Are specific facts consistently missed?
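
A small sketch of this kind of analysis, assuming you can export per-scenario results with persona, pass/fail, and turn-count fields (all field names here are illustrative):

```python
from collections import defaultdict

# Hypothetical exported results; "persona", "passed", and "turns" are assumed field names.
results = [
    {"persona": "Frustrated", "passed": False, "turns": 5},
    {"persona": "Frustrated", "passed": True,  "turns": 4},
    {"persona": "Friendly",   "passed": True,  "turns": 3},
    {"persona": "Confused",   "passed": True,  "turns": 6},
]

by_persona = defaultdict(list)
for r in results:
    by_persona[r["persona"]].append(r)

for persona, runs in by_persona.items():
    rate = sum(r["passed"] for r in runs) / len(runs)
    avg_turns = sum(r["turns"] for r in runs) / len(runs)
    print(f"{persona}: goal achievement {rate:.0%}, avg turns {avg_turns:.1f}")
```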

Debugging Failed Simulations

For each failed scenario:
  1. Review the Conversation tab: Identify where the conversation went wrong
  2. Check the Evaluation Results tab: See which evaluators failed and why
  3. Examine the Scenario Details tab: Verify the user data and facts were correct
  4. Click View Trace: Inspect the full execution flow for problematic turns
  5. Review LLM inputs: Check what context and prompts were sent to the LLM

Comparing Across Runs

To track improvement or regression:
  1. Run simulations after each agent update (abilities, constraints, or prompt changes)
  2. Compare goal achievement rates across runs
  3. Investigate scenarios that changed from pass to fail (see the comparison sketch after this list)
  4. Track turn efficiency and cost trends over time
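
A minimal sketch for spotting regressions between two runs, assuming per-scenario pass/fail outcomes can be exported for each run (the data shapes below are illustrative):

```python
# Hypothetical per-scenario outcomes for two runs, keyed by scenario name.
baseline = {"refund-damaged-item": True, "refund-out-of-window": True,  "order-status-lookup": True}
latest   = {"refund-damaged-item": True, "refund-out-of-window": False, "order-status-lookup": True}

def rate(run):
    # Goal achievement rate: passed scenarios / total scenarios.
    return sum(run.values()) / len(run)

print(f"Baseline goal achievement: {rate(baseline):.0%}")
print(f"Latest goal achievement:   {rate(latest):.0%}")

# Scenarios that passed in the baseline but fail in the latest run.
regressions = [s for s in baseline if baseline[s] and not latest.get(s, False)]
print("Pass -> fail scenarios:", regressions or "none")
```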

Use Cases

Agent Capability Testing

Validate new agent abilities:
  1. Create scenarios that exercise the new capability
  2. Run simulations with multiple personas
  3. Verify goal achievement and fact accuracy
  4. Analyze conversation quality across different user types

Constraint Compliance Testing

Ensure your agent respects boundaries:
  1. Create scenarios that attempt to violate constraints
  2. Use turn-level constraint adherence evaluators
  3. Verify the agent refuses or escalates appropriately
  4. Check that violations are caught in every turn

Persona Optimization

Optimize performance across user types:
  1. Run the same scenarios with all personas (neutral, friendly, frustrated, confused)
  2. Compare goal achievement rates by persona
  3. Identify which personas cause the most issues
  4. Refine agent abilities to handle challenging personas

Cost and Efficiency Monitoring

Track simulation costs and turn counts:
  1. Monitor total cost trends across test runs
  2. Compare turn counts for successful vs. failed scenarios (see the sketch after this list)
  3. Identify scenarios that consume excessive turns
  4. Optimize prompts to reduce turn count while maintaining quality
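
A small sketch of this kind of monitoring, assuming exported per-scenario results with turn counts and costs (the field names and the turn threshold are assumptions):

```python
# Hypothetical exported results; field names are assumptions, not the export schema.
results = [
    {"scenario": "refund-damaged-item",  "passed": True,  "turns": 4, "cost": 0.042},
    {"scenario": "refund-out-of-window", "passed": False, "turns": 8, "cost": 0.097},
    {"scenario": "order-status-lookup",  "passed": True,  "turns": 3, "cost": 0.031},
]

def summarize(group):
    turns = [r["turns"] for r in group]
    cost = sum(r["cost"] for r in group)
    return f"avg turns {sum(turns) / len(turns):.1f}, total cost ${cost:.3f}"

passed = [r for r in results if r["passed"]]
failed = [r for r in results if not r["passed"]]

print("Passed:", summarize(passed))
print("Failed:", summarize(failed))

# Flag scenarios that consume an unusually high number of turns (threshold assumed).
for r in results:
    if r["turns"] >= 7:
        print("Review:", r["scenario"], f"({r['turns']} turns)")
```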

Best Practices

Regular Simulation Testing

  • Test after every agent change: Run simulations when updating abilities or constraints
  • Create baseline runs: Establish performance benchmarks before making changes
  • Track metrics over time: Monitor goal achievement rates, costs, and turn counts

Scenario Coverage

  • Test all personas: Run key scenarios with each persona type
  • Include edge cases: Create scenarios that challenge your agent’s boundaries
  • Vary complexity: Mix simple (2-3 turn) and complex (7-10 turn) scenarios
  • Update scenarios: Add new scenarios as you discover failure patterns

Trace-Level Debugging

  • Always check traces for failures: Don’t just look at the conversation—inspect the execution
  • Verify LLM context: Ensure the agent received the right information
  • Check tool calls: Verify the agent used tools correctly
  • Review latency: Identify slow turns that might frustrate real users

Related Resources

  • Simulation Overview - Understand the full simulation framework
  • Datasets - Create scenarios that generate test runs
  • Evaluators - Configure scoring logic for simulations
  • Agents - Define agents to test
  • Traces - Debug simulation turns with execution traces