AI agents don’t just answer questions; they engage in complex, multi-turn conversations to achieve goals. Netra’s Simulation framework lets you test these interactions systematically, simulating realistic user behaviors to validate your agent’s performance before deployment.
Quick Start: Simulation
New to simulations? Get your first simulation running in minutes.
Why Simulation Matters
Traditional testing falls short for conversational agents. Simulations provide a comprehensive way to test multi-turn interactions with realistic user behaviors:

| Question | What Netra Simulates |
|---|---|
| Does my agent handle multi-turn conversations correctly? | Full conversation flows with simulated user responses |
| Can my agent achieve specific goals? | Goal-oriented scenarios with success/failure tracking |
| How does my agent perform with different user personas? | Frustrated, confused, friendly, or neutral users |
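The table above boils down to one loop: a simulated user produces each next message until the scenario ends, and evaluators then judge the transcript. A minimal sketch of that loop follows; the function names are illustrative, not part of any Netra API (Netra runs this orchestration for you):

```python
# Minimal multi-turn simulation loop (illustrative names, not a Netra API).
def simulated_user(goal, history):
    # A real simulated user is LLM-driven and persona-aware; this toy one
    # states its goal once, then ends the conversation.
    if not history:
        return goal
    return None

def agent(message):
    # Stand-in for the agent under test.
    return "Sure, I can help with that: " + message

history = []
while True:
    user_msg = simulated_user("Get a refund from customer support", history)
    if user_msg is None:
        break
    reply = agent(user_msg)
    history.append(("user", user_msg))
    history.append(("agent", reply))

print(history)
```

In a real run, the loop continues for as many turns as the scenario needs, and the finished `history` is what the evaluators score.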
Core Building Blocks
The Simulation suite is built on three interconnected pillars: Evaluators, Datasets, and Test Runs.
Evaluators
Evaluators assess the entire conversation after it completes. Netra provides 8 preconfigured library evaluators in two categories:

| Category | Evaluators |
|---|---|
| Quality | Guideline Adherence, Conversation Completeness, Profile Utilization, Conversational Flow, Conversation Memory, Factual Accuracy |
| Agentic | Goal Fulfillment, Information Elicitation |
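To make "assess the entire conversation after it completes" concrete, here is a toy version of a Goal Fulfillment-style check that verifies the agent stated every required fact. The data shapes and function are assumptions for illustration only, not Netra's actual evaluator API:

```python
from dataclasses import dataclass

# Hypothetical transcript shape -- not Netra's actual API.
@dataclass
class Turn:
    role: str      # "user" or "agent"
    content: str

def goal_fulfillment(transcript, required_facts):
    """Toy 'Goal Fulfillment'-style check: did the agent state every fact?"""
    agent_text = " ".join(t.content.lower() for t in transcript if t.role == "agent")
    missing = [f for f in required_facts if f.lower() not in agent_text]
    return {"passed": not missing, "missing_facts": missing}

transcript = [
    Turn("user", "I'd like a refund for order 1042."),
    Turn("agent", "Refunds for order 1042 are processed within 5 business days."),
]
result = goal_fulfillment(transcript, ["5 business days"])
print(result)
```

The key point is that the evaluator sees the whole transcript at once, so it can check properties (memory, completeness, factual accuracy) that no single-turn test can.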
Datasets
Datasets are collections of simulation scenarios that define multi-turn conversation goals.

| Feature | Description |
|---|---|
| Multi-Turn Scenarios | Define conversation goals with simulated user interactions |
| User Personas | Choose from neutral, friendly, frustrated, confused, or custom personas |
| User Data & Facts | Provide context data and facts the agent must communicate correctly |
| Variable Mapping | Map evaluator inputs to scenario fields, agent responses, or conversation metadata |
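Putting those features together, a single scenario record might look like the following sketch. Every field name here is an assumption for illustration, not Netra's actual schema:

```python
# Illustrative scenario record (field names are assumptions, not Netra's schema).
scenario = {
    "goal": "Get a refund from customer support",
    "persona": "frustrated",  # neutral | friendly | frustrated | confused | custom
    "user_data": {"order_id": "1042", "purchase_date": "2024-03-01"},
    "facts": ["Refunds take 5 business days"],  # facts the agent must communicate
    "variable_mapping": {
        # evaluator input -> scenario field or conversation metadata
        "expected_goal": "scenario.goal",
        "agent_output": "conversation.agent_turns",
    },
}
print(scenario["goal"])
```

The variable mapping is what wires a dataset to its evaluators: each evaluator input is resolved from a scenario field, an agent response, or conversation metadata at run time.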
Test Runs
Test Runs execute your simulation scenarios, providing detailed conversation transcripts and evaluation results.

| Feature | Description |
|---|---|
| Conversation Transcript | Full multi-turn dialogue between simulated user and agent |
| Scenario Details | View goal, persona, user data, and fact checker configuration |
| Trace Integration | Link directly to execution traces for each turn to debug issues |
| Aggregated Metrics | View total cost, average latency, and pass/fail rates across the simulation |
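The aggregated metrics are simple rollups over the per-scenario runs in a test run. A sketch of the arithmetic, using illustrative record fields rather than Netra's export format:

```python
# Rolling up per-scenario runs into test-run summary metrics
# (record fields are illustrative, not Netra's export format).
runs = [
    {"passed": True,  "cost_usd": 0.012, "latency_ms": 840},
    {"passed": False, "cost_usd": 0.019, "latency_ms": 1210},
    {"passed": True,  "cost_usd": 0.015, "latency_ms": 905},
]

total_cost = sum(r["cost_usd"] for r in runs)            # total cost
avg_latency = sum(r["latency_ms"] for r in runs) / len(runs)  # average latency
pass_rate = sum(r["passed"] for r in runs) / len(runs)   # pass/fail rate

print(f"cost=${total_cost:.3f} avg_latency={avg_latency:.0f}ms pass_rate={pass_rate:.0%}")
```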
Use Cases
Goal Achievement Testing
Validate that your agent can successfully complete user objectives:
- Create scenarios with specific goals (e.g., “Get a refund from customer support”)
- Define what facts the agent must communicate
- Run simulations and verify goal achievement across different personas
- Analyze conversation transcripts to understand failure points
Persona-Based Testing
Test agent performance with different user types:
- Create datasets with various personas (frustrated, confused, friendly)
- Run the same scenario across all personas
- Compare results to identify which personas your agent handles poorly
- Refine your agent based on insights
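The comparison step above amounts to grouping results by persona and ranking pass rates. A sketch, again with illustrative result records rather than Netra's export format:

```python
from collections import defaultdict

# Grouping simulation results by persona to spot where the agent struggles
# (result records are illustrative, not Netra's export format).
results = [
    {"persona": "friendly",   "passed": True},
    {"persona": "friendly",   "passed": True},
    {"persona": "frustrated", "passed": False},
    {"persona": "frustrated", "passed": True},
    {"persona": "confused",   "passed": False},
]

by_persona = defaultdict(list)
for r in results:
    by_persona[r["persona"]].append(r["passed"])

pass_rates = {p: sum(v) / len(v) for p, v in by_persona.items()}
worst = min(pass_rates, key=pass_rates.get)  # persona with the lowest pass rate
print(pass_rates, "worst:", worst)
```

The lowest-scoring persona tells you where to focus: its conversation transcripts show exactly which turns the agent mishandled.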
Getting Started
Configure Evaluators
Set up evaluators to define your scoring criteria — choose from the library or create custom ones.
Create a Dataset
Build a multi-turn dataset with simulation scenarios, user personas, and facts to verify.
Run Simulations
Execute your dataset and view conversation transcripts and results in Test Runs.
Related
- Evaluators - Configure scoring logic and criteria
- Datasets - Create multi-turn simulation scenarios
- Test Runs - Analyze simulation results and conversation transcripts
- Traces - Understand how simulations connect to trace data
