Use Netra.simulation to run multi-turn simulations and test your AI agents programmatically. The simulation client lets you:
- Run simulations - Execute multi-turn conversations against your AI agent
- Define tasks - Create custom task implementations to wrap your agent
- Control concurrency - Manage parallel execution for throughput
Getting Started
The simulation client is available on the main Netra entry point after initialization.
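For example, a minimal sketch of accessing the client (the import path and the initialization call shown as `Netra.init()` are assumptions for illustration, not confirmed by this reference):

```python
from netra import Netra  # import path assumed for illustration

# Initialize the SDK first; the real init signature and arguments may differ.
Netra.init(api_key="YOUR_API_KEY")

# After initialization, the simulation client is exposed on the main entry point.
simulation = Netra.simulation
```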
run_simulation
Execute a multi-turn conversation simulation against a dataset. Your task function is called repeatedly for each turn until the conversation completes.
Parameters
| Parameter | Type | Description |
|---|---|---|
| name | str | Name of the simulation run (required) |
| dataset_id | str | ID of the dataset containing conversation scenarios |
| task | BaseTask | Task instance that wraps your AI agent |
| context | dict? | Optional context data passed to the simulation |
| max_concurrency | int | Maximum parallel conversations (default: 5) |
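A minimal sketch of wrapping an agent and starting a run. The import paths, the `BaseTask` hook name (`run` below), and its signature are assumptions for illustration; only the `run_simulation` parameters come from the table above:

```python
from netra import Netra
from netra.simulation import BaseTask  # import path assumed


class EchoAgentTask(BaseTask):
    # Hypothetical per-turn hook; the real BaseTask interface may differ.
    def run(self, message: str, context: dict) -> str:
        # Replace this stub with a call into your own AI agent.
        return f"You said: {message}"


result = Netra.simulation.run_simulation(
    name="checkout-flow-regression",
    dataset_id="ds_123",                 # dataset with conversation scenarios
    task=EchoAgentTask(),
    context={"environment": "staging"},  # optional context data
    max_concurrency=5,                   # default shown explicitly
)
```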
Response
| Field | Type | Description |
|---|---|---|
| success | bool | Overall success status |
| total_items | int | Number of dataset items |
| completed | list[dict] | Successfully completed items |
| failed | list[dict] | Failed items with error details |
Completed Item
| Field | Type | Description |
|---|---|---|
| run_item_id | str | Unique item identifier |
| success | bool | Always True for completed items |
| final_turn_id | str | ID of the last turn in the conversation |
Failed Item
| Field | Type | Description |
|---|---|---|
| run_item_id | str | Unique item identifier |
| success | bool | Always False for failed items |
| error | str | Error message describing the failure |
| turn_id | str | ID of the turn where failure occurred |
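Continuing the sketch above, the response can be inspected along the fields documented in these tables. Dict-style access is an assumption; the client may return an object with attributes instead:

```python
if result["success"]:
    print(f"All {result['total_items']} conversations completed.")
else:
    # Each failed item reports where and why the conversation broke down.
    for item in result["failed"]:
        print(f"{item['run_item_id']} failed at turn {item['turn_id']}: {item['error']}")

# Completed items record the final turn reached in each conversation.
for item in result["completed"]:
    print(f"{item['run_item_id']} finished at turn {item['final_turn_id']}")
```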
Complete Example
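An end-to-end sketch combining the pieces above. The import paths, the initialization call, the `BaseTask` hook, and dict-style response access are assumptions; the `run_simulation` parameters and response fields follow the tables in this reference:

```python
from netra import Netra
from netra.simulation import BaseTask  # import path assumed


class SupportAgentTask(BaseTask):
    # Hypothetical per-turn hook; adapt to the real BaseTask interface.
    def run(self, message: str, context: dict) -> str:
        # Call your AI agent here and return its reply for this turn.
        return f"(agent reply to: {message})"


def main() -> None:
    # Initialize the SDK (exact signature assumed).
    Netra.init(api_key="YOUR_API_KEY")

    result = Netra.simulation.run_simulation(
        name="support-bot-nightly",
        dataset_id="ds_123",
        task=SupportAgentTask(),
        context={"environment": "staging"},
        max_concurrency=5,
    )

    print(f"{len(result['completed'])} / {result['total_items']} conversations succeeded")
    for item in result["failed"]:
        print(f"  {item['run_item_id']}: {item['error']} (turn {item['turn_id']})")


if __name__ == "__main__":
    main()
```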
Next Steps
- Dashboard Query - Query dashboard metrics
- Usage Utilities - Query traces and spans
- Simulation Cookbooks - In-depth tutorials
- Evaluation - Evaluate AI outputs