The Netra SDK exposes a simulation client that lets you:

- **Run simulations**: Execute multi-turn conversations against your AI agent
- **Define tasks**: Create custom task implementations to wrap your agent
- **Control concurrency**: Manage parallel execution for throughput
This page shows how to use Netra.simulation to run multi-turn simulations and test your AI agents programmatically.
## Getting Started
The simulation client is available on the main Netra entry point after initialization.
```python
from netra import Netra

Netra.init(app_name="sample-app")

# Access the simulation client
Netra.simulation.run_simulation(...)
```
## run_simulation
Execute a multi-turn conversation simulation against a dataset. Your task function is called repeatedly for each turn until the conversation completes.
```python
from netra import Netra
from netra.simulation.task import BaseTask
from netra.simulation.models import TaskResult

Netra.init(app_name="sample-app")


class MyAgentTask(BaseTask):
    def __init__(self, agent):
        self.agent = agent

    def run(self, message: str, session_id: str | None = None) -> TaskResult:
        response = self.agent.chat(message, session_id=session_id)
        return TaskResult(
            message=response.text,
            session_id=session_id or "default",
        )


result = Netra.simulation.run_simulation(
    name="Customer Support Simulation",
    dataset_id="dataset-123",
    task=MyAgentTask(my_agent),
    context={"environment": "staging"},
    max_concurrency=5,
)

print(f"Completed: {len(result['completed'])}")
print(f"Failed: {len(result['failed'])}")
```
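To make the per-turn calling convention concrete, here is a minimal stand-alone sketch of how a driver might call a task once per user turn, threading the session ID through the conversation. Everything in it (`drive_conversation`, `EchoTask`, the stand-in `TaskResult`) is hypothetical and illustrative only, not the SDK's actual internals:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TaskResult:  # stand-in for netra.simulation.models.TaskResult
    message: str
    session_id: str


class EchoTask:
    """Trivial task that echoes the user message back."""

    def run(self, message: str, session_id: Optional[str] = None) -> TaskResult:
        return TaskResult(message=f"echo: {message}", session_id=session_id or "s-1")


def drive_conversation(task, user_turns):
    """Call the task once per scripted user turn, reusing the returned session_id."""
    session_id = None
    replies = []
    for turn in user_turns:
        result = task.run(turn, session_id=session_id)
        session_id = result.session_id  # thread the session through subsequent turns
        replies.append(result.message)
    return replies


replies = drive_conversation(EchoTask(), ["hi", "bye"])
```

The key point is that the same `session_id` is passed back on every turn after the first, which is what lets your task maintain per-conversation state.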
### Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| `name` | `str` | Name of the simulation run (required) |
| `dataset_id` | `str` | ID of the dataset containing conversation scenarios |
| `task` | `BaseTask` | Task instance that wraps your AI agent |
| `context` | `dict?` | Optional context data passed to the simulation |
| `max_concurrency` | `int` | Maximum parallel conversations (default: 5) |
### Response
The response is a dict with the following top-level fields:

| Field | Type | Description |
| --- | --- | --- |
| `success` | `bool` | Overall success status |
| `total_items` | `int` | Number of dataset items |
| `completed` | `list[dict]` | Successfully completed items |
| `failed` | `list[dict]` | Failed items with error details |

Each entry in `completed` contains:

| Field | Type | Description |
| --- | --- | --- |
| `run_item_id` | `str` | Unique item identifier |
| `success` | `bool` | Always `True` for completed items |
| `final_turn_id` | `str` | ID of the last turn in the conversation |

Each entry in `failed` contains:

| Field | Type | Description |
| --- | --- | --- |
| `run_item_id` | `str` | Unique item identifier |
| `success` | `bool` | Always `False` for failed items |
| `error` | `str` | Error message describing the failure |
| `turn_id` | `str` | ID of the turn where the failure occurred |
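Given this response shape, a typical post-run summary might look like the following. Note that `sample_result` here is a hand-built example matching the documented fields, not output from a real run:

```python
# Hypothetical response dict in the documented shape of run_simulation's return value.
sample_result = {
    "success": False,
    "total_items": 3,
    "completed": [
        {"run_item_id": "item-1", "success": True, "final_turn_id": "turn-9"},
        {"run_item_id": "item-2", "success": True, "final_turn_id": "turn-4"},
    ],
    "failed": [
        {"run_item_id": "item-3", "success": False,
         "error": "agent timeout", "turn_id": "turn-2"},
    ],
}

# Summarize the run from the documented fields.
pass_rate = len(sample_result["completed"]) / sample_result["total_items"]
errors = {f["run_item_id"]: f["error"] for f in sample_result["failed"]}
```

Because each failed entry carries both `turn_id` and `error`, you can pinpoint exactly which turn in which conversation broke.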
## BaseTask
Create a custom task by inheriting from the BaseTask abstract base class. Your implementation wraps your AI agent and handles the conversation flow.
### Definition
```python
from abc import ABC, abstractmethod
from typing import Awaitable, Optional

from netra.simulation.models import TaskResult


class BaseTask(ABC):
    @abstractmethod
    def run(self, message: str, session_id: Optional[str] = None) -> TaskResult | Awaitable[TaskResult]:
        """Execute a single turn in the conversation.

        Args:
            message: The input message from the simulation.
            session_id: The session identifier (optional).

        Returns:
            TaskResult: The task result containing the response message and session ID.
        """
```
### Implementation Requirements
| Requirement | Description |
| --- | --- |
| Inherit `BaseTask` | Your class must inherit from the `BaseTask` class |
| Implement `run()` | The `run()` method must return a `TaskResult` (can be async) |
| Handle sessions | Maintain conversation context using the `session_id` |
| Return message | Always return a response message in the `TaskResult` |
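Since `run()` may return an `Awaitable[TaskResult]`, an async implementation is also valid. The sketch below uses a stand-in `TaskResult` dataclass and omits the `BaseTask` import so it runs without the SDK installed; a real implementation would subclass `netra.simulation.task.BaseTask` and return the real `TaskResult`:

```python
import asyncio
from dataclasses import dataclass
from typing import Optional


@dataclass
class TaskResult:  # stand-in for netra.simulation.models.TaskResult
    message: str
    session_id: str


class AsyncEchoTask:  # in real code: inherit from netra.simulation.task.BaseTask
    """Minimal async task: run() is a coroutine returning a TaskResult."""

    async def run(self, message: str, session_id: Optional[str] = None) -> TaskResult:
        await asyncio.sleep(0)  # placeholder for an awaited agent call
        return TaskResult(message=message.upper(), session_id=session_id or "s-1")


result = asyncio.run(AsyncEchoTask().run("hello"))
```

An async `run()` is the natural fit when your agent's client library is async (e.g. `AsyncOpenAI`), since it lets concurrent conversations overlap their network waits.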
### Complete Example
```python
import uuid

from netra import Netra
from netra.simulation.task import BaseTask
from netra.simulation.models import TaskResult
from openai import OpenAI

# Initialize
Netra.init(
    app_name="simulation-demo",
    headers="x-api-key=your-api-key",
)

client = OpenAI()

# Store conversation history per session
conversations: dict[str, list] = {}


class OpenAITask(BaseTask):
    """Task that uses OpenAI for multi-turn conversations."""

    def run(self, message: str, session_id: str | None = None) -> TaskResult:
        session = session_id or str(uuid.uuid4())

        # Initialize conversation
        if session not in conversations:
            conversations[session] = [
                {"role": "system", "content": "You are a helpful customer support agent."}
            ]

        # Add user message
        conversations[session].append({"role": "user", "content": message})

        # Call OpenAI
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=conversations[session],
        )

        # Store and return response
        content = response.choices[0].message.content
        conversations[session].append({"role": "assistant", "content": content})
        return TaskResult(message=content, session_id=session)


# Run simulation
result = Netra.simulation.run_simulation(
    name="Customer Support Test",
    dataset_id="your-dataset-id",
    task=OpenAITask(),
    context={"model": "gpt-4o-mini", "temperature": 0},
    max_concurrency=5,
)

# Analyze results
print(f"Total: {result['total_items']}")
print(f"Completed: {len(result['completed'])}")
print(f"Failed: {len(result['failed'])}")

for failure in result["failed"]:
    print(f"  Failed {failure['run_item_id']}: {failure['error']}")

Netra.shutdown()
```
## Next Steps