The Netra SDK exposes a simulation client that lets you:

- Run simulations - Execute multi-turn conversations against your AI agent
- Define tasks - Create custom task implementations to wrap your agent
- Control concurrency - Manage parallel execution for throughput
This page shows how to use the simulation client (`client.simulation`) to run multi-turn simulations and test your AI agents programmatically.
Getting Started
The simulation client is available on the main Netra entry point after initialization.
```typescript
import { Netra } from "netra-sdk-js";

const client = new Netra({
  apiKey: "your-api-key",
});

// Access the simulation client
await client.simulation.runSimulation(...);
```
runSimulation
Execute a multi-turn conversation simulation against a dataset. Your task function is called repeatedly for each turn until the conversation completes.
```typescript
import { Netra } from "netra-sdk-js";
import { BaseTask, TaskResult } from "netra-sdk-js/simulation";

const client = new Netra({ apiKey: "..." });

class MyAgentTask extends BaseTask {
  private agent: any;

  constructor(agent: any) {
    super();
    this.agent = agent;
  }

  async run(message: string, sessionId?: string | null): Promise<TaskResult> {
    const response = await this.agent.chat(message, { sessionId });
    return {
      message: response.text,
      sessionId: sessionId || "default",
    };
  }
}

const result = await client.simulation.runSimulation({
  name: "Customer Support Simulation",
  datasetId: "dataset-123",
  task: new MyAgentTask(myAgent),
  context: { environment: "staging" },
  maxConcurrency: 5,
});

if (result) {
  console.log(`Completed: ${result.completed.length}`);
  console.log(`Failed: ${result.failed.length}`);
}
```
Parameters (SimulationOptions)
| Parameter | Type | Description |
| --- | --- | --- |
| `name` | `string` | Name of the simulation run (required) |
| `datasetId` | `string` | ID of the dataset containing conversation scenarios |
| `task` | `BaseTask` | Task instance that wraps your AI agent |
| `context` | `Record<string, any>?` | Optional context data passed to the simulation |
| `maxConcurrency` | `number` | Maximum parallel conversations (default: 5) |
Response: SimulationResult
| Field | Type | Description |
| --- | --- | --- |
| `success` | `boolean` | Overall success status |
| `totalItems` | `number` | Number of dataset items |
| `completed` | `ConversationResult[]` | Successfully completed items |
| `failed` | `ConversationResult[]` | Failed items with error details |
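Given these fields, a quick post-run summary is straightforward. The sketch below stubs the result shapes locally (the real types come from the SDK) and computes a pass rate from a fabricated sample result:

```typescript
// Local stand-ins mirroring the SimulationResult fields documented above.
interface ConversationResult {
  runItemId: string;
  success: boolean;
  error?: string;
}

interface SimulationResult {
  success: boolean;
  totalItems: number;
  completed: ConversationResult[];
  failed: ConversationResult[];
}

// Fraction of dataset items that completed successfully.
function passRate(result: SimulationResult): number {
  if (result.totalItems === 0) return 0;
  return result.completed.length / result.totalItems;
}

// Fabricated sample result for illustration:
const sample: SimulationResult = {
  success: false,
  totalItems: 4,
  completed: [
    { runItemId: "item-1", success: true },
    { runItemId: "item-2", success: true },
    { runItemId: "item-3", success: true },
  ],
  failed: [{ runItemId: "item-4", success: false, error: "timeout" }],
};

console.log(passRate(sample)); // 0.75
```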
Completed Item (ConversationResult)
| Field | Type | Description |
| --- | --- | --- |
| `runItemId` | `string` | Unique item identifier |
| `success` | `boolean` | Always `true` for completed items |
| `finalTurnId` | `string` | ID of the last turn in the conversation |
Failed Item (ConversationResult)
| Field | Type | Description |
| --- | --- | --- |
| `runItemId` | `string` | Unique item identifier |
| `success` | `boolean` | Always `false` for failed items |
| `error` | `string` | Error message describing the failure |
| `turnId` | `string` | ID of the turn where failure occurred |
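When a run has failures, grouping them by error message makes recurring failure modes easy to spot. A minimal sketch assuming the failed-item fields above, with the type stubbed locally:

```typescript
// Hypothetical shape of a failed ConversationResult, per the table above.
interface FailedItem {
  runItemId: string;
  success: false;
  error: string;
  turnId: string;
}

// Group failed items by error message for triage.
function groupFailures(failed: FailedItem[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const item of failed) {
    const ids = groups.get(item.error) ?? [];
    ids.push(item.runItemId);
    groups.set(item.error, ids);
  }
  return groups;
}

// Fabricated failures for illustration:
const failures: FailedItem[] = [
  { runItemId: "item-2", success: false, error: "timeout", turnId: "turn-5" },
  { runItemId: "item-7", success: false, error: "timeout", turnId: "turn-2" },
  { runItemId: "item-9", success: false, error: "rate limited", turnId: "turn-1" },
];

for (const [error, ids] of groupFailures(failures)) {
  console.log(`${error}: ${ids.join(", ")}`);
}
```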
BaseTask
Create a custom task by extending the BaseTask abstract class. Your implementation wraps your AI agent and handles the conversation flow.
Definition
```typescript
import { BaseTask, TaskResult } from "netra-sdk-js/simulation";

abstract class BaseTask {
  /**
   * Execute a single turn in the conversation
   * @param message - The input message from the simulation
   * @param sessionId - The session identifier
   * @returns The task result containing the response message and session ID
   */
  abstract run(
    message: string,
    sessionId?: string | null
  ): Promise<TaskResult> | TaskResult;
}
```
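The `TaskResult` returned by `run()` is not reproduced on this page; based on how it is used in the examples here, its shape can be sketched as:

```typescript
// Sketch of the TaskResult shape, inferred from the examples on this page.
// The authoritative definition lives in "netra-sdk-js/simulation".
interface TaskResult {
  /** The agent's response message for this turn. */
  message: string;
  /** Session identifier used to maintain conversation context across turns. */
  sessionId: string;
}

const example: TaskResult = {
  message: "Hello! How can I help?",
  sessionId: "session-1",
};
console.log(example.message); // "Hello! How can I help?"
```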
Implementation Requirements
| Requirement | Description |
| --- | --- |
| Extend `BaseTask` | Your class must extend the `BaseTask` abstract class |
| Implement `run()` | The `run()` method must return a `TaskResult` |
| Handle sessions | Maintain conversation context using the `sessionId` |
| Return message | Always return a response message in the `TaskResult` |
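These requirements can be illustrated with a minimal task. `BaseTask` and `TaskResult` are stubbed locally here so the sketch runs standalone; in real code they come from `netra-sdk-js/simulation`:

```typescript
// Local stand-ins for the SDK types, so this sketch runs without the package.
interface TaskResult {
  message: string;
  sessionId: string;
}

abstract class BaseTask {
  abstract run(
    message: string,
    sessionId?: string | null
  ): Promise<TaskResult> | TaskResult;
}

// Minimal task meeting the requirements: extends BaseTask, implements run(),
// reuses the given sessionId (or creates one), and always returns a message.
class EchoTask extends BaseTask {
  run(message: string, sessionId?: string | null): TaskResult {
    return {
      message: `You said: ${message}`,
      sessionId: sessionId ?? "session-" + Date.now(),
    };
  }
}

const task = new EchoTask();
const result = task.run("hello", "s-1");
console.log(result.message); // "You said: hello"
```

Note that `run()` may return a `TaskResult` directly or a `Promise<TaskResult>`; a synchronous return is used here for simplicity.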
Complete Example
```typescript
import { Netra } from "netra-sdk-js";
import { BaseTask, TaskResult } from "netra-sdk-js/simulation";
import OpenAI from "openai";

async function main() {
  // Initialize
  const client = new Netra({
    apiKey: "your-api-key",
  });
  const openai = new OpenAI();

  // Store conversation history per session
  const conversations: Record<string, any[]> = {};

  class OpenAITask extends BaseTask {
    /**
     * Task that uses OpenAI for multi-turn conversations.
     */
    async run(message: string, sessionId?: string | null): Promise<TaskResult> {
      const session = sessionId || crypto.randomUUID();

      // Initialize conversation
      if (!conversations[session]) {
        conversations[session] = [
          {
            role: "system",
            content: "You are a helpful customer support agent.",
          },
        ];
      }

      // Add user message
      conversations[session].push({ role: "user", content: message });

      // Call OpenAI
      const response = await openai.chat.completions.create({
        model: "gpt-4o-mini",
        messages: conversations[session],
      });

      // Store and return response
      const content = response.choices[0].message.content || "";
      conversations[session].push({ role: "assistant", content });

      return {
        message: content,
        sessionId: session,
      };
    }
  }

  // Run simulation
  const result = await client.simulation.runSimulation({
    name: "Customer Support Test",
    datasetId: "your-dataset-id",
    task: new OpenAITask(),
    context: { model: "gpt-4o-mini", temperature: 0 },
    maxConcurrency: 5,
  });

  if (result) {
    // Analyze results
    console.log(`Total: ${result.totalItems}`);
    console.log(`Completed: ${result.completed.length}`);
    console.log(`Failed: ${result.failed.length}`);

    for (const failure of result.failed) {
      console.log(`  Failed ${failure.runItemId}: ${failure.error}`);
    }
  }
}

main();
```
Next Steps