Migrating Tracing / Observability
Helicone captures LLM traffic at the gateway. Netra adds hierarchical traces with nested spans, so you can follow complete multi-step agent workflows, not just individual model calls.

Option A: Use the Netra SDK (Recommended)
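If your app already talks to OpenAI through its official SDK, the lowest-effort move is to leave those calls in place and only change where they point. The sketch below uses only the Python standard library to show exactly which URL and header change when routing through Helicone's OpenAI-compatible gateway; the gateway URL and `Helicone-Auth` header come from Helicone's own docs, and nothing Netra-specific is assumed:

```python
import urllib.request

# Helicone's OpenAI-compatible gateway (per Helicone's docs). With the
# official OpenAI SDK you would pass this as base_url instead.
HELICONE_BASE_URL = "https://oai.helicone.ai/v1"

def build_chat_request(openai_key: str, helicone_key: str) -> urllib.request.Request:
    """Stdlib sketch: shows which URL and headers change behind the gateway.

    In real code, construct your OpenAI client with base_url=HELICONE_BASE_URL
    and Helicone-Auth as an extra default header, and keep everything else.
    """
    return urllib.request.Request(
        HELICONE_BASE_URL + "/chat/completions",
        headers={
            "Authorization": f"Bearer {openai_key}",   # your OpenAI key, unchanged
            "Helicone-Auth": f"Bearer {helicone_key}",  # Helicone's auth header
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Only the base URL and the extra auth header differ from a direct OpenAI call, which is why the rest of your application code can stay untouched during the transition.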
Netra ships official SDKs for Python and TypeScript to instrument application code directly, plus native integrations across 50+ frameworks and model providers, including OpenAI, LangChain, LlamaIndex, Anthropic, and more. See the full integrations overview to pick the integration that matches your stack. If you're using OpenAI, the simplest migration is to keep your existing OpenAI SDK calls; since Helicone is OpenAI-compatible, you can even keep Helicone as a gateway during the transition.

Option B: Use OpenTelemetry
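For the OTLP route, the standard exporter environment variables defined by the OpenTelemetry specification are usually all that changes — the endpoint and key values below are placeholders for whatever your Netra project settings show:

```shell
# Standard OpenTelemetry OTLP exporter settings (OTEL_* variables from the
# OpenTelemetry spec). Endpoint and key are placeholders — substitute the
# OTLP endpoint and API key from your Netra project settings.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-netra-otlp-endpoint>"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <NETRA_API_KEY>"
export OTEL_SERVICE_NAME="my-agent-service"
```

Because these variables are read by every standard OTLP exporter, your existing instrumentation, spans, and context propagation keep working unchanged.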
If you already have OpenTelemetry in place, you don't need to re-instrument. Point your OTLP exporter at Netra and keep your existing spans and context propagation; configure the exporter with the OTLP endpoint from your Netra project settings.

Option C: Replace the Gateway with LiteLLM Proxy
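As a starting point, a minimal LiteLLM Proxy `config.yaml` might look like the following — the model alias, the environment-variable names, and the choice of LiteLLM's `otel` callback for exporting traces are illustrative; point the standard `OTEL_*` environment variables at Netra's OTLP endpoint so the proxy's traces land in Netra:

```yaml
# Minimal LiteLLM Proxy config — model names and env-var keys are illustrative.
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  # Export traces via LiteLLM's OpenTelemetry callback; the exporter reads
  # the standard OTEL_* environment variables for its destination.
  callbacks: ["otel"]
```

Provider routing, fallbacks, and key management are configured in the same file; see LiteLLM's proxy documentation for those options.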
If you relied on Helicone primarily as an AI gateway (provider routing, fallbacks, or operational controls), you can replace it with LiteLLM Proxy and continue sending traces to Netra.

What's next?
Once your tracing is set up, you're ready to explore the rest of Netra. Pick what matters most to your team:

Run Simulations
Test your agents with realistic, multi-turn conversations using configurable personas and goals.
Run Evaluations
Measure quality, accuracy, and reliability with LLM-as-Judge and code evaluators.
Set Up Custom Dashboards
Build your own views to track traces, costs, and agent behavior across projects.
Configure Alerts
Get notified about anomalies, cost spikes, and performance issues.
