Helicone has moved into maintenance mode after its acquisition by Mintlify. This page walks through how to transition both prompt management and observability from Helicone to Netra.

Migrating Tracing / Observability

Helicone captures LLM traffic at the gateway. Netra adds hierarchical traces with nested spans, so you can follow complete multi-step agent workflows, not just individual model calls. Netra ships official SDKs for Python and TypeScript to instrument application code directly, plus native integrations across 50+ frameworks and model providers, including OpenAI, LangChain, LlamaIndex, and Anthropic. See the full integrations overview to pick the integration that matches your stack.

Option A: Keep Your Existing OpenAI Client

If you're using OpenAI, the simplest migration is to keep your existing OpenAI SDK calls. Since Helicone is OpenAI-compatible, you can even keep Helicone as a gateway during the transition:
from netra import Netra

# Initialize before importing other libraries for best results
Netra.init(app_name="my-ai-app", environment="production")
Then keep the rest of your OpenAI code the same:
# Your existing code works unchanged
from openai import OpenAI

client = OpenAI(
    base_url="<your_helicone_url>",  # Keep Helicone's URL during the transition
    api_key="<your_api_key>"
)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
Beyond gateway logging, Netra also supports application-level tracing via decorators in Python and equivalent patterns in TypeScript. This creates hierarchical traces with nested spans for multi-step workflows (tool calls, retrieval, post-processing, and more), giving you full execution context that a gateway alone can’t provide.
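To make the nesting concrete, here is a minimal, library-free sketch of how nested spans compose into a hierarchical trace. This is illustrative only, using plain contextvars rather than Netra's actual decorator API:

```python
import contextvars
from contextlib import contextmanager

# Tracks the currently active span so children can attach to their parent.
_current = contextvars.ContextVar("current_span", default=None)

class Span:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []

@contextmanager
def span(name):
    parent = _current.get()
    s = Span(name, parent)
    if parent:
        parent.children.append(s)  # nest under the enclosing span
    token = _current.set(s)
    try:
        yield s
    finally:
        _current.reset(token)  # restore the parent as the active span

# A multi-step workflow produces one trace with nested spans.
with span("agent_workflow") as root:
    with span("retrieve_docs"):
        pass  # e.g. vector store lookup
    with span("llm_call"):
        pass  # e.g. chat completion
```

The gateway alone would record only the `llm_call` step; the surrounding spans are what give you the full workflow context.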

Option B: Use OpenTelemetry

If you already have OpenTelemetry in place, you don’t need to re-instrument. Point your OTLP exporter at Netra and keep your existing spans and context propagation. Configure your exporter using Netra’s OTLP endpoint:
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.eu.getnetra.ai/telemetry"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your_netra_api_key>"
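Those two variables are the minimum; the standard OpenTelemetry SDK environment variables also let you name the service and attach resource attributes. The variable names below come from the OpenTelemetry specification, but whether Netra's endpoint expects the http/protobuf protocol is an assumption to verify in Netra's docs:

```shell
# Optional, spec-defined OTel SDK variables (names from the OpenTelemetry spec)
export OTEL_SERVICE_NAME="my-ai-app"                 # becomes service.name on every span
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"   # assumption: confirm the protocol Netra accepts
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=production"
```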

Option C: Replace the Gateway with LiteLLM Proxy

If you relied on Helicone primarily as an AI gateway (provider routing, fallbacks, or operational controls), you can replace it with LiteLLM Proxy and continue sending traces to Netra:
from netra import Netra

# Initialize before importing other libraries for best results
Netra.init(app_name="my-ai-app", environment="production")
Then keep your OpenAI client code the same; just point it at your LiteLLM Proxy:
# Your existing code works unchanged
from openai import OpenAI

client = OpenAI(
    base_url="<your_litellm_url>",  # Point at your LiteLLM Proxy URL
    api_key="<your_api_key>"
)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
This preserves the gateway pattern while routing your tracing data into Netra. See the LiteLLM Proxy integration guide for setup and configuration.
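As a quick sketch, LiteLLM Proxy can be started from its CLI; 4000 is LiteLLM's documented default port, but verify the flags and any config-file options against the LiteLLM documentation:

```shell
# Minimal LiteLLM Proxy sketch -- verify against the LiteLLM docs
pip install 'litellm[proxy]'
litellm --model gpt-4 --port 4000
# Your OpenAI client then uses base_url="http://localhost:4000"
```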

What’s next?

Once your tracing is set up, you're ready to explore the rest of Netra. Pick what matters most to your team.
Last modified on March 5, 2026