This guide walks you through sending your first trace to Netra.

Prerequisites

Before you begin, make sure you’ve completed the initial setup from the Getting Started guide:
  • Created your API key
  • Configured environment variables
  • Installed the Netra SDK
1. Install the SDK
pip install netra-sdk
2. Set environment variables
export NETRA_API_KEY="your-api-key-here"
export NETRA_OTLP_ENDPOINT="https://api.getnetra.ai/telemetry"
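Before initializing the SDK, it can help to fail fast when these variables are missing. A minimal sketch using only the standard library (the variable names come from the step above; the `check_netra_env` helper is my own, not part of the Netra SDK):

```python
import os

def check_netra_env(required=("NETRA_API_KEY", "NETRA_OTLP_ENDPOINT")):
    """Return the names of any required Netra environment variables that are unset."""
    return [name for name in required if not os.getenv(name)]

missing = check_netra_env()
if missing:
    print(f"Warning: missing environment variables: {', '.join(missing)}")
```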

Send Your First Trace

Add Netra initialization at the start of your application, then run a simple LLM call. Netra will automatically capture the trace.
import os

from netra import Netra

# Initialize Netra before importing the LLM library so that
# auto-instrumentation can patch it
Netra.init(
    app_name="my-ai-app",
    environment="development",
    headers=f"x-api-key={os.getenv('NETRA_API_KEY')}",
)

# Import and call the LLM client - this call is automatically traced
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)

print(response.choices[0].message.content)
Netra automatically instruments popular LLM libraries, including OpenAI, Anthropic, and LangChain. See Auto Instrumentation for the full list.

View Your Trace

Open the Netra Dashboard and navigate to Observability → Traces. Your trace should appear within a few seconds. Click on it to see the full timeline, including:
  • LLM call details and parameters
  • Token usage and cost
  • Latency breakdown
  • Full prompt and response content
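The token usage shown in the dashboard can also be cross-checked locally, since the OpenAI response object carries its own counts. A small sketch, assuming the OpenAI Python SDK's response shape (`summarize_usage` is a hypothetical helper, not part of Netra):

```python
def summarize_usage(response):
    """Extract token counts from an OpenAI chat completion response."""
    usage = response.usage
    return {
        "prompt_tokens": usage.prompt_tokens,
        "completion_tokens": usage.completion_tokens,
        "total_tokens": usage.total_tokens,
    }

# Example, using the response from the quickstart code above:
# print(summarize_usage(response))
```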

Troubleshooting

  • No traces appearing: verify that NETRA_API_KEY and NETRA_OTLP_ENDPOINT are set correctly
  • LLM calls not traced: ensure Netra.init() is called before importing the LLM library
  • Missing prompt content: set traceContent: true in initialization
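If traces still aren't appearing, a quick connectivity check against the OTLP endpoint can rule out network problems. A minimal sketch using only the standard library (`endpoint_reachable` is my own helper name, not a Netra API):

```python
import os
import socket
from urllib.parse import urlparse

def endpoint_reachable(url, timeout=5.0):
    """Return True if the endpoint's host resolves and accepts a TCP connection."""
    parsed = urlparse(url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example:
# endpoint_reachable(os.getenv("NETRA_OTLP_ENDPOINT", ""))
```

This only confirms TCP reachability, not that the API key is valid; an unreachable host points at firewall, proxy, or DNS issues rather than SDK configuration.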


Last modified on February 3, 2026