Installation

Install both the Netra SDK and LlamaIndex:
pip install netra-sdk llama-index
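
The examples below also assume an LLM and embedding provider is configured for LlamaIndex; with LlamaIndex's default OpenAI models that means an OPENAI_API_KEY must be available. A minimal sketch (the key value is a placeholder):

import os

# Hypothetical placeholder: LlamaIndex's default OpenAI models read this variable
os.environ.setdefault("OPENAI_API_KEY", "your-openai-api-key")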

Usage

Initialize the Netra SDK to automatically trace all LlamaIndex operations:
from netra import Netra
from llama_index.core import VectorStoreIndex, Document
import os

# Initialize Netra
Netra.init(
    headers=f"x-api-key={os.environ.get('NETRA_API_KEY')}",
    trace_content=True
)

# Use LlamaIndex as normal - automatically traced
documents = [
    Document(text="LlamaIndex is a data framework for LLM applications.")
]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What is LlamaIndex?")
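
The traced response behaves like any other LlamaIndex response object, so you can inspect it as usual. Continuing the snippet above, for example:

print(response)

# Each source node records the retrieved text and its similarity score
for source in response.source_nodes:
    print(source.node.get_content()[:80], source.score)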

Core Features

Trace indexing and retrieval workflows:
from netra import workflow, task, SpanWrapper
from llama_index.core import Document, VectorStoreIndex

@workflow()
def build_rag_pipeline(documents: list[Document]):
    index_span = SpanWrapper("build-index", {
        "documents.count": len(documents)
    }).start()
    
    index = VectorStoreIndex.from_documents(documents)
    index_span.end()
    
    return index

@task()
def query_with_retrieval(query_engine, question: str):
    query_span = SpanWrapper("query-execution", {
        "query.text": question
    }).start()
    
    response = query_engine.query(question)
    query_span.set_attribute("response.sources", len(response.source_nodes or []))
    query_span.end()
    
    return response
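
For example, the workflow and task above can be combined into a small end-to-end run:

documents = [
    Document(text="LlamaIndex is a data framework for LLM applications.")
]

index = build_rag_pipeline(documents)
query_engine = index.as_query_engine()
response = query_with_retrieval(query_engine, "What is LlamaIndex?")
print(response)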

Query Examples

Trace complex query patterns:
from netra import agent, SpanWrapper
from llama_index.core import VectorStoreIndex

@agent()
def multi_step_query(index: VectorStoreIndex, queries: list[str]):
    query_engine = index.as_query_engine()
    results = []
    
    for query in queries:
        span = SpanWrapper(f"query-{query}", {
            "query.text": query
        }).start()
        
        response = query_engine.query(query)
        span.set_attribute("response.text", str(response))
        span.end()
        
        results.append(response)
    
    return results
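
A quick way to exercise the agent is to pass a small batch of related questions against an index built earlier; each query is recorded as its own span inside the agent trace:

questions = [
    "What is LlamaIndex?",
    "What kinds of data can it index?",
]

responses = multi_step_query(index, questions)
for question, response in zip(questions, responses):
    print(f"{question} -> {response}")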

Streaming Responses

Trace streaming query responses:
from netra import task

@task()
def stream_query(query_engine, question: str):
    # Note: streaming is enabled when the query engine is created,
    # e.g. index.as_query_engine(streaming=True), not on the query call itself.
    streaming_response = query_engine.query(question)

    # Print tokens as they arrive
    for chunk in streaming_response.response_gen:
        print(chunk, end="", flush=True)
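
Because streaming is configured on the query engine rather than on the query call, the engine passed to stream_query should be created with streaming=True; a minimal sketch:

streaming_engine = index.as_query_engine(streaming=True)
stream_query(streaming_engine, "What is LlamaIndex?")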
