Netra provides seamless integrations with popular LLM providers, AI frameworks, vector databases, and speech services. With just a few lines of code, you can enable automatic tracing and observability for your AI applications.
Missing an integration? We’d love to hear from you! Open a GitHub issue to request a new integration.

Integration Types

Netra integrations fall into several categories:

LLM Providers

Capture traces from direct LLM API calls including prompts, completions, token usage, and costs.

All LLM Providers

OpenAI: GPT-4, GPT-3.5 Turbo, Embeddings, DALL-E
Anthropic Claude: Claude 3 Opus, Sonnet, Haiku
Google Gemini: Gemini Pro, Gemini Ultra, Embeddings
AWS Bedrock: Claude, Titan, Llama 2 via AWS
Google Vertex AI: PaLM, Gemini on Google Cloud
Mistral AI: Mistral Large, Medium, Small, Embeddings
Groq: Ultra-fast inference for Llama, Mixtral
Cohere: Command, Embed, Rerank models
Ollama: Local LLM deployment
Together AI: Open-source model hosting
Replicate: Run ML models in the cloud
Hugging Face: Transformers library
Aleph Alpha: Luminous models
IBM watsonx: Enterprise AI platform
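Token-usage and cost capture works the same way across these providers: the instrumentation records prompt and completion token counts, and a per-call cost figure follows from per-token rates. A minimal sketch of that arithmetic (the rates below are placeholder values for illustration, not any provider's real pricing):

```python
# Illustrative only: how a per-call cost is derived from captured
# token counts. These per-1K-token rates are hypothetical placeholders.
PRICE_PER_1K = {"prompt": 0.0005, "completion": 0.0015}

def call_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of one LLM call given its recorded token usage."""
    return (
        (prompt_tokens / 1000) * PRICE_PER_1K["prompt"]
        + (completion_tokens / 1000) * PRICE_PER_1K["completion"]
    )

print(round(call_cost(1200, 400), 6))  # 0.0012
```

In practice the token counts come from the provider's API response, so no manual bookkeeping is needed once tracing is enabled.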

AI Frameworks

Trace complex AI applications with full visibility into agent workflows, chains, and tool calls.

All AI Frameworks

LangChain: Popular framework for LLM application development
LangGraph: Build stateful, multi-actor AI agents
LlamaIndex: Data framework for connecting LLMs to data
CrewAI: Framework for orchestrating AI agents
Pydantic AI: Type-safe AI application development
LiteLLM: Unified API for 100+ LLM providers
Haystack: End-to-end NLP framework
DSPy: Programming with foundation models
Google ADK: Agent Development Kit
Cerebras: High-performance AI inference
Groq: Low-latency LLM inference
MCP: Model Context Protocol

Vector Databases

Monitor vector search operations, embeddings, and retrieval performance.

Speech Services

Track speech-to-text and text-to-speech operations for voice AI applications.

Quick Start

Enable auto-instrumentation for any supported integration:
import os

from netra import Netra
from netra.instrumentation.instruments import InstrumentSet

Netra.init(
    app_name="my-ai-app",
    headers=f"x-api-key={os.getenv('NETRA_API_KEY')}",
    instruments={
        InstrumentSet.OPENAI,
        InstrumentSet.LANGCHAIN,
        InstrumentSet.PINECONE,
    },
)

# Your LLM calls are now automatically traced
If you don’t specify instruments, Netra will auto-detect and instrument all supported libraries in your environment.


Last modified on February 3, 2026