The Netra SDK exposes a prompts API that lets you:
- Fetch managed prompts – Retrieve prompt versions by name and label from the Netra backend
- Use labels for versioning – Target specific prompt versions such as
"production", "staging", or any custom label
- Integrate seamlessly – Drop fetched prompts directly into your LLM calls with zero boilerplate
This page shows how to use the get_prompt utility in Netra to fetch and use your managed prompts.
Getting Started
The prompts client is initialized automatically when you call Netra.init; no additional flags are required. As long as a valid OTLP endpoint and API key are configured, prompts are ready to use.

```python
from netra import Netra

Netra.init(
    app_name="sample-app",
)
```
get_prompt
Fetch a prompt version by name and label from the Netra backend.
```python
from netra import Netra

Netra.init(
    app_name="sample-app",
)

# Fetch the production version of a prompt
prompt = Netra.prompts.get_prompt(name="welcome-message")

# Fetch a specific label
staging_prompt = Netra.prompts.get_prompt(
    name="welcome-message",
    label="staging",
)
```
Parameters
| Parameter | Type | Description |
|---|---|---|
| name | str | Name of the prompt to fetch. Required; passing an empty string returns None. |
| label | str | Label of the prompt version to retrieve (e.g. "production", "staging"). Defaults to "production". |
Return Value
| Scenario | Return |
|---|---|
| Prompt found | dict containing the prompt version data returned by the Netra backend |
| name is empty or None | None |
| Network / server error | {} (empty dict) |
| Client not initialized | {} (empty dict) |
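Because the two failure modes return different falsy values, you can tell a bad name apart from a backend failure if your application needs to handle them differently. A minimal sketch (the classify helper and its status strings are illustrative, not part of the SDK):

```python
def classify(result):
    """Map get_prompt's documented return values to a status string."""
    if result is None:
        return "invalid-name"   # name was empty or None
    if result == {}:
        return "backend-error"  # network/server error or uninitialized client
    return "ok"                 # prompt version data was returned
```

Since None and {} are both falsy, a plain `if not result:` check is enough when you only need a single fallback path.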
Configuration
Required
get_prompt relies on the OTLP endpoint and API key configured via Netra.init or environment variables.
| Setting | Environment Variable | Description |
|---|---|---|
| OTLP endpoint | NETRA_OTLP_ENDPOINT | Base URL for the Netra backend. The prompts client strips a trailing /telemetry suffix automatically. |
| API key | NETRA_API_KEY | API key sent as the x-api-key header on every request. |
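The environment variables from the table can be set before initialization. The sketch below uses placeholder values, and the base_url helper is only an illustration of the documented trailing-suffix behaviour, not SDK code:

```python
import os

# Placeholder values; substitute your real endpoint and key.
os.environ["NETRA_OTLP_ENDPOINT"] = "https://ingest.example.com/telemetry"
os.environ["NETRA_API_KEY"] = "nk-example-key"

def base_url(endpoint: str) -> str:
    """Mimic the documented behaviour: drop a trailing /telemetry suffix."""
    suffix = "/telemetry"
    return endpoint[:-len(suffix)] if endpoint.endswith(suffix) else endpoint

# The prompts client would call https://ingest.example.com in this case.
print(base_url(os.environ["NETRA_OTLP_ENDPOINT"]))
```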
Error Handling
get_prompt is designed never to raise. Errors are logged, and a safe fallback value is returned so your application keeps running.

```python
from netra import Netra

Netra.init(app_name="sample-app")

prompt = Netra.prompts.get_prompt(name="onboarding-flow")

if not prompt:
    # Handle the missing prompt: use a hardcoded fallback
    prompt_text = "Welcome! How can I help you today?"
else:
    prompt_text = prompt.get("messages", "")
```
Common error scenarios and their log messages:
| Scenario | Log Level | Message |
|---|---|---|
| Empty name argument | ERROR | netra.prompts: name is required to fetch a prompt |
| Client not initialized | ERROR | netra.prompts: Prompts client is not initialized; cannot fetch prompt version for '<name>' |
| Network / HTTP error | ERROR | netra.prompts: Failed to fetch prompt version for '<name>' (label=<label>): <error> |
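Because failures are logged rather than raised, make sure the standard logging module is configured so these messages are actually visible. A sketch using the logger name shown in the messages above (the error line here is simulated for illustration; in practice you would send output to stderr or your log pipeline rather than a buffer):

```python
import io
import logging

# Attach a handler to the netra.prompts logger and capture its output.
buf = io.StringIO()
logger = logging.getLogger("netra.prompts")
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.ERROR)

# Simulate the documented log line for a failed fetch:
logger.error(
    "Failed to fetch prompt version for '%s' (label=%s): %s",
    "welcome-message", "staging", "connection refused",
)
```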
Complete Example
```python
from netra import Netra
from openai import OpenAI

# 1. Initialize Netra
Netra.init(
    app_name="chatbot-service",
    environment="production",
)

# 2. Fetch a managed prompt
system_prompt = Netra.prompts.get_prompt(name="chatbot-system")
if not system_prompt:
    raise RuntimeError("Required prompt 'chatbot-system' not found in Netra")

# 3. Use the prompt in your LLM call
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=system_prompt.get("messages", []),
)
print(response.choices[0].message.content)

# 4. Shut down cleanly
Netra.shutdown()
```
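For services that must keep running when the Netra backend is unreachable, the pattern above can be wrapped in a small helper with a hardcoded fallback. This is a sketch, not part of the SDK; the fetch function is injected so any callable with get_prompt's signature and return contract can be used:

```python
from typing import Callable, Optional

FALLBACK = {"messages": [{"role": "system", "content": "You are a helpful assistant."}]}

def fetch_with_fallback(fetch: Callable[..., Optional[dict]],
                        name: str, label: str = "production") -> dict:
    """Call a get_prompt-style function, falling back on any failure.

    get_prompt returns None for a bad name and {} on errors; both are
    falsy, so one check covers every documented failure mode.
    """
    result = fetch(name=name, label=label)
    return result if result else FALLBACK
```

With the real SDK you would pass Netra.prompts.get_prompt as the fetch argument.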
Next Steps