This guide walks you through creating your first alert rule to monitor your AI system and get notified when something goes wrong.

1. Prerequisites

Before setting up alerts, ensure you have:
  • A Netra account and API key (NETRA_API_KEY)
  • The Netra SDK installed and initialized in your application, so traces are already being sent

2. Create a Contact Point

Contact points define where notifications are sent when alerts trigger.
1. Navigate to Settings: Go to Settings → Contact Points from the left navigation panel.
2. Create Contact Point: Click Create Contact Point.
3. Configure:
  • Name: Enter a descriptive name (e.g., “Engineering Alerts”)
  • Integration: Select Email or Slack
  • Details: Enter email address or Slack webhook URL
4. Save: Click Create to save your contact point.
For Slack, create an Incoming Webhook in your workspace and paste the URL.
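To sanity-check a Slack webhook before saving the contact point, you can post to it directly. A minimal sketch, assuming the requests library is installed and a (hypothetical) SLACK_WEBHOOK_URL environment variable holds your webhook URL:

import os
import requests

# Slack Incoming Webhooks accept a JSON payload with a "text" field
webhook_url = os.environ["SLACK_WEBHOOK_URL"]  # hypothetical env var
resp = requests.post(webhook_url, json={"text": "Netra contact point test"})
resp.raise_for_status()  # a 200 "ok" response means the webhook is reachable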

3. Create Your First Alert Rule

1. Navigate to Alert Rules: Go to Alert Rules from the left navigation panel.
2. Create Alert Rule: Click Create Alert Rule in the top right corner.
3. Basic Information:
  • Alert Name: “High Cost Alert”
  • Description: “Notifies when a single request exceeds $0.50”
4. Select Contact Points: Choose the contact point you created in Section 2.
5. Configure Scope: Select Trace to monitor entire requests.
6. Select Metric: Choose Cost to monitor token/API spend.
7. Set Trigger:
  • Operator: Greater than (>)
  • Threshold: 0.50
8. Save: Click Create to activate the alert.
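For reference, the rule you just configured boils down to the settings below, shown as a plain Python dict for illustration only; this is not a Netra API call:

high_cost_alert = {
    "name": "High Cost Alert",
    "description": "Notifies when a single request exceeds $0.50",
    "contact_points": ["Engineering Alerts"],  # created in Section 2
    "scope": "Trace",   # evaluate each request end to end
    "metric": "Cost",   # token/API spend in USD
    "operator": ">",    # greater than
    "threshold": 0.50,
}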

4. Test Your Alert

Trigger a trace that exceeds your threshold:
import os

from netra import Netra
from openai import OpenAI

# Initialize the SDK so the request below is traced
Netra.init(
    app_name="alert-test",
    headers=f"x-api-key={os.getenv('NETRA_API_KEY')}",
)

client = OpenAI()

# Generate a longer response to increase cost
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Write a detailed 500-word essay about AI safety."}
    ],
)
print(response.choices[0].message.content)
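To check whether the request is actually expensive enough to cross the $0.50 threshold, you can estimate its cost from the token usage the API returns. The per-1K-token prices here are assumptions; substitute your provider's current gpt-4 rates:

# Per-1K-token prices are assumptions; check your provider's current rates
PROMPT_USD_PER_1K = 0.03
COMPLETION_USD_PER_1K = 0.06

usage = response.usage
estimated_cost = (
    usage.prompt_tokens / 1000 * PROMPT_USD_PER_1K
    + usage.completion_tokens / 1000 * COMPLETION_USD_PER_1K
)
print(f"Estimated request cost: ${estimated_cost:.4f}")

If the estimate comes in under $0.50, request a longer response or temporarily lower the alert threshold so you can watch a notification fire.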

5. Verify Notification

After triggering a high-cost trace:
  1. Check your configured contact point (email inbox or Slack channel)
  2. You should receive a notification with:
    • Alert name and description
    • Triggered value (actual cost)
    • Timestamp
    • Link to the trace
Alerts evaluate in real-time. Notifications arrive within seconds of the threshold being breached.

Common Alert Examples

Latency Alert

Monitor response time:
  • Scope: Trace
  • Metric: Latency
  • Trigger: > 5000ms
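To exercise a latency rule, you can time a traced request yourself and compare it against the 5000 ms threshold. A sketch reusing the client from the test script above:

import time

start = time.perf_counter()
client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a detailed 1000-word story."}],
)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Request took {elapsed_ms:.0f} ms")  # long generations often exceed 5000 ms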

Error Rate Alert

Catch failures:
  • Scope: Span
  • Metric: Error
  • Trigger: = true
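One way to produce an error deliberately is to call the API with an invalid model name, assuming Netra's instrumentation records failed OpenAI calls as errored spans (reusing the client from the test script):

import openai

try:
    client.chat.completions.create(
        model="not-a-real-model",  # intentionally invalid to force an API error
        messages=[{"role": "user", "content": "hello"}],
    )
except openai.NotFoundError as exc:
    print(f"Expected failure: {exc}")  # the failed call is recorded as an error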

Token Usage Alert

Control token consumption:
  • Scope: Trace
  • Metric: Token Count
  • Trigger: > 10000
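The sketch below sums token usage locally across several calls to show how a workload can exceed a 10,000-token budget; whether those calls land in a single trace depends on how your application is instrumented, which is an assumption here:

total_tokens = 0
for topic in ["alignment", "interpretability", "robustness", "governance"]:
    r = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Write an 800-word essay on AI {topic}."}],
    )
    total_tokens += r.usage.total_tokens
print(f"Total tokens across calls: {total_tokens}")  # compare against 10000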

Troubleshooting

Issue | Solution
No notifications received | Verify contact point configuration and check spam folder
Alert not triggering | Confirm threshold is lower than actual values in your traces
Too many alerts | Increase threshold or add time-based aggregation

Next Steps
