Observability for LLM agents

See every LLM call across OpenAI, Anthropic, and Google, with full request/response data, token usage, and agent cost tracking.

AI agents fail silently

Your AI agent made dozens of API calls, burned through 120k tokens, and returned a confidently wrong answer. Your logs show nothing useful. Your users are frustrated.

Traditional observability tools weren't built for this. They track HTTP requests and database queries, not reasoning chains and token economics.

  • Retry loops that waste thousands of tokens
  • Hallucinated tool calls that silently fail
  • Context windows exhausted mid-conversation
  • Model responses that pass validation but miss intent

See what your LLM calls actually do

AmberTrace captures every LLM API call automatically. Every request, every response, with full token counts, latency, and error data.

Full trace visibility

Browse every LLM call with full request and response data. Filter by provider, model, or status to find exactly what you need.

Token economics

Track prompt and completion tokens per call. Identify expensive operations and optimize your prompts based on real usage data.
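
As a back-of-the-envelope sketch of what per-call token economics looks like, the snippet below estimates a call's dollar cost from its prompt and completion token counts. The per-1K-token rates are illustrative examples, not AmberTrace pricing or any provider's current rates.

```python
# Illustrative per-1K-token rates only; real provider pricing varies and changes.
PRICES = {
    "gpt-4": {"prompt": 0.03, "completion": 0.06},
}

def call_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the dollar cost of one LLM call from its token counts."""
    rates = PRICES[model]
    return (prompt_tokens / 1000) * rates["prompt"] \
         + (completion_tokens / 1000) * rates["completion"]

# A 1,200-token prompt with a 300-token completion at the example rates:
print(round(call_cost("gpt-4", 1200, 300), 4))  # 0.054
```

Summing this over a session is what surfaces the expensive operations worth optimizing first.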

Failure detection

Every failed call is captured with full error details. Filter by error status to spot problems and debug faster.

Multi-provider support

Works with OpenAI, Anthropic, and Google out of the box. Same dashboard, same traces, regardless of which provider you're calling.

Smart alerting

Get notified on Slack or Discord when costs spike, error rates climb, or budgets are at risk. Scheduled summaries keep your team informed.

[Screenshot: AmberTrace dashboard showing total traces, token usage, average duration, success rate, and provider breakdown]

Understand your AI usage at a glance

Track total traces, token consumption, average latency, and success rates across all providers. See exactly where your AI budget goes with per-provider breakdowns.
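
The rollup behind those dashboard numbers can be sketched in a few lines. The trace record fields below (provider, tokens, ok) are assumed for illustration, not AmberTrace's actual export schema:

```python
from collections import defaultdict

# Hypothetical exported trace records, for illustration only.
traces = [
    {"provider": "openai", "tokens": 1500, "ok": True},
    {"provider": "openai", "tokens": 900, "ok": False},
    {"provider": "anthropic", "tokens": 2100, "ok": True},
]

# Per-provider breakdown: call count and total tokens.
by_provider = defaultdict(lambda: {"calls": 0, "tokens": 0})
for t in traces:
    by_provider[t["provider"]]["calls"] += 1
    by_provider[t["provider"]]["tokens"] += t["tokens"]

# Overall success rate across all providers.
success_rate = sum(t["ok"] for t in traces) / len(traces)
print(dict(by_provider), f"success rate: {success_rate:.0%}")
```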

Three lines to instrument your agent

1. Install the SDK

pip install ambertrace or npm install @ambertrace/node

2. Initialize with your API key

One line at the start of your application. No code changes to your LLM calls.

3. View traces in real time

Every call appears in your dashboard immediately. Filter by model, status, or duration.

Get alerted

Set up cost, error, and budget alerts on Slack or Discord to catch issues before your users do.
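
To make "costs spike" concrete, here is a minimal sketch of the kind of rule such an alert evaluates: flag the latest window when spend exceeds the trailing average by some factor. The 2x multiplier and hourly window are illustrative choices, not product defaults.

```python
def cost_spike(hourly_spend: list[float], factor: float = 2.0) -> bool:
    """Return True if the latest hour's spend exceeds `factor` times
    the average of the preceding hours."""
    *history, latest = hourly_spend
    if not history:
        return False  # nothing to compare against yet
    baseline = sum(history) / len(history)
    return latest > factor * baseline

print(cost_spike([1.1, 0.9, 1.0, 4.2]))  # True: 4.2 > 2 * 1.0
```

A real alerting pipeline would evaluate rules like this on a schedule and post matches to the configured Slack or Discord channel.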

Works with your existing code

AmberTrace patches OpenAI, Anthropic, and Google clients automatically. No wrapper functions, no decorators, no manual instrumentation.

Python
main.py
import ambertrace
from openai import OpenAI

ambertrace.init(api_key="at_...")

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Analyze this data"}]
)

# Every LLM call is now traced automatically

TypeScript
index.ts
import ambertrace from '@ambertrace/node';
import OpenAI from 'openai';

ambertrace.init({ apiKey: 'at_...' });

const client = new OpenAI();
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Analyze this data' }]
});

// Every LLM call is now traced automatically

Built for teams shipping AI to production

AI & ML Engineers

Debug agent behavior without adding print statements everywhere. Understand why your agent made specific decisions and where it went off track.

Technical Founders

Ship AI features faster with confidence. Know exactly how much each user session costs and identify optimization opportunities before they become problems.

Platform & SRE Teams

Give your AI teams the same observability infrastructure they expect for traditional services. Standardize on a single tool for all LLM monitoring.

Pricing

Start free. Scale when ready.

Generous free tier for individual developers. Simple per-seat pricing when your team grows. No surprise bills.

Starter

$0
Free forever
Start free trial
  • Up to 50,000 traces / month
  • 1 team member
  • 7-day data retention
  • All providers (OpenAI, Anthropic, Google)
  • Full trace timeline view
  • Token usage tracking
  • Community support (GitHub Issues)
Recommended

Grow

$49 / user / month
Start free trial

Everything in Starter, plus:

  • Up to 300,000 traces / month
  • Up to 5 team members
  • 30-day data retention
  • Alerting & notifications (Slack, Discord)
  • Failure detection & anomaly flagging
  • Team dashboards & shared views
  • Cost-per-session analytics
  • API access for CI/CD integration
  • Priority email support

Compare plans in detail

Feature                          Starter                      Grow
Monthly traces                   50,000                       300,000
Team members                     1                            Up to 5
Data retention                   7 days                       30 days
Providers                        OpenAI, Anthropic, Google    OpenAI, Anthropic, Google
Trace timeline view              ✓                            ✓
Token usage tracking             ✓                            ✓
Cost-per-session analytics       –                            ✓
Failure detection                –                            ✓
Alerting (Slack, Discord)        –                            ✓
Team dashboards & shared views   –                            ✓
API access                       –                            ✓
Support                          Community                    Priority email

Need more than 300K traces, self-hosting, SAML SSO, or a custom SLA? Contact us

Frequently asked questions

Stop debugging AI agents blindly

Start tracing your LLM calls today. Free to get started.

Open source SDKs. Self-host or use our cloud.