# Auto-Instrumentation
Zero-code LLM tracing. Just import and go.
You will learn:

- Which providers are auto-tracked
- What data is captured
- How to add context
## How It Works
```python
import kalibr  # This patches the SDKs
import openai

# All calls are now tracked automatically
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
```
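The same import-first pattern covers every supported SDK. A minimal sketch with Anthropic (the model name is illustrative):

```python
import kalibr  # patch first, as above
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
)
```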
## Supported Providers
| Provider | Package | Auto-Tracked |
|---|---|---|
| OpenAI | `openai` | ✓ |
| Anthropic | `anthropic` | ✓ |
| Google AI | `google-generativeai` | ✓ |
Also works with LangChain, LlamaIndex, and CrewAI, since those frameworks use these SDKs internally.
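For example, a LangChain chat model is traced with no extra code because it calls the `openai` SDK under the hood (a sketch; assumes the `langchain-openai` package is installed):

```python
import kalibr  # must come before LangChain pulls in the openai SDK
from langchain_openai import ChatOpenAI  # wraps the openai package internally

llm = ChatOpenAI(model="gpt-4")
result = llm.invoke("Hello")  # the underlying openai call is auto-tracked
print(result.content)
```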
## What's Captured
For every LLM call:

- Provider and model
- Input/output tokens
- Duration
- Cost (calculated automatically)
- Success/error status
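As a rough picture, a single captured call might reduce to a record like the following (field names are illustrative, not Kalibr's actual schema):

```python
# Illustrative shape of one auto-captured event; not the real schema
event = {
    "provider": "openai",
    "model": "gpt-4",
    "input_tokens": 9,
    "output_tokens": 12,
    "duration_ms": 842,
    "cost_usd": 0.00099,  # derived from per-model pricing
    "status": "success",
}
```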
## Adding Context
```python
import os

os.environ["KALIBR_WORKFLOW_ID"] = "research_pipeline"
os.environ["KALIBR_TENANT_ID"] = "acme_corp"

# All subsequent calls are tagged with this context
```
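If you want the tags to apply only to a block of code, you can scope the environment variables yourself. The helper below is not part of the kalibr API, just a pattern built on the variables above:

```python
import os
from contextlib import contextmanager

@contextmanager
def kalibr_context(**tags):
    # Hypothetical helper, not part of the kalibr API: it sets KALIBR_*
    # environment variables for a block and restores the old values after.
    previous = {}
    for key, value in tags.items():
        env_key = f"KALIBR_{key.upper()}"
        previous[env_key] = os.environ.get(env_key)
        os.environ[env_key] = value
    try:
        yield
    finally:
        for env_key, old in previous.items():
            if old is None:
                os.environ.pop(env_key, None)
            else:
                os.environ[env_key] = old

# Calls inside the block carry the tags; the prior context returns afterwards
with kalibr_context(workflow_id="research_pipeline", tenant_id="acme_corp"):
    pass  # make LLM calls here
```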
## Import Order Matters
```python
# ✓ Correct
import kalibr
import openai

# ✗ Wrong - openai won't be instrumented
import openai
import kalibr
```
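The simplest way to guarantee the order is to import kalibr at the very top of your program's entrypoint, before anything that pulls in a provider SDK (module names below are hypothetical):

```python
# main.py: import kalibr before any module that imports a provider SDK
import kalibr

from my_app.pipeline import run_pipeline  # hypothetical module; may import openai

if __name__ == "__main__":
    run_pipeline()
```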
## Disabling Auto-Instrumentation
```bash
export KALIBR_AUTO_INSTRUMENT=false
```
Then use manual `@trace` decorators instead.
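A minimal sketch of the manual alternative, assuming the decorator is exposed as `kalibr.trace` (see Manual Tracing for the actual API):

```python
import kalibr

@kalibr.trace  # assumed import path; check the Manual Tracing docs
def summarize(text: str) -> str:
    # Work that is not auto-instrumented but should still appear in the trace
    return text[:100]
```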
## Next Steps
- Manual Tracing — Track non-LLM operations
- Workflows — Group related calls