Framework Integrations
CrewAI, LangChain, OpenAI Agents SDK, HuggingFace, DeepSeek, and the Claude Code plugin.
Overview
Kalibr integrates with popular agent frameworks and supports any model across any modality. Use router.as_langchain() for LangChain, or auto-instrument HuggingFace's InferenceClient for 17 task types.
CrewAI
Install
pip install kalibr[crewai] crewai openai anthropic
Code (Python only)
from kalibr import Router
from crewai import Agent, Task, Crew

# Create Kalibr router
router = Router(
    goal="research_task",
    paths=["gpt-4o-mini", "claude-sonnet-4-20250514"]
)

# Get LangChain-compatible LLM
llm = router.as_langchain()

# Use with CrewAI
researcher = Agent(
    role="Researcher",
    goal="Find accurate information",
    backstory="You are a research assistant.",
    llm=llm,  # Kalibr handles model selection
    verbose=True
)

task = Task(
    description="Research the history of the Python programming language.",
    expected_output="A summary of Python's history.",
    agent=researcher
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()

# Report outcome to Kalibr
router.report(success=len(str(result)) > 100)
Note: CrewAI requires Python 3.10+.
LangChain
Install
pip install kalibr[langchain] langchain langchain-openai langchain-anthropic
Code (Python only)
from kalibr import Router
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

router = Router(
    goal="summarize_text",
    paths=["gpt-4o-mini", "claude-sonnet-4-20250514"]
)

llm = router.as_langchain()
prompt = ChatPromptTemplate.from_template("Summarize this: {text}")
chain = prompt | llm | StrOutputParser()

result = chain.invoke({"text": "Long article text here..."})
router.report(success=len(result) > 50)
OpenAI Agents SDK
Install
pip install kalibr[openai-agents] openai-agents
Code (Python only)
from kalibr import Router

router = Router(
    goal="agent_task",
    paths=["gpt-4o", "gpt-4o-mini"]
)

# Get routing decision
from kalibr import get_policy
policy = get_policy(goal="agent_task")
model = policy["recommended_model"]

# Use with OpenAI Agents SDK
from agents import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model=model,
)

result = Runner.run_sync(agent, "Your task here")

# Report outcome
router.report(success=len(result.final_output) > 0)
HuggingFace
Install
pip install kalibr huggingface_hub
Code (Python only)
import kalibr  # instruments InferenceClient automatically
from huggingface_hub import InferenceClient

client = InferenceClient()
# All 17 task methods are now traced with cost tracking

# Route between HuggingFace models
from kalibr import Router

router = Router(
    goal="transcribe_call",
    paths=["openai/whisper-large-v3", "facebook/seamless-m4t-v2-large"],
    success_when=lambda output: len(output) > 50
)

# audio_bytes: your raw audio payload (e.g. the contents of a WAV or MP3 file)
result = router.execute(task="automatic_speech_recognition", input_data=audio_bytes)
Note: HuggingFace instrumentation covers 17 task types across text, audio, image, embedding, and classification. Set HF_API_TOKEN or HUGGING_FACE_HUB_TOKEN to access private models or avoid free-tier rate limits.
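For example (the token value is a placeholder for your own):

```shell
export HF_API_TOKEN=hf_...   # or: export HUGGING_FACE_HUB_TOKEN=hf_...
```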
Supported task types
Pass any of these as the task argument to router.execute():
| Modality | Tasks |
|---|---|
| Text | chat_completion, text_generation, summarization, translation, fill_mask, table_question_answering |
| Audio | automatic_speech_recognition, text_to_speech, audio_classification |
| Image | text_to_image, image_to_text, image_classification, image_segmentation, object_detection |
| Embedding | feature_extraction |
| Classification | text_classification, token_classification |
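Because success_when receives the raw task output, the predicate can be tailored to whichever modality you route. A minimal sketch, with illustrative helper names that are not part of Kalibr:

```python
# Illustrative success predicates to pass as Router(success_when=...).
# Each takes the task output and returns True when the result is usable.

def transcript_ok(output: str) -> bool:
    # ASR: reject suspiciously short transcripts
    return len(output) > 50

def label_ok(output: str) -> bool:
    # Text classification: only accept labels we know how to handle
    return output.strip().lower() in {"positive", "negative", "neutral"}
```

Pass the predicate that matches the task you route, e.g. success_when=transcript_ok for automatic_speech_recognition.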
DeepSeek
DeepSeek uses the OpenAI Python SDK with a different endpoint. No separate SDK needed — just set your API key and use deepseek-* model names in your paths.
Install
pip install kalibr openai
Setup
export DEEPSEEK_API_KEY=sk-... # from platform.deepseek.com
Code (Python only)
import kalibr
from kalibr import Router

# Route between DeepSeek models
router = Router(
    goal="classify_icp",
    paths=["deepseek-chat", "deepseek-reasoner"],
    success_when=lambda output: len(output) > 0
)

response = router.completion(
    messages=[{"role": "user", "content": "Is this company a good ICP fit?"}]
)

# Mix DeepSeek with other providers — Thompson Sampling learns which wins
router = Router(
    goal="classify_icp",
    paths=["gpt-4o-mini", "deepseek-chat", "claude-sonnet-4-20250514"],
)
response = router.completion(messages=[...])
Models: deepseek-chat (DeepSeek-V3, fast and cheap), deepseek-reasoner (DeepSeek-R1, for complex reasoning), deepseek-coder (code tasks). Kalibr tracks cost and attributes spans correctly for each.
When to Use as_langchain() vs get_policy()
| Method | Use When |
|---|---|
| as_langchain() | Framework expects a LangChain LLM (CrewAI, LangChain chains) |
| get_policy() | You need the model name to pass to another SDK |
| router.completion() | Direct text LLM calls without a framework |
| router.execute() | Any HuggingFace task (transcription, image generation, embeddings, etc.) |
Claude Code Plugin
The Kalibr Claude Code plugin instruments your project automatically from within Claude Code.
Install
/plugin add kalibr
Instrument an existing project
/kalibrize
This scans all Python files for bare OpenAI, Anthropic, LangChain, and CrewAI calls, shows each call with a proposed Router replacement, and applies changes with your approval.
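As a sketch, a proposed rewrite looks roughly like this (the goal name and paths are illustrative; the actual proposal depends on your code):

```python
# BEFORE: a bare OpenAI call that /kalibrize would flag.
#
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4o-mini",
#       messages=[{"role": "user", "content": prompt}],
#   )
#
# AFTER: the proposed Router replacement.
#
#   from kalibr import Router
#   router = Router(goal="my_task", paths=["gpt-4o-mini", "claude-sonnet-4-20250514"])
#   resp = router.completion(messages=[{"role": "user", "content": prompt}])
```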
What it does automatically
- Injects Kalibr context so Claude Code generates Router-based code when building agents
- Adds kalibr to requirements.txt
- Adds KALIBR_API_KEY and KALIBR_TENANT_ID to .env.example
Next
- API Reference - Full Router API
- Production Guide - Error handling, monitoring