@tool decorator exports to OpenAI, Claude, Gemini, MCP, Ollama. Plus audit CLI, optimize CLI, and 5 LLM providers. Built live on Twitch by an autonomous AI company. MIT licensed.
Universal AI tool adapter. Write a Python function, decorate with @tool, export to OpenAI, Claude, Gemini, MCP, Ollama, or raw JSON Schema. One function, every framework. Plus 51 built-in tools, audit CLI for token cost analysis, optimize CLI with 7 heuristic rules. 5 LLM providers: Anthropic, OpenAI, OpenRouter, Ollama, BitNet. 2701 tests.
from agent_friend import tool, Toolkit

@tool
def get_weather(city: str, unit: str = "celsius") -> dict:
    """Get current weather for a city.

    Args:
        city: The city name
        unit: Temperature unit (celsius or fahrenheit)
    """
    return {"temp": 22, "unit": unit, "city": city}

# Export to any AI framework
get_weather.to_openai()       # OpenAI function calling
get_weather.to_anthropic()    # Claude tool_use
get_weather.to_google()       # Gemini
get_weather.to_mcp()          # Model Context Protocol
get_weather.to_json_schema()  # Raw JSON Schema
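Exporters like `to_json_schema()` can be derived mechanically from the function's signature and docstring. A minimal sketch of that idea using only the standard library (an illustration of the technique, not agent-friend's actual implementation):

```python
import inspect

# Map Python annotations to JSON Schema type names
_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean", dict: "object"}

def to_json_schema(fn):
    """Build a JSON Schema for a function's parameters from its signature."""
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": _TYPES.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value means the field is required
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip().split("\n")[0],
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def get_weather(city: str, unit: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"temp": 22, "unit": unit, "city": city}

schema = to_json_schema(get_weather)
```

The provider-specific exporters then become thin wrappers that reshape this one schema into each vendor's envelope.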
Tools that run in your browser. No account, no API key.
Exponential backoff retry logic for AI APIs. Click to run a simulated demo showing how agent-retry handles transient failures.
Simulates 3 API calls: the first two fail (as LLM APIs often do), the third succeeds. Shows exponential backoff in action.
LLM APIs fail. Rate limits, timeouts, transient errors. These tools handle it so you don't have to.
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-retry
from agent_retry import retry
@retry(max_attempts=3, base_delay=1.0)
def call_claude():
    return client.messages.create(...)
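The pattern behind `@retry` is simple: catch the exception, sleep `base_delay * 2**attempt`, try again, and re-raise once attempts run out. A minimal sketch of that pattern (assumed behavior, not agent-retry's exact internals; the `sleep` parameter is added here so the example runs instantly):

```python
import time
import functools

def retry(max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry with exponential backoff: delays of base_delay, 2x, 4x, ..."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts: surface the last error
                    sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

calls = []

@retry(max_attempts=3, base_delay=0.001)
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise TimeoutError("transient")  # first two calls fail
    return "ok"

result = flaky()
```

Production versions typically also add jitter so many clients don't retry in lockstep.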
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-timeout
from agent_timeout import deadline
with deadline(5.0): # 5 second limit
    result = client.chat(...)
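One way to implement a wall-clock deadline like this is a SIGALRM timer that interrupts the block. A sketch under stated assumptions (Unix, main thread only; not necessarily how agent-timeout does it):

```python
import signal
import time
from contextlib import contextmanager

class DeadlineExceeded(Exception):
    pass

@contextmanager
def deadline(seconds):
    """Interrupt the block with DeadlineExceeded after `seconds` (Unix, main thread)."""
    def on_alarm(signum, frame):
        raise DeadlineExceeded(f"exceeded {seconds}s limit")
    old_handler = signal.signal(signal.SIGALRM, on_alarm)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        yield
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)

try:
    with deadline(0.05):
        time.sleep(1)  # hangs longer than the 50 ms limit
    timed_out = False
except DeadlineExceeded:
    timed_out = True
```

Thread- and Windows-safe variants need a different mechanism (e.g. running the call in a worker and abandoning it), since signals only fire in the main thread.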
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-fallback
from agent_fallback import Fallback
client = Fallback([
    anthropic_client,  # primary
    openai_client,     # fallback
])
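The core of a fallback chain is a loop: try each provider in order, return the first success, and raise only when every provider has failed. A minimal sketch (here over plain callables rather than client objects, and with assumed names like `AllProvidersFailed`):

```python
class AllProvidersFailed(Exception):
    pass

class Fallback:
    """Try each provider in order; return the first successful result."""
    def __init__(self, providers):
        self.providers = providers

    def call(self, *args, **kwargs):
        errors = []
        for provider in self.providers:
            try:
                return provider(*args, **kwargs)
            except Exception as e:
                errors.append(e)  # remember why this provider failed
        raise AllProvidersFailed(errors)

def primary(prompt):
    raise ConnectionError("rate limited")  # simulate the primary being down

def backup(prompt):
    return f"backup says: {prompt}"

client = Fallback([primary, backup])
result = client.call("hello")
```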
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-rate
from agent_rate import RateLimiter
limiter = RateLimiter(requests_per_minute=60)
with limiter:
    result = client.chat(...)
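A requests-per-minute limiter can be built as a sliding window: keep timestamps of recent requests, and sleep until the oldest one ages out when the window is full. A minimal sketch (assumed behavior; the injectable `clock`/`sleep` parameters exist only so the example runs instantly):

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: block until a request slot frees up."""
    def __init__(self, requests_per_minute, clock=time.monotonic, sleep=time.sleep):
        self.limit = requests_per_minute
        self.window = 60.0
        self.clock, self.sleep = clock, sleep
        self.sent = deque()  # timestamps of requests inside the window

    def __enter__(self):
        now = self.clock()
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()  # expire timestamps older than the window
        if len(self.sent) >= self.limit:
            self.sleep(self.window - (now - self.sent[0]))  # wait for the oldest slot
            self.sent.popleft()
        self.sent.append(self.clock())
        return self

    def __exit__(self, *exc):
        return False

# Simulated clock so the example runs without real waiting
t = [0.0]
waits = []
def fake_sleep(s):
    waits.append(s)
    t[0] += s

limiter = RateLimiter(requests_per_minute=2, clock=lambda: t[0], sleep=fake_sleep)
for _ in range(3):
    with limiter:
        pass  # the API call would go here
    t[0] += 1.0  # one second passes between requests
```

With a limit of 2 per minute and requests at t=0s, 1s, 2s, the third request must wait 58 seconds for the t=0s slot to expire.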
You can't fix what you can't see. Log every decision, trace every call, know when your APIs are down before your agents do.
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-log
from agent_log import AgentLogger
log = AgentLogger("my-agent")
with log.session(task="summarize") as s:
    with s.span("llm_call") as span:
        span.tokens(prompt=500, completion=100)
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-trace
from agent_trace import tracer
with tracer.span("research") as span:
    span.set_attr("model", "claude-opus-4-6")
    result = agent.run(task)
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-health
from agent_health import HealthCheck
check = HealthCheck(["anthropic", "openai"])
status = check.run() # {"anthropic": "up", ...}
if status["anthropic"] != "up":
    fail_over()
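A health check like this boils down to probing each provider's endpoint and mapping the result to "up"/"down". A minimal sketch (the host mapping and TCP-probe strategy are assumptions, not agent-health's actual method; the injectable `probe` lets the example run offline):

```python
import socket

class HealthCheck:
    """Probe each provider endpoint and report up/down."""
    HOSTS = {"anthropic": "api.anthropic.com", "openai": "api.openai.com"}

    def __init__(self, providers, probe=None):
        self.providers = providers
        self.probe = probe or self._tcp_probe

    def _tcp_probe(self, host):
        try:
            with socket.create_connection((host, 443), timeout=2):
                return True  # TCP handshake succeeded
        except OSError:
            return False

    def run(self):
        return {p: ("up" if self.probe(self.HOSTS[p]) else "down")
                for p in self.providers}

# Offline demo: pretend only Anthropic answers
check = HealthCheck(["anthropic", "openai"],
                    probe=lambda host: host == "api.anthropic.com")
status = check.run()
```

A real check would hit an authenticated endpoint, since a TCP connect can succeed while the API itself is degraded.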
89% of teams have observability. Only 52% run proper evals. Test your agents before they test your patience.
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-mock
from agent_mock import MockProvider
with MockProvider(responses=["Hello!", "Done."]) as mock:
    agent = MyAgent(client=mock.client)
    result = agent.run()  # no real API calls
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-eval
from agent_eval import Eval, Case
suite = Eval([
    Case(input="2+2?", expected="4"),
    Case(input="capital of France?", expected="Paris"),
])
results = suite.run(agent)
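An eval suite is, at its core, a loop: run the agent on each case and check the expected answer against the output. A minimal sketch using substring matching as the pass criterion (an assumption; agent-eval's actual scoring may differ):

```python
from dataclasses import dataclass

@dataclass
class Case:
    input: str
    expected: str

class Eval:
    """Run an agent over a list of cases and report pass/fail per case."""
    def __init__(self, cases):
        self.cases = cases

    def run(self, agent):
        results = []
        for case in self.cases:
            answer = agent(case.input)
            # pass if the expected string appears anywhere in the answer
            results.append({"input": case.input, "passed": case.expected in answer})
        return results

def toy_agent(question):
    """Stand-in for a real agent, so the example is deterministic."""
    return {"2+2?": "4", "capital of France?": "Paris"}.get(question, "unknown")

suite = Eval([Case("2+2?", "4"), Case("capital of France?", "Paris")])
results = suite.run(toy_agent)
```

Substring matching is crude; real suites often add exact-match, regex, or LLM-graded scoring per case.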
Agents make irreversible mistakes. Enforce rules at the Python level — not the prompt level. Prompts get ignored; exceptions don't.
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-constraints
from agent_constraints import Constraints
c = Constraints()
c.deny("delete_file") # block specific tools
c.allow_only(domains=["example.com"]) # restrict URLs
# agent.run() will raise if agent tries to delete
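Enforcing rules at the Python level means the check runs before the tool does, and a violation is an exception, not a suggestion. A minimal deny-list sketch (names like `ConstraintViolation` and the `check()` hook are assumptions for illustration):

```python
class ConstraintViolation(Exception):
    pass

class Constraints:
    """Deny-list enforced at call time: raise instead of hoping the prompt holds."""
    def __init__(self):
        self.denied = set()

    def deny(self, tool_name):
        self.denied.add(tool_name)

    def check(self, tool_name):
        """Call this before dispatching any tool; raises on a denied tool."""
        if tool_name in self.denied:
            raise ConstraintViolation(f"tool {tool_name!r} is denied")

c = Constraints()
c.deny("delete_file")

try:
    c.check("delete_file")
    blocked = False
except ConstraintViolation:
    blocked = True
```

The agent loop calls `check()` on every tool invocation, so a jailbroken prompt still can't reach the denied tool.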
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-budget
from agent_budget import BudgetEnforcer
client = BudgetEnforcer(
    client=anthropic.Anthropic(),
    budget_usd=1.00,
)
# Raises BudgetExceeded at $1.00
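The enforcement logic is a running total checked before each call. A simplified sketch with a fixed per-call cost (the real wrapper would price each call from actual token usage; `cost_per_call_usd` is an assumption for illustration):

```python
class BudgetExceeded(Exception):
    pass

class BudgetEnforcer:
    """Track spend per call and raise before the budget would be exceeded."""
    def __init__(self, cost_per_call_usd, budget_usd):
        self.cost = cost_per_call_usd
        self.budget = budget_usd
        self.spent = 0.0

    def call(self, fn, *args, **kwargs):
        if self.spent + self.cost > self.budget:
            raise BudgetExceeded(f"${self.spent:.2f} spent of ${self.budget:.2f}")
        self.spent += self.cost
        return fn(*args, **kwargs)

enforcer = BudgetEnforcer(cost_per_call_usd=0.30, budget_usd=1.00)
for _ in range(3):
    enforcer.call(lambda: "response")  # 3 calls x $0.30 = $0.90, under budget

try:
    enforcer.call(lambda: "response")  # would push spend to $1.20
    raised = False
except BudgetExceeded:
    raised = True
```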
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-gate
from agent_gate import gate
@gate(action="send email")
def send_email(to, subject, body):
    ...
# Prompts user: "About to send email to..."
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-shield
from agent_shield import Shield
shield = Shield()
safe = shield.check_input(user_message)
# Detects: "ignore previous instructions"
# Scans output for leaked secrets
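Both checks can start as pattern matching: known injection phrases on the way in, secret-shaped strings on the way out. A minimal sketch (these two pattern lists are illustrative; a real shield would carry far more patterns and likely a classifier):

```python
import re

# Known prompt-injection phrasings (illustrative, far from exhaustive)
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?previous instructions", re.I),
    re.compile(r"disregard (the |your )?system prompt", re.I),
]
# API-key-shaped strings that should never appear in output
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
]

def check_input(text):
    """Return False if the text looks like a prompt-injection attempt."""
    return not any(p.search(text) for p in INJECTION_PATTERNS)

def check_output(text):
    """Return False if the text appears to leak a secret."""
    return not any(p.search(text) for p in SECRET_PATTERNS)
```

Pattern lists catch the obvious cases cheaply; layered defenses (output encoding, tool-level constraints) cover what regexes miss.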
Most frameworks treat agents as stateless. They're not. Cache expensive calls, checkpoint long-running tasks, manage context windows that get too long.
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-cache
from agent_cache import CachedClient
client = CachedClient(anthropic.Anthropic())
# First call: real API. Second call: $0.
r1 = client.messages.create(...)
r2 = client.messages.create(...) # cached
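Response caching is memoization keyed on the request arguments: serialize the kwargs to a stable key, and only hit the real API on a miss. A minimal sketch (wrapping a plain callable rather than the Anthropic client; the `misses` counter exists only to make the behavior visible):

```python
import json

class CachedClient:
    """Memoize calls keyed on their arguments; identical requests hit the cache."""
    def __init__(self, fn):
        self.fn = fn
        self.cache = {}
        self.misses = 0

    def create(self, **kwargs):
        key = json.dumps(kwargs, sort_keys=True)  # stable key for identical requests
        if key not in self.cache:
            self.misses += 1  # the real API call happens only here
            self.cache[key] = self.fn(**kwargs)
        return self.cache[key]

client = CachedClient(lambda **kw: {"answer": "42"})
r1 = client.create(model="claude", prompt="meaning of life?")
r2 = client.create(model="claude", prompt="meaning of life?")  # served from cache
```

Note the caveat: caching only helps deterministic-enough workloads; with temperature > 0 you're trading response diversity for cost.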
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-checkpoint
from agent_checkpoint import Checkpointer
cp = Checkpointer("./checkpoints")
cp.save("step_1", state)
# Later, after crash:
state = cp.load("step_1") # resume here
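Checkpointing is just named state snapshots persisted to disk so a crashed run can resume. A minimal sketch using JSON files, one per checkpoint name (a simplification; real state may need a richer serializer than JSON):

```python
import json
import tempfile
from pathlib import Path

class Checkpointer:
    """Persist named state snapshots to disk so a crashed run can resume."""
    def __init__(self, directory):
        self.dir = Path(directory)
        self.dir.mkdir(parents=True, exist_ok=True)

    def save(self, name, state):
        (self.dir / f"{name}.json").write_text(json.dumps(state))

    def load(self, name):
        return json.loads((self.dir / f"{name}.json").read_text())

with tempfile.TemporaryDirectory() as tmp:
    cp = Checkpointer(tmp)
    cp.save("step_1", {"done": ["fetch", "parse"], "next": "summarize"})
    # ...crash and restart would happen here...
    state = cp.load("step_1")
```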
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-context
from agent_context import ContextManager
ctx = ContextManager(max_tokens=8000)
messages = ctx.prune(conversation)
# Keeps most recent + most relevant
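A simple recency-based pruner keeps the system message plus as many of the newest messages as fit the token budget, walking the history backwards. A sketch of that strategy (the 4-chars-per-token estimate is a stated assumption, not a real tokenizer; relevance ranking is omitted):

```python
def count_tokens(text):
    """Crude estimate: ~1 token per 4 characters (assumption, not a real tokenizer)."""
    return max(1, len(text) // 4)

def prune(messages, max_tokens):
    """Keep the first (system) message plus the newest messages that fit the budget."""
    system, rest = messages[0], messages[1:]
    budget = max_tokens - count_tokens(system["content"])
    kept = []
    for msg in reversed(rest):  # walk backwards from the newest message
        cost = count_tokens(msg["content"])
        if cost > budget:
            break  # the next-oldest message no longer fits
        kept.append(msg)
        budget -= cost
    return [system] + kept[::-1]  # restore chronological order

conversation = [{"role": "system", "content": "You are terse."}] + [
    {"role": "user", "content": f"message number {i} " * 10} for i in range(20)
]
messages = prune(conversation, max_tokens=200)
```

Relevance-aware pruning would additionally score older messages (e.g. by embedding similarity to the current task) instead of dropping strictly by age.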
Route to the right model, validate structured output, manage prompts at scale. The plumbing that keeps multi-agent systems sane.
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-router
from agent_router import Router
router = Router()
router.route("simple", model="haiku")
router.route("complex", model="opus")
# Routes based on task complexity
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-schema
from agent_schema import validate_json
result = validate_json(
response=llm_output,
schema={"name": str, "score": int},
) # retries if invalid
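The validate-and-retry loop works like this: parse the output, type-check each field against the schema, and on failure either ask the model again or give up. A minimal sketch (the `retry` callback is an assumed hook; the real library presumably re-prompts the model itself):

```python
import json

class SchemaError(Exception):
    pass

def validate_json(response, schema, retry=None, max_attempts=2):
    """Parse and type-check LLM output; optionally ask the model again on failure."""
    for attempt in range(max_attempts):
        try:
            data = json.loads(response)
            for field, typ in schema.items():
                if not isinstance(data.get(field), typ):
                    raise SchemaError(f"{field!r} is not {typ.__name__}")
            return data
        except (json.JSONDecodeError, SchemaError):
            if retry is None or attempt == max_attempts - 1:
                raise
            response = retry()  # ask the model to fix its output

good = validate_json('{"name": "Ada", "score": 10}', {"name": str, "score": int})
# First output is malformed; the retry callback supplies a corrected one
fixed = validate_json("not json", {"name": str, "score": int},
                      retry=lambda: '{"name": "Ada", "score": 10}')
```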
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-stream
from agent_stream import StreamCollector
with StreamCollector() as sc:
    for chunk in client.stream(...):
        sc.add(chunk)
    print(sc.text, sc.tokens)
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-prompt
from agent_prompt import Prompt
p = Prompt("Summarize {document} in {n} words.")
text = p.render(document=doc, n=100)
# Version control your prompts
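A prompt template is a format string plus bookkeeping: know which fields the template expects, and fail loudly when a render omits one. A minimal sketch (the `version` tag and `fields` introspection are illustrative additions, not confirmed agent-prompt API):

```python
import string

class Prompt:
    """A versioned prompt template rendered with str.format-style fields."""
    def __init__(self, template, version="v1"):
        self.template = template
        self.version = version
        # record which {fields} the template expects
        self.fields = [f for _, f, _, _ in string.Formatter().parse(template) if f]

    def render(self, **kwargs):
        missing = set(self.fields) - set(kwargs)
        if missing:
            raise KeyError(f"missing fields: {sorted(missing)}")
        return self.template.format(**kwargs)

p = Prompt("Summarize {document} in {n} words.")
text = p.render(document="the report", n=100)
```

Failing on a missing field at render time beats shipping a prompt with a literal `{document}` hole to the model.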
pip install git+https://github.com/0-co/company.git#subdirectory=products/agent-id
from agent_id import AgentIdentity
identity = AgentIdentity("planner-v1")
signed = identity.sign(output)
# Downstream agent verifies:
identity.verify(signed) # raises if tampered
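Tamper-evident signing can be done with an HMAC over the output: the signer and verifier share a key, and any downstream edit changes the MAC. A minimal sketch (the shared-secret key and the signed-dict shape are assumptions for illustration, not agent-id's real wire format):

```python
import hashlib
import hmac

class TamperError(Exception):
    pass

class AgentIdentity:
    """Sign agent output with an HMAC so downstream agents can detect tampering."""
    def __init__(self, name, secret=b"shared-secret"):  # shared key is an assumption
        self.name = name
        self.secret = secret

    def sign(self, output):
        mac = hmac.new(self.secret, output.encode(), hashlib.sha256).hexdigest()
        return {"agent": self.name, "output": output, "mac": mac}

    def verify(self, signed):
        expected = hmac.new(self.secret, signed["output"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signed["mac"]):
            raise TamperError(f"signature mismatch for {signed['agent']}")
        return True

identity = AgentIdentity("planner-v1")
signed = identity.sign("step 1: gather sources")
verified = identity.verify(signed)

signed["output"] = "step 1: delete everything"  # tampered downstream
try:
    identity.verify(signed)
    tamper_caught = False
except TamperError:
    tamper_caught = True
```

`compare_digest` is used instead of `==` to avoid timing side channels; per-agent key pairs (asymmetric signatures) would remove the shared-secret requirement.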