Every AI framework has its own tool format. LangChain tools don't work in CrewAI. MCP has its own protocol. OpenAI and Anthropic have different function schemas. You write the same tool six times for six frameworks.
agent-friend fixes this: one `@tool` decorator exports to every format.
```python
import requests

from agent_friend import tool

@tool
def stock_price(ticker: str) -> str:
    """Get current stock price.

    Args:
        ticker: Stock ticker symbol (e.g. AAPL, GOOG)
    """
    return str(requests.get(f"https://api.example.com/{ticker}").json()["price"])

# Export to any AI framework:
stock_price.to_openai()       # OpenAI function calling format
stock_price.to_anthropic()    # Claude tool_use format
stock_price.to_google()       # Gemini format
stock_price.to_mcp()          # Model Context Protocol
stock_price.to_json_schema()  # Raw JSON Schema
```
Type hints become the JSON schema. Docstring Args become parameter descriptions. Write once, use everywhere.
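For reference, here is roughly what the OpenAI export of the `stock_price` tool above looks like. This is a hand-written sketch of OpenAI's standard function-calling schema, not captured library output:

```python
# Sketch of the schema stock_price.to_openai() would plausibly produce:
# the type hint (ticker: str) becomes the "string" type, and the
# docstring Args line becomes the parameter description.
openai_tool = {
    "type": "function",
    "function": {
        "name": "stock_price",
        "description": "Get current stock price.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol (e.g. AAPL, GOOG)",
                }
            },
            "required": ["ticker"],
        },
    },
}
```

The other exports carry the same information in each provider's envelope (e.g. Anthropic uses `input_schema` instead of `parameters`).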
```python
from agent_friend import tool, Toolkit

@tool
def search(query: str) -> str: ...

@tool
def calculate(expr: str) -> float: ...

kit = Toolkit([search, calculate])
kit.to_openai()     # list of OpenAI tool defs
kit.to_anthropic()  # list of Claude tool defs
kit.to_mcp()        # list of MCP tool defs
```
- **SQLite databases**: create tables, query, insert, update. Persistent storage for agents.
- **REST API client**: GET/POST/PUT/DELETE with custom headers, JSON bodies, and auth.
- **Web search**: via DuckDuckGo, no API key needed. Returns titles, URLs, snippets.
- **In-memory vector search**: cosine/euclidean similarity. Build RAG without Pinecone.
- **Pipelines**: chain operations with retries, conditional steps, and error handlers.
- **And more**: memory, code, file, git, regex, JSON, crypto, metrics, queue, state machine, graph.
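The vector-search tool boils down to similarity ranking over stored embeddings. A minimal cosine-similarity sketch in plain Python shows the underlying math (this is not agent-friend's implementation, just the idea):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], docs: dict[str, list[float]], k: int = 3):
    """Rank stored vectors by similarity to the query vector."""
    scored = sorted(
        docs.items(),
        key=lambda kv: cosine_similarity(query, kv[1]),
        reverse=True,
    )
    return scored[:k]
```

With an embedding model in front of this, `top_k` is the retrieval half of a RAG loop.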
Ollama tool calling normally takes ~60 lines of boilerplate. agent-friend does it in 5:
```python
from agent_friend import Friend, tool

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Sunny in {city}"

friend = Friend(model="qwen2.5:3b", tools=[get_weather])
response = friend.chat("What's the weather in Tokyo?")
```
Auto-detects Ollama from model name. Same @tool functions work with OpenAI, Anthropic, and Gemini. Prototype locally at $0, ship to cloud when ready.
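For comparison, calling Ollama directly means hand-building the tool schema and request payload yourself. A condensed sketch of that payload (field names follow Ollama's `/api/chat` tool-calling format; the dispatch loop is elided):

```python
# What Friend() abstracts away: manually writing the JSON schema for
# get_weather and assembling the Ollama /api/chat request by hand.
payload = {
    "model": "qwen2.5:3b",
    "messages": [
        {"role": "user", "content": "What's the weather in Tokyo?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}
# Then: POST to http://localhost:11434/api/chat, check the response for
# tool_calls, execute each one, append results, and call again -- the
# loop that makes up most of the ~60 lines of boilerplate.
```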
```shell
$ agent-friend audit tools.json     # See token cost per tool
$ agent-friend optimize tools.json  # 7 rules to reduce token waste
```
Or use the free web calculator: paste your schemas, see costs and optimization suggestions.
MCP tool definitions can eat 40–50K tokens per request. Measure before you ship:
```python
kit = Toolkit([search, calculate])
kit.token_report()
# {'estimates': {'openai': 115, 'anthropic': 101, 'google': 117,
#                'mcp': 100, 'json_schema': 93},
#  'most_expensive': 'google', 'least_expensive': 'json_schema',
#  'tool_count': 2}
```
Know what your tools cost before your users do.
✓ You write tools for OpenAI and need them in Claude or Gemini too.
✓ You want MCP compatibility without learning a new protocol.
✓ You need common agent tools (HTTP, database, search, crypto) without pulling in heavy frameworks.
✓ You want local tool-calling agents with Ollama without 60 lines of boilerplate.
✓ You want to avoid lock-in to LangChain, CrewAI, or any single framework.