Tools are what make agents capable of real-world action. While LLMs alone can only generate text, agents equipped with tools can search the web, run queries, send emails, call APIs, and perform any action you define.
```python
from definable.agent import Agent
from definable.tool.decorator import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny, 72F."

agent = Agent(
    model="gpt-4o",
    tools=[get_weather],
)

output = agent.run("What's the weather in San Francisco?")
print(output.content)
```
The @tool decorator converts any Python function into an agent-callable tool. Definable automatically generates the JSON Schema from type hints and docstrings.
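Conceptually, deriving a JSON Schema from type hints and a docstring looks like the stdlib-only sketch below. This is an illustration of the idea, not Definable's actual implementation; `build_schema` and `PY_TO_JSON` are hypothetical names:

```python
import inspect
from typing import get_type_hints

# Illustrative subset of the Python-annotation -> JSON Schema type mapping.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_schema(fn) -> dict:
    """Derive a JSON Schema tool definition from a function's
    signature and docstring, roughly as a @tool decorator would."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # the return annotation is not a parameter
    params = {name: {"type": PY_TO_JSON.get(tp, "string")} for name, tp in hints.items()}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny, 72F."

schema = build_schema(get_weather)
```

Here `schema["parameters"]["properties"]["city"]` resolves to `{"type": "string"}`, which is what the model receives so it knows how to format its tool-call arguments.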

How Tools Work

  1. The agent sends tool definitions (JSON Schema) to the model.
  2. The model decides which tools to call based on the user’s message.
  3. Tools execute and results are returned to the model.
  4. The model processes results and continues (or returns a final response).
  5. The loop repeats until the model responds without tool calls.
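The five steps above can be sketched as a plain-Python loop. `run_tool_loop` and `scripted_model` are hypothetical stand-ins for the framework's internals and a real chat-completion call, shown only to make the control flow concrete:

```python
def run_tool_loop(model_step, tools, messages, max_turns=10):
    """Minimal agent loop: call the model, execute any requested tool
    calls, append results to the history, and stop when the model
    responds without tool calls."""
    for _ in range(max_turns):
        response = model_step(messages)       # model sees history (+ tool schemas)
        calls = response.get("tool_calls", [])
        if not calls:                         # no tool calls -> final answer
            return response["content"]
        for call in calls:                    # execute tools, return results
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
    raise RuntimeError("max turns exceeded")

# Scripted stand-in for the model: first it requests a tool call,
# then it answers once the result is in the history.
_responses = iter([
    {"tool_calls": [{"name": "get_weather", "args": {"city": "San Francisco"}}]},
    {"tool_calls": [], "content": "It's sunny in San Francisco."},
])

def scripted_model(messages):
    return next(_responses)

answer = run_tool_loop(
    scripted_model,
    {"get_weather": lambda city: f"The weather in {city} is sunny."},
    messages=[{"role": "user", "content": "What's the weather?"}],
)
```

The `max_turns` guard matters in practice: without it, a model that keeps requesting tools would loop forever.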
When using arun(), multiple tool calls from the same model response execute concurrently.
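Concurrent execution of same-response tool calls can be pictured with `asyncio.gather`; the tool names and `execute_concurrently` helper below are illustrative, not part of Definable's API:

```python
import asyncio

async def fetch_weather(city: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for a network call
    return f"{city}: sunny"

async def fetch_news(city: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for another network call
    return f"{city}: no news"

async def execute_concurrently(calls):
    """Run every tool call from one model response at the same time,
    preserving the order of results."""
    return await asyncio.gather(*(fn(**args) for fn, args in calls))

results = asyncio.run(execute_concurrently([
    (fetch_weather, {"city": "SF"}),
    (fetch_news, {"city": "SF"}),
]))
```

Because the two calls overlap, the whole batch takes roughly as long as the slowest single tool rather than the sum of all of them.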

Guides

Create Tools

Write custom Python functions as tools.

Parameters

Type hints, descriptions, and JSON Schema.

Hooks

Run logic before and after execution.

Caching

Cache results for identical arguments.

Async Tools

Async tool functions for concurrent execution.

Dependencies

Inject services and state into tools.

Toolkits & Skills

| Type | What it is | Example |
| --- | --- | --- |
| Toolkit | Bundle of related tools with shared lifecycle | `MCPToolkit`, `BrowserToolkit` |
| Skill | Instructions + tools for a learned behavior | GitHub, Slack, SQL Database |
| MCP | Tools from MCP servers via stdio/HTTP | Any MCP-compatible server |

Resources