In this guide, you will build an agent that:
  • Connects to an LLM (OpenAI GPT-4o)
  • Uses a custom tool to take actions
  • Returns structured results
All in about 15 lines of code.

1. Define the Agent

Save the following code as my_agent.py:
from definable.agent import Agent
from definable.tool.decorator import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny, 72F."

agent = Agent(
    model="gpt-4o",
    tools=[get_weather],
    instructions="You are a helpful weather assistant.",
)

output = agent.run("What's the weather in San Francisco and Tokyo?")
print(output.content)
You now have:
  • An agent that reasons about which tools to call
  • Automatic parallel tool execution
  • A typed RunOutput with content, messages, and metadata
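To see what the @tool decorator is doing conceptually, here is a minimal self-contained sketch of how a decorator can turn a plain function into an LLM-callable tool by reading its name, docstring, and signature. This is an illustration of the general technique, not Definable's actual implementation; the `schema` attribute and its shape are assumptions made for this example.

```python
import inspect
from typing import Callable

def tool(fn: Callable) -> Callable:
    """Sketch of a tool decorator: build the schema an LLM needs for tool
    calling from the function's signature and docstring.
    (Illustrative only; not Definable's real decorator.)"""
    params = {
        name: p.annotation.__name__
        if p.annotation is not inspect.Parameter.empty else "any"
        for name, p in inspect.signature(fn).parameters.items()
    }
    # Attach the derived schema so a runtime could hand it to the model.
    fn.schema = {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": params,
    }
    return fn

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny, 72F."

print(get_weather.schema["name"])        # get_weather
print(get_weather.schema["parameters"])  # {'city': 'str'}
```

The model never sees your Python source; it only sees a schema like this, decides which tool to call with which arguments, and the framework executes the matching function.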

2. Run Your Agent

1. Set up your environment

python3 -m venv .venv
source .venv/bin/activate

2. Install Definable

pip install definable

3. Export your OpenAI API key

export OPENAI_API_KEY=sk-***

4. Run your agent

python my_agent.py
You should see the agent call get_weather for both cities and return a combined response.

3. Add Memory

Make your agent remember past conversations:
my_agent.py
from definable.agent import Agent
from definable.memory import Memory, SQLiteStore
from definable.tool.decorator import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny, 72F."

agent = Agent(
    model="gpt-4o",
    tools=[get_weather],
    instructions="You are a helpful weather assistant.",
    memory=Memory(store=SQLiteStore("./memory.db")),
)

# First conversation
output = agent.run("My name is Alice. What's the weather in Paris?", user_id="alice")
print(output.content)

# Later — the agent remembers
output = agent.run("What city did I ask about?", user_id="alice")
print(output.content)  # "You asked about Paris."
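Under the hood, persistent memory of this kind amounts to storing messages keyed by user and replaying them as context on later runs. The following self-contained sketch shows that pattern with Python's built-in sqlite3 module; the class and method names here are assumptions for illustration, not Definable's actual SQLiteStore API.

```python
import sqlite3

class SQLiteMemorySketch:
    """Sketch of per-user conversation memory backed by SQLite.
    (Illustrative only; not Definable's real Memory/SQLiteStore.)"""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS messages "
            "(user_id TEXT, role TEXT, content TEXT)"
        )

    def add(self, user_id: str, role: str, content: str) -> None:
        # Append one message to the user's history.
        self.conn.execute(
            "INSERT INTO messages VALUES (?, ?, ?)", (user_id, role, content)
        )
        self.conn.commit()

    def history(self, user_id: str) -> list:
        # Everything stored for this user, in insertion order.
        rows = self.conn.execute(
            "SELECT role, content FROM messages WHERE user_id = ?", (user_id,)
        )
        return rows.fetchall()

memory = SQLiteMemorySketch()
memory.add("alice", "user", "My name is Alice. What's the weather in Paris?")
memory.add("alice", "assistant", "The weather in Paris is sunny, 72F.")
print(memory.history("alice"))
```

On the second `agent.run(..., user_id="alice")`, the framework can prepend this stored history to the prompt, which is why the agent "remembers" that Alice asked about Paris.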

4. Add Knowledge

Ground your agent in documents:
my_agent.py
from definable.agent import Agent
from definable.knowledge import Knowledge
from definable.vectordb import InMemoryVectorDB

knowledge = Knowledge(vector_db=InMemoryVectorDB())
knowledge.add("Definable supports 10 LLM providers including OpenAI, Anthropic, and Google.")
knowledge.add("Agents can use tools, knowledge, memory, and middleware.")

agent = Agent(
    model="gpt-4o",
    knowledge=knowledge,
    instructions="Answer questions using the provided knowledge base.",
)

output = agent.run("How many LLM providers does Definable support?")
print(output.content)
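The retrieval step behind this is straightforward: embed the documents and the query as vectors, then return the document closest to the query, which gets injected into the model's context. Here is a self-contained sketch using a toy bag-of-words "embedding" and cosine similarity; real vector DBs (including Definable's) use learned embeddings, so treat every name here as an assumption for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (real systems use learned vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class InMemoryVectorDBSketch:
    """Sketch of RAG retrieval: store documents, return the closest match.
    (Illustrative only; not Definable's real InMemoryVectorDB.)"""

    def __init__(self):
        self.docs = []

    def add(self, text: str) -> None:
        self.docs.append(text)

    def search(self, query: str) -> str:
        q = embed(query)
        return max(self.docs, key=lambda d: cosine(q, embed(d)))

db = InMemoryVectorDBSketch()
db.add("Definable supports 10 LLM providers including OpenAI, Anthropic, and Google.")
db.add("Agents can use tools, knowledge, memory, and middleware.")
print(db.search("How many LLM providers does Definable support?"))
```

The query shares the terms "llm", "providers", and "definable" with the first document, so retrieval surfaces it, and the agent answers "10" from that grounded context rather than from the model's pretraining.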

5. Stream Responses

Stream tokens as they are generated:
for event in agent.run_stream("Tell me a story about AI agents."):
    if event.event == "RunContent" and event.content:
        print(event.content, end="", flush=True)
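The loop above is a filtering pattern over an event stream: the agent yields many event types (run lifecycle, tool calls, content tokens), and you print only the content-bearing ones. The self-contained sketch below mimics that stream with a stand-in generator so the pattern can be run without an API key; the `RunEvent` shape and the event names are assumptions modeled on the snippet above, not Definable's exact event classes.

```python
from dataclasses import dataclass
from typing import Iterator, Optional

@dataclass
class RunEvent:
    """Illustrative event shape; Definable's real event objects may differ."""
    event: str
    content: Optional[str] = None

def fake_run_stream() -> Iterator[RunEvent]:
    """Stand-in for agent.run_stream(): yields lifecycle and content events."""
    yield RunEvent("RunStarted")
    for token in ["Once ", "upon ", "a ", "time..."]:
        yield RunEvent("RunContent", token)
    yield RunEvent("RunCompleted")

# Same filtering pattern as the snippet above: keep only content events.
chunks = []
for event in fake_run_stream():
    if event.event == "RunContent" and event.content:
        chunks.append(event.content)
        print(event.content, end="", flush=True)
print()
```

Because lifecycle events carry no `content`, the `and event.content` guard also protects against printing `None` for events that signal progress rather than text.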

What You Just Built

In a few lines of code, you built:
  • An agent with tool calling and parallel execution
  • Persistent memory across conversations
  • Knowledge-grounded responses via RAG
  • Real-time streaming output
This same architecture scales to multi-agent teams, structured workflows, and production deployments.

Next Steps

  • Explore agent configuration → Agent configuration
  • Add reasoning capabilities → Thinking
  • Deploy to messaging platforms → Interfaces
  • Coordinate multiple agents → Teams
  • Build structured pipelines → Workflows
  • Browse all examples → Examples