Agents support four execution modes. Choose based on your application’s needs.

Synchronous

The simplest way to run an agent. Blocks until the full response is ready.
from definable.agents import Agent
from definable.models import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    instructions="You are a helpful assistant.",
)

output = agent.run("What is the capital of France?")
print(output.content)  # "The capital of France is Paris."

Asynchronous

Use arun() in async contexts such as web servers or async pipelines:
output = await agent.arun("What is the capital of France?")
print(output.content)
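
Outside an already-running event loop, you can drive arun() with asyncio.run. A minimal sketch, assuming the same agent setup as in the synchronous example:
import asyncio

from definable.agents import Agent
from definable.models import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    instructions="You are a helpful assistant.",
)

async def main() -> None:
    # arun() awaits the full response, like run() but without blocking the event loop
    output = await agent.arun("What is the capital of France?")
    print(output.content)

asyncio.run(main())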

Streaming (Sync)

Stream events in real time. Each event represents a step in the agent’s execution:
for event in agent.run_stream("Write a short story about a robot."):
    if event.event == "RunContent":
        print(event.content, end="", flush=True)
    elif event.event == "ToolCallStarted":
        print(f"\n→ Calling {event.tool_name}...")
    elif event.event == "ToolCallCompleted":
        print(f"  Done: {event.result[:100]}")
    elif event.event == "RunCompleted":
        print(f"\n\nTokens used: {event.metrics.total_tokens}")

Streaming (Async)

async for event in agent.arun_stream("Write a short story about a robot."):
    if event.event == "RunContent":
        print(event.content, end="", flush=True)

Run Parameters

All four methods accept the same parameters; a usage sketch follows the list.
instruction (str, required): The user's message or instruction to the agent.
messages (List[Message]): Existing conversation history. The instruction is appended as the latest user message.
session_id (str): Session identifier for grouping related runs together.
run_id (str): Unique identifier for this run. Auto-generated if not provided.
user_id (str): User identifier for memory scoping. When set, memory recall and storage are scoped to this user.
images (List[Image]): Images to include with the user message.
videos (List[Video]): Videos to include with the user message.
audio (List[Audio]): Audio files to include with the user message.
files (List[File]): Files to include with the user message. When readers are enabled, file content is automatically extracted and injected into the prompt.
output_schema (Type[BaseModel]): Pydantic model for structured output from the final response.
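
Putting several of these together, here is a hedged sketch of a run that pins a session and user and requests structured output. CityAnswer is a hypothetical example schema, not part of the library:
from pydantic import BaseModel

class CityAnswer(BaseModel):
    city: str
    country: str

output = agent.run(
    "What is the capital of France?",
    session_id="session-123",   # groups related runs together
    user_id="user-42",          # scopes memory recall and storage to this user
    output_schema=CityAnswer,   # structured output from the final response
)
print(output.content)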

The RunOutput Object

Every non-streaming run returns a RunOutput with the full results:
output = agent.run("Hello!")

# Content
print(output.content)           # The agent's text response
print(output.content_type)      # "text", "json", etc.

# Metadata
print(output.run_id)            # Unique run identifier
print(output.agent_id)          # Agent that produced this output
print(output.model)             # Model used
print(output.status)            # RunStatus.COMPLETED

# Metrics
print(output.metrics.total_tokens)
print(output.metrics.cost)
print(output.metrics.duration)

# Messages (full conversation history)
print(len(output.messages))

# Media outputs
print(output.images)
print(output.audio)

Stream Event Types

Streaming runs yield RunOutputEvent objects. Key event types:
RunStarted: Agent execution has begun
RunContent: A chunk of the agent's text response
RunContentCompleted: Full content generation is done
ToolCallStarted: A tool call is about to execute
ToolCallCompleted: A tool call finished successfully
ToolCallError: A tool call failed
ReasoningStep: A reasoning step from a thinking model
KnowledgeRetrievalStarted: Knowledge retrieval began
KnowledgeRetrievalCompleted: Knowledge retrieval finished
MemoryRecallStarted: Memory recall began
MemoryRecallCompleted: Memory recall finished
MemoryUpdateStarted: Memory storage began
MemoryUpdateCompleted: Memory storage finished
FileReadStarted: File reading began
FileReadCompleted: File reading finished
RunCompleted: The entire run is finished (includes the final RunOutput)
RunError: The run failed with an error
The RunCompleted event contains the full RunOutput object in event.output, giving you access to aggregated metrics and the complete message history even when streaming.
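
For example, the final RunOutput can be captured while content is still streaming. A minimal sketch, reusing the agent from the earlier examples:
final_output = None
for event in agent.run_stream("Write a short story about a robot."):
    if event.event == "RunContent":
        print(event.content, end="", flush=True)
    elif event.event == "RunCompleted":
        final_output = event.output  # the full RunOutput, as described above

if final_output is not None:
    print(f"\nTotal tokens: {final_output.metrics.total_tokens}")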

Deploying with serve()

Use agent.serve() to start the full agent runtime — messaging interfaces, HTTP endpoints, webhooks, and cron jobs in a single call:
agent.serve(telegram_interface, discord_interface, port=8000)
*interfaces (BaseInterface): Interface instances (Telegram, Discord, Signal) to run concurrently.
host (str, default "0.0.0.0"): HTTP server bind address.
port (int, default 8000): HTTP server port.
enable_server (bool, default None): Force the HTTP server on or off. Auto-detects from registered webhooks when None.
dev (bool, default False): Enable hot-reload dev mode.
agent.serve() is a blocking sync call. Use agent.aserve() for async contexts. When webhooks or cron triggers are registered via @agent.on(...), they run alongside interfaces automatically. Set agent.auth to protect HTTP endpoints with API key or JWT authentication. See Agent Runtime for full details on webhooks, cron, event triggers, lifecycle hooks, and the HTTP server.
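
For async applications, a hedged sketch of the awaitable variant, assuming aserve() accepts the same arguments as serve() and reusing the interfaces from the example above:
import asyncio

async def main() -> None:
    # awaitable counterpart to the blocking serve() call
    await agent.aserve(telegram_interface, discord_interface, port=8000)

asyncio.run(main())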