Tracing captures every significant event during an agent run — model calls, tool executions, errors — and exports them to one or more backends. This gives you full visibility into what your agent did and why.
## Enabling Tracing
Pass tracing directly to the Agent constructor:
```python
from definable.agent import Agent, JSONLExporter
from definable.agent.tracing import Tracing
from definable.model import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    tracing=Tracing(
        enabled=True,
        exporters=[JSONLExporter("./traces")],
    ),
)

output = agent.run("Hello!")
# Trace written to ./traces/{session_id}.jsonl
```
Tracing can also be configured via `AgentConfig` for backward compatibility:
```python
from definable.agent import Agent, AgentConfig, JSONLExporter
from definable.agent.tracing import Tracing
from definable.model import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    config=AgentConfig(tracing=Tracing(exporters=[JSONLExporter("./traces")])),
)
```
When both `Agent(tracing=...)` and `AgentConfig(tracing=...)` are supplied, the direct `tracing` parameter takes precedence.
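The precedence rule can be sketched in plain Python (an illustrative model only; `resolve_tracing` and the simplified classes below are not library internals):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tracing:
    exporters: list

@dataclass
class AgentConfig:
    tracing: Optional[Tracing] = None

def resolve_tracing(direct: Optional[Tracing], config: Optional[AgentConfig]) -> Optional[Tracing]:
    # Direct Agent(tracing=...) wins over AgentConfig(tracing=...)
    if direct is not None:
        return direct
    if config is not None:
        return config.tracing
    return None

direct = Tracing(exporters=["jsonl"])
legacy = AgentConfig(tracing=Tracing(exporters=["noop"]))
assert resolve_tracing(direct, legacy) is direct         # direct parameter wins
assert resolve_tracing(None, legacy) is legacy.tracing   # falls back to config
```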
## Tracing Reference
- `enabled`: enable or disable tracing.
- `exporters`: list of exporter instances that receive trace events.
- `event_filter`: optional function to filter which events are exported; return `True` to include, `False` to skip.
- Batch size: number of events to batch before flushing to exporters.
- Flush interval: maximum time in milliseconds between flushes.
## JSONLExporter
The built-in exporter writes one JSON object per line to a file, organized by session ID:
```python
from definable.agent import JSONLExporter

exporter = JSONLExporter(
    trace_dir="./traces",
    flush_each=True,      # Flush after every event
    mirror_stdout=False,  # Set True to also print events to stdout
)
```
Each file is named `{session_id}.jsonl`. Events include timestamps, run IDs, and full event data.
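Because each line is a standalone JSON object, trace lines can be inspected with nothing but the standard library. The field names in this sample line are illustrative (only `event` and `timestamp` are guaranteed by the examples on this page; `run_id` is an assumption based on the run IDs mentioned above):

```python
import json

# A hypothetical trace line; real field names beyond "event" and
# "timestamp" may differ in your traces.
line = '{"event": "RunStarted", "timestamp": "2024-01-01T00:00:00Z", "run_id": "abc-123"}'

event = json.loads(line)  # one JSON object per line -> one dict per event
print(event["event"], event["timestamp"])
```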
## Reading Trace Files
```python
from definable.agent.tracing import read_trace_file, read_trace_events

# Read raw lines
lines = read_trace_file("./traces/abc-123.jsonl")

# Read as parsed event dicts
events = read_trace_events("./traces/abc-123.jsonl")
for event in events:
    print(f"{event['event']} at {event['timestamp']}")
```
## Event Filtering
Skip noisy events to keep traces focused:
```python
agent = Agent(
    model=model,
    tracing=Tracing(
        exporters=[JSONLExporter("./traces")],
        event_filter=lambda e: e.event != "RunContent",  # Skip streaming chunks
    ),
)
```
## Traced Events
| Event | When It Fires |
|---|---|
| `RunStarted` | Agent begins a run |
| `RunContent` | A content chunk is generated (streaming) |
| `RunContentCompleted` | Content generation is finished |
| `ToolCallStarted` | A tool call begins executing |
| `ToolCallCompleted` | A tool call finishes |
| `ToolCallError` | A tool call fails |
| `ReasoningStep` | A reasoning step is produced |
| `RunCompleted` | The run finishes successfully |
| `RunError` | The run fails with an error |
## Custom Exporters
Implement the `TraceExporter` protocol to send events to any backend:
```python
from definable.agent.tracing import TraceExporter

class DatadogExporter:
    """Send trace events to Datadog."""

    async def export(self, events):
        for event in events:
            await send_to_datadog(event.to_dict())  # your HTTP client goes here

    async def flush(self):
        pass

    async def shutdown(self):
        pass
```
Use it alongside other exporters:
```python
agent = Agent(
    model=model,
    tracing=Tracing(
        exporters=[
            JSONLExporter("./traces"),
            DatadogExporter(),
        ],
    ),
)
```
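Following the same three-method protocol, an in-memory exporter is a handy pattern for unit tests where you want to assert on the events a run emitted (a self-contained sketch; `InMemoryExporter` is not a library class):

```python
import asyncio

class InMemoryExporter:
    """Collects trace events in a list instead of sending them anywhere."""

    def __init__(self):
        self.events = []

    async def export(self, events):
        self.events.extend(events)

    async def flush(self):
        pass  # nothing buffered beyond the list

    async def shutdown(self):
        pass

exporter = InMemoryExporter()
asyncio.run(exporter.export(["RunStarted", "RunCompleted"]))
print(exporter.events)  # ['RunStarted', 'RunCompleted']
```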
## NoOpExporter
Use `NoOpExporter` to discard all events. Useful in tests where you want tracing enabled but don't need output:
```python
from definable.agent import Agent
from definable.agent.tracing import Tracing, NoOpExporter

agent = Agent(
    model=model,
    tracing=Tracing(exporters=[NoOpExporter()]),
)
```
## Debug Mode
For quick, color-coded turn-by-turn inspection of model calls, use `debug=True`:
```python
from definable.agent import Agent

agent = Agent(
    model="openai/gpt-4o",
    debug=True,  # Prints model call breakdown to stderr
)

output = agent.run("What is 2 + 2?")
```
This auto-adds a `DebugExporter` to tracing, which uses `rich` to print color-coded panels showing:
- Messages sent to the model
- Tools available
- Model response content and tool calls
- Token usage and timing
Debug mode composes with existing tracing: if you already have `tracing=Tracing(exporters=[...])`, adding `debug=True` appends the `DebugExporter` without replacing your exporters.
```python
from definable.agent import Agent
from definable.agent.tracing import Tracing, JSONLExporter

# Both file tracing and debug output
agent = Agent(
    model="openai/gpt-4o",
    tracing=Tracing(exporters=[JSONLExporter("./traces")]),
    debug=True,
)
```
## DebugExporter
You can also use `DebugExporter` directly:
```python
from definable.agent import Agent
from definable.agent.tracing import Tracing, DebugExporter

agent = Agent(
    model="openai/gpt-4o",
    tracing=Tracing(exporters=[DebugExporter()]),
)
```
The `DebugExporter` listens for `ModelCallStartedEvent` and `ModelCallCompletedEvent` to render its output.
Tracing failures never break agent execution. If an exporter raises an exception, the error is suppressed and the agent continues normally.
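The isolation works along these lines (a simplified sketch, not the library's actual code): each exporter call is wrapped so an exception is swallowed rather than propagated to the run.

```python
import asyncio

class FailingExporter:
    async def export(self, events):
        raise RuntimeError("backend unreachable")

async def safe_export(exporters, events):
    # A failing exporter must never break the run: catch and continue.
    for exporter in exporters:
        try:
            await exporter.export(events)
        except Exception:
            pass  # suppressed; the agent continues normally

asyncio.run(safe_export([FailingExporter()], ["RunStarted"]))
print("agent continues")
```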