AgentConfig is an immutable dataclass that controls every aspect of agent behavior. Pass it when creating an agent, or use the defaults.
Basic Configuration
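The code sample that originally accompanied this section isn't shown here. The following is a minimal sketch of what constructing a config and handing it to an agent might look like; the field names (name, max_iterations, max_retries) are illustrative assumptions, not the library's confirmed API.

```python
# Hypothetical sketch: the field names below are assumptions,
# not confirmed AgentConfig fields.
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable, like AgentConfig
class AgentConfig:
    name: str = "agent"        # human-readable name for logs/traces
    max_iterations: int = 10   # model-call/tool-execution loop cap
    max_retries: int = 3       # retry attempts for transient errors

config = AgentConfig(name="support-bot", max_iterations=5)
# agent = Agent(config=config)  # pass at agent creation, or rely on defaults
```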
Configuration Reference
Identity
Unique identifier for the agent. Auto-generated UUID if not set.
Human-readable name used in logs and traces.
Execution
Maximum number of model-call-then-tool-execution loops before stopping. Prevents infinite loops when the model keeps calling tools.
Token limit per run. Stops execution if exceeded.
Timeout in seconds for streaming responses.
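To make the loop cap concrete, here is a minimal sketch of the model-call-then-tool-execution loop that the limit bounds. The function names are invented for illustration; `call_model` and `run_tool` stand in for the real model and tool plumbing.

```python
# Illustrative sketch of the loop a max-iterations setting bounds.
def run_agent(call_model, run_tool, max_iterations=10):
    for _ in range(max_iterations):
        response = call_model()
        if response.get("tool_call") is None:
            return response["text"]         # model produced a final answer
        run_tool(response["tool_call"])     # execute the tool, loop again
    raise RuntimeError("max_iterations exceeded")  # model kept calling tools

# A model that calls a tool twice, then answers:
replies = iter([
    {"tool_call": "search"},
    {"tool_call": "search"},
    {"tool_call": None, "text": "done"},
])
result = run_agent(lambda: next(replies), lambda call: None)
```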
Reliability
Automatically retry on transient network errors.
Maximum number of retry attempts for transient errors.
Validate tool arguments against their schema before execution.
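The retry behavior can be sketched as follows. This is not the library's implementation; `TransientError` stands in for network failures, and the backoff is simplified to a fixed delay.

```python
import time

# Sketch of retry-on-transient-error behavior bounded by max retries.
class TransientError(Exception):
    pass

def call_with_retries(fn, max_retries=3, delay=0.0):
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except TransientError:
            if attempt == max_retries:
                raise              # retries exhausted, surface the error
            time.sleep(delay)      # wait before the next attempt

# Fails twice, then succeeds on the third call:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError()
    return "ok"

result = call_with_retries(flaky)
```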
State & Dependencies
Default session state available to tools. Merged with per-run state.
Dependencies injected into tools that declare them (e.g., database connections, API clients).
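The merge of default session state with per-run state can be sketched with plain dicts. One assumption, not confirmed by the docs: per-run keys win on conflict.

```python
# Sketch of merging default session state with per-run state.
# Assumption: per-run values override the configured defaults.
def merged_state(default_state, run_state):
    return {**default_state, **run_state}

state = merged_state(
    {"user_id": None, "locale": "en"},   # default state from the config
    {"user_id": 42},                     # state passed for this run
)
```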
Tracing
Tracing configuration (backward compatibility). Prefer passing tracing=Tracing(...) directly to Agent. See Tracing for details.
Compression
Compression is configured directly on the Agent constructor (not in AgentConfig):
Readers
File reader configuration for processing attached files. See File Readers.
Memory
Memory is configured directly on the Agent constructor:
Knowledge
Knowledge (RAG) is configured directly on the Agent constructor:
Thinking
The thinking layer is configured directly on the Agent constructor (not in AgentConfig):
Immutable Updates
AgentConfig is frozen after creation. Use with_updates() to create a modified copy:
base_config is unchanged. This pattern makes it safe to share configs across agents.
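A minimal sketch of the frozen-dataclass pattern behind with_updates(); the field names here are illustrative, and the real method may differ in detail.

```python
import dataclasses
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentConfig:
    name: str = "agent"
    max_iterations: int = 10

    def with_updates(self, **changes):
        # dataclasses.replace returns a new frozen instance;
        # the original config is never mutated.
        return dataclasses.replace(self, **changes)

base_config = AgentConfig(name="base")
variant = base_config.with_updates(max_iterations=3)
```

Because every update produces a fresh copy, two agents can safely start from the same base_config without affecting each other.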
Compression
Compress large tool results to save tokens:
Compression uses a separate model call to summarize tool output. This adds a small amount of latency but can significantly reduce total token usage for tools that return large results.
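The mechanism can be sketched as follows. `summarize` stands in for the separate model call, and the size threshold is an invented parameter, not a documented setting.

```python
# Sketch of tool-result compression: results above a size threshold
# are replaced by a model-generated summary. summarize() stands in
# for the extra model call; max_chars is an invented parameter.
def compress_tool_result(result: str, summarize, max_chars=2000):
    if len(result) <= max_chars:
        return result             # small results pass through untouched
    return summarize(result)      # large results cost one extra model call

short = compress_tool_result("tiny output", lambda r: "summary", max_chars=100)
long = compress_tool_result("x" * 5000, lambda r: "summary", max_chars=100)
```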
Readers Config
Configure file reader behavior for processing attached files:
Enable or disable file reading.
Custom reader instance. When None, a default BaseReader with all available built-in parsers is created.
Maximum total character length of all extracted file content. When None, no limit is applied.
Format for injecting file content into the prompt. "xml" wraps content in XML tags, "markdown" uses code blocks.
Audio Transcription
The audio transcriber is configured directly on the Agent constructor. It automatically transcribes voice messages (from Telegram, Discord, or direct arun() calls) to text before the model sees them.
How It Works
When audio_transcriber is set:
- Before the pipeline runs, _transcribe_audio() iterates over messages with Audio attachments
- Each audio clip is transcribed via the configured backend (Whisper by default)
- The transcript text is appended to the message’s content field
- The audio field is cleared from the message so non-audio models don’t receive raw input_audio blocks
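The steps above can be sketched as follows. The message shape and the transcriber callable are simplified stand-ins for the real message objects and Whisper backend.

```python
# Sketch of the transcription pass described above. Messages are
# plain dicts here; the real pipeline uses richer message objects.
def transcribe_audio(messages, transcriber):
    for msg in messages:
        if msg.get("audio"):
            transcript = transcriber(msg["audio"])   # e.g. a Whisper backend
            msg["content"] = (msg["content"] + "\n" + transcript).strip()
            msg["audio"] = None   # strip raw audio for non-audio models
    return messages

msgs = [{"content": "", "audio": b"\x00\x01"}]
transcribe_audio(msgs, lambda clip: "hello from a voice note")
```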
Security
Security features are configured directly on the Agent constructor:
Usage Tracking
Track token usage and estimated cost across agent runs:
Deep Research
The deep research layer is configured directly on the Agent constructor:
DeepResearchConfig Reference
Research depth preset. "quick" (1 wave, 8 sources), "standard" (3 waves, 15 sources), "deep" (5 waves, 30 sources).
Search backend name: "duckduckgo", "google", or "serpapi".
Backend-specific config (API keys, CSE ID, etc.). Required for Google CSE and SerpAPI.
Custom async search callable. Overrides search_provider when set.
Model for CKU extraction (should be cheap/fast). Defaults to the agent’s model.
Maximum number of unique sources to read across all waves.
Maximum number of research waves.
Number of concurrent search queries per wave.
Number of concurrent page reads.
Minimum relevance score for CKU inclusion.
Whether to include source citations in the research context.
Whether to surface contradictions between sources.
Format for the injected context: "xml" or "markdown".
Approximate token budget for the research context block.
Stop when novelty ratio drops below this threshold between waves.
When to run research: "always" (every run), "auto" (model decides), "tool" (explicit tool call).
Description shown in the layer guide injected into the system prompt. If None, uses the default description.
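The novelty-based stopping rule can be sketched as follows. The ratio definition (never-before-seen sources over sources returned in a wave) is an assumption about how the library measures novelty, not a documented formula.

```python
# Sketch of wave-based early stopping: stop researching once the
# fraction of new sources in a wave falls below a threshold.
# The exact novelty metric is an assumption.
def should_continue(seen: set, wave_sources: list, min_novelty=0.3):
    if not wave_sources:
        return False
    new = [s for s in wave_sources if s not in seen]
    novelty = len(new) / len(wave_sources)
    seen.update(wave_sources)
    return novelty >= min_novelty

seen = set()
first = should_continue(seen, ["a", "b", "c"])        # all new sources
second = should_continue(seen, ["a", "b", "d", "c"])  # mostly repeats
```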