1. Model Call
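A direct model call is just a message list sent to a chat endpoint and an assistant message back. The framework's own API isn't shown on this page, so the sketch below is illustrative only: `fake_model` is a hypothetical stand-in for a real provider client, so the example runs offline.

```python
# Minimal sketch of a direct model call. `fake_model` stands in for a
# real chat-completion client so this runs without a network or API key.

def fake_model(messages):
    """Stand-in for a chat-completion endpoint: returns an assistant message."""
    last = messages[-1]["content"]
    return {"role": "assistant", "content": f"Echo: {last}"}

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
reply = fake_model(messages)
print(reply["content"])
```

In practice you would replace `fake_model` with your provider SDK's chat call; the message-list shape stays the same.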
Call an LLM directly.

2. Agent with Tools
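The core of a tool-using agent is a loop: the model requests a tool, the agent executes it, and the result is fed back until the model produces a final answer. A minimal sketch, with a scripted model and a hypothetical `get_weather` tool so it runs offline:

```python
# Sketch of an agent tool loop. `scripted_model` stands in for a
# tool-calling LLM; `get_weather` is a hypothetical tool.

def get_weather(city):
    # A real tool would call a weather API here.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def scripted_model(messages):
    """First turn: request a tool. After a tool result: answer with it."""
    if messages[-1]["role"] == "tool":
        return {"role": "assistant", "content": messages[-1]["content"]}
    return {"role": "assistant",
            "tool_call": {"name": "get_weather", "args": {"city": "Paris"}}}

def run_agent(user_input):
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = scripted_model(messages)
        call = reply.get("tool_call")
        if call is None:           # no tool requested: final answer
            return reply["content"]
        result = TOOLS[call["name"]](**call["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})

print(run_agent("Weather in Paris?"))  # Sunny in Paris
```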
Add tools so the agent can take actions.

3. Agent with Knowledge (RAG)
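RAG grounds the agent by retrieving the most relevant document chunks for a query and injecting them into the prompt. In the sketch below, embeddings are replaced by simple word overlap and the documents (`DOCS`) are made up, so everything runs offline; a real pipeline would use an embedding model and a vector database.

```python
# Sketch of retrieval-augmented generation: score chunks against the
# query, inject the best ones into the prompt. Word overlap stands in
# for embedding similarity so this runs offline.

DOCS = [
    "The refund window is 30 days from purchase.",
    "Support is available Monday through Friday.",
    "Shipping is free on orders over $50.",
]

def score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)  # crude stand-in for cosine similarity

def retrieve(query, k=1):
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the refund window?")
print(prompt)
```

The resulting prompt carries the retrieved context, so the model's answer is grounded in your documents rather than its training data.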
Ground the agent in your documents.

4. Agent with Memory
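Persistent memory means the conversation history is keyed by a session id and reloaded on every call, so context survives across conversations. A minimal sketch, with a dict standing in for a real database and a stub model that just reports how much history it can see:

```python
# Sketch of persistent agent memory. `MEMORY` stands in for a durable
# store (database, file); `fake_model` is a stub.

MEMORY = {}  # session_id -> list of messages

def fake_model(messages):
    # Stand-in model: reports how many prior turns it can see.
    return f"I remember {len(messages) - 1} earlier message(s)."

def chat(session_id, user_input):
    history = MEMORY.setdefault(session_id, [])  # reload this session
    history.append({"role": "user", "content": user_input})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

chat("alice", "My name is Alice.")
print(chat("alice", "What do you remember?"))  # sees 2 earlier messages
```

Because the store is keyed by `session_id`, a second user's conversation stays isolated while each user's own history accumulates.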
Give the agent persistent memory across conversations.

5. Streaming
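Streaming yields tokens as the model produces them, so the caller can render output incrementally instead of waiting for the full response. A sketch using a Python generator, with a scripted token list standing in for a real streaming API:

```python
# Sketch of token streaming: a generator yields tokens one at a time
# and the caller consumes them incrementally. The token list is scripted
# so this runs offline.

def stream_model(prompt):
    for token in ["Streaming", " keeps", " the", " UI", " responsive."]:
        yield token  # a real client would yield tokens as they arrive

chunks = []
for token in stream_model("Explain streaming"):
    chunks.append(token)  # e.g., render each token immediately
print("".join(chunks))  # Streaming keeps the UI responsive.
```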
Stream tokens as they are generated.

What’s Next
Agents
The core execution loop with tools, middleware, and tracing.
Teams
Coordinate multiple agents to collaborate or divide work.
Workflows
Orchestrate agents through structured step pipelines.
Models
10 providers with streaming, structured output, and vision.
Tools
Custom tools with hooks, caching, and dependency injection.
Knowledge
Full RAG pipeline with readers, chunkers, and vector databases.