- Connects to an LLM (OpenAI GPT-4o)
- Uses a custom tool to take actions
- Returns structured results
1. Define the Agent
Save the following code as `my_agent.py`:
You now have:
- An agent that reasons about which tools to call
- Automatic parallel tool execution
- A typed `RunOutput` with content, messages, and metadata
2. Run Your Agent
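With the file saved, running the agent is a plain Python invocation. The key name `OPENAI_API_KEY` follows OpenAI's convention; the placeholder value is left elided:

```shell
# Assumes my_agent.py from step 1 sits in the current directory.
export OPENAI_API_KEY="..."   # your OpenAI API key
python my_agent.py
```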
3. Add Memory
Make your agent remember past conversations:
4. Add Knowledge
Ground your agent in documents:
5. Stream Responses
Stream tokens as they are generated.
What You Just Built
In a few lines of code, you built:
- An agent with tool calling and parallel execution
- Persistent memory across conversations
- Knowledge-grounded responses via RAG
- Real-time streaming output
Next Steps
| Task | Guide |
|---|---|
| Explore agent configuration | Agent configuration |
| Add reasoning capabilities | Thinking |
| Deploy to messaging platforms | Interfaces |
| Coordinate multiple agents | Teams |
| Build structured pipelines | Workflows |
| Browse all examples | Examples |