Structured output lets you define a Pydantic model and have the LLM return data that conforms exactly to that schema.
## With Agents

Pass `output_schema` to an agent run:
```python
from pydantic import BaseModel, Field

from definable.agent import Agent


class MovieReview(BaseModel):
    title: str = Field(description="Movie title")
    rating: float = Field(description="Rating out of 10")
    summary: str = Field(description="Brief summary")
    pros: list[str] = Field(description="Positive aspects")
    cons: list[str] = Field(description="Negative aspects")


agent = Agent(model="gpt-4o", instructions="You are a movie critic.")
output = await agent.arun("Review Inception", output_schema=MovieReview)
review = output.parsed  # MovieReview instance
print(f"{review.title}: {review.rating}/10")
```
Use `output_schema=`, not `response_model=`; a `response_model` parameter does not exist.
## With Models

Pass `response_format` to a model call:

```python
import json

from definable.model import OpenAIChat
from definable.model.message import Message

model = OpenAIChat(id="gpt-4o")
response = model.invoke(
    messages=[Message(role="user", content="Recommend a sci-fi movie.")],
    assistant_message=Message(role="assistant", content=""),
    response_format=MovieReview,
)
movie = MovieReview(**json.loads(response.content))
```
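Since the model-level path hands you a raw JSON string, it can be worth guarding against malformed or incomplete output before constructing the Pydantic model. A minimal standard-library sketch (the `raw` payload here is a hypothetical response, not real model output):

```python
import json

# Hypothetical raw model output; in practice this comes from response.content.
raw = (
    '{"title": "Arrival", "rating": 8.7, '
    '"summary": "A linguist decodes an alien language.", '
    '"pros": ["score", "pacing"], "cons": ["slow start"]}'
)

try:
    data = json.loads(raw)
except json.JSONDecodeError as exc:
    raise ValueError(f"Model returned invalid JSON: {exc}") from exc

# Basic shape check before handing the dict to MovieReview(**data).
missing = {"title", "rating", "summary", "pros", "cons"} - data.keys()
if missing:
    raise ValueError(f"Response missing fields: {missing}")

print(f"{data['title']}: {data['rating']}/10")
```

Pydantic will catch type mismatches during `MovieReview(**data)`; the explicit check above just produces a clearer error when whole fields are absent.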
## Complex Schemas

Nested models, lists, enums, and optional fields are all supported:

```python
from enum import Enum
from typing import Optional


class Priority(str, Enum):
    low = "low"
    medium = "medium"
    high = "high"


class Task(BaseModel):
    title: str = Field(description="Short task title")
    priority: Priority
    subtasks: Optional[list[str]] = None


class ProjectPlan(BaseModel):
    project_name: str
    tasks: list[Task]
    total_hours: float
```
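For a nested schema like `ProjectPlan`, a conforming response might look like the payload below (illustrative, not actual model output). Parsing it with only the standard library shows how the nesting maps onto plain dicts and lists:

```python
import json

# Illustrative JSON a model might return for ProjectPlan (not real output).
payload = """
{
  "project_name": "Website redesign",
  "tasks": [
    {"title": "Audit current site", "priority": "high",
     "subtasks": ["crawl pages", "list issues"]},
    {"title": "Draft wireframes", "priority": "medium", "subtasks": null}
  ],
  "total_hours": 42.5
}
"""

plan = json.loads(payload)
# Enum values arrive as plain strings; Pydantic coerces them to Priority
# members during validation. JSON null becomes Python None for Optional fields.
print(plan["project_name"], "-", len(plan["tasks"]), "tasks")
```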
## Provider Support

| Provider | Native JSON Schema | Prompt-Based Fallback |
|---|---|---|
| OpenAI | Yes | — |
| Anthropic | Yes | — |
| Google Gemini | Yes | — |
| DeepSeek | No | Yes |
| Moonshot | No | Yes |
| xAI | No | Yes |
| Mistral | No | Yes |
When a provider does not support native structured outputs, Definable automatically includes the JSON Schema in the system prompt and instructs the model to respond in the correct format.
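The fallback can be approximated with a short sketch. The `build_schema_prompt` helper and its prompt wording are illustrative assumptions, not Definable's actual implementation, and the hand-written schema fragment stands in for what Pydantic would generate via `MovieReview.model_json_schema()`:

```python
import json


def build_schema_prompt(schema: dict) -> str:
    # Hypothetical helper: embed the JSON Schema in the system prompt so a
    # provider without native structured output still emits conforming JSON.
    return (
        "Respond ONLY with a JSON object matching this JSON Schema, "
        "with no surrounding prose or code fences:\n"
        + json.dumps(schema, indent=2)
    )


# Hand-written fragment of a MovieReview schema, for illustration only.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "rating": {"type": "number"},
    },
    "required": ["title", "rating"],
}

system_prompt = build_schema_prompt(schema)
print(system_prompt.splitlines()[0])
```

The response is then parsed and validated against the original Pydantic model exactly as in the native path, so calling code does not need to know which mechanism was used.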