## Setup

```shell
export OPENAI_API_KEY="sk-..."
```
## Basic Usage

```python
from definable.models import OpenAIChat

model = OpenAIChat(id="gpt-4o")
response = model.invoke(messages=[{"role": "user", "content": "Hello!"}])
print(response.content)
```
## Parameters

| Parameter | Description |
| --- | --- |
| `id` | Model identifier. Common values: `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini`, `o3-mini`. |
| `api_key` | OpenAI API key. Defaults to the `OPENAI_API_KEY` environment variable. |
| `base_url` | Override the API base URL. |
| `temperature` | Sampling temperature (0.0 to 2.0). |
| `max_tokens` | Maximum number of output tokens. |
| `top_p` | Nucleus sampling parameter. |
| `frequency_penalty` | Frequency penalty (-2.0 to 2.0). |
| `presence_penalty` | Presence penalty (-2.0 to 2.0). |
| `seed` | Seed for deterministic sampling. |
| `strict` | Whether to use strict mode for structured outputs. |
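For example, a run tuned for reproducible, bounded output might set several of these at construction time. A sketch, assuming the constructor accepts the parameters above as keyword arguments:

```python
from definable.models import OpenAIChat

# A reproducibility-oriented configuration: low temperature,
# a fixed seed, and a hard cap on output tokens.
model = OpenAIChat(
    id="gpt-4o-mini",
    temperature=0.0,
    max_tokens=512,
    seed=42,
)
```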
## Structured Output

OpenAI supports native structured outputs via JSON Schema:

```python
from pydantic import BaseModel

class Movie(BaseModel):
    title: str
    year: int
    genre: str

response = model.invoke(
    messages=[{"role": "user", "content": "Recommend a sci-fi movie."}],
    response_format=Movie,
)
print(response.parsed)  # Movie(title='Interstellar', year=2014, genre='Science Fiction')
```
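Since `response_format=Movie` works through the model's JSON Schema, the same Pydantic class can also validate a raw JSON payload on its own, which is handy for seeing what the parsed result looks like. This is plain Pydantic (v2), not a definable API:

```python
from pydantic import BaseModel

class Movie(BaseModel):
    title: str
    year: int
    genre: str

# Validate a raw JSON string directly against the schema class.
raw = '{"title": "Interstellar", "year": 2014, "genre": "Science Fiction"}'
movie = Movie.model_validate_json(raw)
print(movie)  # title='Interstellar' year=2014 genre='Science Fiction'
```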
## Audio Support

OpenAI models support audio input and output:

```python
from definable.media import Audio

model = OpenAIChat(
    id="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
)
response = model.invoke(
    messages=[{
        "role": "user",
        "content": "Say hello in French.",
    }]
)
print(response.audio.transcript)
```
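The transcript is plain text; the OpenAI API returns the audio itself base64-encoded. A stdlib-only sketch of saving those bytes to disk, assuming definable surfaces the encoded payload as something like `response.audio.data` (that attribute name is an assumption, not a documented API):

```python
import base64

# Stand-in for response.audio.data: a base64-encoded payload.
# This tiny example decodes to the 4-byte RIFF header of a WAV file.
audio_b64 = "UklGRg=="

wav_bytes = base64.b64decode(audio_b64)
with open("hello_fr.wav", "wb") as f:
    f.write(wav_bytes)
print(len(wav_bytes))  # 4
```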
## Vision

Pass images in messages for visual understanding:

```python
from definable.media import Image

response = model.invoke(
    messages=[{
        "role": "user",
        "content": "What's in this image?",
        "images": [Image(url="https://example.com/photo.jpg")],
    }]
)
print(response.content)
```
## Async Usage

```python
response = await model.ainvoke(
    messages=[{"role": "user", "content": "Hello!"}]
)
```
## Streaming

```python
for chunk in model.invoke_stream(
    messages=[{"role": "user", "content": "Tell me a story."}]
):
    if chunk.content:
        print(chunk.content, end="", flush=True)
```
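Each streamed chunk carries a partial piece of the reply, so the full text is just the concatenation of the non-empty pieces. A stdlib-only sketch with a stand-in generator (no API call; `Chunk` and `fake_stream` are hypothetical stand-ins for whatever `invoke_stream` yields):

```python
from dataclasses import dataclass
from typing import Iterator, Optional

@dataclass
class Chunk:
    content: Optional[str]  # stand-in for the chunks yielded by invoke_stream

def fake_stream() -> Iterator[Chunk]:
    # Simulates model.invoke_stream(...) yielding partial content.
    for piece in ["Once ", "upon ", "a ", "time."]:
        yield Chunk(content=piece)

# Accumulate the streamed pieces into the complete response text.
full_text = "".join(c.content for c in fake_stream() if c.content)
print(full_text)  # Once upon a time.
```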