## Setup

```bash
export OPENAI_API_KEY="sk-..."
```
## Basic Usage

```python
from definable.model.message import Message
from definable.model.openai import OpenAIChat

model = OpenAIChat(id="gpt-4o")

response = model.invoke(
    messages=[Message(role="user", content="Hello!")],
    assistant_message=Message(role="assistant", content=""),
)
print(response.content)
```
## Parameters

| Parameter | Description |
| --- | --- |
| `id` | Model identifier. Common values: `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini`, `o3-mini`. |
| `api_key` | OpenAI API key. Defaults to the `OPENAI_API_KEY` environment variable. |
| `base_url` | Override the API base URL. |
| `temperature` | Sampling temperature (0.0 to 2.0). |
| `max_tokens` | Maximum number of output tokens. |
| `top_p` | Nucleus sampling parameter. |
| `frequency_penalty` | Frequency penalty (-2.0 to 2.0). |
| `presence_penalty` | Presence penalty (-2.0 to 2.0). |
| `seed` | Seed for deterministic sampling. |
| `strict` | Whether to use strict mode for structured outputs. |
## Structured Output

OpenAI supports native structured outputs via JSON Schema:

```python
import json

from pydantic import BaseModel

from definable.model.message import Message
from definable.model.openai import OpenAIChat


class Movie(BaseModel):
    title: str
    year: int
    genre: str


model = OpenAIChat(id="gpt-4o")

response = model.invoke(
    messages=[Message(role="user", content="Recommend a sci-fi movie.")],
    assistant_message=Message(role="assistant", content=""),
    response_format=Movie,
)

movie = Movie(**json.loads(response.content))
print(movie)  # Movie(title='Interstellar', year=2014, genre='Science Fiction')
```
## Audio Support

OpenAI models support audio input and output:

```python
from definable.model.message import Message
from definable.model.openai import OpenAIChat

model = OpenAIChat(
    id="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
)

response = model.invoke(
    messages=[Message(role="user", content="Say hello in French.")],
    assistant_message=Message(role="assistant", content=""),
)
print(response.audio.transcript)
```
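The OpenAI API delivers generated audio as a base64-encoded string. Assuming the response object surfaces it the same way (a hypothetical `response.audio.data` attribute; the name is not confirmed by this library's docs), writing it to disk is a stdlib operation:

```python
import base64
from pathlib import Path

# Stand-in for response.audio.data: the OpenAI API returns audio as a
# base64-encoded string, so a real payload would already look like this.
audio_b64 = base64.b64encode(b"RIFF....WAVEfmt ").decode("ascii")

wav_bytes = base64.b64decode(audio_b64)
Path("hello.wav").write_bytes(wav_bytes)
print(f"wrote {len(wav_bytes)} bytes")  # wrote 16 bytes
```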
## Vision

Pass images in messages for visual understanding:

```python
from definable.media import Image
from definable.model.message import Message
from definable.model.openai import OpenAIChat

model = OpenAIChat(id="gpt-4o")

response = model.invoke(
    messages=[
        Message(
            role="user",
            content="What's in this image?",
            images=[Image(url="https://example.com/photo.jpg")],
        )
    ],
    assistant_message=Message(role="assistant", content=""),
)
print(response.content)
```
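The OpenAI API also accepts images inline as base64 data URLs, which is useful for local files. Assuming `Image(url=...)` forwards the URL to the API unchanged (an assumption about this library, not a documented behavior), building the data URL is straightforward:

```python
import base64
from pathlib import Path


def to_data_url(path: str, mime: str = "image/jpeg") -> str:
    """Encode a local image file as a base64 data URL."""
    b64 = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return f"data:{mime};base64,{b64}"


# In-memory stand-in for a real photo on disk
Path("photo.jpg").write_bytes(b"\xff\xd8\xff\xe0fakejpeg")

url = to_data_url("photo.jpg")
print(url[:23])  # data:image/jpeg;base64,
```

The resulting string can then be passed as `Image(url=url)` in place of an `https://` URL.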
## Async Usage

Use `ainvoke` from within an async context:

```python
from definable.model.message import Message

response = await model.ainvoke(
    messages=[Message(role="user", content="Hello!")],
    assistant_message=Message(role="assistant", content=""),
)
print(response.content)
```
## Streaming

```python
from definable.model.message import Message

for chunk in model.invoke_stream(
    messages=[Message(role="user", content="Tell me a story.")],
    assistant_message=Message(role="assistant", content=""),
):
    if chunk.content:
        print(chunk.content, end="", flush=True)
```
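When the full text is needed after streaming (for logging or post-processing), accumulate the chunk contents as they arrive. The pattern is plain Python; the chunks below are simulated stand-ins for what `model.invoke_stream(...)` yields:

```python
# Simulated chunk contents; in practice each comes from chunk.content.
chunks = ["Once ", "upon ", "a ", "time."]

parts = []
for content in chunks:
    if content:  # skip empty deltas, as in the streaming loop above
        print(content, end="", flush=True)
        parts.append(content)

full_text = "".join(parts)
print()  # newline after the stream ends
```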