Setup

export OPENAI_API_KEY="sk-..."

Basic Usage

from definable.models import OpenAIChat

model = OpenAIChat(id="gpt-4o")
response = model.invoke(messages=[{"role": "user", "content": "Hello!"}])
print(response.content)

Parameters

id (str, default: "gpt-4o")
Model identifier. Common values: gpt-4o, gpt-4o-mini, o1, o1-mini, o3-mini.

api_key (str)
OpenAI API key. Defaults to the OPENAI_API_KEY environment variable.

base_url (str)
Override the API base URL.

temperature (float)
Sampling temperature, from 0.0 to 2.0. Lower values produce more deterministic output.

max_tokens (int)
Maximum number of output tokens.

top_p (float)
Nucleus sampling parameter.

frequency_penalty (float)
Frequency penalty, from -2.0 to 2.0.

presence_penalty (float)
Presence penalty, from -2.0 to 2.0.

seed (int)
Seed for best-effort deterministic sampling.

strict_output (bool, default: true)
Whether to use strict mode for structured outputs.
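
These parameters map onto the fields of the OpenAI Chat Completions request body. As a rough illustration (the key names follow the OpenAI API; exactly how definable serializes them is an assumption), a configured request looks like:

```python
# Sketch of the wire-level request fields these parameters correspond to.
# Key names follow the OpenAI Chat Completions API; definable's exact
# serialization is an assumption.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.2,        # 0.0-2.0; lower = more deterministic
    "max_tokens": 512,         # cap on output tokens
    "top_p": 1.0,              # nucleus sampling
    "frequency_penalty": 0.0,  # -2.0 to 2.0
    "presence_penalty": 0.0,   # -2.0 to 2.0
    "seed": 42,                # best-effort determinism
}
```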

Structured Output

OpenAI supports native structured outputs via JSON Schema:
from pydantic import BaseModel

class Movie(BaseModel):
    title: str
    year: int
    genre: str

response = model.invoke(
    messages=[{"role": "user", "content": "Recommend a sci-fi movie."}],
    response_format=Movie,
)
print(response.parsed)  # Movie(title='Interstellar', year=2014, genre='Science Fiction')
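
Under the hood, strict mode constrains generation to a JSON Schema derived from the model class. As an illustration of the shape OpenAI's strict mode requires (not definable's exact serialization), the Movie class corresponds to a schema where every field is required and extra keys are forbidden:

```python
# Illustration: the strict JSON Schema a Movie-like class translates to.
# OpenAI's strict mode requires every property to appear in "required"
# and "additionalProperties" to be false.
movie_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "year": {"type": "integer"},
        "genre": {"type": "string"},
    },
    "required": ["title", "year", "genre"],
    "additionalProperties": False,
}
```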

Audio Support

Audio-capable models, such as gpt-4o-audio-preview, support audio input and output:
from definable.media import Audio

model = OpenAIChat(
    id="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
)

response = model.invoke(
    messages=[{
        "role": "user",
        "content": "Say hello in French.",
    }]
)
print(response.audio.transcript)
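
Generated audio typically arrives base64-encoded. A minimal sketch of decoding it to playable bytes, assuming the response exposes a base64 field such as `response.audio.data` (the placeholder string below stands in for real data):

```python
import base64

# Stand-in for response.audio.data, assumed to be base64-encoded WAV bytes.
audio_b64 = base64.b64encode(b"RIFF....WAVEfmt ").decode("ascii")

wav_bytes = base64.b64decode(audio_b64)  # decode back to raw audio bytes
# open("hello.wav", "wb").write(wav_bytes)  # then write to disk to play it
```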

Vision

Pass images in messages for visual understanding:
from definable.media import Image

response = model.invoke(
    messages=[{
        "role": "user",
        "content": "What's in this image?",
        "images": [Image(url="https://example.com/photo.jpg")],
    }]
)
print(response.content)
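
Remote URLs are not the only option: OpenAI's vision endpoints also accept images inline as base64 data URLs, which you can build with the standard library. (Whether Image accepts a data URL in its `url` field is an assumption to verify against the Image reference.)

```python
import base64

# Raw image bytes would go here; PNG magic bytes serve as a short stand-in.
image_bytes = b"\x89PNG\r\n\x1a\n"

b64 = base64.b64encode(image_bytes).decode("ascii")
data_url = f"data:image/png;base64,{b64}"  # inline alternative to a remote URL
```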

Async Usage

Call ainvoke from within an async function:

response = await model.ainvoke(
    messages=[{"role": "user", "content": "Hello!"}]
)
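
ainvoke is most useful for fanning out several prompts concurrently with asyncio.gather. A runnable sketch, with a stub coroutine standing in for model.ainvoke (the real call and its response type come from the example above):

```python
import asyncio

async def fake_ainvoke(messages):
    # Stub standing in for model.ainvoke: echoes the prompt after yielding control.
    await asyncio.sleep(0)
    return f"echo: {messages[0]['content']}"

async def main():
    prompts = ["Hello!", "Bonjour!", "Hola!"]
    # Fan out all requests concurrently; gather preserves input order.
    return await asyncio.gather(
        *(fake_ainvoke([{"role": "user", "content": p}]) for p in prompts)
    )

results = asyncio.run(main())
```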

Streaming

Stream the response incrementally with invoke_stream:

for chunk in model.invoke_stream(
    messages=[{"role": "user", "content": "Tell me a story."}]
):
    if chunk.content:
        print(chunk.content, end="", flush=True)
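
A common streaming pattern is to accumulate chunks into the full response while printing them. A runnable sketch, with a stub generator standing in for model.invoke_stream (the `chunk.content` attribute is taken from the loop above):

```python
from types import SimpleNamespace

def fake_invoke_stream():
    # Stub standing in for model.invoke_stream: yields chunk-like objects
    # whose .content may be None (e.g. non-text chunks).
    for piece in ["Once ", "upon ", "a ", "time.", None]:
        yield SimpleNamespace(content=piece)

parts = []
for chunk in fake_invoke_stream():
    if chunk.content:           # skip chunks with no text
        parts.append(chunk.content)

full_text = "".join(parts)      # the complete response
```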