The `OpenAILike` class lets you connect to any LLM provider that implements the OpenAI chat completions API. This includes self-hosted inference servers (vLLM, Ollama), gateways and custom proxies such as LiteLLM, and cloud providers with OpenAI-compatible endpoints.
## Basic Usage
```python
from definable.models import OpenAILike

model = OpenAILike(
    id="my-model",
    api_key="your-api-key",
    base_url="https://your-provider.com/v1",
)

response = model.invoke(messages=[{"role": "user", "content": "Hello!"}])
print(response.content)
```
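In practice you will usually read the key from the environment rather than hard-coding it. A minimal sketch using standard-library `os` (the variable name `MY_PROVIDER_API_KEY` is illustrative, not something the library defines):

```python
import os

from definable.models import OpenAILike

# Read the key from an environment variable (name is an example).
model = OpenAILike(
    id="my-model",
    api_key=os.environ["MY_PROVIDER_API_KEY"],
    base_url="https://your-provider.com/v1",
)
```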
## Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `id` | `str` | Model identifier as expected by the target provider. |
| `api_key` | `str` | API key for authentication. |
| `base_url` | `str` | The base URL of the OpenAI-compatible API endpoint. |
| `name` | `str` | Human-readable name for this model instance. |
| `provider` | `str` | Provider name for logging and metrics. |
| `supports_native_structured_outputs` | `bool` | Whether the provider supports OpenAI-style structured outputs. |
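Putting all of the documented parameters together in one constructor call (all values below are placeholders):

```python
from definable.models import OpenAILike

model = OpenAILike(
    id="my-model",                             # model identifier the provider expects
    api_key="your-api-key",                    # authentication
    base_url="https://your-provider.com/v1",   # OpenAI-compatible endpoint
    name="My Model",                           # human-readable instance name
    provider="MyProvider",                     # provider label for logging and metrics
    supports_native_structured_outputs=True,   # provider handles OpenAI-style structured outputs
)
```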
## Examples

### Ollama

```python
model = OpenAILike(
    id="llama3.2",
    base_url="http://localhost:11434/v1",
    name="Ollama Llama",
    provider="Ollama",
)
```
### vLLM

```python
model = OpenAILike(
    id="meta-llama/Llama-3-8b-chat-hf",
    base_url="http://localhost:8000/v1",
    name="vLLM Llama",
    provider="vLLM",
)
```
### LiteLLM Proxy

```python
model = OpenAILike(
    id="gpt-4o",
    api_key="your-litellm-key",
    base_url="http://localhost:4000/v1",
    name="LiteLLM Proxy",
    provider="LiteLLM",
)
```
### Azure OpenAI

```python
# Note: Azure's deployment-scoped endpoint usually also expects an
# ?api-version=... query parameter; check your Azure resource
# configuration if requests are rejected.
model = OpenAILike(
    id="gpt-4o",
    api_key="your-azure-key",
    base_url="https://your-resource.openai.azure.com/openai/deployments/gpt-4o",
    provider="Azure",
)
```
## Streaming

```python
for chunk in model.invoke_stream(
    messages=[{"role": "user", "content": "Tell me a joke."}]
):
    if chunk.content:
        print(chunk.content, end="", flush=True)
```
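Each chunk carries an optional content delta, so you can accumulate the full reply while printing it. A small sketch using only the `invoke_stream` API shown above:

```python
parts = []
for chunk in model.invoke_stream(
    messages=[{"role": "user", "content": "Tell me a joke."}]
):
    if chunk.content:
        parts.append(chunk.content)
        print(chunk.content, end="", flush=True)

full_reply = "".join(parts)  # the complete response text
```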
## Creating Custom Provider Classes
For repeated use, subclass `OpenAILike` so the connection defaults are defined once:
```python
from definable.models.openai.like import OpenAILike


class MyProviderChat(OpenAILike):
    id: str = "my-default-model"
    name: str = "MyProvider"
    provider: str = "MyProvider"
    base_url: str = "https://api.myprovider.com/v1"
    supports_native_structured_outputs: bool = True
```
This is exactly how `DeepSeekChat`, `MoonshotChat`, and `xAI` are implemented internally.
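With the defaults baked in, callers only need to supply credentials (and can still override any field). For example, using the `MyProviderChat` subclass defined above:

```python
model = MyProviderChat(api_key="your-api-key")

response = model.invoke(messages=[{"role": "user", "content": "Hello!"}])
print(response.content)
```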