Run models locally with Ollama. No API key required.

## Setup

```shell
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3
```
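Before wiring the model into code, it can help to confirm the server is actually listening. A minimal standard-library check, assuming Ollama's default port 11434 (the root endpoint of a running server answers with a short status string); this helper is an illustration, not part of the definable library:

```python
from urllib.request import urlopen
from urllib.error import URLError


def ollama_reachable(host: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at `host`."""
    try:
        with urlopen(host, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Connection refused / timed out: no server at that address
        return False


print("Ollama reachable:", ollama_reachable())
```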

## Basic Usage

```python
from definable.model.ollama import Ollama
from definable.model.message import Message

model = Ollama(id="llama3")
response = model.invoke(
    messages=[Message(role="user", content="Hello!")],
    assistant_message=Message(role="assistant", content=""),
)
print(response.content)
```

## String Shorthand

An Agent accepts a provider-prefixed model string in place of a model instance:

```python
agent = Agent(model="ollama/llama3")
```
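The `"ollama/llama3"` string presumably encodes the provider and the model name, separated by the first slash. A hypothetical sketch of that parsing (not the library's actual code; splitting on the *first* slash would keep model IDs that themselves contain slashes intact):

```python
def parse_model_string(spec: str) -> tuple[str, str]:
    """Split a "provider/model" shorthand into (provider, model_id).

    Hypothetical illustration of the convention; the real parsing
    lives inside the Agent class.
    """
    provider, sep, model_id = spec.partition("/")
    if not sep:
        raise ValueError(f"expected 'provider/model', got {spec!r}")
    return provider, model_id


print(parse_model_string("ollama/llama3"))  # ('ollama', 'llama3')
```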

## Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `str` | required | Model name (must be pulled first with `ollama pull`). |
| `host` | `str` | `"http://localhost:11434"` | Ollama server URL. |
| `temperature` | `float` | – | Sampling temperature. |
| `max_tokens` | `int` | – | Maximum output tokens. |

## Imports

```python
from definable.model.ollama import Ollama
```