Run models locally with Ollama. No API key required.
## Setup

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3
```
## Basic Usage

```python
from definable.model.ollama import Ollama
from definable.model.message import Message

model = Ollama(id="llama3")

response = model.invoke(
    messages=[Message(role="user", content="Hello!")],
    assistant_message=Message(role="assistant", content=""),
)
print(response.content)
```
## String Shorthand

```python
agent = Agent(model="ollama/llama3")
```
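The shorthand string presumably combines a provider prefix and a model id separated by a slash. A minimal sketch of how such a string could be split (the helper name `split_model_string` is hypothetical, not part of the library):

```python
def split_model_string(spec: str) -> tuple[str, str]:
    """Split a 'provider/model' shorthand into its two parts.

    Only the first slash is treated as the separator, so model ids
    that themselves contain slashes survive intact.
    """
    provider, _, model_id = spec.partition("/")
    return provider, model_id

print(split_model_string("ollama/llama3"))  # → ('ollama', 'llama3')
```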
## Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `str` | required | Model name (must be pulled first with `ollama pull`). |
| `host` | `str` | `"http://localhost:11434"` | Ollama server URL. |
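The `host` URL points at Ollama's local HTTP server, which exposes REST endpoints such as `/api/generate`. As a rough illustration of what a wrapper like the `Ollama` class sends under the hood, here is a sketch that builds such a request payload (the helper name `build_generate_payload` is hypothetical; the endpoint and JSON fields follow Ollama's public REST API):

```python
import json

def build_generate_payload(
    model_id: str,
    prompt: str,
    host: str = "http://localhost:11434",
) -> tuple[str, str]:
    """Build the URL and JSON body for Ollama's /api/generate endpoint."""
    url = f"{host}/api/generate"
    # stream=False requests a single complete response instead of chunks
    payload = {"model": model_id, "prompt": prompt, "stream": False}
    return url, json.dumps(payload)

url, body = build_generate_payload("llama3", "Hello!")
print(url)  # → http://localhost:11434/api/generate
```

Pointing `host` at a remote machine redirects these requests without any other code changes.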
## Imports

```python
from definable.model.ollama import Ollama
```