OllamaModelProvider¶
The OllamaModelProvider class provides access to Ollama models, supporting both local Ollama instances and Ollama Cloud. It implements the ModelProvider interface from the OpenAI Agents SDK.
Overview¶
OllamaModelProvider handles:
- Local Ollama: Connects to local Ollama instances (default: `http://localhost:11434`)
- Ollama Cloud: Connects to Ollama Cloud using API key authentication
- Automatic Detection: Automatically detects cloud vs. local based on the model name suffix (`-cloud`) or the presence of an API key
- Lazy Initialization: Only creates the Ollama client when it is actually needed
Constructor¶
Parameters¶
| Parameter | Type | Description |
|---|---|---|
| `api_key` / `apiKey` | `string \| undefined` | Ollama Cloud API key. If provided, uses Ollama Cloud. |
| `base_url` / `baseURL` | `string \| undefined` | Base URL for the Ollama instance. Defaults to `http://localhost:11434` for local or `https://ollama.com` for cloud. |
| `ollama_client` / `ollamaClient` | `Any \| undefined` | Custom Ollama client instance. If provided, `api_key` and `base_url` are ignored. |
Notes¶
- If `api_key` is provided, the provider will use Ollama Cloud
- If `api_key` is not provided, the provider defaults to local Ollama
- Models ending with the `-cloud` suffix automatically use the Ollama Cloud URL
- The Ollama client is created lazily (only when `get_model()` is called)
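The three construction modes map directly onto these notes. A minimal sketch (the API key value and host below are placeholders):

```typescript
import { OllamaModelProvider } from '@timestep-ai/timestep';
import { Ollama } from 'ollama';

// Local Ollama: no arguments, defaults to http://localhost:11434
const localProvider = new OllamaModelProvider();

// Ollama Cloud: the presence of an API key switches to https://ollama.com
const cloudProvider = new OllamaModelProvider({ apiKey: 'your-api-key' });

// Custom client: apiKey and baseURL would be ignored if also passed
const customProvider = new OllamaModelProvider({
  ollamaClient: new Ollama({ host: 'http://localhost:11434' }),
});
```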
Methods¶
get_model()¶
Returns an OllamaModel instance for the specified model name. The client is initialized on first call.
Parameters¶
| Parameter | Type | Description |
|---|---|---|
| `model_name` / `modelName` | `string` | The name of the Ollama model (e.g., `llama3`, `mistral`). |
Returns¶
| Type | Description |
|---|---|
| `Model` | An `OllamaModel` instance that converts Ollama responses to OpenAI format. |
Example¶
```typescript
import { OllamaModelProvider } from '@timestep-ai/timestep';

// Local Ollama
const localProvider = new OllamaModelProvider();
const model = await localProvider.getModel('llama3');

// Ollama Cloud
const cloudProvider = new OllamaModelProvider({ apiKey: 'your-api-key' });
const cloudModel = await cloudProvider.getModel('llama3');
```
Examples¶
Local Ollama Instance¶
```python
from timestep import OllamaModelProvider
from agents import Agent, Runner, RunConfig

# Defaults to localhost:11434
ollama_provider = OllamaModelProvider()

agent = Agent(name="Assistant", model="llama3")
run_config = RunConfig(model_provider=ollama_provider)

agent_input = "Hello!"  # example input
result = Runner.run_streamed(agent, agent_input, run_config=run_config)
```
```typescript
import { OllamaModelProvider } from '@timestep-ai/timestep';
import { Agent, Runner } from '@openai/agents';

// Defaults to localhost:11434
const ollamaProvider = new OllamaModelProvider();

const agent = new Agent({ name: 'Assistant', model: 'llama3' });
const runner = new Runner({ modelProvider: ollamaProvider });

const agentInput = 'Hello!'; // example input
const result = await runner.run(agent, agentInput, { stream: true });
```
Ollama Cloud¶
```python
import os

from timestep import OllamaModelProvider
from agents import Agent, Runner, RunConfig

# Use Ollama Cloud with API key
cloud_provider = OllamaModelProvider(
    api_key=os.environ.get("OLLAMA_API_KEY")
)

agent = Agent(name="Assistant", model="llama3")
run_config = RunConfig(model_provider=cloud_provider)

agent_input = "Hello!"  # example input
result = Runner.run_streamed(agent, agent_input, run_config=run_config)
```
```typescript
import { OllamaModelProvider } from '@timestep-ai/timestep';
import { Agent, Runner } from '@openai/agents';

// Use Ollama Cloud with API key
const cloudProvider = new OllamaModelProvider({
  apiKey: Deno.env.get('OLLAMA_API_KEY'),
});

const agent = new Agent({ name: 'Assistant', model: 'llama3' });
const runner = new Runner({ modelProvider: cloudProvider });

const agentInput = 'Hello!'; // example input
const result = await runner.run(agent, agentInput, { stream: true });
```
Custom Base URL¶
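To point the provider at an Ollama instance that is not on the default host or port, pass `base_url` / `baseURL`. A minimal sketch (the address below is a placeholder):

```typescript
import { OllamaModelProvider } from '@timestep-ai/timestep';

// Ollama running on another machine (placeholder address)
const provider = new OllamaModelProvider({
  baseURL: 'http://192.168.1.50:11434',
});
const model = await provider.getModel('llama3');
```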
Automatic Cloud Detection¶
Models ending with -cloud automatically use Ollama Cloud:
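A short sketch (the model name is illustrative; any name carrying the `-cloud` suffix is routed to the cloud URL):

```typescript
import { OllamaModelProvider } from '@timestep-ai/timestep';

// No apiKey is passed, but the `-cloud` suffix routes this model
// to the Ollama Cloud URL (the model name is illustrative).
const provider = new OllamaModelProvider();
const cloudModel = await provider.getModel('gpt-oss:120b-cloud');
```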
Custom Client¶
```typescript
import { OllamaModelProvider } from '@timestep-ai/timestep';
import { Ollama } from 'ollama';

// Use custom Ollama client
const customClient = new Ollama({ host: 'http://custom-host:11434' });
const provider = new OllamaModelProvider({ ollamaClient: customClient });
const model = await provider.getModel('llama3');
```
Error Handling¶
Missing Ollama Package¶
If the ollama package is not installed, an error will be raised when get_model() is called:
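A hedged sketch of surfacing that error (the exact error type and message depend on the runtime):

```typescript
import { OllamaModelProvider } from '@timestep-ai/timestep';

// Constructing the provider succeeds even without the ollama package...
const provider = new OllamaModelProvider();

try {
  // ...the error only surfaces here, when the client is first created.
  await provider.getModel('llama3');
} catch (err) {
  console.error('Install the ollama package to use OllamaModelProvider:', err);
}
```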
Invalid Configuration¶
If both api_key and ollama_client are provided, an error is raised:
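A hedged sketch of that failure mode (whether the error surfaces in the constructor or on the first `getModel()` call is not specified here, so both are wrapped):

```typescript
import { OllamaModelProvider } from '@timestep-ai/timestep';
import { Ollama } from 'ollama';

try {
  const provider = new OllamaModelProvider({
    apiKey: 'your-api-key', // placeholder
    ollamaClient: new Ollama({ host: 'http://localhost:11434' }),
  });
  await provider.getModel('llama3');
} catch (err) {
  console.error('Pass either apiKey or ollamaClient, not both:', err);
}
```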
Lazy Initialization¶
The Ollama client is only created when get_model() is first called. This means:
- You can create the provider even if Ollama isn't running
- Errors only occur when actually trying to use a model
- Useful for optional Ollama support in applications
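A sketch of the optional-support pattern this enables (the fallback branch is application-specific):

```typescript
import { OllamaModelProvider } from '@timestep-ai/timestep';

// Safe at startup: no client is created, so this works even if
// Ollama isn't installed or the local server isn't running.
const ollamaProvider = new OllamaModelProvider();

let model;
try {
  // The client is created here, so any failure surfaces now.
  model = await ollamaProvider.getModel('llama3');
} catch (err) {
  console.warn('Ollama unavailable, falling back to another provider:', err);
}
```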
See Also¶
- OllamaModel - The model class that handles response conversion
- MultiModelProvider - For using Ollama alongside OpenAI
- Architecture - For details on Ollama integration