Class: LLM::Ollama
Defined in:
- lib/llm/providers/ollama.rb
- lib/llm/providers/ollama/models.rb
- lib/llm/providers/ollama/error_handler.rb
- lib/llm/providers/ollama/stream_parser.rb
- lib/llm/providers/ollama/request_adapter.rb
- lib/llm/providers/ollama/response_adapter.rb
Overview
The Ollama class implements a provider for Ollama, which supports a wide range of models. It is straightforward to run on your own hardware, and a number of its multi-modal models can process both images and text.
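As a rough sketch of what talking to a local Ollama instance involves, the request body for a chat completion can be assembled with Ruby's standard library alone. The port 11434 path shown here reflects Ollama's own defaults and OpenAI-compatible endpoint, not anything defined by this class; the request is built but never sent:

```ruby
require "net/http"
require "json"

# Build (but do not send) a chat request for a local Ollama server.
# The /v1/chat/completions endpoint and port 11434 are assumptions
# based on Ollama's documented defaults.
req = Net::HTTP::Post.new("/v1/chat/completions", "Content-Type" => "application/json")
req.body = JSON.dump({
  model: "qwen3:latest",
  messages: [{role: "user", content: "Hello"}]
})

payload = JSON.parse(req.body)
puts payload["model"] # => "qwen3:latest"
```

Sending it would only require wrapping the request in `Net::HTTP.start("localhost", 11434) { |http| http.request(req) }`.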
Defined Under Namespace
Classes: Models
Constant Summary
- HOST = "localhost"
Instance Method Summary
- #complete(prompt, params = {}) ⇒ LLM::Response
  Provides an interface to the chat completions API.
- #name ⇒ Symbol
  Returns the provider's name.
- #embed(input, model: default_model, **params) ⇒ LLM::Response
  Provides an embedding.
- #initialize ⇒ Ollama (constructor)
  A new instance of Ollama.
- #models ⇒ LLM::Ollama::Models
  Provides an interface to Ollama's models API.
- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #default_model ⇒ String
  Returns the default model for chat completions.
Methods inherited from Provider
#audio, #chat, clients, #developer_role, #files, #images, #inspect, #moderations, #persist!, #respond, #responses, #schema, #server_tool, #server_tools, #system_role, #tool_role, #tracer, #tracer=, #user_role, #vector_stores, #web_search, #with
Instance Method Details
#complete(prompt, params = {}) ⇒ LLM::Response
Provides an interface to the chat completions API
# File 'lib/llm/providers/ollama.rb', line 69

def complete(prompt, params = {})
  params, stream, tools, role = normalize_complete_params(params)
  req = build_complete_request(prompt, params, role)
  res, span, tracer = execute(request: req, stream: stream, operation: "chat", model: params[:model])
  res = ResponseAdapter.adapt(res, type: :completion)
          .extend(Module.new { define_method(:__tools__) { tools } })
  tracer.on_request_finish(operation: "chat", model: params[:model], res:, span:)
  res
end
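The tail of #complete extends the adapted response with an anonymous module so that the tools resolved from the request travel along with the response object. The closure-based pattern can be reproduced in isolation; the tool list and response object below are stand-ins, not the gem's real types:

```ruby
# Stand-ins for the real tool list and adapted response.
tools = [:weather_tool]
res = Object.new

# Same pattern as #complete: an anonymous module whose __tools__
# method closes over the local `tools` variable, so only this one
# object gains the method.
res.extend(Module.new { define_method(:__tools__) { tools } })

puts res.__tools__.inspect # => [:weather_tool]
```

Because the module is anonymous and created per call, each response carries exactly the tools from its own request rather than sharing mutable state across responses.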
#name ⇒ Symbol
Returns the provider's name
# File 'lib/llm/providers/ollama.rb', line 38

def name
  :ollama
end
#embed(input, model: default_model, **params) ⇒ LLM::Response
Provides an embedding
# File 'lib/llm/providers/ollama.rb', line 49

def embed(input, model: default_model, **params)
  params = {model:}.merge!(params)
  req = Net::HTTP::Post.new("/v1/embeddings", headers)
  req.body = LLM.json.dump({input:}.merge!(params))
  res, span, tracer = execute(request: req, operation: "embeddings", model:)
  res = ResponseAdapter.adapt(res, type: :embedding)
  tracer.on_request_finish(operation: "embeddings", model:, res:, span:)
  res
end
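Note the merge order in the first line of the body: `{model:}.merge!(params)` means a caller-supplied `:model` inside `**params` overrides the `model:` keyword default. A minimal reproduction of that semantics (the model names here are illustrative):

```ruby
model = "qwen3:latest"                 # the keyword default
params = {model: "nomic-embed-text"}   # a hypothetical caller override

# {model:} is Ruby 3.1+ hash-literal shorthand for {model: model};
# merge! lets keys present in `params` win over the default.
merged = {model:}.merge!(params)

puts merged[:model] # => "nomic-embed-text"
```

Had the code been written as `params.merge!(model:)` instead, the default would have clobbered the caller's choice.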
#models ⇒ LLM::Ollama::Models
Provides an interface to Ollama's models API
# File 'lib/llm/providers/ollama.rb', line 83

def models
  LLM::Ollama::Models.new(self)
end
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually "assistant" or "model"
# File 'lib/llm/providers/ollama.rb', line 89

def assistant_role
  "assistant"
end
#default_model ⇒ String
Returns the default model for chat completions
# File 'lib/llm/providers/ollama.rb', line 97

def default_model
  "qwen3:latest"
end