Class: LLM::Ollama
- Defined in:
- lib/llm/providers/ollama.rb,
lib/llm/providers/ollama/format.rb,
lib/llm/providers/ollama/error_handler.rb,
lib/llm/providers/ollama/response_parser.rb
Overview
The Ollama class implements a provider for Ollama.
This provider supports a wide range of models, is relatively
straightforward to run on your own hardware, and includes multi-modal
models that can process images and text. See the example below for a
demonstration of a multi-modal model by the name llava.
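The example referenced above is not reproduced in this extract; the following is a minimal sketch of a llava completion, assuming a local Ollama server on the default HOST and the documented #initialize(secret) constructor. The nil secret is an assumption, since local Ollama installs typically require no API key.

require "llm"

# Minimal sketch: connect to a local Ollama server. The nil secret is an
# assumption; local Ollama installs typically require no API key.
llm = LLM::Ollama.new(nil)

# Ask the multi-modal llava model for a description. How image inputs
# are attached is not documented in this extract and is omitted here.
res = llm.complete("Describe what a capybara looks like", :user, model: "llava")
# => LLM::Response::Completion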
Constant Summary
- HOST = "localhost"
Instance Method Summary
- #initialize(secret) ⇒ Ollama (constructor)
  A new instance of Ollama.
- #embed(input, model: "llama3.2", **params) ⇒ LLM::Response::Embedding
  Provides an embedding.
- #complete(prompt, role = :user, model: "llama3.2", **params) ⇒ LLM::Response::Completion
  Provides an interface to the chat completions API.
- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #models ⇒ Hash<String, LLM::Model>
  Returns a hash of available models.
Methods inherited from Provider
#audio, #chat, #chat!, #files, #images, #inspect, #respond, #respond!, #responses
Constructor Details
#initialize(secret) ⇒ Ollama
Returns a new instance of Ollama.
Instance Method Details
#embed(input, model: "llama3.2", **params) ⇒ LLM::Response::Embedding
Provides an embedding.

# File 'lib/llm/providers/ollama.rb', line 42

def embed(input, model: "llama3.2", **params)
  params = {model:}.merge!(params)
  req = Net::HTTP::Post.new("/v1/embeddings", headers)
  req.body = JSON.dump({input:}.merge!(params))
  res = request(@http, req)
  Response::Embedding.new(res).extend(response_parser)
end
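A brief usage sketch under the same assumptions as the overview example (local server, nil secret):

llm = LLM::Ollama.new(nil)
# Uses the default "llama3.2" model; pass model: to override it.
res = llm.embed("The quick brown fox jumps over the lazy dog")
# => LLM::Response::Embedding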
#complete(prompt, role = :user, model: "llama3.2", **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API.

# File 'lib/llm/providers/ollama.rb', line 60

def complete(prompt, role = :user, model: "llama3.2", **params)
  params = {model:, stream: false}.merge!(params)
  req = Net::HTTP::Post.new("/api/chat", headers)
  messages = [*(params.delete(:messages) || []), LLM::Message.new(role, prompt)]
  req.body = JSON.dump({messages: format(messages)}.merge!(params))
  res = request(@http, req)
  Response::Completion.new(res).extend(response_parser)
end
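A usage sketch. As the source shows, prior conversation history can be supplied through the messages: parameter, which the method prepends to the new prompt before the request is sent:

llm = LLM::Ollama.new(nil)

# Single-turn completion
res = llm.complete("What is the capital of France?", :user, model: "llama3.2")

# Multi-turn: earlier messages are passed via messages: and merged
# ahead of the new prompt (see params.delete(:messages) above).
history = [LLM::Message.new(:user, "Answer in one word")]
res = llm.complete("What is the capital of France?", :user, messages: history)
# => LLM::Response::Completion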
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”.

# File 'lib/llm/providers/ollama.rb', line 71

def assistant_role
  "assistant"
end
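This is useful for telling the model's replies apart from user messages; a small sketch using the LLM::Message class seen in #complete's source, assuming LLM::Message exposes a #role reader:

llm = LLM::Ollama.new(nil)
msg = LLM::Message.new(llm.assistant_role, "Hello! How can I help?")
msg.role # => "assistant"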
#models ⇒ Hash<String, LLM::Model>
Returns a hash of available models.

# File 'lib/llm/providers/ollama.rb', line 77

def models
  @models ||= load_models!("ollama")
end
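A sketch of enumerating the returned hash; keys are model names and values are LLM::Model objects:

llm = LLM::Ollama.new(nil)
llm.models.each_key { |name| puts name }
# e.g. prints "llama3.2" among the locally available models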