Class: LLM::Ollama

Inherits:
Provider
Defined in:
lib/llm/providers/ollama.rb,
lib/llm/providers/ollama/format.rb,
lib/llm/providers/ollama/error_handler.rb,
lib/llm/providers/ollama/response_parser.rb

Overview

The Ollama class implements a provider for Ollama.

This provider supports a wide range of models, is relatively straightforward to run on your own hardware, and includes multi-modal models that can process both images and text. See the example below for a demonstration of the multi-modal model llava.

Examples:

#!/usr/bin/env ruby
require "llm"

llm = LLM.ollama(nil)
bot = LLM::Chat.new(llm, model: "llava").lazy
bot.chat LLM::File("/images/capybara.png")
bot.chat "Describe the image"
bot.messages.select(&:assistant?).each { print "[#{_1.role}]", _1.content, "\n" }

Constant Summary

HOST =
"localhost"

Instance Method Summary

Methods inherited from Provider

#audio, #chat, #chat!, #files, #images, #inspect, #respond, #respond!, #responses

Constructor Details

#initialize(secret) ⇒ Ollama

Returns a new instance of Ollama.

Parameters:

  • secret (String, nil)

    The secret key for authentication. Ollama typically requires
    none, so the overview example passes nil.



# File 'lib/llm/providers/ollama.rb', line 31

def initialize(secret, **)
  super(secret, host: HOST, port: 11434, ssl: false, **)
end
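
Since Ollama is served locally with the defaults shown above (localhost, port 11434, no SSL), and the overview example passes nil as the secret, a minimal construction sketch looks like this:

#!/usr/bin/env ruby
require "llm"

# Ollama usually requires no API key, so nil is passed as the secret.
# The provider connects to localhost:11434 by default.
llm = LLM.ollama(nil)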

Instance Method Details

#embed(input, model: "llama3.2", **params) ⇒ LLM::Response::Embedding

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: "llama3.2")

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response::Embedding)

Raises:



# File 'lib/llm/providers/ollama.rb', line 42

def embed(input, model: "llama3.2", **params)
  params   = {model:}.merge!(params)
  req      = Net::HTTP::Post.new("/v1/embeddings", headers)
  req.body = JSON.dump({input:}.merge!(params))
  res      = request(@http, req)
  Response::Embedding.new(res).extend(response_parser)
end
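
A minimal usage sketch for this method. The embeddings reader on the returned LLM::Response::Embedding is an assumption; consult the response parser for the exact accessor:

#!/usr/bin/env ruby
require "llm"

llm = LLM.ollama(nil)
res = llm.embed(["Hello world", "Goodbye world"], model: "llama3.2")
# The embeddings reader below is an assumption about the parsed response
print res.embeddings.size, " embeddings\n"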

#complete(prompt, role = :user, model: "llama3.2", **params) ⇒ LLM::Response::Completion

Provides an interface to the chat completions API

Examples:

llm = LLM.ollama(nil)
messages = [
  {role: "system", content: "Your task is to answer all of my questions"},
  {role: "system", content: "Your answers should be short and concise"},
]
res = llm.complete("Hello. What is the answer to 5 + 2 ?", :user, messages:)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: "llama3.2")

    The model to use for the completion

  • params (Hash)

    Other completion parameters

Returns:

  • (LLM::Response::Completion)

Raises:

See Also:



# File 'lib/llm/providers/ollama.rb', line 60

def complete(prompt, role = :user, model: "llama3.2", **params)
  params   = {model:, stream: false}.merge!(params)
  req      = Net::HTTP::Post.new("/api/chat", headers)
  messages = [*(params.delete(:messages) || []), LLM::Message.new(role, prompt)]
  req.body = JSON.dump({messages: format(messages)}.merge!(params))
  res      = request(@http, req)
  Response::Completion.new(res).extend(response_parser)
end
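
A usage sketch against a local Ollama server. Streaming is disabled by default (stream: false above); temperature is an assumed pass-through parameter for the /api/chat endpoint:

#!/usr/bin/env ruby
require "llm"

llm = LLM.ollama(nil)
# Extra params are merged into the request body; temperature is
# assumed to be accepted by /api/chat
res = llm.complete("What is 5 + 2?", :user, temperature: 0.2)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"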

#assistant_roleString

Returns the role of the assistant in the conversation. Usually "assistant" or "model"

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually "assistant" or "model"



# File 'lib/llm/providers/ollama.rb', line 71

def assistant_role
  "assistant"
end

#modelsHash<String, LLM::Model>

Returns a hash of available models

Returns:

  • (Hash<String, LLM::Model>)

    Returns a hash of available models



# File 'lib/llm/providers/ollama.rb', line 77

def models
  @models ||= load_models!("ollama")
end
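
A sketch that lists available models by name. Per the return type, keys are assumed to be model name strings (e.g. "llama3.2") mapped to LLM::Model objects:

#!/usr/bin/env ruby
require "llm"

llm = LLM.ollama(nil)
# Keys of the hash are model names; values are LLM::Model objects
llm.models.each_key { print _1, "\n" }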