Class: LLM::Provider (Abstract)

Inherits: Object

Defined in: lib/llm/provider.rb

Overview

This class is abstract.

The Provider class is the abstract base class for LLM (Large Language Model) providers.

Direct Known Subclasses

Anthropic, Gemini, Ollama, OpenAI, VoyageAI

Instance Method Summary

Constructor Details

#initialize(secret, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider

Returns a new instance of Provider.

Parameters:

  • secret (String)

    The secret key for authentication

  • host (String)

    The host address of the LLM provider

  • port (Integer) (defaults to: 443)

    The port number

  • timeout (Integer) (defaults to: 60)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: true)

    Whether to use SSL/TLS for the connection



# File 'lib/llm/provider.rb', line 20

def initialize(secret, host:, port: 443, timeout: 60, ssl: true)
  @secret = secret
  @http = Net::HTTP.new(host, port).tap do |http|
    http.use_ssl = ssl
    http.read_timeout = timeout
  end
end
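
Since Provider is abstract, it is normally instantiated through a concrete subclass rather than directly. A minimal sketch, assuming the LLM.openai constructor used in the #complete example below; the host and port are illustrative, and whether LLM::Ollama forwards these keyword arguments unchanged is an assumption:

llm = LLM.openai(ENV["KEY"])

# Hypothetical: a self-hosted endpoint reached by overriding the defaults
llm = LLM::Ollama.new(ENV["KEY"], host: "localhost", port: 11_434, ssl: false)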

Instance Method Details

#models ⇒ Hash<String, LLM::Model>

Returns a hash of available models

Returns:

  • (Hash<String, LLM::Model>)

    Returns a hash of available models

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 183

def models
  raise NotImplementedError
end
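
A short sketch of enumerating the available models, assuming the hash keys are model identifiers as the Hash<String, LLM::Model> return type suggests:

llm = LLM.openai(ENV["KEY"])
llm.models.each_key { |id| puts id }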

#inspect ⇒ String

Note:

The secret key is redacted in inspect for security reasons

Returns an inspection of the provider object

Returns:

  • (String)


# File 'lib/llm/provider.rb', line 32

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @secret=[REDACTED] @http=#{@http.inspect}>"
end
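
For example (the provider class and host shown are illustrative):

llm = LLM.openai(ENV["KEY"])
puts llm.inspect
# #<LLM::OpenAI:0x3f29 @secret=[REDACTED] @http=#<Net::HTTP api.openai.com:443 open=false>>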

#embed(input, model:, **params) ⇒ LLM::Response::Embedding

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response::Embedding)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 47

def embed(input, model:, **params)
  raise NotImplementedError
end
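
A minimal sketch, assuming an OpenAI provider; the model name is a hypothetical choice, and any embedding model the provider supports may be passed:

llm = LLM.openai(ENV["KEY"])
res = llm.embed("The quick brown fox", model: "text-embedding-3-small")
res.class # => LLM::Response::Embedding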

#complete(prompt, role = :user, model: nil, **params) ⇒ LLM::Response::Completion

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(ENV["KEY"])
messages = [
  {role: "system", content: "Your task is to answer all of my questions"},
  {role: "system", content: "Your answers should be short and concise"},
]
res = llm.complete("Hello. What is the answer to 5 + 2 ?", :user, messages:)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters

Returns:

  • (LLM::Response::Completion)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 72

def complete(prompt, role = :user, model: nil, **params)
  raise NotImplementedError
end

#chat(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a lazy variant of an LLM::Chat object.

Starts a new lazy chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 88

def chat(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).lazy.chat(prompt, role)
end
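
Since each call returns the LLM::Chat object, turns can be chained; a sketch, assuming the lazy variant defers network requests until the conversation is read:

llm = LLM.openai(ENV["KEY"])
bot = llm.chat("Your task is to answer all of my questions", :system)
bot.chat("What is 5 + 2?")
bot.chat("And what is that doubled?")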

#chat!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a non-lazy variant of an LLM::Chat object.

Starts a new chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 104

def chat!(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).chat(prompt, role)
end
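
The call site looks the same as #chat; the difference is that, being non-lazy, each turn presumably performs a request immediately rather than deferring it:

llm = LLM.openai(ENV["KEY"])
bot = llm.chat!("What is 5 + 2?")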

#respond(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a lazy variant of an LLM::Chat object.

Starts a new lazy chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 120

def respond(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).lazy.respond(prompt, role)
end
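
Usage mirrors #chat, with the responses API underneath; a sketch, assuming a provider that implements #responses (the LLM::OpenAI::Responses return type below suggests the OpenAI provider):

llm = LLM.openai(ENV["KEY"])
bot = llm.respond("Hello. What is the answer to 5 + 2?")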

#respond!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat

Note:

This method creates a non-lazy variant of an LLM::Chat object.

Starts a new chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • role (Symbol) (defaults to: :user)

    The role of the prompt (e.g. :user, :system)

  • model (String) (defaults to: nil)

    The model to use for the completion

  • params (Hash)

    Other completion parameters to maintain throughout a chat

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 136

def respond!(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).respond(prompt, role)
end

#responses ⇒ LLM::OpenAI::Responses

Note:

Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.

Returns an interface to the responses API

Returns:

  • (LLM::OpenAI::Responses)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 147

def responses
  raise NotImplementedError
end

#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images

Returns an interface to the images API

Returns:

  • (LLM::OpenAI::Images, LLM::Gemini::Images)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 154

def images
  raise NotImplementedError
end

#audio ⇒ LLM::OpenAI::Audio

Returns an interface to the audio API

Returns:

  • (LLM::OpenAI::Audio)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 161

def audio
  raise NotImplementedError
end

#files ⇒ LLM::OpenAI::Files

Returns an interface to the files API

Returns:

  • (LLM::OpenAI::Files)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 168

def files
  raise NotImplementedError
end

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 176

def assistant_role
  raise NotImplementedError
end