Class: LLM::Provider (Abstract)

Inherits: Object
Defined in:
lib/llm/provider.rb

Overview

This class is abstract.

The Provider class represents an abstract class for LLM (Language Model) providers.

Direct Known Subclasses

Anthropic, Gemini, Ollama, OpenAI, VoyageAI

Instance Method Summary

Constructor Details

#initialize(key:, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider

Returns a new instance of Provider.

Parameters:

  • key (String, nil)

    The secret key for authentication

  • host (String)

    The host address of the LLM provider

  • port (Integer) (defaults to: 443)

    The port number

  • timeout (Integer) (defaults to: 60)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: true)

    Whether to use SSL for the connection



# File 'lib/llm/provider.rb', line 22

def initialize(key:, host:, port: 443, timeout: 60, ssl: true)
  @key = key
  @http = Net::HTTP.new(host, port).tap do |http|
    http.use_ssl = ssl
    http.read_timeout = timeout
  end
end
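
Since the class is abstract, instances are obtained through a concrete subclass, typically via the LLM module's provider helpers. A hedged sketch (forwarding host:, port:, ssl: and timeout: through the helper is an assumption based on the constructor signature above):

require "llm"

llm = LLM.openai(key: ENV["KEY"])
# Assumed: keyword options reach the constructor unchanged
ollama = LLM.ollama(key: nil, host: "localhost", port: 11434, ssl: false, timeout: 120)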

Instance Method Details

#with(headers:) ⇒ LLM::Provider

Add one or more headers to all requests

Examples:

llm = LLM.openai(key: ENV["KEY"])
llm.with(headers: {"OpenAI-Organization" => ENV["ORG"]})
llm.with(headers: {"OpenAI-Project" => ENV["PROJECT"]})

Parameters:

  • headers (Hash<String,String>)

    One or more headers

Returns:

  • (LLM::Provider)

    Returns self

# File 'lib/llm/provider.rb', line 206

def with(headers:)
  tap { (@headers ||= {}).merge!(headers) }
end
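
Since the method returns the provider itself (via tap), calls can be chained:

llm = LLM.openai(key: ENV["KEY"])
llm.with(headers: {"OpenAI-Organization" => ENV["ORG"]})
   .with(headers: {"OpenAI-Project" => ENV["PROJECT"]})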

#inspect ⇒ String

Note:

The secret key is redacted in inspect for security reasons

Returns an inspection of the provider object

Returns:

  • (String)


# File 'lib/llm/provider.rb', line 34

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @http=#{@http.inspect}>"
end
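
For example (the object address and Net::HTTP state shown are illustrative):

llm.inspect
# => "#<LLM::OpenAI:0x00007f... @key=[REDACTED] @http=#<Net::HTTP api.openai.com:443 open=false>>"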

#embed(input, model: nil, **params) ⇒ LLM::Response::Embedding

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: nil)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response::Embedding)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 49

def embed(input, model: nil, **params)
  raise NotImplementedError
end
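
A hedged usage sketch, assuming an OpenAI provider; the model name is illustrative, and the shape of the returned response object is not shown here:

llm = LLM.openai(key: ENV["KEY"])
# "text-embedding-3-small" is an illustrative model name
res = llm.embed(["Hello, world", "Goodbye, world"], model: "text-embedding-3-small")
res.class # => LLM::Response::Embedding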

#complete(prompt, params = {}) ⇒ LLM::Response::Completion

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(key: ENV["KEY"])
messages = [{role: "system", content: "Your task is to answer all of my questions"}]
res = llm.complete("5 + 2 ?", messages:)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Options Hash (params):

  • :role (Symbol)

    Defaults to the provider’s default role

  • :model (String)

    Defaults to the provider’s default model

  • :schema (#to_json, nil)

    Defaults to nil

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil

Returns:

  • (LLM::Response::Completion)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 73

def complete(prompt, params = {})
  raise NotImplementedError
end
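
The options hash can steer an individual request. A sketch assuming an OpenAI provider (the model name is illustrative):

llm = LLM.openai(key: ENV["KEY"])
# "gpt-4o-mini" is an illustrative model name
res = llm.complete("Translate 'hello' into French", role: :user, model: "gpt-4o-mini")
res.choices.each { |choice| print "[#{choice.role}] ", choice.content, "\n" }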

#chat(prompt, params = {}) ⇒ LLM::Chat

Note:

This method creates a lazy version of an LLM::Chat object.

Starts a new lazy chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Returns:

  • (LLM::Chat)


# File 'lib/llm/provider.rb', line 85

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Chat.new(self, params).lazy.chat(prompt, role:)
end
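
A sketch of a multi-turn lazy conversation; an assumption here is that the deferred request fires when the conversation's messages are read:

llm = LLM.openai(key: ENV["KEY"])
bot = llm.chat("You are my math tutor", role: :system)
bot.chat("What is 5 + 2?")
bot.chat("And doubled?")
# bot.messages is an assumed reader for the conversation
bot.messages.each { |message| print "[#{message.role}] ", message.content, "\n" }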

#chat!(prompt, params = {}) ⇒ LLM::Chat

Note:

This method creates a non-lazy version of an LLM::Chat object.

Starts a new chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 99

def chat!(prompt, params = {})
  role = params.delete(:role)
  LLM::Chat.new(self, params).chat(prompt, role:)
end

#respond(prompt, params = {}) ⇒ LLM::Chat

Note:

This method creates a lazy variant of an LLM::Chat object.

Starts a new lazy chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 113

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Chat.new(self, params).lazy.respond(prompt, role:)
end

#respond!(prompt, params = {}) ⇒ LLM::Chat

Note:

This method creates a non-lazy variant of an LLM::Chat object.

Starts a new chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Returns:

  • (LLM::Chat)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 127

def respond!(prompt, params = {})
  role = params.delete(:role)
  LLM::Chat.new(self, params).respond(prompt, role:)
end

#responses ⇒ LLM::OpenAI::Responses

Note:

Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.

Returns an interface to the responses API

Returns:

  • (LLM::OpenAI::Responses)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 139

def responses
  raise NotImplementedError
end
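
A hedged sketch of the server-side state described in the note above, assuming an OpenAI provider; the create method and previous_response_id parameter are assumptions, not confirmed API:

llm = LLM.openai(key: ENV["KEY"])
res = llm.responses.create("Hello! I'm planning a trip to Japan")
# create/previous_response_id are assumed names; the idea is that a
# follow-up turn references the prior response instead of resending it
llm.responses.create("What should I pack?", previous_response_id: res.id)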

#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images

Returns an interface to the images API

Returns:

  • (LLM::OpenAI::Images, LLM::Gemini::Images)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 146

def images
  raise NotImplementedError
end

#audio ⇒ LLM::OpenAI::Audio

Returns an interface to the audio API

Returns:

  • (LLM::OpenAI::Audio)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 153

def audio
  raise NotImplementedError
end

#files ⇒ LLM::OpenAI::Files

Returns an interface to the files API

Returns:

  • (LLM::OpenAI::Files)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 160

def files
  raise NotImplementedError
end

#models ⇒ LLM::OpenAI::Models

Returns an interface to the models API

Returns:

  • (LLM::OpenAI::Models)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 167

def models
  raise NotImplementedError
end
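
A hedged sketch, assuming the returned interface exposes an all method and that each model object responds to id (both assumptions):

llm = LLM.openai(key: ENV["KEY"])
# all/id are assumed names
llm.models.all.each { |model| puts model.id }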

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 175

def assistant_role
  raise NotImplementedError
end

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)

    Returns the default model for chat completions

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 182

def default_model
  raise NotImplementedError
end

#schema ⇒ JSON::Schema

Returns an object that can generate a JSON schema

Returns:

  • (JSON::Schema)


# File 'lib/llm/provider.rb', line 189

def schema
  @schema ||= begin
    require_relative "../json/schema"
    JSON::Schema.new
  end
end
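
The returned object pairs with the :schema option accepted by #complete. A hedged sketch, assuming JSON::Schema exposes builder methods such as object, integer, and required (assumptions based on common schema-builder designs):

llm = LLM.openai(key: ENV["KEY"])
# object/integer/required are assumed builder methods
schema = llm.schema.object(answer: llm.schema.integer.required)
res = llm.complete("What is 5 + 5?", role: :user, schema: schema)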