Class: LLM::Provider (Abstract)

Inherits: Object
Defined in:
lib/llm/provider.rb

Overview

This class is abstract.

The Provider class is the abstract base class for LLM (Large Language Model) providers.

Direct Known Subclasses

Anthropic, Gemini, Ollama, OpenAI

Instance Method Summary

Constructor Details

#initialize(key:, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider

Returns a new instance of Provider.

Parameters:

  • key (String, nil)

    The secret key for authentication

  • host (String)

    The host address of the LLM provider

  • port (Integer) (defaults to: 443)

    The port number

  • timeout (Integer) (defaults to: 60)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: true)

    Whether to use SSL for the connection



# File 'lib/llm/provider.rb', line 22

def initialize(key:, host:, port: 443, timeout: 60, ssl: true)
  @key = key
  @client = Net::HTTP.new(host, port)
  @client.use_ssl = ssl
  @client.read_timeout = timeout
end
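
Example (a minimal sketch: providers are usually built through module-level helpers such as LLM.openai and LLM.ollama, which are assumed here to forward these keyword arguments to the constructor):

llm = LLM.openai(key: ENV["KEY"])                                       # port 443, ssl: true by default
llm = LLM.ollama(key: nil, host: "localhost", port: 11434, ssl: false)  # self-hosted endpoint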

Instance Method Details

#with(headers:) ⇒ LLM::Provider

Add one or more headers to all requests

Examples:

llm = LLM.openai(key: ENV["KEY"])
llm.with(headers: {"OpenAI-Organization" => ENV["ORG"]})
llm.with(headers: {"OpenAI-Project" => ENV["PROJECT"]})

Parameters:

  • headers (Hash<String,String>)

    One or more headers

Returns:

  • (LLM::Provider)

    Returns self


# File 'lib/llm/provider.rb', line 216

def with(headers:)
  tap { (@headers ||= {}).merge!(headers) }
end

#inspect ⇒ String

Note:

The secret key is redacted in inspect for security reasons

Returns an inspection of the provider object

Returns:

  • (String)


# File 'lib/llm/provider.rb', line 33

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @http=#{@http.inspect}>"
end

#embed(input, model: nil, **params) ⇒ LLM::Response

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: nil)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 48

def embed(input, model: nil, **params)
  raise NotImplementedError
end
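
Example (a minimal sketch: the model name is an assumption, and the shape of the returned LLM::Response varies by provider):

llm = LLM.openai(key: ENV["KEY"])
res = llm.embed(["hello", "world"], model: "text-embedding-3-small") # model name is an assumption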

#complete(prompt, params = {}) ⇒ LLM::Response

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(key: ENV["KEY"])
messages = [{role: "system", content: "Your task is to answer all of my questions"}]
res = llm.complete("5 + 2 ?", messages:)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Options Hash (params):

  • :role (Symbol)

    Defaults to the provider’s default role

  • :model (String)

    Defaults to the provider’s default model

  • :schema (#to_json, nil)

    Defaults to nil

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 72

def complete(prompt, params = {})
  raise NotImplementedError
end

#chat(prompt, params = {}) ⇒ LLM::Bot

Note:

This method creates a lazy version of an LLM::Bot object.

Starts a new lazy chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Returns:

  • (LLM::Bot)



# File 'lib/llm/provider.rb', line 84

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).chat(prompt, role:)
end
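
Example (a minimal sketch: with a lazy bot, requests are assumed to be deferred until the conversation is read):

llm = LLM.openai(key: ENV["KEY"])
bot = llm.chat("Your task is to answer all of my questions", role: :system)
bot.chat("5 + 2 ?") # chaining assumes LLM::Bot#chat returns the bot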

#chat!(prompt, params = {}) ⇒ LLM::Bot

Note:

This method creates a non-lazy version of an LLM::Bot object.

Starts a new chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Returns:

  • (LLM::Bot)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 98

def chat!(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).chat(prompt, role:)
end

#respond(prompt, params = {}) ⇒ LLM::Bot

Note:

This method creates a lazy variant of an LLM::Bot object.

Starts a new lazy chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Returns:

  • (LLM::Bot)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 112

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).respond(prompt, role:)
end
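
Example (a minimal sketch, assuming a provider that implements the responses API; per #responses, not all providers do):

llm = LLM.openai(key: ENV["KEY"])
bot = llm.respond("Your task is to answer all of my questions", role: :system)
bot.respond("5 + 2 ?")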

#respond!(prompt, params = {}) ⇒ LLM::Bot

Note:

This method creates a non-lazy variant of an LLM::Bot object.

Starts a new chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Returns:

  • (LLM::Bot)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 126

def respond!(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).respond(prompt, role:)
end

#responses ⇒ LLM::OpenAI::Responses

Note:

Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.

Returns an interface to the responses API

Returns:

  • (LLM::OpenAI::Responses)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 138

def responses
  raise NotImplementedError
end
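
Example (a sketch only: the create method is an assumption; see LLM::OpenAI::Responses for the actual interface):

llm = LLM.openai(key: ENV["KEY"])
res = llm.responses.create("Hello world") # method name is an assumption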

#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images

Returns an interface to the images API

Returns:

  • (LLM::OpenAI::Images, LLM::Gemini::Images)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 145

def images
  raise NotImplementedError
end
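
Example (a sketch only: the create method and its prompt: keyword are assumptions; see LLM::OpenAI::Images for the actual interface):

llm = LLM.openai(key: ENV["KEY"])
img = llm.images.create(prompt: "a siamese cat") # method name and keyword are assumptions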

#audio ⇒ LLM::OpenAI::Audio

Returns an interface to the audio API

Returns:

  • (LLM::OpenAI::Audio)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 152

def audio
  raise NotImplementedError
end

#files ⇒ LLM::OpenAI::Files

Returns an interface to the files API

Returns:

  • (LLM::OpenAI::Files)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 159

def files
  raise NotImplementedError
end

#models ⇒ LLM::OpenAI::Models

Returns an interface to the models API

Returns:

  • (LLM::OpenAI::Models)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 166

def models
  raise NotImplementedError
end
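
Example (a sketch only: the all method and the model attributes are assumptions; see LLM::OpenAI::Models for the actual interface):

llm = LLM.openai(key: ENV["KEY"])
llm.models.all.each { |model| print model.id, "\n" } # `all` and `id` are assumptions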

#moderations ⇒ LLM::OpenAI::Moderations

Returns an interface to the moderations API

Returns:

  • (LLM::OpenAI::Moderations)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 173

def moderations
  raise NotImplementedError
end

#vector_stores ⇒ LLM::OpenAI::VectorStore

Returns an interface to the vector stores API

Returns:

  • (LLM::OpenAI::VectorStore)

    Returns an interface to the vector stores API

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 180

def vector_stores
  raise NotImplementedError
end

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 188

def assistant_role
  raise NotImplementedError
end

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)

    Returns the default model for chat completions

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 195

def default_model
  raise NotImplementedError
end

#schema ⇒ LLM::Schema

Returns an object that can generate a JSON schema

Returns:

  • (LLM::Schema)



# File 'lib/llm/provider.rb', line 202

def schema
  @schema ||= LLM::Schema.new
end
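
Example (a minimal sketch: the object/integer DSL methods are assumptions; see LLM::Schema for the actual interface). The generated schema can be passed to #complete via its :schema option:

llm = LLM.openai(key: ENV["KEY"])
schema = llm.schema.object(answer: llm.schema.integer) # DSL method names are assumptions
res = llm.complete("5 + 2 ?", schema:)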