Class: LLM::Provider (Abstract)

Inherits: Object
Includes: Client
Defined in: lib/llm/provider.rb

Overview

This class is abstract.

The Provider class is the abstract base class for LLM (Large Language Model) providers.

Direct Known Subclasses

Anthropic, Google, Ollama, OpenAI

Constant Summary

@@clients = {}


Constructor Details

#initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false) ⇒ Provider

Returns a new instance of Provider.

Parameters:

  • key (String, nil)

    The secret key for authentication

  • host (String)

    The host address of the LLM provider

  • port (Integer) (defaults to: 443)

    The port number

  • timeout (Integer) (defaults to: 60)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: true)

    Whether to use SSL for the connection

  • persistent (Boolean) (defaults to: false)

    Whether to use a persistent connection. Requires the net-http-persistent gem.



# File 'lib/llm/provider.rb', line 33

def initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false)
  @key = key
  @host = host
  @port = port
  @timeout = timeout
  @ssl = ssl
  @client = persistent ? persistent_client : nil
  @base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
  @headers = {"User-Agent" => "llm.rb v#{LLM::VERSION}"}
  @monitor = Monitor.new
end
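
A usage sketch: providers are normally constructed through the module-level helpers rather than by calling Provider.new directly. Whether the helpers forward every one of these keyword options is an assumption here, not something this page documents.

llm = LLM.openai(key: ENV["KEY"], timeout: 120, persistent: true) # option forwarding assumed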

Class Method Details

.clients ⇒ Object

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.



# File 'lib/llm/provider.rb', line 17

def self.clients = @@clients

Instance Method Details

#persist! ⇒ LLM::Provider

This method configures the provider to use a persistent connection pool via the optional dependency Net::HTTP::Persistent.

Examples:

llm = LLM.openai(key: ENV["KEY"]).persist!
# do something with 'llm'

Returns:

  • (LLM::Provider)

# File 'lib/llm/provider.rb', line 314

def persist!
  client = persistent_client
  lock do
    tap { @client = client }
  end
end

#inspect ⇒ String

Note:

The secret key is redacted in inspect for security reasons

Returns an inspection of the provider object

Returns:

  • (String)


# File 'lib/llm/provider.rb', line 49

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @client=#{@client.inspect} @tracer=#{tracer.inspect}>"
end

#name ⇒ Symbol

Returns the provider's name

Returns:

  • (Symbol)

    Returns the provider's name

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 58

def name
  raise NotImplementedError
end

#embed(input, model: nil, **params) ⇒ LLM::Response

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: nil)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 73

def embed(input, model: nil, **params)
  raise NotImplementedError
end
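
A usage sketch; the concrete model name below is an assumption, not part of this interface.

llm = LLM.openai(key: ENV["KEY"])
res = llm.embed(["Hello", "world"], model: "text-embedding-3-small") # model name assumed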

#complete(prompt, params = {}) ⇒ LLM::Response

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(key: ENV["KEY"])
messages = [{role: "system", content: "Your task is to answer all of my questions"}]
res = llm.complete("5 + 2 ?", messages:)
print "[#{res.messages[0].role}]", res.messages[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Options Hash (params):

  • :role (Symbol)

    Defaults to the provider's default role

  • :model (String)

    Defaults to the provider's default model

  • :schema (#to_json, nil)

    Defaults to nil

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 97

def complete(prompt, params = {})
  raise NotImplementedError
end

#chat(prompt, params = {}) ⇒ LLM::Context

Starts a new chat powered by the chat completions API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Returns:

  • (LLM::Context)

# File 'lib/llm/provider.rb', line 106

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).talk(prompt, role:)
end
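
A usage sketch built from the source above: the :role option is split off before the remaining params reach the context, and the returned LLM::Context carries the conversation state.

llm = LLM.openai(key: ENV["KEY"])
ctx = llm.chat("You are my math tutor", role: :system) # => LLM::Context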

#respond(prompt, params = {}) ⇒ LLM::Context

Starts a new chat powered by the responses API

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Returns:

  • (LLM::Context)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 117

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Context.new(self, params).respond(prompt, role:)
end
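
The same pattern as #chat, except the context is driven through LLM::Context#respond and therefore the responses API.

llm = LLM.openai(key: ENV["KEY"])
ctx = llm.respond("Summarize this project", role: :user) # => LLM::Context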

#responses ⇒ LLM::OpenAI::Responses

Note:

Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.

Returns:

  • (LLM::OpenAI::Responses)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 129

def responses
  raise NotImplementedError
end

#images ⇒ LLM::OpenAI::Images, LLM::Google::Images

Returns an interface to the images API

Returns:

  • (LLM::OpenAI::Images, LLM::Google::Images)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 136

def images
  raise NotImplementedError
end

#audio ⇒ LLM::OpenAI::Audio

Returns an interface to the audio API

Returns:

  • (LLM::OpenAI::Audio)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 143

def audio
  raise NotImplementedError
end

#files ⇒ LLM::OpenAI::Files

Returns an interface to the files API

Returns:

  • (LLM::OpenAI::Files)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 150

def files
  raise NotImplementedError
end

#models ⇒ LLM::OpenAI::Models

Returns an interface to the models API

Returns:

  • (LLM::OpenAI::Models)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 157

def models
  raise NotImplementedError
end

#moderations ⇒ LLM::OpenAI::Moderations

Returns an interface to the moderations API

Returns:

  • (LLM::OpenAI::Moderations)

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 164

def moderations
  raise NotImplementedError
end

#vector_stores ⇒ LLM::OpenAI::VectorStore

Returns an interface to the vector stores API

Returns:

  • (LLM::OpenAI::VectorStore)

    Returns an interface to the vector stores API

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 171

def vector_stores
  raise NotImplementedError
end

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually "assistant" or "model"

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually "assistant" or "model"

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 179

def assistant_role
  raise NotImplementedError
end

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)

    Returns the default model for chat completions

Raises:

  • (NotImplementedError)


# File 'lib/llm/provider.rb', line 186

def default_model
  raise NotImplementedError
end

#schema ⇒ LLM::Schema

Returns an object that can generate a JSON schema

Returns:

  • (LLM::Schema)

# File 'lib/llm/provider.rb', line 193

def schema
  LLM::Schema.new
end
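
A sketch of pairing this object with #complete's :schema option; the builder methods (object, integer, required) are assumptions about LLM::Schema's API rather than something documented on this page.

llm    = LLM.openai(key: ENV["KEY"])
schema = llm.schema.object(answer: llm.schema.integer.required) # builder API assumed
res    = llm.complete("5 + 2 ?", schema:)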

#with(headers:) ⇒ LLM::Provider

Add one or more headers to all requests

Examples:

llm = LLM.openai(key: ENV["KEY"])
llm.with(headers: {"OpenAI-Organization" => ENV["ORG"]})
llm.with(headers: {"OpenAI-Project" => ENV["PROJECT"]})

Parameters:

  • headers (Hash<String,String>)

    One or more headers

Returns:

  • (LLM::Provider)

# File 'lib/llm/provider.rb', line 207

def with(headers:)
  lock do
    tap { @headers.merge!(headers) }
  end
end

#server_tools ⇒ Hash{String => LLM::ServerTool}

Note:

This list may be outdated; the LLM::Provider#server_tool method can be used when a tool is not found here.

Returns all known server-side tools provided by the provider.

Returns:

  • (Hash{String => LLM::ServerTool})

# File 'lib/llm/provider.rb', line 219

def server_tools
  {}
end
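
A usage sketch; because this hash may be outdated, falling back to #server_tool is the safer path. The "web_search" key is an assumption.

llm  = LLM.openai(key: ENV["KEY"])
tool = llm.server_tools["web_search"] || llm.server_tool(:web_search) # key name assumed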

#server_tool(name, options = {}) ⇒ LLM::ServerTool

Note:

OpenAI, Anthropic, and Gemini provide platform tools for capabilities such as web search.

Returns a server-side tool provided by the provider.

Examples:

llm   = LLM.openai(key: ENV["KEY"])
tools = [llm.server_tool(:web_search)]
res   = llm.responses.create("Summarize today's news", tools:)
print res.output_text, "\n"

Parameters:

  • name (String, Symbol)

    The name of the tool

  • options (Hash) (defaults to: {})

    Configuration options for the tool

Returns:

  • (LLM::ServerTool)

# File 'lib/llm/provider.rb', line 236

def server_tool(name, options = {})
  LLM::ServerTool.new(name, options, self)
end

#web_search(query:) ⇒ LLM::Response

Provides a web search capability

Parameters:

  • query (String)

    The search query

Returns:

  • (LLM::Response)

Raises:

  • (NotImplementedError)

    When the method is not implemented by a subclass



# File 'lib/llm/provider.rb', line 246

def web_search(query:)
  raise NotImplementedError
end
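
A usage sketch taken directly from the documented signature.

llm = LLM.openai(key: ENV["KEY"])
res = llm.web_search(query: "latest Ruby release")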

#user_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 252

def user_role
  :user
end

#system_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 258

def system_role
  :system
end

#developer_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 264

def developer_role
  :developer
end

#tool_role ⇒ Symbol

Returns:

  • (Symbol)


# File 'lib/llm/provider.rb', line 270

def tool_role
  :tool
end
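
A sketch of how these role helpers might be used to assemble provider-agnostic messages; whether #complete accepts symbol roles in message hashes is an assumption (the #complete example above uses strings).

llm = LLM.openai(key: ENV["KEY"])
messages = [{role: llm.system_role, content: "Answer briefly"}]
res = llm.complete("5 + 2 ?", messages:) # symbol role assumed acceptable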

#tracer ⇒ LLM::Tracer

Returns a fiber-local tracer

Returns:

  • (LLM::Tracer)

# File 'lib/llm/provider.rb', line 277

def tracer
  weakmap[self] || LLM::Tracer::Null.new(self)
end

#tracer=(tracer) ⇒ void

This method returns an undefined value.

Set a fiber-local tracer

Examples:

llm = LLM.openai(key: ENV["KEY"])
Thread.new do
  llm.tracer = LLM::Tracer::Logger.new(llm, path: "/path/to/log/1.txt")
end
Thread.new do
  llm.tracer = LLM::Tracer::Logger.new(llm, path: "/path/to/log/2.txt")
end
# ...

Parameters:

  • tracer (LLM::Tracer, nil)

    The tracer to set, or nil to unset it

# File 'lib/llm/provider.rb', line 295

def tracer=(tracer)
  if tracer.nil?
    if weakmap.respond_to?(:delete)
      weakmap.delete(self)
    else
      weakmap[self] = nil
    end
  else
    weakmap[self] = tracer
  end
end