Class: LLM::Provider (Abstract)
Overview
The Provider class is an abstract base class for LLM (Large Language Model) providers.
Constant Summary

- @@clients = {}
- @@mutex = Mutex.new
Class Method Summary

- .mutex ⇒ Object (private)
- .clients ⇒ Object (private)
Instance Method Summary

- #web_search(query:) ⇒ LLM::Response
  Provides a web search capability.
- #initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false) ⇒ Provider (constructor)
  A new instance of Provider.
- #inspect ⇒ String
  Returns an inspection of the provider object.
- #embed(input, model: nil, **params) ⇒ LLM::Response
  Provides an embedding.
- #complete(prompt, params = {}) ⇒ LLM::Response
  Provides an interface to the chat completions API.
- #chat(prompt, params = {}) ⇒ LLM::Bot
  Starts a new lazy chat powered by the chat completions API.
- #chat!(prompt, params = {}) ⇒ LLM::Bot
  Starts a new chat powered by the chat completions API.
- #respond(prompt, params = {}) ⇒ LLM::Bot
  Starts a new lazy chat powered by the responses API.
- #respond!(prompt, params = {}) ⇒ LLM::Bot
  Starts a new chat powered by the responses API.
- #responses ⇒ LLM::OpenAI::Responses
  Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
- #images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images
  Returns an interface to the images API.
- #audio ⇒ LLM::OpenAI::Audio
  Returns an interface to the audio API.
- #files ⇒ LLM::OpenAI::Files
  Returns an interface to the files API.
- #models ⇒ LLM::OpenAI::Models
  Returns an interface to the models API.
- #moderations ⇒ LLM::OpenAI::Moderations
  Returns an interface to the moderations API.
- #vector_stores ⇒ LLM::OpenAI::VectorStore
  Returns an interface to the vector stores API.
- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #default_model ⇒ String
  Returns the default model for chat completions.
- #schema ⇒ LLM::Schema
  Returns an object that can generate a JSON schema.
- #with(headers:) ⇒ LLM::Provider
  Add one or more headers to all requests.
- #tools ⇒ String => LLM::Tool
  Returns all known tools provided by a provider.
- #tool(name, options = {}) ⇒ LLM::Tool
  Returns a tool provided by a provider.
Constructor Details
#initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false) ⇒ Provider
Returns a new instance of Provider.
# File 'lib/llm/provider.rb', line 38

def initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false)
  @key = key
  @host = host
  @port = port
  @timeout = timeout
  @ssl = ssl
  @client = persistent ? persistent_client : transient_client
  @base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
end
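Concrete subclasses supply their own defaults for these parameters. A minimal construction sketch, assuming the gem is required as "llm" and that a concrete subclass is reachable through a factory method such as LLM.openai (the factory name is an assumption, not documented on this page):

require "llm"

# Hypothetical factory method; the key is read from the environment.
llm = LLM.openai(key: ENV["OPENAI_API_KEY"])

# The constructor itself accepts transport options, e.g. a persistent
# HTTP client (persistent: true) or a custom host and port.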
Class Method Details
.mutex ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or changed in the future.
# File 'lib/llm/provider.rb', line 22

def self.mutex = @@mutex
.clients ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or changed in the future.
# File 'lib/llm/provider.rb', line 18

def self.clients = @@clients
Instance Method Details
#web_search(query:) ⇒ LLM::Response
Provides a web search capability
# File 'lib/llm/provider.rb', line 272

def web_search(query:)
  raise NotImplementedError
end
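The abstract implementation raises NotImplementedError; only providers that support search override it. A hedged usage sketch, reusing the hypothetical LLM.openai factory from the constructor example:

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
res = llm.web_search(query: "Ruby 3.4 release notes")
# res is an LLM::Response describing the search results; its exact
# fields depend on the provider and are not documented on this page.
puts res.inspect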
#inspect ⇒ String
The secret key is redacted in inspect for security reasons
Returns an inspection of the provider object
# File 'lib/llm/provider.rb', line 52

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @http=#{@http.inspect}>"
end
#embed(input, model: nil, **params) ⇒ LLM::Response
Provides an embedding
# File 'lib/llm/provider.rb', line 67

def embed(input, model: nil, **params)
  raise NotImplementedError
end
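A hedged usage sketch; the model name is illustrative, and the shape of the returned LLM::Response is an assumption:

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
res = llm.embed("The quick brown fox", model: "text-embedding-3-small")
# The response wraps the embedding vector(s) produced by the provider.
puts res.inspect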
#complete(prompt, params = {}) ⇒ LLM::Response
Provides an interface to the chat completions API
# File 'lib/llm/provider.rb', line 91

def complete(prompt, params = {})
  raise NotImplementedError
end
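A hedged usage sketch; the :role key is collected into the params hash, and the accessors on the returned response are assumptions:

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
res = llm.complete("Hello, how are you?", role: :user)
# res is an LLM::Response from the chat completions API.
puts res.inspect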
#chat(prompt, params = {}) ⇒ LLM::Bot
This method creates a lazy version of an LLM::Bot object.
Starts a new lazy chat powered by the chat completions API
# File 'lib/llm/provider.rb', line 103

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).chat(prompt, role:)
end
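Because the returned bot is lazy, no request is sent until its messages are read. A hedged sketch, assuming LLM::Bot exposes a messages enumerable:

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
bot = llm.chat("You are a terse assistant", role: :system)
bot.chat("What is the capital of France?", role: :user)
# Reading the messages triggers the deferred request(s).
bot.messages.each { |message| puts message.inspect }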
#chat!(prompt, params = {}) ⇒ LLM::Bot
This method creates a non-lazy version of an LLM::Bot object.
Starts a new chat powered by the chat completions API
# File 'lib/llm/provider.rb', line 117

def chat!(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).chat(prompt, role:)
end
#respond(prompt, params = {}) ⇒ LLM::Bot
This method creates a lazy variant of an LLM::Bot object.
Starts a new lazy chat powered by the responses API
# File 'lib/llm/provider.rb', line 131

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).respond(prompt, role:)
end
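The shape mirrors #chat, but conversation turns go through the responses API; #respond! behaves the same way except that requests are sent eagerly. A hedged sketch, under the same assumptions as the #chat example:

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
bot = llm.respond("You are a terse assistant", role: :system)
bot.respond("Summarize our conversation so far", role: :user)
bot.messages.each { |message| puts message.inspect }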
#respond!(prompt, params = {}) ⇒ LLM::Bot
This method creates a non-lazy variant of an LLM::Bot object.
Starts a new chat powered by the responses API
# File 'lib/llm/provider.rb', line 145

def respond!(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).respond(prompt, role:)
end
#responses ⇒ LLM::OpenAI::Responses
Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
# File 'lib/llm/provider.rb', line 157

def responses
  raise NotImplementedError
end
#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images
Returns an interface to the images API
# File 'lib/llm/provider.rb', line 164

def images
  raise NotImplementedError
end
#audio ⇒ LLM::OpenAI::Audio
Returns an interface to the audio API
# File 'lib/llm/provider.rb', line 171

def audio
  raise NotImplementedError
end
#files ⇒ LLM::OpenAI::Files
Returns an interface to the files API
# File 'lib/llm/provider.rb', line 178

def files
  raise NotImplementedError
end
#models ⇒ LLM::OpenAI::Models
Returns an interface to the models API
# File 'lib/llm/provider.rb', line 185

def models
  raise NotImplementedError
end
#moderations ⇒ LLM::OpenAI::Moderations
Returns an interface to the moderations API
# File 'lib/llm/provider.rb', line 192

def moderations
  raise NotImplementedError
end
#vector_stores ⇒ LLM::OpenAI::VectorStore
Returns an interface to the vector stores API
# File 'lib/llm/provider.rb', line 199

def vector_stores
  raise NotImplementedError
end
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”.
# File 'lib/llm/provider.rb', line 207

def assistant_role
  raise NotImplementedError
end
#default_model ⇒ String
Returns the default model for chat completions
# File 'lib/llm/provider.rb', line 214

def default_model
  raise NotImplementedError
end
#schema ⇒ LLM::Schema
Returns an object that can generate a JSON schema
# File 'lib/llm/provider.rb', line 221

def schema
  @schema ||= LLM::Schema.new
end
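A hedged sketch of how the schema object might be used; the builder methods (object, string) and the schema: chat parameter are assumptions about LLM::Schema and LLM::Bot, not documented on this page:

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
schema = llm.schema.object(answer: llm.schema.string)
# The generated JSON schema can then be handed to chat params that
# accept one, e.g. llm.chat("Answer in JSON", schema: schema).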
#with(headers:) ⇒ LLM::Provider
Add one or more headers to all requests
# File 'lib/llm/provider.rb', line 235

def with(headers:)
  tap { (@headers ||= {}).merge!(headers) }
end
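Because the method returns the provider itself (via tap), calls can be chained and headers accumulate across calls. The header names below are purely illustrative:

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
llm.with(headers: {"OpenAI-Organization" => "org-123"})
   .with(headers: {"X-Request-Source" => "docs-example"})
# Both headers are now merged into every subsequent request.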
#tools ⇒ String => LLM::Tool
The list returned by this method might be outdated; the LLM::Provider#tool method can be used to request a tool that is not listed here.
Returns all known tools provided by a provider.
# File 'lib/llm/provider.rb', line 245

def tools
  {}
end
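The abstract implementation returns an empty Hash; concrete providers may populate it. A hedged sketch (the tool name is illustrative, and the fallback to #tool follows the note above):

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
llm.tools.each_key { |name| puts name }
# If a tool is not listed, ask the provider for it directly.
tool = llm.tools["web_search"] || llm.tool("web_search")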