Class: LLM::Provider (Abstract)

Inherits: Object
Defined in: lib/llm/provider.rb
Overview
The Provider class is an abstract base class for LLM (Large Language Model) providers.
Instance Method Summary

- #with(headers:) ⇒ LLM::Provider
  Add one or more headers to all requests.
- #inspect ⇒ String
  Returns an inspection of the provider object.
- #embed(input, model: nil, **params) ⇒ LLM::Response::Embedding
  Provides an embedding.
- #complete(prompt, params = {}) ⇒ LLM::Response::Completion
  Provides an interface to the chat completions API.
- #chat(prompt, params = {}) ⇒ LLM::Bot
  Starts a new lazy chat powered by the chat completions API.
- #chat!(prompt, params = {}) ⇒ LLM::Bot
  Starts a new chat powered by the chat completions API.
- #respond(prompt, params = {}) ⇒ LLM::Bot
  Starts a new lazy chat powered by the responses API.
- #respond!(prompt, params = {}) ⇒ LLM::Bot
  Starts a new chat powered by the responses API.
- #responses ⇒ LLM::OpenAI::Responses
  Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
- #images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images
  Returns an interface to the images API.
- #audio ⇒ LLM::OpenAI::Audio
  Returns an interface to the audio API.
- #files ⇒ LLM::OpenAI::Files
  Returns an interface to the files API.
- #models ⇒ LLM::OpenAI::Models
  Returns an interface to the models API.
- #moderations ⇒ LLM::OpenAI::Moderations
  Returns an interface to the moderations API.
- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #default_model ⇒ String
  Returns the default model for chat completions.
- #schema ⇒ JSON::Schema
  Returns an object that can generate a JSON schema.
- #initialize(key:, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider (constructor)
  A new instance of Provider.
Constructor Details
#initialize(key:, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider
Returns a new instance of Provider.
# File 'lib/llm/provider.rb', line 22

def initialize(key:, host:, port: 443, timeout: 60, ssl: true)
  @key = key
  @client = Net::HTTP.new(host, port)
  @client.use_ssl = ssl
  @client.read_timeout = timeout
end
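In practice a provider is obtained through a concrete subclass rather than this abstract class. A minimal sketch, assuming the gem's top-level LLM.openai entry point and an OPENAI_API_KEY environment variable:

require "llm"

# Hypothetical setup: any concrete provider subclass works the same way.
llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
puts llm.inspect # => #<LLM::OpenAI:0x... @key=[REDACTED] ...>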
Instance Method Details
#with(headers:) ⇒ LLM::Provider
Add one or more headers to all requests
# File 'lib/llm/provider.rb', line 209

def with(headers:)
  tap { (@headers ||= {}).merge!(headers) }
end
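Because the method returns the provider itself (via tap), calls can be chained. A usage sketch; the header names below are illustrative, not required by the gem:

require "llm"

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
# Each call merges headers into the set sent with every request.
llm.with(headers: {"OpenAI-Beta" => "assistants=v2"})
   .with(headers: {"X-Request-Source" => "docs-example"})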
#inspect ⇒ String
Note: the secret key is redacted in inspect for security reasons.
Returns an inspection of the provider object
# File 'lib/llm/provider.rb', line 33

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @client=#{@client.inspect}>"
end
#embed(input, model: nil, **params) ⇒ LLM::Response::Embedding
Provides an embedding
# File 'lib/llm/provider.rb', line 48

def embed(input, model: nil, **params)
  raise NotImplementedError
end
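Concrete providers implement this method. A usage sketch, assuming an OpenAI-backed provider; the documented return type is LLM::Response::Embedding:

require "llm"

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
res = llm.embed("The quick brown fox")
# Assumption: the response wraps the embedding vector(s) for the input.
p res.class # => LLM::Response::Embedding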
#complete(prompt, params = {}) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API
# File 'lib/llm/provider.rb', line 72

def complete(prompt, params = {})
  raise NotImplementedError
end
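A single-turn completion sketch, assuming an OpenAI-backed provider; the choices/content accessors are inferred from the LLM::Response::Completion return type, not confirmed by this page:

require "llm"

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
res = llm.complete("Hello, world")
# Assumption: a completion carries one or more choices with text content.
res.choices.each { |choice| puts choice.content }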
#chat(prompt, params = {}) ⇒ LLM::Bot
This method creates a lazy version of an LLM::Bot object.
Starts a new lazy chat powered by the chat completions API
# File 'lib/llm/provider.rb', line 84

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).chat(prompt, role:)
end
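Since #chat returns an LLM::Bot, turns can be queued and the conversation read back. A sketch, assuming the bot exposes its transcript through #messages and that enumerating it triggers the deferred (lazy) requests:

require "llm"

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
bot = llm.chat("You are a terse assistant.", role: :system)
bot.chat("What is 5 + 5?", role: :user)
# Assumption: each message responds to #role and #content.
bot.messages.each { |message| puts "[#{message.role}] #{message.content}" }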
#chat!(prompt, params = {}) ⇒ LLM::Bot
This method creates a non-lazy version of an LLM::Bot object.
Starts a new chat powered by the chat completions API
# File 'lib/llm/provider.rb', line 98

def chat!(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).chat(prompt, role:)
end
#respond(prompt, params = {}) ⇒ LLM::Bot
This method creates a lazy variant of an LLM::Bot object.
Starts a new lazy chat powered by the responses API
# File 'lib/llm/provider.rb', line 112

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).respond(prompt, role:)
end
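#respond works like #chat but is backed by the responses API. A sketch, assuming an OpenAI-backed provider (the only provider this page names for the responses interface) and the same transcript access as #chat:

require "llm"

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
bot = llm.respond("What is the capital of France?", role: :user)
# Assumption: the transcript reads back the same way as with #chat.
bot.messages.each { |message| puts "[#{message.role}] #{message.content}" }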
#respond!(prompt, params = {}) ⇒ LLM::Bot
This method creates a non-lazy variant of an LLM::Bot object.
Starts a new chat powered by the responses API
# File 'lib/llm/provider.rb', line 126

def respond!(prompt, params = {})
  role = params.delete(:role)
  LLM::Bot.new(self, params).respond(prompt, role:)
end
#responses ⇒ LLM::OpenAI::Responses
Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
# File 'lib/llm/provider.rb', line 138

def responses
  raise NotImplementedError
end
#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images
Returns an interface to the images API
# File 'lib/llm/provider.rb', line 145

def images
  raise NotImplementedError
end
#audio ⇒ LLM::OpenAI::Audio
Returns an interface to the audio API
# File 'lib/llm/provider.rb', line 152

def audio
  raise NotImplementedError
end
#files ⇒ LLM::OpenAI::Files
Returns an interface to the files API
# File 'lib/llm/provider.rb', line 159

def files
  raise NotImplementedError
end
#models ⇒ LLM::OpenAI::Models
Returns an interface to the models API
# File 'lib/llm/provider.rb', line 166

def models
  raise NotImplementedError
end
#moderations ⇒ LLM::OpenAI::Moderations
Returns an interface to the moderations API
# File 'lib/llm/provider.rb', line 173

def moderations
  raise NotImplementedError
end
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”
# File 'lib/llm/provider.rb', line 181

def assistant_role
  raise NotImplementedError
end
#default_model ⇒ String
Returns the default model for chat completions
# File 'lib/llm/provider.rb', line 188

def default_model
  raise NotImplementedError
end
#schema ⇒ JSON::Schema
Returns an object that can generate a JSON schema
# File 'lib/llm/provider.rb', line 195

def schema
  @schema ||= JSON::Schema.new
end
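The schema object can be used to constrain a model's output to a JSON structure. A sketch, assuming a builder-style API on JSON::Schema (the #object, #integer, and #required calls are illustrative and may differ in practice) and that a schema can be passed through the chat params:

require "llm"

llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
# Assumed builder API: describe an object with one required integer field.
schema = llm.schema.object(answer: llm.schema.integer.required)
bot = llm.chat("What is 5 + 5? Answer in JSON.", schema: schema)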