Class: LLM::Provider (Abstract)
- Inherits: Object
- Defined in: lib/llm/provider.rb
Overview
The Provider class represents an abstract class for LLM (Language Model) providers.
Instance Method Summary
- #models ⇒ Hash<String, LLM::Model>
  Returns a hash of available models.
- #inspect ⇒ String
  Returns an inspection of the provider object.
- #embed(input, model:, **params) ⇒ LLM::Response::Embedding
  Provides an embedding.
- #complete(prompt, role = :user, model: nil, **params) ⇒ LLM::Response::Completion
  Provides an interface to the chat completions API.
- #chat(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
  Starts a new lazy chat powered by the chat completions API.
- #chat!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
  Starts a new chat powered by the chat completions API.
- #respond(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
  Starts a new lazy chat powered by the responses API.
- #respond!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
  Starts a new chat powered by the responses API.
- #responses ⇒ LLM::OpenAI::Responses
  Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
- #images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images
  Returns an interface to the images API.
- #audio ⇒ LLM::OpenAI::Audio
  Returns an interface to the audio API.
- #files ⇒ LLM::OpenAI::Files
  Returns an interface to the files API.
- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #initialize(secret, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider (constructor)
  A new instance of Provider.
Constructor Details
#initialize(secret, host:, port: 443, timeout: 60, ssl: true) ⇒ Provider
Returns a new instance of Provider.
# File 'lib/llm/provider.rb', line 20

def initialize(secret, host:, port: 443, timeout: 60, ssl: true)
  @secret = secret
  @http = Net::HTTP.new(host, port).tap do |http|
    http.use_ssl = ssl
    http.read_timeout = timeout
  end
end
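Example: since Provider is abstract, it is constructed through a concrete subclass. A minimal sketch, assuming a hypothetical LLM::OpenAI subclass; the subclass name and host value are assumptions, not documented on this page:

  # Hypothetical concrete subclass; the abstract Provider itself
  # leaves most of its methods raising NotImplementedError.
  llm = LLM::OpenAI.new("<secret>", host: "api.openai.com")
  llm.is_a?(LLM::Provider) #=> true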
Instance Method Details
#models ⇒ Hash<String, LLM::Model>
Returns a hash of available models
# File 'lib/llm/provider.rb', line 183

def models
  raise NotImplementedError
end
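Example: iterating the returned hash on a concrete provider. The key and value types follow the signature above; the fields available on LLM::Model are not documented on this page:

  llm.models.each do |name, model|
    puts name # a model identifier string; model is an LLM::Model
  end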
#inspect ⇒ String
The secret key is redacted in inspect for security reasons
Returns an inspection of the provider object
# File 'lib/llm/provider.rb', line 32

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @secret=[REDACTED] @http=#{@http.inspect}>"
end
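Example: the redacted output, following the format string in the method body (the exact object id and Net::HTTP inspection are illustrative):

  llm.inspect
  #=> "#<LLM::OpenAI:0x3f9c @secret=[REDACTED] @http=#<Net::HTTP api.openai.com:443 open=false>>"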
#embed(input, model:, **params) ⇒ LLM::Response::Embedding
Provides an embedding
# File 'lib/llm/provider.rb', line 47

def embed(input, model:, **params)
  raise NotImplementedError
end
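Example: a usage sketch against a concrete provider; the model name is a placeholder assumption:

  res = llm.embed("Hello, world", model: "text-embedding-3-small")
  res.class #=> LLM::Response::Embedding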
#complete(prompt, role = :user, model: nil, **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API
# File 'lib/llm/provider.rb', line 72

def complete(prompt, role = :user, model: nil, **params)
  raise NotImplementedError
end
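Example: a single-turn completion sketch; the accessors on the response object are not documented on this page:

  res = llm.complete("What is 2 + 2?", :user)
  res.class #=> LLM::Response::Completion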
#chat(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
This method creates a lazy version of an LLM::Chat object.
Starts a new lazy chat powered by the chat completions API
# File 'lib/llm/provider.rb', line 88

def chat(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).lazy.chat(prompt, role)
end
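Example: because the returned chat is lazy, turns can be queued before any request is made; exactly when the deferred request fires is an implementation detail of LLM::Chat (an assumption here):

  bot = llm.chat("You are a concise assistant", :system)
  bot.chat("What is Ruby?", :user)
  # No HTTP request has necessarily been sent yet; it is deferred
  # until the conversation is read.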
#chat!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
This method creates a non-lazy version of an LLM::Chat object.
Starts a new chat powered by the chat completions API
# File 'lib/llm/provider.rb', line 104

def chat!(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).chat(prompt, role)
end
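Example: the non-lazy variant issues a request for each turn as it is added (sketch):

  bot = llm.chat!("What is Ruby?", :user) # request is sent immediately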
#respond(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
This method creates a lazy variant of an LLM::Chat object.
Starts a new lazy chat powered by the responses API
# File 'lib/llm/provider.rb', line 120

def respond(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).lazy.respond(prompt, role)
end
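Example: same shape as #chat, but each turn goes through the responses API instead of chat completions; only meaningful on providers that implement #responses (sketch):

  bot = llm.respond("Summarize the plot of Dracula", :user)
  bot.respond("Now in one sentence", :user)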
#respond!(prompt, role = :user, model: nil, **params) ⇒ LLM::Chat
This method creates a non-lazy variant of an LLM::Chat object.
Starts a new chat powered by the responses API
# File 'lib/llm/provider.rb', line 136

def respond!(prompt, role = :user, model: nil, **params)
  LLM::Chat.new(self, params).respond(prompt, role)
end
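Example: as with #chat!, the bang variant dispatches immediately rather than lazily (sketch):

  bot = llm.respond!("Summarize the plot of Dracula", :user) # request sent now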
#responses ⇒ LLM::OpenAI::Responses
Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
# File 'lib/llm/provider.rb', line 147

def responses
  raise NotImplementedError
end
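Example: the abstract default raises, so an unsupported provider can be detected with a rescue; the methods on the returned interface are not documented on this page:

  begin
    llm.responses
  rescue NotImplementedError
    warn "#{llm.class} does not implement the responses API"
  end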
#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images
Returns an interface to the images API
# File 'lib/llm/provider.rb', line 154

def images
  raise NotImplementedError
end
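Example: a hypothetical call on a provider that implements it; the method name and keyword are assumptions, not documented on this page:

  llm.images.create(prompt: "a red bicycle on a beach") # hypothetical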
#audio ⇒ LLM::OpenAI::Audio
Returns an interface to the audio API
# File 'lib/llm/provider.rb', line 161

def audio
  raise NotImplementedError
end
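Example: likewise hypothetical; the method name and keyword are assumptions:

  llm.audio.create_speech(input: "Hello from Ruby") # hypothetical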
#files ⇒ LLM::OpenAI::Files
Returns an interface to the files API
# File 'lib/llm/provider.rb', line 168

def files
  raise NotImplementedError
end
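Example: a hypothetical upload call; the method name and keyword are assumptions:

  llm.files.create(file: "handbook.pdf") # hypothetical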
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”
# File 'lib/llm/provider.rb', line 176

def assistant_role
  raise NotImplementedError
end
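Example: a concrete provider overrides this with its wire-level role name; a sketch with an assumed subclass:

  class MyProvider < LLM::Provider
    def assistant_role
      "assistant" # a Gemini-style provider would return "model"
    end
  end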