Class: LLM::Provider Abstract
Overview
The Provider class is the abstract base class for LLM (Language Model) providers.
Constant Summary
- @@clients = {}
Class Method Summary
- .clients ⇒ Object private
Instance Method Summary
- #tracer=(tracer) ⇒ void: Set the tracer.
- #initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false) ⇒ Provider (constructor): A new instance of Provider.
- #inspect ⇒ String: Returns an inspection of the provider object.
- #embed(input, model: nil, **params) ⇒ LLM::Response: Provides an embedding.
- #complete(prompt, params = {}) ⇒ LLM::Response: Provides an interface to the chat completions API.
- #chat(prompt, params = {}) ⇒ LLM::Session: Starts a new chat powered by the chat completions API.
- #respond(prompt, params = {}) ⇒ LLM::Session: Starts a new chat powered by the responses API.
- #responses ⇒ LLM::OpenAI::Responses: Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
- #images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images: Returns an interface to the images API.
- #audio ⇒ LLM::OpenAI::Audio: Returns an interface to the audio API.
- #files ⇒ LLM::OpenAI::Files: Returns an interface to the files API.
- #models ⇒ LLM::OpenAI::Models: Returns an interface to the models API.
- #moderations ⇒ LLM::OpenAI::Moderations: Returns an interface to the moderations API.
- #vector_stores ⇒ LLM::OpenAI::VectorStore: Returns an interface to the vector stores API.
- #assistant_role ⇒ String: Returns the role of the assistant in the conversation.
- #default_model ⇒ String: Returns the default model for chat completions.
- #schema ⇒ LLM::Schema: Returns an object that can generate a JSON schema.
- #with(headers:) ⇒ LLM::Provider: Add one or more headers to all requests.
- #server_tools ⇒ String => LLM::ServerTool: Returns all known tools provided by a provider.
- #server_tool(name, options = {}) ⇒ LLM::ServerTool: Returns a tool provided by a provider.
- #web_search(query:) ⇒ LLM::Response: Provides a web search capability.
- #user_role ⇒ Symbol
- #system_role ⇒ Symbol
- #developer_role ⇒ Symbol
- #tool_role ⇒ Symbol
- #tracer ⇒ LLM::Tracer: Returns an LLM tracer.
Constructor Details
#initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false) ⇒ Provider
Returns a new instance of Provider.
# File 'lib/llm/provider.rb', line 33

def initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false)
  @key = key
  @host = host
  @port = port
  @timeout = timeout
  @ssl = ssl
  @client = persistent ? persistent_client : transient_client
  @tracer = LLM::Tracer::Null.new(self)
  @base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
  @headers = {"User-Agent" => "llm.rb v#{LLM::VERSION}"}
end
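The base-URI logic of the constructor can be reproduced as a pure-Ruby sketch, with no gem required; `base_uri_for` is a hypothetical helper name, not part of the library:

```ruby
require "uri"

# Hypothetical helper mirroring how #initialize derives @base_uri
# from the ssl:, host:, and port: options.
def base_uri_for(host:, port: 443, ssl: true)
  URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
end

secure = base_uri_for(host: "api.openai.com")
local  = base_uri_for(host: "localhost", port: 8080, ssl: false)
```

Note that `ssl: false` switches the scheme to plain HTTP, which matters when pointing a provider at a local server.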
Class Method Details
.clients ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
# File 'lib/llm/provider.rb', line 17

def self.clients = @@clients
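The `@@clients` class variable behaves as a single registry shared across Provider subclasses. A standalone sketch of that pattern follows; `Registry`, `SubRegistry`, and `.register` are hypothetical names, and the library's actual key scheme is internal and not shown:

```ruby
# Standalone sketch of a class-variable registry: every subclass
# reads and writes the same Hash through the class-level accessor.
class Registry
  @@clients = {}

  def self.clients = @@clients

  def self.register(key, client)
    @@clients[key] = client
  end
end

class SubRegistry < Registry; end

# Writes through the subclass are visible via the superclass,
# because class variables are shared down the inheritance chain.
SubRegistry.register(:default, "client-instance")
```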
Instance Method Details
#tracer=(tracer) ⇒ void
This method returns an undefined value.
Set the tracer
# File 'lib/llm/provider.rb', line 279

def tracer=(tracer)
  @tracer = if tracer.nil?
    LLM::Tracer::Null.new(self)
  else
    tracer
  end
end
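The nil fallback above is the null-object pattern: assigning nil installs a tracer that silently ignores events, so callers never need to branch on nil. A self-contained sketch, with `NullTracer` and `SketchProvider` standing in for `LLM::Tracer::Null` and a real provider:

```ruby
# Null-object sketch: a tracer that accepts every event and does nothing.
class NullTracer
  def trace(_event)
    nil # swallow the event
  end
end

class SketchProvider
  attr_reader :tracer

  def initialize
    @tracer = NullTracer.new
  end

  # Mirrors Provider#tracer=: nil falls back to the null object.
  def tracer=(tracer)
    @tracer = tracer.nil? ? NullTracer.new : tracer
  end
end

provider = SketchProvider.new
provider.tracer = nil
provider.tracer.trace(:request_started) # safe: no nil check needed
```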
#inspect ⇒ String
The secret key is redacted in inspect for security reasons
Returns an inspection of the provider object
# File 'lib/llm/provider.rb', line 49

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED] @client=#{@client.inspect} @tracer=#{@tracer.inspect}>"
end
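The redaction approach can be shown in isolation: override `#inspect` so the secret never reaches logs or consoles. `RedactingClient` below is a hypothetical stand-in for a provider:

```ruby
# Sketch of secret redaction in #inspect: the key is stored normally
# but replaced with a placeholder in the inspection string.
class RedactingClient
  def initialize(key)
    @key = key
  end

  def inspect
    "#<#{self.class.name}:0x#{object_id.to_s(16)} @key=[REDACTED]>"
  end
end

client = RedactingClient.new("sk-secret-token")
client.inspect # the literal key does not appear in the output
```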
#embed(input, model: nil, **params) ⇒ LLM::Response
Provides an embedding
# File 'lib/llm/provider.rb', line 64

def embed(input, model: nil, **params)
  raise NotImplementedError
end
#complete(prompt, params = {}) ⇒ LLM::Response
Provides an interface to the chat completions API
# File 'lib/llm/provider.rb', line 88

def complete(prompt, params = {})
  raise NotImplementedError
end
#chat(prompt, params = {}) ⇒ LLM::Session
Starts a new chat powered by the chat completions API
# File 'lib/llm/provider.rb', line 97

def chat(prompt, params = {})
  role = params.delete(:role)
  LLM::Session.new(self, params).talk(prompt, role:)
end
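The role handling in #chat relies on Hash#delete, which removes the key and returns its value (or nil when absent), so the remaining params pass through untouched. A minimal sketch; `split_role` is a hypothetical helper:

```ruby
# Sketch of the params-splitting step in #chat: :role is extracted
# destructively, everything else is forwarded as-is.
def split_role(params)
  role = params.delete(:role)
  [role, params]
end

role, rest = split_role({role: :system, stream: true})
```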
#respond(prompt, params = {}) ⇒ LLM::Session
Starts a new chat powered by the responses API
# File 'lib/llm/provider.rb', line 108

def respond(prompt, params = {})
  role = params.delete(:role)
  LLM::Session.new(self, params).respond(prompt, role:)
end
#responses ⇒ LLM::OpenAI::Responses
Compared to the chat completions API, the responses API can require less bandwidth on each turn, maintain state server-side, and produce faster responses.
# File 'lib/llm/provider.rb', line 120

def responses
  raise NotImplementedError
end
#images ⇒ LLM::OpenAI::Images, LLM::Gemini::Images
Returns an interface to the images API
# File 'lib/llm/provider.rb', line 127

def images
  raise NotImplementedError
end
#audio ⇒ LLM::OpenAI::Audio
Returns an interface to the audio API
# File 'lib/llm/provider.rb', line 134

def audio
  raise NotImplementedError
end
#files ⇒ LLM::OpenAI::Files
Returns an interface to the files API
# File 'lib/llm/provider.rb', line 141

def files
  raise NotImplementedError
end
#models ⇒ LLM::OpenAI::Models
Returns an interface to the models API
# File 'lib/llm/provider.rb', line 148

def models
  raise NotImplementedError
end
#moderations ⇒ LLM::OpenAI::Moderations
Returns an interface to the moderations API
# File 'lib/llm/provider.rb', line 155

def moderations
  raise NotImplementedError
end
#vector_stores ⇒ LLM::OpenAI::VectorStore
Returns an interface to the vector stores API
# File 'lib/llm/provider.rb', line 162

def vector_stores
  raise NotImplementedError
end
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually "assistant" or "model"
# File 'lib/llm/provider.rb', line 170

def assistant_role
  raise NotImplementedError
end
#default_model ⇒ String
Returns the default model for chat completions
# File 'lib/llm/provider.rb', line 177

def default_model
  raise NotImplementedError
end
#schema ⇒ LLM::Schema
Returns an object that can generate a JSON schema
# File 'lib/llm/provider.rb', line 184

def schema
  @schema ||= LLM::Schema.new
end
#with(headers:) ⇒ LLM::Provider
Add one or more headers to all requests
# File 'lib/llm/provider.rb', line 198

def with(headers:)
  tap { @headers.merge!(headers) }
end
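#with returns the provider itself (via tap), which makes header additions chainable while the header Hash is mutated in place. A self-contained sketch of that builder pattern; `HeaderCarrier` is a hypothetical class, not part of the library:

```ruby
# Sketch of the chainable #with(headers:) pattern: tap yields the
# object and returns it, while merge! updates the Hash in place.
class HeaderCarrier
  attr_reader :headers

  def initialize
    @headers = {"User-Agent" => "llm.rb"}
  end

  def with(headers:)
    tap { @headers.merge!(headers) }
  end
end

carrier = HeaderCarrier.new
  .with(headers: {"X-Debug" => "1"})
  .with(headers: {"X-Request-Id" => "abc"})
```

Because merge! mutates state, #with configures the receiver rather than returning a fresh copy; each call adds to the same header set.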
#server_tools ⇒ String => LLM::ServerTool
The list returned by this method might be outdated; the LLM::Provider#server_tool method can be used if a tool is not found here.
Returns all known tools provided by a provider.
# File 'lib/llm/provider.rb', line 208

def server_tools
  {}
end
#server_tool(name, options = {}) ⇒ LLM::ServerTool
OpenAI, Anthropic, and Gemini provide platform tools for capabilities such as web search.
Returns a tool provided by a provider.
# File 'lib/llm/provider.rb', line 225

def server_tool(name, options = {})
  LLM::ServerTool.new(name, options, self)
end
#web_search(query:) ⇒ LLM::Response
Provides a web search capability
# File 'lib/llm/provider.rb', line 235

def web_search(query:)
  raise NotImplementedError
end
#user_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 241

def user_role
  :user
end
#system_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 247

def system_role
  :system
end
#developer_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 253

def developer_role
  :developer
end
#tool_role ⇒ Symbol
# File 'lib/llm/provider.rb', line 259

def tool_role
  :tool
end
#tracer ⇒ LLM::Tracer
Returns an LLM tracer
# File 'lib/llm/provider.rb', line 266

def tracer
  @tracer
end