Class: LLM::Session
- Inherits: Object
  - Object
  - LLM::Session
- Includes: Deserializer
- Defined in:
  lib/llm/bot.rb,
  lib/llm/session/deserializer.rb
Overview
LLM::Session provides an object that can maintain a conversation. A conversation can use the chat completions API that all LLM providers support or the responses API that currently only OpenAI supports.
Defined Under Namespace
Modules: Deserializer
Instance Attribute Summary
- #messages ⇒ LLM::Buffer<LLM::Message> (readonly)
  Returns an Enumerable for the messages in a conversation.
- #llm ⇒ LLM::Provider (readonly)
  Returns a provider.
Instance Method Summary
- #context_window ⇒ Integer
  Returns the model's context window.
- #talk(prompt, params = {}) ⇒ LLM::Response (also: #chat)
  Maintain a conversation via the chat completions API.
- #respond(prompt, params = {}) ⇒ LLM::Response
  Maintain a conversation via the responses API.
- #inspect ⇒ String
- #functions ⇒ Array<LLM::Function>
  Returns an array of functions that can be called.
- #usage ⇒ LLM::Object
  Returns token usage for the conversation. This method returns token usage for the latest assistant message, and it returns an empty object if there are no assistant messages.
- #prompt(&b) ⇒ LLM::Prompt (also: #build_prompt)
  Build a role-aware prompt for a single request.
- #image_url(url) ⇒ LLM::Object
  Recognize an object as a URL to an image.
- #local_file(path) ⇒ LLM::Object
  Recognize an object as a local file.
- #remote_file(res) ⇒ LLM::Object
  Recognize an object as a remote file.
- #tracer ⇒ LLM::Tracer
  Returns an LLM tracer.
- #model ⇒ String
  Returns the model a Session is actively using.
- #to_h ⇒ Hash
- #to_json ⇒ String
- #serialize(path:) ⇒ void (also: #save)
  Save a session.
- #deserialize(path: nil, string: nil) ⇒ LLM::Session (also: #restore)
  Restore a session.
- #cost ⇒ LLM::Cost
  Returns an approximate cost for a given session based on the provider and model.
- #initialize(llm, params = {}) ⇒ Session constructor
  A new instance of Session.
Methods included from Deserializer
Constructor Details
Instance Attribute Details
#messages ⇒ LLM::Buffer<LLM::Message> (readonly)
Returns an Enumerable for the messages in a conversation.

  # File 'lib/llm/bot.rb', line 32

  def messages
    @messages
  end
#llm ⇒ LLM::Provider (readonly)
Returns a provider.

  # File 'lib/llm/bot.rb', line 37

  def llm
    @llm
  end
Instance Method Details
#context_window ⇒ Integer
This method returns 0 when the provider or model can't be found within Registry.
Returns the model's context window. The context window is the maximum amount of input and output tokens a model can consider in a single request.
  # File 'lib/llm/bot.rb', line 270

  def context_window
    LLM
      .registry_for(llm)
      .limit(model:)
      .context
  rescue LLM::NoSuchModelError, LLM::NoSuchProviderError
    0
  end
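The rescue clause means an unrecognized provider or model degrades to 0 rather than raising. A minimal sketch of the same lookup-with-fallback pattern, using a plain Hash as a stand-in registry (the model name and limit below are made up, not real provider data):

```ruby
# Stand-in registry: model name => context window in tokens.
# The real lookup goes through LLM.registry_for; values here are illustrative.
REGISTRY = {"example-model" => 128_000}

NoSuchModelError = Class.new(StandardError)

def context_window(model)
  REGISTRY.fetch(model) { raise NoSuchModelError, model }
rescue NoSuchModelError
  0 # unknown models degrade to 0, mirroring LLM::Session#context_window
end
```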
#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat
Maintain a conversation via the chat completions API. This method immediately sends a request to the LLM and returns the response.
  # File 'lib/llm/bot.rb', line 66

  def talk(prompt, params = {})
    params = params.merge(messages: @messages.to_a)
    params = @params.merge(params)
    res = @llm.complete(prompt, params)
    role = params[:role] || @llm.user_role
    role = @llm.tool_role if params[:role].nil? && [*prompt].grep(LLM::Function::Return).any?
    @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
    @messages.concat [res.choices[-1]]
    res
  end
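One subtlety in #talk is role selection: when the caller passes no explicit :role and the prompt contains LLM::Function::Return objects (tool results being sent back), the message is recorded under the provider's tool role rather than the user role. A sketch of that branch, with a Struct standing in for LLM::Function::Return and the symbols :user/:tool standing in for the provider-specific role values:

```ruby
# Stand-in for LLM::Function::Return (a tool-call result).
FunctionReturn = Struct.new(:id, :value)

# Mirrors the role logic in #talk: an explicit :role always wins; otherwise
# the presence of any FunctionReturn in the prompt selects the tool role.
def role_for(prompt, params)
  return params[:role] if params[:role]
  [*prompt].grep(FunctionReturn).any? ? :tool : :user
end
```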
#respond(prompt, params = {}) ⇒ LLM::Response
Not all LLM providers support this API.
Maintain a conversation via the responses API. This method immediately sends a request to the LLM and returns the response.
  # File 'lib/llm/bot.rb', line 91

  def respond(prompt, params = {})
    res_id = @messages.find(&:assistant?)&.response&.response_id
    params = params.merge(previous_response_id: res_id, input: @messages.to_a).compact
    params = @params.merge(params)
    res = @llm.responses.create(prompt, params)
    role = params[:role] || @llm.user_role
    @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
    @messages.concat [res.choices[-1]]
    res
  end
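The previous_response_id parameter is how the responses API threads a conversation: each request points at the most recent assistant response. Since #usage describes the result of find(&:assistant?) as the latest assistant message, the buffer appears to enumerate newest-first; a sketch under that assumption, with a Struct standing in for LLM::Message:

```ruby
# Stand-in for LLM::Message, carrying only what this sketch needs.
Msg = Struct.new(:role, :response_id) do
  def assistant?
    role == :assistant
  end
end

# Assumes the message buffer yields newest-first, so `find` returns the
# most recent assistant message (nil-safe when there is none yet).
def previous_response_id(messages)
  messages.find(&:assistant?)&.response_id
end
```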
#inspect ⇒ String
  # File 'lib/llm/bot.rb', line 104

  def inspect
    "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
    "@llm=#{@llm.class}, @params=#{@params.inspect}, " \
    "@messages=#{@messages.inspect}>"
  end
#functions ⇒ Array<LLM::Function>
Returns an array of functions that can be called.

  # File 'lib/llm/bot.rb', line 113

  def functions
    @messages
      .select(&:assistant?)
      .flat_map do |msg|
        fns = msg.functions.select(&:pending?)
        fns.each do |fn|
          fn.tracer = tracer
          fn.model = msg.model
        end
      end
  end
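In other words, #functions surfaces unresolved tool calls: it walks assistant messages and keeps only the functions still pending. A reduced sketch of that select/flat_map shape, with Structs standing in for LLM::Message and LLM::Function:

```ruby
# Stand-ins for LLM::Function and LLM::Message.
Fn = Struct.new(:name, :pending) do
  def pending?
    pending
  end
end

ChatMsg = Struct.new(:role, :functions) do
  def assistant?
    role == :assistant
  end
end

# Collect pending functions across all assistant messages.
def pending_functions(messages)
  messages.select(&:assistant?)
          .flat_map { |msg| msg.functions.select(&:pending?) }
end
```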
#usage ⇒ LLM::Object
Returns token usage for the conversation. This method returns token usage for the latest assistant message, and it returns an empty object if there are no assistant messages.

  # File 'lib/llm/bot.rb', line 132

  def usage
    @messages.find(&:assistant?)&.usage || LLM::Object.from({})
  end
#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt
Build a role-aware prompt for a single request.
Prefer this method over #build_prompt. The older method name is kept for backward compatibility.
  # File 'lib/llm/bot.rb', line 151

  def prompt(&b)
    LLM::Prompt.new(@llm, &b)
  end
#image_url(url) ⇒ LLM::Object
Recognize an object as a URL to an image.

  # File 'lib/llm/bot.rb', line 162

  def image_url(url)
    LLM::Object.from(value: url, kind: :image_url)
  end
#local_file(path) ⇒ LLM::Object
Recognize an object as a local file.

  # File 'lib/llm/bot.rb', line 172

  def local_file(path)
    LLM::Object.from(value: LLM.File(path), kind: :local_file)
  end
#remote_file(res) ⇒ LLM::Object
Recognize an object as a remote file.

  # File 'lib/llm/bot.rb', line 182

  def remote_file(res)
    LLM::Object.from(value: res, kind: :remote_file)
  end
#tracer ⇒ LLM::Tracer
Returns an LLM tracer.

  # File 'lib/llm/bot.rb', line 189

  def tracer
    @llm.tracer
  end
#model ⇒ String
Returns the model a Session is actively using.

  # File 'lib/llm/bot.rb', line 196

  def model
    @messages.find(&:assistant?)&.model || @params[:model]
  end
#to_h ⇒ Hash
  # File 'lib/llm/bot.rb', line 202

  def to_h
    {model:, messages:}
  end
#to_json ⇒ String
  # File 'lib/llm/bot.rb', line 208

  def to_json(...)
    {schema_version: 1}.merge!(to_h).to_json(...)
  end
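So the serialized payload is the #to_h hash with a schema_version key prepended. A sketch of the resulting JSON shape using the stdlib JSON module and made-up values in place of a real session:

```ruby
require "json"

# Stand-in for a session's #to_h; model name and messages are illustrative.
session_h = {model: "example-model", messages: []}

# Mirrors #to_json: prepend a schema_version, then serialize.
payload = {schema_version: 1}.merge(session_h).to_json
parsed  = JSON.parse(payload)
```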
#serialize(path:) ⇒ void Also known as: save
This method returns an undefined value.
Save a session.

  # File 'lib/llm/bot.rb', line 222

  def serialize(path:)
    ::File.binwrite path, LLM.json.dump(self)
  end
#deserialize(path: nil, string: nil) ⇒ LLM::Session Also known as: restore
Restore a session.

  # File 'lib/llm/bot.rb', line 236

  def deserialize(path: nil, string: nil)
    payload = if path.nil? and string.nil?
                raise ArgumentError, "a path or string is required"
              elsif path
                ::File.binread(path)
              else
                string
              end
    ses = LLM.json.load(payload)
    @messages.concat [*ses["messages"]].map { (_1) }
    self
  end
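The path: and string: keywords are individually optional, but at least one is required; passing neither raises ArgumentError. The argument handling on its own, as a small sketch:

```ruby
# Mirrors the input selection in #deserialize: read from a path when given,
# fall back to a raw string, and reject a call that supplies neither.
def payload_from(path: nil, string: nil)
  if path.nil? && string.nil?
    raise ArgumentError, "a path or string is required"
  elsif path
    File.binread(path)
  else
    string
  end
end
```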
#cost ⇒ LLM::Cost
Returns an approximate cost for a given session based on the provider and model.

  # File 'lib/llm/bot.rb', line 254

  def cost
    cost = LLM.registry_for(llm).cost(model:)
    LLM::Cost.new(
      (cost.input.to_f / 1_000_000.0) * usage.input_tokens,
      (cost.output.to_f / 1_000_000.0) * usage.output_tokens
    )
  end
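Provider price sheets quote rates per million tokens, which is why the code divides by 1,000,000 before multiplying by the token counts. The arithmetic in isolation (the rates used in the test are made up, not real provider pricing):

```ruby
# Cost = (rate per million tokens / 1_000_000) * tokens, computed
# separately for input and output, as in LLM::Session#cost.
def cost_pair(input_rate_per_m, output_rate_per_m, input_tokens, output_tokens)
  [
    (input_rate_per_m / 1_000_000.0) * input_tokens,
    (output_rate_per_m / 1_000_000.0) * output_tokens
  ]
end
```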