Class: LLM::Context
- Inherits: Object
- Includes: Deserializer
- Defined in: lib/llm/context.rb, lib/llm/context/deserializer.rb
Overview
LLM::Context represents a stateful interaction with an LLM, including conversation history, tools, execution state, and cost tracking. It evolves over time as the system runs: more than just prompt context, it is an active, evolving execution boundary for LLM workflows.
A context can use the chat completions API, which all providers support, or the responses API, which currently only OpenAI supports.
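As a sketch of typical usage (the `LLM.openai` constructor and the model name below are assumptions; adapt them to your provider):

```ruby
require "llm"

# Sketch only: LLM.openai and the model name are assumptions;
# substitute the provider and model you actually use.
llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
ctx = LLM::Context.new(llm, model: "gpt-4o-mini")

# The chat completions API works with every provider.
res = ctx.talk "Summarize Hamlet in one sentence."
puts res.choices[-1].content

# The responses API is currently OpenAI-only.
res = ctx.respond "Now in one word."
```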
Defined Under Namespace
Modules: Deserializer
Instance Attribute Summary
- #messages ⇒ LLM::Buffer<LLM::Message> (readonly): Returns the accumulated message history for this context.
- #llm ⇒ LLM::Provider (readonly): Returns the provider backing this context.
Instance Method Summary
- #context_window ⇒ Integer: Returns the model's context window.
- #talk(prompt, params = {}) ⇒ LLM::Response (also: #chat): Interact with the context via the chat completions API.
- #respond(prompt, params = {}) ⇒ LLM::Response: Interact with the context via the responses API.
- #inspect ⇒ String
- #functions ⇒ Array<LLM::Function>: Returns an array of functions that can be called.
- #usage ⇒ LLM::Object?: Returns token usage for the latest assistant message, or nil when there is none.
- #prompt(&b) ⇒ LLM::Prompt (also: #build_prompt): Build a role-aware prompt for a single request.
- #image_url(url) ⇒ LLM::Object: Recognize an object as a URL to an image.
- #local_file(path) ⇒ LLM::Object: Recognize an object as a local file.
- #remote_file(res) ⇒ LLM::Object: Recognize an object as a remote file.
- #tracer ⇒ LLM::Tracer: Returns an LLM tracer.
- #model ⇒ String: Returns the model the context is actively using.
- #to_h ⇒ Hash
- #to_json ⇒ String
- #serialize(path:) ⇒ void (also: #save): Save the current context state.
- #deserialize(path: nil, string: nil) ⇒ LLM::Context (also: #restore): Restore a saved context state.
- #cost ⇒ LLM::Cost: Returns an approximate cost for the context, based on the provider and model.
- #initialize(llm, params = {}) ⇒ Context (constructor): Returns a new instance of Context.
Instance Attribute Details
#messages ⇒ LLM::Buffer<LLM::Message> (readonly)
Returns the accumulated message history for this context.

```ruby
# File 'lib/llm/context.rb', line 38

def messages
  @messages
end
```
#llm ⇒ LLM::Provider (readonly)
Returns the provider backing this context.

```ruby
# File 'lib/llm/context.rb', line 43

def llm
  @llm
end
```
Instance Method Details
#context_window ⇒ Integer
This method returns 0 when the provider or model can't be found in the registry.
Returns the model's context window: the maximum number of input and output tokens a model can consider in a single request.
```ruby
# File 'lib/llm/context.rb', line 277

def context_window
  LLM
    .registry_for(llm)
    .limit(model:)
    .context
rescue LLM::NoSuchModelError, LLM::NoSuchRegistryError
  0
end
```
#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat
Interact with the context via the chat completions API. This method immediately sends a request to the LLM and returns the response.
```ruby
# File 'lib/llm/context.rb', line 72

def talk(prompt, params = {})
  params = params.merge(messages: @messages.to_a)
  params = @params.merge(params)
  res = @llm.complete(prompt, params)
  role = params[:role] || @llm.user_role
  role = @llm.tool_role if params[:role].nil? && [*prompt].grep(LLM::Function::Return).any?
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end
```
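As a sketch, per-request params are merged over the context's defaults before the request is sent (the :temperature key here is illustrative; parameter support depends on the provider):

```ruby
# Given an existing context ctx (construction elided):
res = ctx.talk "Write a haiku about Ruby.", temperature: 0.9
# #chat is an alias of #talk:
ctx.chat "Now translate it into French."
```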
#respond(prompt, params = {}) ⇒ LLM::Response
Note: not all LLM providers support this API.
Interact with the context via the responses API. This method immediately sends a request to the LLM and returns the response.
```ruby
# File 'lib/llm/context.rb', line 97

def respond(prompt, params = {})
  res_id = @messages.find(&:assistant?)&.response&.response_id
  params = params.merge(previous_response_id: res_id, input: @messages.to_a).compact
  params = @params.merge(params)
  res = @llm.responses.create(prompt, params)
  role = params[:role] || @llm.user_role
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end
```
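A sketch of multi-turn use: #respond extracts previous_response_id from the latest assistant message, so server-side conversation state is threaded automatically:

```ruby
# Given an existing context ctx backed by OpenAI (construction elided):
ctx.respond "Name three prime numbers."
ctx.respond "Double each of them."  # continues the same server-side thread
```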
#inspect ⇒ String
```ruby
# File 'lib/llm/context.rb', line 110

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@llm=#{@llm.class}, @params=#{@params.inspect}, " \
  "@messages=#{@messages.inspect}>"
end
```
#functions ⇒ Array<LLM::Function>
Returns an array of functions that can be called.

```ruby
# File 'lib/llm/context.rb', line 119

def functions
  @messages
    .select(&:assistant?)
    .flat_map do |msg|
      fns = msg.functions.select(&:pending?)
      fns.each do |fn|
        fn.tracer = tracer
        fn.model = msg.model
      end
    end.extend(LLM::Function::Array)
end
```
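A sketch of a tool-call loop built on this method. Function#call, and passing the returns back to #talk, are assumptions inferred from the LLM::Function::Return handling shown in #talk:

```ruby
# Given a context ctx whose params include tool definitions (elided):
ctx.talk "What is the weather in Paris?"
until ctx.functions.empty?
  returns = ctx.functions.map(&:call)   # run each pending function
  ctx.talk returns                      # feed the returns back to the LLM
end
```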
#usage ⇒ LLM::Object?
Returns token usage accumulated in this context. Usage is read from the latest assistant message; nil is returned when there is no assistant message.

```ruby
# File 'lib/llm/context.rb', line 138

def usage
  @messages.find(&:assistant?)&.usage
end
```
#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt
Build a role-aware prompt for a single request.
Prefer this method over #build_prompt. The older method name is kept for backward compatibility.
```ruby
# File 'lib/llm/context.rb', line 157

def prompt(&b)
  LLM::Prompt.new(@llm, &b)
end
```
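A sketch of building a role-aware prompt. The block DSL (system/user writer methods) is an assumption; check LLM::Prompt for the exact interface:

```ruby
req = ctx.prompt do |p|
  p.system "You are a terse assistant."   # assumed DSL method
  p.user   "Define memoization."          # assumed DSL method
end
ctx.talk req
```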
#image_url(url) ⇒ LLM::Object
Recognize an object as a URL to an image.

```ruby
# File 'lib/llm/context.rb', line 168

def image_url(url)
  LLM::Object.from(value: url, kind: :image_url)
end
```
#local_file(path) ⇒ LLM::Object
Recognize an object as a local file.

```ruby
# File 'lib/llm/context.rb', line 178

def local_file(path)
  LLM::Object.from(value: LLM.File(path), kind: :local_file)
end
```
#remote_file(res) ⇒ LLM::Object
Recognize an object as a remote file.

```ruby
# File 'lib/llm/context.rb', line 188

def remote_file(res)
  LLM::Object.from(value: res, kind: :remote_file)
end
```
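A sketch combining the recognition helpers. The URL and path are placeholders, and passing an array prompt is inferred from the [*prompt] handling in #talk:

```ruby
ctx.talk ["Describe this image", ctx.image_url("https://example.com/cat.png")]
ctx.talk ["Summarize this file", ctx.local_file("report.pdf")]
```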
#tracer ⇒ LLM::Tracer
Returns an LLM tracer.

```ruby
# File 'lib/llm/context.rb', line 195

def tracer
  @llm.tracer
end
```
#model ⇒ String
Returns the model the context is actively using.

```ruby
# File 'lib/llm/context.rb', line 202

def model
  @messages.find(&:assistant?)&.model || @params[:model]
end
```
#to_h ⇒ Hash
```ruby
# File 'lib/llm/context.rb', line 208

def to_h
  {model:, messages:}
end
```
#to_json ⇒ String
```ruby
# File 'lib/llm/context.rb', line 214

def to_json(...)
  {schema_version: 1}.merge!(to_h).to_json(...)
end
```
#serialize(path:) ⇒ void Also known as: save
This method returns an undefined value.
Save the current context state.

```ruby
# File 'lib/llm/context.rb', line 228

def serialize(path:)
  ::File.binwrite path, LLM.json.dump(self)
end
```
#deserialize(path: nil, string: nil) ⇒ LLM::Context Also known as: restore
Restore a saved context state.

```ruby
# File 'lib/llm/context.rb', line 242

def deserialize(path: nil, string: nil)
  payload = if path.nil? and string.nil?
    raise ArgumentError, "a path or string is required"
  elsif path
    ::File.binread(path)
  else
    string
  end
  ctx = LLM.json.load(payload)
  @messages.concat [*ctx["messages"]].map { (_1) }
  self
end
```
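The on-disk shape can be sketched in plain Ruby. The field names (schema_version, model, messages) come from #to_json and #to_h above; the message shape shown here is an illustrative assumption:

```ruby
require "json"
require "tmpdir"

# Build the same top-level structure #to_json emits, write it, read it back.
state = {
  schema_version: 1,
  model: "gpt-4o-mini",                          # assumed model name
  messages: [{role: "user", content: "Hello"}]   # assumed message shape
}
path = File.join(Dir.mktmpdir, "context.json")
File.binwrite path, JSON.dump(state)

restored = JSON.parse(File.binread(path))
puts restored["model"] # => "gpt-4o-mini"
```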
#cost ⇒ LLM::Cost
Returns an approximate cost for the context, based on the provider and model.

```ruby
# File 'lib/llm/context.rb', line 260

def cost
  return LLM::Cost.new(0, 0) unless usage
  cost = LLM.registry_for(llm).cost(model:)
  LLM::Cost.new(
    (cost.input.to_f / 1_000_000.0) * usage.input_tokens,
    (cost.output.to_f / 1_000_000.0) * usage.output_tokens
  )
end
```
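A worked example of the formula above in plain Ruby, using hypothetical per-million-token prices (not real provider pricing):

```ruby
input_price  = 2.50     # USD per 1M input tokens (hypothetical)
output_price = 10.00    # USD per 1M output tokens (hypothetical)
input_tokens  = 12_000
output_tokens = 800

input_cost  = (input_price  / 1_000_000.0) * input_tokens    # ~0.03
output_cost = (output_price / 1_000_000.0) * output_tokens   # ~0.008
total = input_cost + output_cost
puts format("$%.4f", total) # => "$0.0380"
```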