Class: LLM::Context

Inherits:
Object
Includes:
Deserializer
Defined in:
lib/llm/context.rb,
lib/llm/context/deserializer.rb

Overview

LLM::Context represents a stateful interaction with an LLM, including conversation history, tools, execution state, and cost tracking. It evolves over time as the system runs.

Context is the stateful environment in which an LLM operates. This is not just prompt context; it is an active, evolving execution boundary for LLM workflows.

A context can use the chat completions API that all providers support or the responses API that currently only OpenAI supports.

Examples:

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)

prompt = LLM::Prompt.new(llm) do
  system "Be concise and show your reasoning briefly."
  user "If a train goes 60 mph for 1.5 hours, how far does it travel?"
  user "Now double the speed for the same time."
end

ctx.talk(prompt)
ctx.messages.each { |m| puts "[#{m.role}] #{m.content}" }

Defined Under Namespace

Modules: Deserializer

Instance Attribute Summary

Instance Method Summary

Methods included from Deserializer

#deserialize_message

Constructor Details

#initialize(llm, params = {}) ⇒ Context

Returns a new instance of Context.

Parameters:

  • llm (LLM::Provider)

    A provider

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Options Hash (params):

  • :model (String)

    Defaults to the provider's default model

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil



# File 'lib/llm/context.rb', line 54

def initialize(llm, params = {})
  @llm = llm
  @params = {model: llm.default_model, schema: nil}.compact.merge!(params)
  @messages = LLM::Buffer.new(llm)
end
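The merge order above means caller-supplied params always win over the defaults, and nil defaults (such as :schema) are dropped by #compact before the merge. A plain-hash sketch of that behavior (the model names are illustrative, not real defaults):

```ruby
# Plain-hash sketch of the constructor's parameter merge. The nil :schema
# default is removed by #compact, then caller params override defaults.
defaults = {model: "default-model", schema: nil}.compact
params   = defaults.merge(model: "custom-model", temperature: 0)
params          # => {model: "custom-model", temperature: 0}
params[:schema] # => nil (the key is absent unless the caller sets it)
```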

Instance Attribute Details

#messages ⇒ LLM::Buffer<LLM::Message> (readonly)

Returns the accumulated message history for this context



# File 'lib/llm/context.rb', line 38

def messages
  @messages
end

#llm ⇒ LLM::Provider (readonly)

Returns a provider

Returns:

  • (LLM::Provider)



# File 'lib/llm/context.rb', line 43

def llm
  @llm
end

Instance Method Details

#context_window ⇒ Integer

Note:

This method returns 0 when the provider or model cannot be found in the registry.

Returns the model's context window. The context window is the maximum amount of input and output tokens a model can consider in a single request.

Returns:

  • (Integer)


# File 'lib/llm/context.rb', line 277

def context_window
  LLM
    .registry_for(llm)
    .limit(model:)
    .context
rescue LLM::NoSuchModelError, LLM::NoSuchRegistryError
  0
end
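Because a missing provider or model yields 0 rather than an exception, callers can treat 0 as "unknown" when budgeting input tokens. A hypothetical guard built on that convention (the helper name is illustrative, not part of the API):

```ruby
# Hypothetical budget guard using the 0-means-unknown convention above.
# An unknown window (0) skips the check rather than rejecting the input.
def fits_window?(estimated_tokens, window)
  window.zero? || estimated_tokens <= window
end

fits_window?(5_000, 128_000)   # => true
fits_window?(200_000, 128_000) # => false
fits_window?(5_000, 0)         # => true (window unknown, so don't reject)
```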

#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat

Interact with the context via the chat completions API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
res = ctx.talk("Hello, what is your name?")
puts res.messages[0].content

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The params, including optional :role (defaults to :user), :stream, :tools, :schema, etc.

Returns:

  • (LLM::Response)



# File 'lib/llm/context.rb', line 72

def talk(prompt, params = {})
  params = params.merge(messages: @messages.to_a)
  params = @params.merge(params)
  res = @llm.complete(prompt, params)
  role = params[:role] || @llm.user_role
  role = @llm.tool_role if params[:role].nil? && [*prompt].grep(LLM::Function::Return).any?
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end
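The source above resolves the message role in three steps: an explicit :role always wins; otherwise tool results are sent under the provider's tool role, and everything else under the user role. A plain-Ruby sketch of that logic, with a Struct standing in for LLM::Function::Return:

```ruby
# Sketch of #talk's role resolution. ToolReturn is an illustrative
# stand-in for LLM::Function::Return, not the real class.
ToolReturn = Struct.new(:value)

def resolve_role(prompt, explicit, user_role: :user, tool_role: :tool)
  return explicit if explicit
  [*prompt].grep(ToolReturn).any? ? tool_role : user_role
end

resolve_role("hi", nil)                 # => :user
resolve_role([ToolReturn.new(42)], nil) # => :tool
resolve_role("hi", :system)             # => :system
```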

#respond(prompt, params = {}) ⇒ LLM::Response

Note:

Not all LLM providers support this API

Interact with the context via the responses API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
res = ctx.respond("What is the capital of France?")
puts res.output_text

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The params, including optional :role (defaults to :user), :stream, :tools, :schema, etc.

Returns:

  • (LLM::Response)



# File 'lib/llm/context.rb', line 97

def respond(prompt, params = {})
  res_id = @messages.find(&:assistant?)&.response&.response_id
  params = params.merge(previous_response_id: res_id, input: @messages.to_a).compact
  params = @params.merge(params)
  res = @llm.responses.create(prompt, params)
  role = params[:role] || @llm.user_role
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end

#inspectString

Returns:

  • (String)


# File 'lib/llm/context.rb', line 110

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@llm=#{@llm.class}, @params=#{@params.inspect}, " \
  "@messages=#{@messages.inspect}>"
end

#functions ⇒ Array<LLM::Function>

Returns an array of functions that can be called

Returns:

  • (Array<LLM::Function>)



# File 'lib/llm/context.rb', line 119

def functions
  @messages
    .select(&:assistant?)
    .flat_map do |msg|
      fns = msg.functions.select(&:pending?)
      fns.each do |fn|
        fn.tracer = tracer
        fn.model  = msg.model
      end
    end.extend(LLM::Function::Array)
end
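The scan above walks every assistant message and collects only the tool calls that have not been answered yet. A plain-struct sketch of that filtering (Msg and Fn are illustrative stand-ins for LLM::Message and LLM::Function, not the real classes):

```ruby
# Plain-struct sketch of the pending-function scan above.
Fn  = Struct.new(:name, :pending) { def pending? = pending }
Msg = Struct.new(:role, :functions) { def assistant? = role == :assistant }

messages = [
  Msg.new(:user, []),
  Msg.new(:assistant, [Fn.new("get_weather", true), Fn.new("get_time", false)])
]
pending = messages.select(&:assistant?)
                  .flat_map { |m| m.functions.select(&:pending?) }
pending.map(&:name) # => ["get_weather"]
```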

#usage ⇒ LLM::Object?

Note:

This method returns token usage for the latest assistant message, and it returns nil when there is no assistant message.

Returns the token usage accumulated in this context.

Returns:

  • (LLM::Object, nil)



# File 'lib/llm/context.rb', line 138

def usage
  @messages.find(&:assistant?)&.usage
end

#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt

Build a role-aware prompt for a single request.

Prefer this method over #build_prompt. The older method name is kept for backward compatibility.

Examples:

prompt = ctx.prompt do
  system "Your task is to assist the user"
  user "Hello, can you assist me?"
end
ctx.talk(prompt)

Parameters:

  • b (Proc)

    A block that composes messages. If it takes one argument, it receives the prompt object. Otherwise it runs in prompt context.

Returns:

  • (LLM::Prompt)



# File 'lib/llm/context.rb', line 157

def prompt(&b)
  LLM::Prompt.new(@llm, &b)
end

#image_url(url) ⇒ LLM::Object

Recognize an object as a URL to an image

Parameters:

  • url (String)

    The URL

Returns:

  • (LLM::Object)



# File 'lib/llm/context.rb', line 168

def image_url(url)
  LLM::Object.from(value: url, kind: :image_url)
end

#local_file(path) ⇒ LLM::Object

Recognize an object as a local file

Parameters:

  • path (String)

    The path

Returns:

  • (LLM::Object)



# File 'lib/llm/context.rb', line 178

def local_file(path)
  LLM::Object.from(value: LLM.File(path), kind: :local_file)
end

#remote_file(res) ⇒ LLM::Object

Recognize an object as a remote file

Parameters:

  • res (LLM::Response)

Returns:

  • (LLM::Object)



# File 'lib/llm/context.rb', line 188

def remote_file(res)
  LLM::Object.from(value: res, kind: :remote_file)
end

#tracer ⇒ LLM::Tracer

Returns an LLM tracer

Returns:

  • (LLM::Tracer)



# File 'lib/llm/context.rb', line 195

def tracer
  @llm.tracer
end

#model ⇒ String

Returns the model a Context is actively using

Returns:

  • (String)


# File 'lib/llm/context.rb', line 202

def model
  messages.find(&:assistant?)&.model || @params[:model]
end

#to_h ⇒ Hash

Returns:

  • (Hash)


# File 'lib/llm/context.rb', line 208

def to_h
  {model:, messages:}
end

#to_json(...) ⇒ String

Returns:

  • (String)


# File 'lib/llm/context.rb', line 214

def to_json(...)
  {schema_version: 1}.merge!(to_h).to_json(...)
end
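The serialized payload therefore wraps #to_h with a schema_version key. A plain-Ruby sketch of the round-trip shape (the message fields :role and :content are an assumption about LLM::Message's JSON form, not taken from the source):

```ruby
require "json"

# Sketch of the payload shape implied by #to_json above; the message
# fields shown are assumed for illustration.
payload = {
  schema_version: 1,
  model: "some-model",
  messages: [{role: "user", content: "Hello"}]
}
restored = JSON.parse(JSON.generate(payload))
restored["schema_version"]      # => 1
restored["messages"][0]["role"] # => "user"
```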

#serialize(path:) ⇒ void Also known as: save

This method returns an undefined value.

Save the current context state

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm)
ctx.talk "Hello"
ctx.save(path: "context.json")

Raises:

  • (SystemCallError)

    Might raise a number of SystemCallError subclasses



# File 'lib/llm/context.rb', line 228

def serialize(path:)
  ::File.binwrite path, LLM.json.dump(self)
end

#deserialize(path: nil, string: nil) ⇒ LLM::Context Also known as: restore

Restore a saved context state

Parameters:

  • path (String, nil) (defaults to: nil)

    The path to a JSON file

  • string (String, nil) (defaults to: nil)

    A raw JSON string

Returns:

  • (LLM::Context)

Raises:

  • (SystemCallError)

    Might raise a number of SystemCallError subclasses



# File 'lib/llm/context.rb', line 242

def deserialize(path: nil, string: nil)
  payload = if path.nil? and string.nil?
    raise ArgumentError, "a path or string is required"
  elsif path
    ::File.binread(path)
  else
    string
  end
  ctx = LLM.json.load(payload)
  @messages.concat [*ctx["messages"]].map { deserialize_message(_1) }
  self
end

#cost ⇒ LLM::Cost

Returns an approximate cost for this context based on both the provider and the model

Returns:

  • (LLM::Cost)



# File 'lib/llm/context.rb', line 260

def cost
  return LLM::Cost.new(0, 0) unless usage
  cost = LLM.registry_for(llm).cost(model:)
  LLM::Cost.new(
    (cost.input.to_f / 1_000_000.0)  * usage.input_tokens,
    (cost.output.to_f / 1_000_000.0) * usage.output_tokens
  )
end
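The prices returned by the registry are per one million tokens, so each component of the cost is price / 1_000_000 * tokens. A worked example of that arithmetic with made-up prices ($2.50 per 1M input tokens, $10.00 per 1M output tokens) and made-up token counts:

```ruby
# Worked example of the per-million-token arithmetic above; the prices
# and token counts are made up for illustration.
input_price, output_price   = 2.50, 10.00   # USD per 1,000,000 tokens
input_tokens, output_tokens = 12_000, 3_000

input_cost  = (input_price / 1_000_000.0)  * input_tokens
output_cost = (output_price / 1_000_000.0) * output_tokens
input_cost.round(4)                 # => 0.03
(input_cost + output_cost).round(4) # => 0.06
```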