Class: LLM::Session

Inherits:
Object
Includes:
Deserializer
Defined in:
lib/llm/bot.rb,
lib/llm/session/deserializer.rb

Overview

LLM::Session provides an object that can maintain a conversation. A conversation can use the chat completions API, which every LLM provider supports, or the responses API, which currently only OpenAI supports.

Examples:

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
ses = LLM::Session.new(llm)

prompt = LLM::Prompt.new(llm) do
  system "Be concise and show your reasoning briefly."
  user "If a train goes 60 mph for 1.5 hours, how far does it travel?"
  user "Now double the speed for the same time."
end

ses.talk(prompt)
ses.messages.each { |m| puts "[#{m.role}] #{m.content}" }

Defined Under Namespace

Modules: Deserializer

Instance Attribute Summary collapse

Instance Method Summary collapse

Methods included from Deserializer

#deserialize_message

Constructor Details

#initialize(llm, params = {}) ⇒ Session

Returns a new instance of Session.

Parameters:

  • llm (LLM::Provider)

    A provider

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Options Hash (params):

  • :model (String)

    Defaults to the provider's default model

  • :tools (Array<LLM::Function>, nil)

    Defaults to nil



# File 'lib/llm/bot.rb', line 48

def initialize(llm, params = {})
  @llm = llm
  @params = {model: llm.default_model, schema: nil}.compact.merge!(params)
  @messages = LLM::Buffer.new(llm)
end
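The defaults hash is compacted (dropping nil entries) before the caller's params are merged on top, so caller-supplied values always win. A minimal sketch of that merge order using plain hashes (the model names are hypothetical):

```ruby
# Defaults with nil values are dropped by #compact, then caller params
# are merged last so they override any default.
defaults = {model: "default-model", schema: nil}.compact
params   = defaults.merge(model: "custom-model", temperature: 0.2)

params # => {model: "custom-model", temperature: 0.2}
```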

Instance Attribute Details

#messages ⇒ LLM::Buffer&lt;LLM::Message&gt; (readonly)

Returns an Enumerable for the messages in a conversation



# File 'lib/llm/bot.rb', line 32

def messages
  @messages
end

#llm ⇒ LLM::Provider (readonly)

Returns a provider

Returns:

  • (LLM::Provider)



# File 'lib/llm/bot.rb', line 37

def llm
  @llm
end

Instance Method Details

#context_window ⇒ Integer

Note:

This method returns 0 when the provider or model can't be found within the registry.

Returns the model's context window: the maximum number of input and output tokens the model can consider in a single request.

Returns:

  • (Integer)


# File 'lib/llm/bot.rb', line 270

def context_window
  LLM
    .registry_for(llm)
    .limit(model:)
    .context
rescue LLM::NoSuchModelError, LLM::NoSuchProviderError
  0
end
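The rescue clause is what produces the documented 0 fallback. A self-contained sketch of the same lookup-or-zero pattern, with a hypothetical error class standing in for LLM::NoSuchModelError and a hard-coded lookup standing in for the registry:

```ruby
# Hypothetical stand-in for the registry's not-found error.
NoSuchModelError = Class.new(StandardError)

def context_window_for(model)
  raise NoSuchModelError, model unless model == "known-model"
  128_000
rescue NoSuchModelError
  0 # an unknown provider or model degrades to 0 instead of raising
end

context_window_for("known-model")   # => 128000
context_window_for("missing-model") # => 0
```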

#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat

Maintain a conversation via the chat completions API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ses = LLM::Session.new(llm)
res = ses.talk("Hello, what is your name?")
puts res.messages[0].content

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The params, including the optional :role (defaults to :user), :stream, :tools, and :schema keys, among others

Returns:

  • (LLM::Response)



# File 'lib/llm/bot.rb', line 66

def talk(prompt, params = {})
  params = params.merge(messages: @messages.to_a)
  params = @params.merge(params)
  res = @llm.complete(prompt, params)
  role = params[:role] || @llm.user_role
  role = @llm.tool_role if params[:role].nil? && [*prompt].grep(LLM::Function::Return).any?
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end
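The two role assignments implement a small precedence rule: an explicit :role always wins; otherwise a prompt containing tool-call results is attributed to the provider's tool role, and everything else to the user role. A standalone sketch of that rule, with a hypothetical ToolReturn marker standing in for LLM::Function::Return:

```ruby
# Hypothetical marker class standing in for LLM::Function::Return.
ToolReturn = Class.new

# Explicit role wins; tool results map to the tool role; default is user.
def role_for(prompt, explicit_role, user_role: :user, tool_role: :tool)
  return explicit_role if explicit_role
  [*prompt].grep(ToolReturn).any? ? tool_role : user_role
end

role_for("Hello", nil)          # => :user
role_for([ToolReturn.new], nil) # => :tool
role_for("Hello", :system)      # => :system
```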

#respond(prompt, params = {}) ⇒ LLM::Response

Note:

Not all LLM providers support this API

Maintain a conversation via the responses API. This method immediately sends a request to the LLM and returns the response.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ses = LLM::Session.new(llm)
res = ses.respond("What is the capital of France?")
puts res.output_text

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The params, including the optional :role (defaults to :user), :stream, :tools, and :schema keys, among others

Returns:

  • (LLM::Response)



# File 'lib/llm/bot.rb', line 91

def respond(prompt, params = {})
  res_id = @messages.find(&:assistant?)&.response&.response_id
  params = params.merge(previous_response_id: res_id, input: @messages.to_a).compact
  params = @params.merge(params)
  res = @llm.responses.create(prompt, params)
  role = params[:role] || @llm.user_role
  @messages.concat LLM::Prompt === prompt ? prompt.to_a : [LLM::Message.new(role, prompt)]
  @messages.concat [res.choices[-1]]
  res
end

#inspect ⇒ String

Returns:

  • (String)


# File 'lib/llm/bot.rb', line 104

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@llm=#{@llm.class}, @params=#{@params.inspect}, " \
  "@messages=#{@messages.inspect}>"
end

#functions ⇒ Array&lt;LLM::Function&gt;

Returns an array of functions that can be called

Returns:

  • (Array&lt;LLM::Function&gt;)



# File 'lib/llm/bot.rb', line 113

def functions
  @messages
    .select(&:assistant?)
    .flat_map do |msg|
      fns = msg.functions.select(&:pending?)
      fns.each do |fn|
        fn.tracer = tracer
        fn.model  = msg.model
      end
    end
end
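Only assistant messages can carry tool calls, and only calls that have not yet been answered (pending?) are returned. A sketch of that selection with Struct stand-ins for LLM::Message and LLM::Function (all names hypothetical):

```ruby
# Hypothetical stand-ins for LLM::Message and LLM::Function.
Fn  = Struct.new(:name, :pending) { def pending? = pending }
Msg = Struct.new(:role, :functions) { def assistant? = role == :assistant }

messages = [
  Msg.new(:user, []),
  Msg.new(:assistant, [Fn.new("search", true), Fn.new("fetch", false)]),
  Msg.new(:assistant, [Fn.new("render", true)])
]

# Only assistant messages contribute, and only unanswered calls survive.
pending = messages.select(&:assistant?)
                  .flat_map { |m| m.functions.select(&:pending?) }
pending.map(&:name) # => ["search", "render"]
```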

#usage ⇒ LLM::Object

Note:

This method returns token usage for the latest assistant message, and it returns an empty object when there are no assistant messages.

Returns token usage for the conversation.

Returns:

  • (LLM::Object)



# File 'lib/llm/bot.rb', line 132

def usage
  @messages.find(&:assistant?)&.usage || LLM::Object.from({})
end

#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt

Build a role-aware prompt for a single request.

Prefer this method over #build_prompt. The older method name is kept for backward compatibility.

Examples:

prompt = ses.prompt do
  system "Your task is to assist the user"
  user "Hello, can you assist me?"
end
ses.talk(prompt)

Parameters:

  • b (Proc)

    A block that composes messages. If it takes one argument, it receives the prompt object. Otherwise it runs in prompt context.

Returns:

  • (LLM::Prompt)



# File 'lib/llm/bot.rb', line 151

def prompt(&b)
  LLM::Prompt.new(@llm, &b)
end
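The block's arity determines how it runs: a one-argument block receives the prompt object explicitly, while a zero-argument block is evaluated in the prompt's context so system/user resolve as builder methods. A hypothetical builder (PromptSketch is not part of the library) showing that dispatch:

```ruby
# Hypothetical builder demonstrating arity-based block dispatch.
class PromptSketch
  attr_reader :lines

  def initialize(&b)
    @lines = []
    # One-argument blocks get the builder; zero-argument blocks run
    # inside it via instance_exec, per the documented behavior.
    b.arity == 1 ? b.call(self) : instance_exec(&b)
  end

  def system(text) = @lines << [:system, text]
  def user(text)   = @lines << [:user, text]
end

explicit = PromptSketch.new { |p| p.user "hi" }
implicit = PromptSketch.new { user "hi" }
explicit.lines == implicit.lines # => true
```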

#image_url(url) ⇒ LLM::Object

Recognize an object as a URL to an image

Parameters:

  • url (String)

    The URL

Returns:

  • (LLM::Object)



# File 'lib/llm/bot.rb', line 162

def image_url(url)
  LLM::Object.from(value: url, kind: :image_url)
end

#local_file(path) ⇒ LLM::Object

Recognize an object as a local file

Parameters:

  • path (String)

    The path

Returns:

  • (LLM::Object)



# File 'lib/llm/bot.rb', line 172

def local_file(path)
  LLM::Object.from(value: LLM.File(path), kind: :local_file)
end

#remote_file(res) ⇒ LLM::Object

Recognize an object as a remote file

Parameters:

  • res

    A response object that references a remote file

Returns:

  • (LLM::Object)



# File 'lib/llm/bot.rb', line 182

def remote_file(res)
  LLM::Object.from(value: res, kind: :remote_file)
end

#tracer ⇒ LLM::Tracer

Returns an LLM tracer

Returns:

  • (LLM::Tracer)



# File 'lib/llm/bot.rb', line 189

def tracer
  @llm.tracer
end

#model ⇒ String

Returns the model a Session is actively using

Returns:

  • (String)


# File 'lib/llm/bot.rb', line 196

def model
  messages.find(&:assistant?)&.model || @params[:model]
end

#to_h ⇒ Hash

Returns:

  • (Hash)


# File 'lib/llm/bot.rb', line 202

def to_h
  {model:, messages:}
end

#to_json ⇒ String

Returns:

  • (String)


# File 'lib/llm/bot.rb', line 208

def to_json(...)
  {schema_version: 1}.merge!(to_h).to_json(...)
end
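Merging {schema_version: 1} in front of the session hash stamps every serialized session with a format version, which lets #deserialize evolve the on-disk layout later. The same pattern with plain hashes (the model name is hypothetical):

```ruby
require "json"

session_hash = {model: "some-model", messages: []}
json = {schema_version: 1}.merge(session_hash).to_json

JSON.parse(json)["schema_version"] # => 1
JSON.parse(json)["model"]          # => "some-model"
```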

#serialize(path:) ⇒ void Also known as: save

This method returns an undefined value.

Save a session

Examples:

llm = LLM.openai(key: ENV["KEY"])
ses = LLM::Session.new(llm)
ses.talk "Hello"
ses.save(path: "session.json")

Raises:

  • (SystemCallError)

    Might raise a number of SystemCallError subclasses



# File 'lib/llm/bot.rb', line 222

def serialize(path:)
  ::File.binwrite path, LLM.json.dump(self)
end

#deserialize(path: nil, string: nil) ⇒ LLM::Session Also known as: restore

Restore a session

Parameters:

  • path (String, nil) (defaults to: nil)

    The path to a JSON file

  • string (String, nil) (defaults to: nil)

    A raw JSON string

Returns:

  • (LLM::Session)

Raises:

  • (SystemCallError)

    Might raise a number of SystemCallError subclasses



# File 'lib/llm/bot.rb', line 236

def deserialize(path: nil, string: nil)
  payload = if path.nil? and string.nil?
    raise ArgumentError, "a path or string is required"
  elsif path
    ::File.binread(path)
  else
    string
  end
  ses = LLM.json.load(payload)
  @messages.concat [*ses["messages"]].map { deserialize_message(_1) }
  self
end
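The payload resolution accepts exactly one of path or string: neither raises ArgumentError, a path is read from disk, and a string is used verbatim. The same branch logic in isolation (resolve_payload is a hypothetical helper, not part of the library):

```ruby
# Mirrors the payload resolution at the top of #deserialize.
def resolve_payload(path: nil, string: nil)
  if path.nil? && string.nil?
    raise ArgumentError, "a path or string is required"
  elsif path
    ::File.binread(path)
  else
    string
  end
end

resolve_payload(string: '{"messages":[]}') # => '{"messages":[]}'
```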

#costLLM::Cost

Returns an approximate cost for the session, based on both the provider and the model

Returns:

  • (LLM::Cost)



# File 'lib/llm/bot.rb', line 254

def cost
  cost = LLM.registry_for(llm).cost(model:)
  LLM::Cost.new(
    (cost.input.to_f / 1_000_000.0)  * usage.input_tokens,
    (cost.output.to_f / 1_000_000.0) * usage.output_tokens
  )
end
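Provider rates are quoted per million tokens, so the arithmetic divides each rate by 1_000_000 before multiplying by the token counts from #usage. A worked example with hypothetical rates (not real provider pricing):

```ruby
input_rate_per_1m  = 3.0   # USD per 1M input tokens (hypothetical)
output_rate_per_1m = 15.0  # USD per 1M output tokens (hypothetical)
input_tokens, output_tokens = 12_000, 800

input_cost  = (input_rate_per_1m / 1_000_000.0)  * input_tokens
output_cost = (output_rate_per_1m / 1_000_000.0) * output_tokens

input_cost.round(6)  # => 0.036
output_cost.round(6) # => 0.012
```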