Class: LLM::Session
- Inherits: Object
- Defined in: lib/llm/bot.rb
Overview
LLM::Session provides an object that can maintain a conversation. A conversation can use the chat completions API that all LLM providers support or the responses API that currently only OpenAI supports.
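The conversation-keeping pattern described above can be sketched in plain Ruby. This is not the llm.rb implementation; `FakeProvider`, `MiniSession`, and `Message` are illustrative stand-ins that show how a session accumulates a transcript around each request.

```ruby
# Minimal sketch of the conversation pattern LLM::Session implements.
# FakeProvider stands in for a real LLM provider; it is not part of llm.rb.
Message = Struct.new(:role, :content)

class FakeProvider
  # Echoes the prompt back, standing in for a chat completions API call.
  def complete(prompt, messages:)
    Message.new(:assistant, "echo: #{prompt}")
  end
end

class MiniSession
  attr_reader :messages

  def initialize(provider)
    @provider = provider
    @messages = []
  end

  # Mirrors the shape of LLM::Session#talk: send a request, then append
  # both the user prompt and the assistant reply to the transcript.
  def talk(prompt)
    res = @provider.complete(prompt, messages: @messages)
    @messages << Message.new(:user, prompt) << res
    res
  end
end

session = MiniSession.new(FakeProvider.new)
session.talk("hello")
```

Each call to `talk` grows the transcript by two messages (the user prompt and the assistant reply), which is what lets later requests carry the full conversation.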
Instance Attribute Summary collapse
- #messages ⇒ LLM::Buffer<LLM::Message> (readonly)
  Returns an Enumerable for the messages in a conversation.
Instance Method Summary collapse
- #model ⇒ String
  Returns the model a Session is actively using.
- #talk(prompt, params = {}) ⇒ LLM::Response (also: #chat)
  Maintain a conversation via the chat completions API.
- #respond(prompt, params = {}) ⇒ LLM::Response
  Maintain a conversation via the responses API.
- #inspect ⇒ String
- #functions ⇒ Array<LLM::Function>
  Returns an array of functions that can be called.
- #usage ⇒ LLM::Object
  Returns token usage for the conversation. This method returns token usage for the latest assistant message, and it returns an empty object if there are no assistant messages.
- #prompt(&b) ⇒ LLM::Prompt (also: #build_prompt)
  Build a role-aware prompt for a single request.
- #image_url(url) ⇒ LLM::Object
  Recognize an object as a URL to an image.
- #local_file(path) ⇒ LLM::Object
  Recognize an object as a local file.
- #remote_file(res) ⇒ LLM::Object
  Recognize an object as a remote file.
- #tracer ⇒ LLM::Tracer
  Returns an LLM tracer.
- #initialize(provider, params = {}) ⇒ Session constructor
  A new instance of Session.
Constructor Details
#initialize(provider, params = {}) ⇒ Session
Returns a new instance of Session.
# File 'lib/llm/bot.rb', line 40
def initialize(provider, params = {})
  @provider = provider
  @params = {model: provider.default_model, schema: nil}.compact.merge!(params)
  @messages = LLM::Buffer.new(provider)
end
Instance Attribute Details
#messages ⇒ LLM::Buffer<LLM::Message> (readonly)
Returns an Enumerable for the messages in a conversation.
# File 'lib/llm/bot.rb', line 29
def messages
  @messages
end
Instance Method Details
#model ⇒ String
Returns the model a Session is actively using.
# File 'lib/llm/bot.rb', line 189
def model
  @messages.find(&:assistant?)&.model || @params[:model]
end
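The model-resolution rule above (prefer the model recorded on an assistant message, otherwise fall back to the configured default) can be illustrated with a self-contained sketch; `HistoryItem` and the model names are illustrative, not part of llm.rb.

```ruby
# Sketch of the model-resolution rule used by LLM::Session#model.
HistoryItem = Struct.new(:role, :model) do
  def assistant?
    role == :assistant
  end
end

history  = [HistoryItem.new(:user, nil), HistoryItem.new(:assistant, "gpt-4o-mini")]
defaults = {model: "default-model"}

# Prefer the model recorded on an assistant message; otherwise fall back
# to the default configured when the Session was created.
active       = history.find(&:assistant?)&.model || defaults[:model]
fresh_active = [].find(&:assistant?)&.model || defaults[:model]
```

Before any assistant message exists, the safe-navigation chain yields `nil` and the configured default wins.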
#talk(prompt, params = {}) ⇒ LLM::Response Also known as: chat
Maintain a conversation via the chat completions API. This method immediately sends a request to the LLM and returns the response.
# File 'lib/llm/bot.rb', line 58
def talk(prompt, params = {})
  prompt, params, rest = fetch(prompt, params) # NOTE: this local's name was lost in extraction; "rest" is reconstructed
  params = params.merge(messages: [*@messages.to_a, *rest])
  params = @params.merge(params)
  res = @provider.complete(prompt, params)
  @messages.concat [LLM::Message.new(params[:role] || :user, prompt)]
  @messages.concat rest
  @messages.concat [res.choices[-1]]
  res
end
#respond(prompt, params = {}) ⇒ LLM::Response
Note: not all LLM providers support this API.
Maintain a conversation via the responses API. This method immediately sends a request to the LLM and returns the response.
# File 'lib/llm/bot.rb', line 83
def respond(prompt, params = {})
  prompt, params, rest = fetch(prompt, params) # NOTE: this local's name was lost in extraction; "rest" is reconstructed
  res_id = @messages.find(&:assistant?)&.response&.response_id
  params = params.merge(previous_response_id: res_id, input: rest).compact
  params = @params.merge(params)
  res = @provider.responses.create(prompt, params)
  @messages.concat [LLM::Message.new(params[:role] || :user, prompt)]
  @messages.concat rest
  @messages.concat [res.choices[-1]]
  res
end
#inspect ⇒ String
# File 'lib/llm/bot.rb', line 97
def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
    "@provider=#{@provider.class}, @params=#{@params.inspect}, " \
    "@messages=#{@messages.inspect}>"
end
#functions ⇒ Array<LLM::Function>
Returns an array of functions that can be called.
# File 'lib/llm/bot.rb', line 106
def functions
  @messages
    .select(&:assistant?)
    .flat_map do |msg|
      fns = msg.functions.select(&:pending?)
      fns.each do |fn|
        fn.tracer = tracer
        fn.model = msg.model
      end
    end
end
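The selection performed above (collect the not-yet-called functions from every assistant message) can be sketched with plain structs; `Fn` and `ToolMsg` are illustrative stand-ins, not llm.rb classes.

```ruby
# Sketch of the pending-function selection in LLM::Session#functions.
Fn = Struct.new(:name, :called) do
  def pending?
    !called
  end
end
ToolMsg = Struct.new(:role, :functions) do
  def assistant?
    role == :assistant
  end
end

history = [
  ToolMsg.new(:user, []),
  ToolMsg.new(:assistant, [Fn.new(:search, false), Fn.new(:fetch, true)])
]

# Only assistant messages carry function calls, and only pending
# (not-yet-invoked) functions are surfaced to the caller.
pending = history.select(&:assistant?).flat_map { |m| m.functions.select(&:pending?) }
```

`flat_map` flattens the per-message arrays into a single list, which is why `functions` returns one flat `Array<LLM::Function>`.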
#usage ⇒ LLM::Object
Returns token usage for the conversation. This method returns token usage for the latest assistant message, and it returns an empty object if there are no assistant messages.
# File 'lib/llm/bot.rb', line 125
def usage
  @messages.find(&:assistant?)&.usage || LLM::Object.from({})
end
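The fallback behaviour above is easy to miss: when no assistant message exists yet, `usage` returns an empty object rather than nil. A sketch of the same pattern using a plain Hash in place of `LLM::Object` (`UsageMsg` and the token counts are illustrative):

```ruby
# Sketch of the fallback in LLM::Session#usage: token usage comes from the
# latest assistant message, or an empty hash-like object when there is none.
UsageMsg = Struct.new(:role, :usage) do
  def assistant?
    role == :assistant
  end
end

answered   = [UsageMsg.new(:assistant, {input_tokens: 12, output_tokens: 34})]
unanswered = [UsageMsg.new(:user, nil)]

usage = answered.find(&:assistant?)&.usage || {}
empty = unanswered.find(&:assistant?)&.usage || {}
```

Callers can therefore read fields off the result unconditionally, without a nil check.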
#prompt(&b) ⇒ LLM::Prompt Also known as: build_prompt
Build a role-aware prompt for a single request.
Prefer this method over #build_prompt. The older method name is kept for backward compatibility.
# File 'lib/llm/bot.rb', line 144
def prompt(&b)
  LLM::Prompt.new(@provider, &b)
end
#image_url(url) ⇒ LLM::Object
Recognize an object as a URL to an image.
# File 'lib/llm/bot.rb', line 155
def image_url(url)
  LLM::Object.from(value: url, kind: :image_url)
end
#local_file(path) ⇒ LLM::Object
Recognize an object as a local file.
# File 'lib/llm/bot.rb', line 165
def local_file(path)
  LLM::Object.from(value: LLM.File(path), kind: :local_file)
end
#remote_file(res) ⇒ LLM::Object
Recognize an object as a remote file.
# File 'lib/llm/bot.rb', line 175
def remote_file(res)
  LLM::Object.from(value: res, kind: :remote_file)
end
#tracer ⇒ LLM::Tracer
Returns an LLM tracer.
# File 'lib/llm/bot.rb', line 182
def tracer
  @provider.tracer
end