Class: LLM::Bot
Inherits: Object
Defined in:
lib/llm/bot.rb,
lib/llm/bot/builder.rb,
lib/llm/bot/conversable.rb
Overview
LLM::Bot provides an object that can maintain a conversation. A conversation can use the chat completions API, which all LLM providers support, or the responses API, which currently only OpenAI supports.
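A short example of a conversation (a minimal sketch; the LLM.openai constructor and its key: option are assumed from llm.rb's top-level API and may differ in your version):

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["OPENAI_API_KEY"]) # assumed provider constructor
bot = LLM::Bot.new(llm)
bot.chat "Hello, world" # sends a user message
bot.messages.each { |message| puts "[#{message.role}] #{message.content}" }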
Defined Under Namespace
Modules: Prompt
Instance Attribute Summary
- #messages ⇒ LLM::Buffer<LLM::Message> (readonly)
  Returns an Enumerable for the messages in a conversation.
Instance Method Summary
- #inspect ⇒ String
- #chat(prompt = nil, params = {}) ⇒ Object
  Maintain a conversation via the chat completions API.
- #respond(prompt = nil, params = {}) ⇒ Object
  Maintain a conversation via the responses API.
- #initialize(provider, params = {}) ⇒ Bot (constructor)
  Returns a new instance of Bot.
- #functions ⇒ Array<LLM::Function>
  Returns an array of functions that can be called.
- #drain ⇒ Array<LLM::Message> (also: #flush)
  Drains the buffer and returns all messages as an array.
- #usage ⇒ LLM::Object
  Returns token usage for the latest assistant message, or an empty object if there are no assistant messages.
Constructor Details
#initialize(provider, params = {}) ⇒ Bot
Returns a new instance of Bot.
Instance Attribute Details
#messages ⇒ LLM::Buffer<LLM::Message> (readonly)
Returns an Enumerable for the messages in a conversation.

# File 'lib/llm/bot.rb', line 38

def messages
  @messages
end
Instance Method Details
#inspect ⇒ String
# File 'lib/llm/bot.rb', line 111

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@provider=#{@provider.class}, @params=#{@params.inspect}, " \
  "@messages=#{@messages.inspect}>"
end
#chat(prompt, params = {}) ⇒ LLM::Bot
#chat(prompt, params) { ... } ⇒ LLM::Buffer
Maintain a conversation via the chat completions API.

# File 'lib/llm/bot.rb', line 69

def chat(prompt = nil, params = {})
  if block_given?
    params = prompt
    yield Prompt::Completion.new(self, params)
  elsif prompt.nil?
    raise ArgumentError, "wrong number of arguments (given 0, expected 1)"
  else
    params = {role: :user}.merge!(params)
    tap { async_completion(prompt, params) }
  end
end
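Without a block, each call sends a single message and returns the bot, so calls can be chained. With a block, several messages are batched into one request. A sketch (the system and user helpers on the yielded Prompt::Completion are assumed from llm.rb's prompt DSL):

bot.chat("What is Ruby?") # single user message; returns the bot

bot.chat do |prompt|
  prompt.system "You are a terse assistant."
  prompt.user "What is Ruby?"
end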
#respond(prompt, params = {}) ⇒ LLM::Bot
#respond(prompt, params) { ... } ⇒ LLM::Buffer
Maintain a conversation via the responses API.

# File 'lib/llm/bot.rb', line 96

def respond(prompt = nil, params = {})
  if block_given?
    params = prompt
    yield Prompt::Respond.new(self, params)
  elsif prompt.nil?
    raise ArgumentError, "wrong number of arguments (given 0, expected 1)"
  else
    params = {role: :user}.merge!(params)
    tap { async_response(prompt, params) }
  end
end
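respond follows the same calling conventions as chat but drives the responses API, so the provider must be OpenAI. A sketch (block helpers assumed as above):

bot.respond "Summarize the conversation so far"

bot.respond do |prompt|
  prompt.system "Answer in one sentence."
  prompt.user "What did we discuss?"
end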
#functions ⇒ Array<LLM::Function>
Returns an array of functions that can be called.

# File 'lib/llm/bot.rb', line 120

def functions
  messages
    .select(&:assistant?)
    .flat_map(&:functions)
    .select(&:pending?)
end
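When the latest assistant message requests tool calls, they appear here until they are resolved. A sketch of a resolution loop (LLM::Function#call and replying with the results via #chat are assumed from llm.rb's function-calling documentation):

# Run each pending tool call and send the results back,
# repeating until the assistant stops requesting tools.
bot.chat bot.functions.map(&:call) until bot.functions.empty?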
#drain ⇒ Array<LLM::Message> (also: #flush)
Drains the buffer and returns all messages as an array.

# File 'lib/llm/bot.rb', line 134

def drain
  messages.drain
end
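For example (a sketch; draining the buffer is assumed to force any lazily queued requests to complete before the array is returned):

transcript = bot.drain # or: bot.flush
transcript.each { |message| puts "#{message.role}: #{message.content}" }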
#usage ⇒ LLM::Object
Returns token usage for the latest assistant message, or an empty object if there are no assistant messages.

# File 'lib/llm/bot.rb', line 146

def usage
  messages.find(&:assistant?)&.usage || LLM::Object.from_hash({})
end
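For example (a sketch; field names such as total_tokens are assumed to mirror the provider's usage object and can vary by provider):

bot.chat "Hello"
stats = bot.usage
puts stats.total_tokens # assumed field name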