Class: LLM::Function::ThreadGroup

Inherits:
Object
Defined in:
lib/llm/function/thread_group.rb

Overview

The ThreadGroup class wraps an array of Thread objects that are running LLM::Function calls concurrently. It provides a single #wait method that collects the Return values from those threads.

This class is returned by Array#spawn when you call ctx.functions.spawn on the collection returned by Context#functions. It is a lightweight wrapper that does not inherit from Ruby's built-in ThreadGroup.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, tools: [Weather, News, Stocks])
ctx.talk "Summarize the weather, headlines, and stock price."
grp = ctx.functions.spawn
# do other work while tools run...
ctx.talk(grp.wait)

Instance Method Summary

Constructor Details

#initialize(threads) ⇒ LLM::Function::ThreadGroup

Creates a new LLM::Function::ThreadGroup from an array of Thread objects.

Parameters:

  • threads (Array<Thread>)

    The array of Thread objects running LLM::Function calls.

# File 'lib/llm/function/thread_group.rb', line 39

def initialize(threads)
  @threads = threads
end

Instance Method Details

#alive? ⇒ Boolean

Returns whether any thread in the group is still alive.

This method checks if any of the threads in the group are still running. It can be useful for monitoring concurrent tool execution without blocking.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, tools: [Weather, News, Stocks])
ctx.talk "Summarize the weather, headlines, and stock price."
grp = ctx.functions.spawn
while grp.alive?
  puts "Tools are still running..."
  sleep 1
end
ctx.talk(grp.wait)

Returns:

  • (Boolean)

    Returns true if any thread in the group is still alive, false otherwise.



# File 'lib/llm/function/thread_group.rb', line 64

def alive?
  @threads.any?(&:alive?)
end
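
The pattern behind #alive? and #wait can be demonstrated without the llm gem, using only plain Ruby threads. The sketch below mirrors the two implementations shown on this page (MiniThreadGroup is an illustrative name, not part of the library):

```ruby
# A minimal stand-in for LLM::Function::ThreadGroup: wraps an array of
# threads, reports liveness, and collects each thread's result.
class MiniThreadGroup
  def initialize(threads)
    @threads = threads
  end

  # True while at least one thread in the group has not finished.
  def alive?
    @threads.any?(&:alive?)
  end

  # Blocks until every thread finishes. Thread#value joins the thread
  # and returns the value of its block.
  def wait
    @threads.map(&:value)
  end
end

threads = [1, 2, 3].map { |n| Thread.new { n * 10 } }
grp = MiniThreadGroup.new(threads)
results = grp.wait
# results == [10, 20, 30]; grp.alive? is now false
```

Because #wait is built on Thread#value, results come back in the same order the threads were created, regardless of which thread finished first.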

#wait ⇒ Array<LLM::Function::Return> Also known as: value

Waits for all threads in the group to finish and returns their Return values.

This method blocks until every thread in the group has completed. If a thread raised an exception, the exception is caught and wrapped in a Return object with error information.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, tools: [Weather, News, Stocks])
ctx.talk "Summarize the weather, headlines, and stock price."
grp = ctx.functions.spawn
returns = grp.wait
# returns is now an array of LLM::Function::Return objects
ctx.talk(returns)

Returns:

  • (Array<LLM::Function::Return>)

    The Return values collected from each thread in the group.


# File 'lib/llm/function/thread_group.rb', line 89

def wait
  @threads.map(&:value)
end
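
Note that Thread#value re-raises any exception that escaped the thread's block, so the error wrapping described above must happen inside the thread body itself, presumably where Array#spawn creates the threads. A minimal sketch of that pattern, with a hypothetical Result struct standing in for LLM::Function::Return:

```ruby
# Illustrative only: Result is a stand-in for LLM::Function::Return.
Result = Struct.new(:value, :error)

threads = [
  # A successful call returns a Result with no error.
  Thread.new { Result.new(42, nil) },
  # A failing call rescues inside the thread body, so Thread#value
  # returns a Result instead of re-raising the exception.
  Thread.new do
    begin
      raise "boom"
    rescue => e
      Result.new(nil, e.message)
    end
  end
]

results = threads.map(&:value)
# results[0].value == 42; results[1].error == "boom"
```

Without the rescue inside the second thread, `threads.map(&:value)` would raise the original RuntimeError at the point of the #wait call instead of returning an array.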