Class: LLM::Function::TaskGroup

Inherits:
Object
Defined in:
lib/llm/function/task_group.rb

Overview

The TaskGroup class wraps an array of Async::Task objects that are running LLM::Function calls concurrently using the async gem.

This class provides the same interface as ThreadGroup but uses async tasks for lightweight concurrency with automatic scheduling and I/O management.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, tools: [Weather, News, Stocks])
ctx.talk "Summarize the weather, headlines, and stock price."
grp = ctx.functions.spawn(:task)
# do other work while tools run...
ctx.talk(grp.wait)

See Also:

  • LLM::Function::ThreadGroup

Instance Method Summary

Constructor Details

#initialize(tasks) ⇒ LLM::Function::TaskGroup

Creates a new LLM::Function::TaskGroup from an array of async task objects.

Parameters:

  • tasks (Array<Async::Task>)

    An array of async tasks, each running an LLM::Function#spawn_async call.



# File 'lib/llm/function/task_group.rb', line 34

def initialize(tasks)
  @tasks = tasks
end

Instance Method Details

#alive? ⇒ Boolean

Returns whether any task in the group is still alive.

This method checks whether any of the tasks in the group are still running, which is useful for monitoring concurrent tool execution without blocking.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, tools: [Weather, News, Stocks])
ctx.talk "Summarize the weather, headlines, and stock price."
grp = ctx.functions.spawn(:task)
while grp.alive?
  puts "Tools are still running..."
  sleep 1
end
ctx.talk(grp.wait)

Returns:

  • (Boolean)

    Returns true if any task in the group is still alive, false otherwise.



# File 'lib/llm/function/task_group.rb', line 59

def alive?
  @tasks.any?(&:alive?)
end

#wait ⇒ Array<LLM::Function::Return> Also known as: value

Waits for all tasks in the group to finish and returns their Return values.

This method blocks until every task in the group has completed. If a task raises an exception, the exception is caught and wrapped in a Return with error information.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, tools: [Weather, News, Stocks])
ctx.talk "Summarize the weather, headlines, and stock price."
grp = ctx.functions.spawn(:task)
returns = grp.wait
# returns is now an array of LLM::Function::Return objects
ctx.talk(returns)

Returns:

  • (Array<LLM::Function::Return>)

    An array of Return objects, one per task, in the order the tasks were given.

# File 'lib/llm/function/task_group.rb', line 84

def wait
  @tasks.map(&:wait)
end
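
The alive?/wait contract documented above can be illustrated with a stdlib-only sketch that substitutes Thread for Async::Task. SketchTaskGroup and its thread-based internals are illustrative assumptions for demonstration, not the gem's implementation, which delegates to Async::Task#alive? and Async::Task#wait as shown in the source listings above.

```ruby
# Minimal sketch of the TaskGroup interface, backed by plain threads.
# Thread#alive? and Thread#value stand in for Async::Task#alive? and
# Async::Task#wait; the real class also wraps errors in Return objects.
class SketchTaskGroup
  def initialize(tasks)
    @tasks = tasks
  end

  # True while any underlying task is still running.
  def alive?
    @tasks.any?(&:alive?)
  end

  # Block until every task finishes; collect each result in order.
  def wait
    @tasks.map(&:value)
  end
  alias value wait
end

threads = [Thread.new { :weather }, Thread.new { :news }]
group = SketchTaskGroup.new(threads)
results = group.wait
# results == [:weather, :news]; group.alive? is false once all tasks finish
```

As in the real class, wait preserves the order of the tasks it was constructed with, so results can be matched back to the functions that produced them.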