Class: LLM::Function::FiberGroup

Inherits:
Object
Defined in:
lib/llm/function/fiber_group.rb

Overview

The FiberGroup class wraps an array of Fiber objects that are running LLM::Function calls concurrently using raw fibers.

This class provides the same interface as ThreadGroup but uses raw fibers for lightweight concurrency without the async gem.
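To illustrate the pattern (not the library's actual implementation), here is a minimal sketch of a fiber group built on raw core `Fiber` objects; the `MiniFiberGroup` class and its `wait` loop are hypothetical stand-ins. Since core `Fiber` has no `#value` method, the sketch captures each block's final value from the last `resume`:

```ruby
# Hypothetical sketch of a fiber group over raw core fibers.
class MiniFiberGroup
  def initialize(fibers)
    @fibers = fibers
  end

  # True while at least one fiber has not finished.
  def alive?
    @fibers.any?(&:alive?)
  end

  # Drive every fiber to completion; the last resume of a fiber
  # returns its block's return value.
  def wait
    @fibers.map do |fiber|
      result = nil
      result = fiber.resume while fiber.alive?
      result
    end
  end
end

fibers  = [Fiber.new { 1 + 1 }, Fiber.new { Fiber.yield; "done" }]
group   = MiniFiberGroup.new(fibers)
results = group.wait # => [2, "done"]
```

Because raw fibers are cooperatively scheduled, `wait` simply resumes each fiber until it dies; no threads or event loop are involved.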

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, tools: [Weather, News, Stocks])
ctx.talk "Summarize the weather, headlines, and stock price."
grp = ctx.functions.spawn(:fiber)
# do other work while tools run...
ctx.talk(grp.wait)


Instance Method Summary

Constructor Details

#initialize(fibers) ⇒ LLM::Function::FiberGroup

Creates a new LLM::Function::FiberGroup from an array of Fiber objects.

Parameters:

  • fibers (Array<Fiber>)

    An array of fibers, each running an LLM::Function#spawn_fiber call.



# File 'lib/llm/function/fiber_group.rb', line 33

def initialize(fibers)
  @fibers = fibers
end

Instance Method Details

#alive? ⇒ Boolean

Returns whether any fiber in the group is still alive.

This method checks if any of the fibers in the group are still running. It can be useful for monitoring concurrent tool execution without blocking.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, tools: [Weather, News, Stocks])
ctx.talk "Summarize the weather, headlines, and stock price."
grp = ctx.functions.spawn(:fiber)
while grp.alive?
  puts "Tools are still running..."
  sleep 1
end
ctx.talk(grp.wait)

Returns:

  • (Boolean)

    Returns true if any fiber in the group is still alive, false otherwise.



# File 'lib/llm/function/fiber_group.rb', line 58

def alive?
  @fibers.any?(&:alive?)
end

#wait ⇒ Array<LLM::Function::Return> Also known as: value

Waits for all fibers in the group to finish and returns their Return values.

This method blocks until every fiber in the group has completed. If a fiber raised an exception, the exception is caught and wrapped in a Return object with error information.

Examples:

llm = LLM.openai(key: ENV["KEY"])
ctx = LLM::Context.new(llm, tools: [Weather, News, Stocks])
ctx.talk "Summarize the weather, headlines, and stock price."
grp = ctx.functions.spawn(:fiber)
returns = grp.wait
# returns is now an array of LLM::Function::Return objects
ctx.talk(returns)

Returns:

  • (Array<LLM::Function::Return>)

    Returns an array of Return objects, one per fiber.

# File 'lib/llm/function/fiber_group.rb', line 83

def wait
  @fibers.map do |fiber|
    fiber.resume if fiber.alive?
    fiber.value
  end
end
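The error wrapping described above is not visible in the three lines shown here (it presumably happens inside `fiber.value`), but the idea can be sketched with plain core fibers. The `HypotheticalReturn` struct and `wait_wrapping_errors` helper below are illustrative stand-ins, not the library's API:

```ruby
# Stand-in for LLM::Function::Return: holds either a value or an error.
HypotheticalReturn = Struct.new(:value, :error)

# Resume each fiber to completion; if its block raised, the exception
# surfaces from resume, where we catch and wrap it instead of letting
# it propagate to the caller.
def wait_wrapping_errors(fibers)
  fibers.map do |fiber|
    result = nil
    result = fiber.resume while fiber.alive?
    HypotheticalReturn.new(result, nil)
  rescue => e
    HypotheticalReturn.new(nil, e)
  end
end

fibers  = [Fiber.new { 40 + 2 }, Fiber.new { raise "boom" }]
returns = wait_wrapping_errors(fibers)
# returns[0] holds the value 42; returns[1] holds the RuntimeError
```

One fiber's failure therefore never prevents the others' results from being collected; the caller inspects each Return for an error instead of rescuing around the whole `wait` call.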