About
This post is an introduction to building agents with the llm.rb library, and it aims to be simple and newbie friendly.
We’ll walk through how to use LLM::Function and LLM::Tool, noting their differences and how agents can leverage them. For our example, our agent will either fetch GitHub star counts or evaluate arbitrary Ruby code – nothing too complicated, and simple enough for a blog post.
Background
What is an agent?
For the purposes of this post, an agent is a system that interprets user requests and, when necessary, calls tools – these tools might contact external HTTP APIs, evaluate code, or perform any other “action” you might like the LLM to take on your behalf.
By combining the broad intelligence of a language model with access to real, external data sources, agents are able to return answers based not just on pre-trained knowledge, but also on live, up-to-date information.
Through this tool-calling capability, agents enable language models to go beyond simple questions and answers, empowering them to take meaningful action and automate tasks. A tool might send an email on your behalf, evaluate code, crunch numbers for a report, or perform any other action you’d like to automate through an LLM.
📝 Note
While some define “agents” as fully autonomous systems with no human oversight, here we use the more common developer definition: an LLM that interprets requests and takes action via tools.
LLM::Function
Closures
The LLM::Function interface allows you to define a tool as a Ruby closure. This approach is ideal when you need dynamic, locally-scoped functionality, such as evaluating code or performing quick, one-off computations. Closures make it easy to embed context-dependent or ad-hoc logic within your agent. They’re best suited for situations where the tool’s logic is relatively simple and tightly coupled to the immediate agent context.
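To make the shape concrete, here is a minimal sketch of a function defined as a closure. The :greet tool is hypothetical, invented for illustration; the DSL is the same one the Eval bot example uses later in this post:
greet = LLM.function(:greet) do |fn|
  fn.description "Say hello to someone by name"
  fn.params { |schema| schema.object(name: schema.string.required) }
  fn.define do |name:|
    { message: "Hello, #{name}!" }
  end
end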
LLM::Tool
Classes
LLM::Tool is designed for building reusable, stand-alone tools that encapsulate more complex logic, such as integrating external APIs or handling structured data. By defining a tool as a Ruby class, we gain organization, reusability, and the ability to manage complexity. The choice between LLM::Function and LLM::Tool often comes down to personal preference and technical requirements. Under the hood, LLM::Tool is implemented as a wrapper around an LLM::Function object.
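For contrast, here is the same hypothetical greeter written as a class. It mirrors the structure of the StarGazer example in the next section:
class Greeter < LLM::Tool
  name "greeter"
  description "Say hello to someone by name"
  params { |schema| schema.object(name: schema.string.required) }

  def call(name:)
    { message: "Hello, #{name}!" }
  end
end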
Examples
StarGazer
The following class defines a Tool for fetching the number of stars for any public GitHub repository:
require "llm"
require "json"
class StarGazer < LLM::Tool
name "stargazer"
description "Fetch the number of stars for a public GitHub repo (e.g. owner/repo)"
params { |schema| schema.object(repo: schema.string.required) }
def call(repo:)
uri = URI("https://api.github.com/repos/#{repo}")
response = Net::HTTP.get_response(uri)
data = JSON.parse(response.body)
if response.is_a?(Net::HTTPSuccess) && data["stargazers_count"]
{ repo: repo, stars: data["stargazers_count"], url: data["html_url"] }
elsif data["message"]
{ error: "GitHub API: #{data["message"]}" }
else
{ error: "Could not retrieve stars." }
end
rescue => ex
{ error: ex.message }
end
end
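Because call is an ordinary instance method, you can sanity-check the tool directly, with no LLM involved. This sketch assumes StarGazer can be instantiated with no arguments, and "rails/rails" is just an example repository:
# Quick manual test of the tool, outside of any conversation.
# Assumes StarGazer.new takes no arguments; "rails/rails" is an example repo.
p StarGazer.new.call(repo: "rails/rails")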
Eval bot
This closure defines a tool to evaluate arbitrary Ruby code.
⚠️ Never use eval in production or with untrusted input!
eval = LLM.function(:eval) do |fn|
  fn.description "Evaluate Ruby code"
  fn.params { |schema| schema.object(code: schema.string.required) }
  fn.define do |code:|
    { result: Kernel.eval(code).inspect }
  rescue SystemExit
    # Kernel#exit raises SystemExit: rescue it so generated code can't stop the process
    { error: "Permission denied" }
  rescue => ex
    { error: ex.message }
  end
end
Loop
📝 Note
llm.rb never runs tools automatically. You’re always in control of when and how they’re invoked. For example, a tool call can be cancelled with LLM::Function#cancel instead of LLM::Function#call. The examples here keep it simple, but in real applications you might add validation, logging, or security checks before executing a tool.
Finally, we combine both tools into an agent and demonstrate a usage loop. The agent can answer factual questions about GitHub repositories, as well as evaluate Ruby code on demand. Just remember: when an LLM suggests a function, you’ll need to either call it or cancel it. If you don’t, you’ll likely run into API errors:
##
# <insert tools here>
llm = LLM.openai(key: ENV["OPENAI_SECRET"])
bot = LLM::Bot.new(llm, stream: $stdout, tools: [StarGazer, eval])

loop do
  print "> "
  input = $stdin.gets&.chomp || break
  bot.chat input, role: :user
  # When the model suggests tools, call each one and send the results back
  bot.chat bot.functions.map(&:call)
  bot.messages.flush
  print "\n"
end
Conclusion
With just a few lines of Ruby, we’ve built a simple but flexible agent. From here you can expand with more tools, add safety checks, or plug into real-world APIs — the foundation is already in place. The rest is up to you. ☺️
