Class: LLM::OpenAI

Inherits:
Provider
Defined in:
lib/llm/providers/openai.rb,
lib/llm/providers/openai/audio.rb,
lib/llm/providers/openai/files.rb,
lib/llm/providers/openai/format.rb,
lib/llm/providers/openai/images.rb,
lib/llm/providers/openai/models.rb,
lib/llm/providers/openai/responses.rb,
lib/llm/providers/openai/moderations.rb,
lib/llm/providers/openai/error_handler.rb,
lib/llm/providers/openai/stream_parser.rb,
lib/llm/providers/openai/vector_stores.rb,
lib/llm/providers/openai/responses/stream_parser.rb

Overview

The OpenAI class implements a provider for the OpenAI API.

Examples:

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Bot.new(llm)
bot.chat ["Tell me about this photo", File.open("/images/capybara.jpg", "rb")]
bot.messages.select(&:assistant?).each { print "[#{_1.role}]", _1.content, "\n" }

Direct Known Subclasses

DeepSeek, LlamaCpp, XAI, ZAI

Defined Under Namespace

Modules: Response
Classes: Audio, Files, Images, Models, Moderations, Responses, VectorStores

Constant Summary

HOST =
"api.openai.com"

Instance Method Summary

Methods inherited from Provider

#chat, clients, #inspect, #respond, #schema, #server_tool, #with

Constructor Details

#initialize ⇒ OpenAI

Returns a new instance of OpenAI.

Parameters:

  • key (String, nil)

    The secret key for authentication



# File 'lib/llm/providers/openai.rb', line 37

def initialize(**)
  super(host: HOST, **)
end

Instance Method Details

#web_search(query:) ⇒ LLM::Response

A convenience method for performing a web search using the OpenAI web search tool.

Examples:

llm = LLM.openai(key: ENV["KEY"])
res = llm.web_search(query: "summarize today's news")
res.search_results.each { |item| print item.title, ": ", item.url, "\n" }

Parameters:

  • query (String)

    The search query.

Returns:

  • (LLM::Response)


# File 'lib/llm/providers/openai.rb', line 179

def web_search(query:)
  responses
    .create(query, store: false, tools: [server_tools[:web_search]])
    .extend(LLM::OpenAI::Response::WebSearch)
end

#embed(input, model: "text-embedding-3-small", **params) ⇒ LLM::Response

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: "text-embedding-3-small")

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response)



# File 'lib/llm/providers/openai.rb', line 49

def embed(input, model: "text-embedding-3-small", **params)
  req = Net::HTTP::Post.new("/v1/embeddings", headers)
  req.body = JSON.dump({input:, model:}.merge!(params))
  res = execute(request: req)
  LLM::Response.new(res).extend(LLM::OpenAI::Response::Embedding)
end
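
Example (a usage sketch; the reader methods on the response depend on LLM::OpenAI::Response::Embedding and are assumed here for illustration):

llm = LLM.openai(key: ENV["KEY"])
res = llm.embed(["hello world", "goodbye world"])
# `embeddings` is an assumed reader; consult LLM::OpenAI::Response::Embedding
res.embeddings.each { |vector| print vector.size, "\n" }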

#complete(prompt, params = {}) ⇒ LLM::Response

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(key: ENV["KEY"])
messages = [{role: "system", content: "Your task is to answer all of my questions"}]
res = llm.complete("5 + 2 ?", messages:)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not only those listed here.

Returns:

  • (LLM::Response)
Raises:




# File 'lib/llm/providers/openai.rb', line 66

def complete(prompt, params = {})
  params = {role: :user, model: default_model}.merge!(params)
  tools  = resolve_tools(params.delete(:tools))
  params = [params, format_schema(params), format_tools(tools)].inject({}, &:merge!).compact
  role, stream = params.delete(:role), params.delete(:stream)
  params[:stream] = true if stream.respond_to?(:<<) || stream == true
  params[:stream_options] = {include_usage: true}.merge!(params[:stream_options] || {}) if params[:stream]
  req = Net::HTTP::Post.new(completions_path, headers)
  messages = [*(params.delete(:messages) || []), Message.new(role, prompt)]
  body = JSON.dump({messages: format(messages, :complete).flatten}.merge!(params))
  set_body_stream(req, StringIO.new(body))
  res = execute(request: req, stream:)
  LLM::Response.new(res)
    .extend(LLM::OpenAI::Response::Completion)
    .extend(Module.new { define_method(:__tools__) { tools } })
end
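
Because #complete turns on streaming when the stream parameter is true or responds to #<<, a streaming call can be sketched as follows (grounded in the parameter handling shown above; what the IO object receives is determined by the underlying execute method):

llm = LLM.openai(key: ENV["KEY"])
# $stdout responds to #<<, so streaming mode is enabled and the
# stream object is handed to execute along with the request
res = llm.complete("Tell me a short story", stream: $stdout)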

#responses ⇒ LLM::OpenAI::Responses

Provides an interface to OpenAI’s Responses API

Returns:

  • (LLM::OpenAI::Responses)



# File 'lib/llm/providers/openai.rb', line 87

def responses
  LLM::OpenAI::Responses.new(self)
end
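
The #web_search method above is itself a grounded usage example of this interface; a minimal direct call looks like this (the create signature is taken from #web_search, everything else is unchanged):

llm = LLM.openai(key: ENV["KEY"])
res = llm.responses.create("Say hello in French", store: false)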

#images ⇒ LLM::OpenAI::Images

Provides an interface to OpenAI’s image generation API

Returns:

  • (LLM::OpenAI::Images)



# File 'lib/llm/providers/openai.rb', line 95

def images
  LLM::OpenAI::Images.new(self)
end
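
A usage sketch, assuming a create method that accepts a prompt keyword (the method name and parameters are assumptions; see LLM::OpenAI::Images for the actual interface):

llm = LLM.openai(key: ENV["KEY"])
# `create(prompt:)` is assumed here for illustration
res = llm.images.create(prompt: "a capybara in the rain")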

#audio ⇒ LLM::OpenAI::Audio

Provides an interface to OpenAI’s audio generation API

Returns:

  • (LLM::OpenAI::Audio)



# File 'lib/llm/providers/openai.rb', line 103

def audio
  LLM::OpenAI::Audio.new(self)
end
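
A usage sketch, assuming a speech-generation method modeled on OpenAI’s audio endpoint (the method name and parameters are assumptions; see LLM::OpenAI::Audio for the actual interface):

llm = LLM.openai(key: ENV["KEY"])
# `create_speech(input:)` is assumed here for illustration
res = llm.audio.create_speech(input: "Hello, world")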

#files ⇒ LLM::OpenAI::Files

Provides an interface to OpenAI’s files API

Returns:

  • (LLM::OpenAI::Files)



# File 'lib/llm/providers/openai.rb', line 111

def files
  LLM::OpenAI::Files.new(self)
end
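
A usage sketch, assuming a create method that accepts an open file (the method name and parameters are assumptions; see LLM::OpenAI::Files for the actual interface):

llm = LLM.openai(key: ENV["KEY"])
# `create(file:)` is assumed here for illustration
res = llm.files.create(file: File.open("training.jsonl", "rb"))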

#models ⇒ LLM::OpenAI::Models

Provides an interface to OpenAI’s models API

Returns:

  • (LLM::OpenAI::Models)



# File 'lib/llm/providers/openai.rb', line 119

def models
  LLM::OpenAI::Models.new(self)
end
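
A usage sketch, assuming a method for listing models (the `all` method and the `id` reader are assumptions; see LLM::OpenAI::Models for the actual interface):

llm = LLM.openai(key: ENV["KEY"])
# `all` and `id` are assumed here for illustration
llm.models.all.each { |model| print model.id, "\n" }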

#moderations ⇒ LLM::OpenAI::Moderations

Provides an interface to OpenAI’s moderation API

Returns:

  • (LLM::OpenAI::Moderations)


# File 'lib/llm/providers/openai.rb', line 128

def moderations
  LLM::OpenAI::Moderations.new(self)
end
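
A usage sketch, assuming a create method that accepts an input string (the method name and parameters are assumptions; see LLM::OpenAI::Moderations for the actual interface):

llm = LLM.openai(key: ENV["KEY"])
# `create(input:)` is assumed here for illustration
res = llm.moderations.create(input: "A string to classify")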

#vector_stores ⇒ LLM::OpenAI::VectorStores

Provides an interface to OpenAI’s vector store API

Returns:

  • (LLM::OpenAI::VectorStores)




# File 'lib/llm/providers/openai.rb', line 136

def vector_stores
  LLM::OpenAI::VectorStores.new(self)
end
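
A usage sketch, assuming a create method that accepts a name (the method name and parameters are assumptions; see LLM::OpenAI::VectorStores for the actual interface):

llm = LLM.openai(key: ENV["KEY"])
# `create(name:)` is assumed here for illustration
store = llm.vector_stores.create(name: "documentation")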

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually “assistant” or “model”

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually “assistant” or “model”



# File 'lib/llm/providers/openai.rb', line 142

def assistant_role
  "assistant"
end

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)




# File 'lib/llm/providers/openai.rb', line 150

def default_model
  "gpt-4.1"
end

#server_tools ⇒ Hash{Symbol => LLM::ServerTool}

Note:

Some of the tools returned by this method require configuration options that are easier to set through the LLM::Provider#server_tool method.

Returns:

  • (Hash{Symbol => LLM::ServerTool})


# File 'lib/llm/providers/openai.rb', line 160

def server_tools
  {
    web_search: server_tool(:web_search),
    file_search: server_tool(:file_search),
    image_generation: server_tool(:image_generation),
    code_interpreter: server_tool(:code_interpreter),
    computer_use: server_tool(:computer_use)
  }
end