Class: LLM::OpenAI

Inherits:
Provider
Defined in:
lib/llm/providers/openai.rb,
lib/llm/providers/openai/audio.rb,
lib/llm/providers/openai/files.rb,
lib/llm/providers/openai/images.rb,
lib/llm/providers/openai/models.rb,
lib/llm/providers/openai/responses.rb,
lib/llm/providers/openai/moderations.rb,
lib/llm/providers/openai/error_handler.rb,
lib/llm/providers/openai/stream_parser.rb,
lib/llm/providers/openai/vector_stores.rb,
lib/llm/providers/openai/request_adapter.rb,
lib/llm/providers/openai/response_adapter.rb,
lib/llm/providers/openai/responses/stream_parser.rb

Overview

The OpenAI class implements a provider for OpenAI.

Examples:

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
ses = LLM::Session.new(llm)
ses.talk ["Tell me about this photo", ses.local_file("/images/photo.png")]
ses.messages.select(&:assistant?).each { print "[#{_1.role}]", _1.content, "\n" }

Direct Known Subclasses

DeepSeek, LlamaCpp, XAI, ZAI

Defined Under Namespace

Classes: Audio, Files, Images, Models, Moderations, Responses, VectorStores

Constant Summary

HOST =
"api.openai.com"

Instance Method Summary

Methods inherited from Provider

#chat, clients, #developer_role, #inspect, #persist!, #respond, #schema, #server_tool, #system_role, #tool_role, #tracer, #tracer=, #user_role, #with

Constructor Details

#initialize ⇒ OpenAI

Returns a new instance of OpenAI.

Parameters:

  • key (String, nil)

    The secret key for authentication



# File 'lib/llm/providers/openai.rb', line 35

def initialize(**)
  super(host: HOST, **)
end

Instance Method Details

#web_search(query:) ⇒ LLM::Response

A convenience method for performing a web search using the OpenAI web search tool.

Examples:

llm = LLM.openai(key: ENV["KEY"])
res = llm.web_search(query: "summarize today's news")
res.search_results.each { |item| print item.title, ": ", item.url, "\n" }

Parameters:

  • query (String)

    The search query.

Returns:

  • (LLM::Response)

# File 'lib/llm/providers/openai.rb', line 180

def web_search(query:)
  ResponseAdapter.adapt(
    responses.create(query, store: false, tools: [server_tools[:web_search]]),
    type: :web_search
  )
end

#name ⇒ Symbol

Returns the provider's name

Returns:

  • (Symbol)

    Returns the provider's name



# File 'lib/llm/providers/openai.rb', line 42

def name
  :openai
end

#embed(input, model: "text-embedding-3-small", **params) ⇒ LLM::Response

Provides an embedding

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: "text-embedding-3-small")

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response)



# File 'lib/llm/providers/openai.rb', line 54

def embed(input, model: "text-embedding-3-small", **params)
  req = Net::HTTP::Post.new("/v1/embeddings", headers)
  req.body = LLM.json.dump({input:, model:}.merge!(params))
  res, span, tracer = execute(request: req, operation: "embeddings", model:)
  res = ResponseAdapter.adapt(res, type: :embedding)
  tracer.on_request_finish(operation: "embeddings", model:, res:, span:)
  res
end
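
For instance, a single string or an array of strings can be embedded in one call. A minimal sketch, assuming a valid API key in ENV["KEY"], in the style of the other examples in this document:

```ruby
require "llm"

# Embed two inputs in a single request; the default model is
# "text-embedding-3-small" unless overridden via the model: keyword.
llm = LLM.openai(key: ENV["KEY"])
res = llm.embed(["roses are red", "violets are blue"])
# res is an LLM::Response wrapping the embeddings payload
```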

#complete(prompt, params = {}) ⇒ LLM::Response

Provides an interface to the chat completions API

Examples:

llm = LLM.openai(key: ENV["KEY"])
messages = [{role: "system", content: "Your task is to answer all of my questions"}]
res = llm.complete("5 + 2 ?", messages:)
print "[#{res.messages[0].role}]", res.messages[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports can be included, not just those listed here.

Returns:

  • (LLM::Response)

Raises:



# File 'lib/llm/providers/openai.rb', line 73

def complete(prompt, params = {})
  params, stream, tools, role = normalize_complete_params(params)
  req, messages = build_complete_request(prompt, params, role)
  tracer.(user_input: extract_user_input(messages, fallback: prompt))
  res, span, tracer = execute(request: req, stream: stream, operation: "chat", model: params[:model])
  res = ResponseAdapter.adapt(res, type: :completion)
    .extend(Module.new { define_method(:__tools__) { tools } })
  tracer.on_request_finish(operation: "chat", model: params[:model], res:, span:)
  res
end

#responses ⇒ LLM::OpenAI::Responses

Provides an interface to OpenAI's response API

Returns:

  • (LLM::OpenAI::Responses)



# File 'lib/llm/providers/openai.rb', line 88

def responses
  LLM::OpenAI::Responses.new(self)
end
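
The #web_search method shown earlier uses this interface internally via responses.create. A hedged sketch along the same lines (the store: false option is taken from that usage; a valid key in ENV["KEY"] is assumed):

```ruby
require "llm"

# Send a prompt through the Responses API without persisting it
# server-side (store: false, as in the #web_search implementation).
llm = LLM.openai(key: ENV["KEY"])
res = llm.responses.create("Hello, world", store: false)
```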

#images ⇒ LLM::OpenAI::Images

Provides an interface to OpenAI's image generation API

Returns:

  • (LLM::OpenAI::Images)



# File 'lib/llm/providers/openai.rb', line 96

def images
  LLM::OpenAI::Images.new(self)
end

#audio ⇒ LLM::OpenAI::Audio

Provides an interface to OpenAI's audio generation API

Returns:

  • (LLM::OpenAI::Audio)



# File 'lib/llm/providers/openai.rb', line 104

def audio
  LLM::OpenAI::Audio.new(self)
end

#files ⇒ LLM::OpenAI::Files

Provides an interface to OpenAI's files API

Returns:

  • (LLM::OpenAI::Files)



# File 'lib/llm/providers/openai.rb', line 112

def files
  LLM::OpenAI::Files.new(self)
end

#models ⇒ LLM::OpenAI::Models

Provides an interface to OpenAI's models API

Returns:

  • (LLM::OpenAI::Models)



# File 'lib/llm/providers/openai.rb', line 120

def models
  LLM::OpenAI::Models.new(self)
end

#moderations ⇒ LLM::OpenAI::Moderations

Provides an interface to OpenAI's moderation API

Returns:

  • (LLM::OpenAI::Moderations)



# File 'lib/llm/providers/openai.rb', line 129

def moderations
  LLM::OpenAI::Moderations.new(self)
end

#vector_stores ⇒ LLM::OpenAI::VectorStores

Provides an interface to OpenAI's vector store API

Returns:

  • (LLM::OpenAI::VectorStores)



# File 'lib/llm/providers/openai.rb', line 137

def vector_stores
  LLM::OpenAI::VectorStores.new(self)
end

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually "assistant" or "model"

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually "assistant" or "model"



# File 'lib/llm/providers/openai.rb', line 143

def assistant_role
  "assistant"
end

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)




# File 'lib/llm/providers/openai.rb', line 151

def default_model
  "gpt-4.1"
end

#server_tools ⇒ Hash{Symbol => LLM::ServerTool}

Note:

This method includes certain tools that require configuration through a set of options that are easier to set through the LLM::Provider#server_tool method.

Returns:

  • (Hash{Symbol => LLM::ServerTool})


# File 'lib/llm/providers/openai.rb', line 161

def server_tools
  {
    web_search: server_tool(:web_search),
    file_search: server_tool(:file_search),
    image_generation: server_tool(:image_generation),
    code_interpreter: server_tool(:code_interpreter),
    computer_use: server_tool(:computer_use)
  }
end
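
These tools are meant to be passed to the Responses API, which is how #web_search uses them internally. A hedged sketch based on that usage, assuming a valid key in ENV["KEY"]:

```ruby
require "llm"

# Look up a server-side tool and hand it to the Responses API,
# mirroring what #web_search does under the hood.
llm  = LLM.openai(key: ENV["KEY"])
tool = llm.server_tools[:web_search]
res  = llm.responses.create("What happened today?", store: false, tools: [tool])
```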