Class: LLM::OpenAI
- Defined in:
- lib/llm/providers/openai.rb,
lib/llm/providers/openai/audio.rb,
lib/llm/providers/openai/files.rb,
lib/llm/providers/openai/format.rb,
lib/llm/providers/openai/images.rb,
lib/llm/providers/openai/responses.rb,
lib/llm/providers/openai/error_handler.rb,
lib/llm/providers/openai/response_parser.rb
Overview
The OpenAI class implements a provider for OpenAI.
Defined Under Namespace
Classes: Audio, Files, Images, Responses
Constant Summary collapse
- HOST =
"api.openai.com"
Instance Method Summary collapse
-
#models ⇒ Object
-
#embed(input, model: "text-embedding-3-small", **params) ⇒ LLM::Response::Embedding
Provides an embedding.
-
#complete(prompt, role = :user, model: "gpt-4o-mini", **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API.
-
#responses ⇒ LLM::OpenAI::Responses
Provides an interface to OpenAI’s response API.
-
#images ⇒ LLM::OpenAI::Images
Provides an interface to OpenAI’s image generation API.
-
#audio ⇒ LLM::OpenAI::Audio
Provides an interface to OpenAI’s audio generation API.
-
#files ⇒ LLM::OpenAI::Files
Provides an interface to OpenAI’s files API.
-
#assistant_role ⇒ String
Returns the role of the assistant in the conversation.
-
#initialize(secret) ⇒ OpenAI
constructor
A new instance of OpenAI.
Methods inherited from Provider
#chat, #chat!, #inspect, #respond, #respond!
Constructor Details
#initialize(secret) ⇒ OpenAI
A new instance of OpenAI.
Instance Method Details
#models ⇒ Object
# File 'lib/llm/providers/openai.rb', line 97

def models
  @models ||= load_models!("openai")
end
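#models caches the result of load_models! in @models, so the model catalog is loaded at most once per instance. A standalone sketch of the same ||= memoization pattern (Catalog, expensive_load, and the model names are illustrative, not part of the gem):

```ruby
class Catalog
  attr_reader :loads

  def initialize
    @loads = 0
  end

  # Same pattern as LLM::OpenAI#models: compute once, reuse thereafter
  def models
    @models ||= expensive_load
  end

  private

  def expensive_load
    @loads += 1
    ["gpt-4o-mini", "text-embedding-3-small"]
  end
end

catalog = Catalog.new
catalog.models
catalog.models
# @loads is still 1: the second call returned the cached array
```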
#embed(input, model: "text-embedding-3-small", **params) ⇒ LLM::Response::Embedding
Provides an embedding
# File 'lib/llm/providers/openai.rb', line 33

def embed(input, model: "text-embedding-3-small", **params)
  req = Net::HTTP::Post.new("/v1/embeddings", headers)
  req.body = JSON.dump({input:, model:}.merge!(params))
  res = request(@http, req)
  Response::Embedding.new(res).extend(response_parser)
end
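As the source shows, #embed serializes the input, the model, and any extra keyword params into one JSON body POSTed to /v1/embeddings. A sketch of that body using only the standard library (the input text and the dimensions param are illustrative):

```ruby
require "json"

params = {dimensions: 256} # stands in for the caller's extra **params
body   = JSON.dump({input: "hello world", model: "text-embedding-3-small"}.merge!(params))

puts body
```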
#complete(prompt, role = :user, model: "gpt-4o-mini", **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API
# File 'lib/llm/providers/openai.rb', line 50

def complete(prompt, role = :user, model: "gpt-4o-mini", **params)
  params = {model:}.merge!(params)
  req = Net::HTTP::Post.new("/v1/chat/completions", headers)
  messages = [*(params.delete(:messages) || []), Message.new(role, prompt)]
  req.body = JSON.dump({messages: format(messages, :complete)}.merge!(params))
  res = request(@http, req)
  Response::Completion.new(res).extend(response_parser)
end
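Note how #complete appends the new prompt after any caller-supplied :messages before serializing. A sketch of that ordering with plain hashes standing in for LLM::Message objects (the message contents are illustrative):

```ruby
require "json"

prior  = [{role: :system, content: "You are terse."}] # caller-supplied :messages
prompt = {role: :user, content: "Hello!"}             # built from (role, prompt)

# As in #complete: earlier messages first, the new prompt last
messages = [*prior, prompt]
body = JSON.dump({model: "gpt-4o-mini", messages: messages})
```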
#responses ⇒ LLM::OpenAI::Responses
Provides an interface to OpenAI’s response API
# File 'lib/llm/providers/openai.rb', line 63

def responses
  LLM::OpenAI::Responses.new(self)
end
#images ⇒ LLM::OpenAI::Images
Provides an interface to OpenAI’s image generation API
# File 'lib/llm/providers/openai.rb', line 71

def images
  LLM::OpenAI::Images.new(self)
end
#audio ⇒ LLM::OpenAI::Audio
Provides an interface to OpenAI’s audio generation API
# File 'lib/llm/providers/openai.rb', line 79

def audio
  LLM::OpenAI::Audio.new(self)
end
#files ⇒ LLM::OpenAI::Files
Provides an interface to OpenAI’s files API
# File 'lib/llm/providers/openai.rb', line 87

def files
  LLM::OpenAI::Files.new(self)
end
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”
# File 'lib/llm/providers/openai.rb', line 93

def assistant_role
  "assistant"
end