Class: LLM::Gemini
- Defined in:
- lib/llm/providers/gemini.rb,
lib/llm/providers/gemini/audio.rb,
lib/llm/providers/gemini/files.rb,
lib/llm/providers/gemini/format.rb,
lib/llm/providers/gemini/images.rb,
lib/llm/providers/gemini/error_handler.rb,
lib/llm/providers/gemini/response_parser.rb
Overview
The Gemini class implements a provider for Google's Gemini API.
The Gemini provider accepts multiple input modalities: text, images, audio, and video. Inputs can be provided inline via the prompt for files under 20MB, or through the Gemini Files API for files over 20MB.
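The 20MB cutoff above amounts to a simple file-size check before choosing the upload route. A minimal sketch of that decision rule (`INLINE_LIMIT` and `inline_eligible?` are hypothetical helper names, not part of the gem):

```ruby
# 20MB threshold from the Gemini docs: files smaller than this can be
# sent inline with the prompt; larger files go through the Files API.
INLINE_LIMIT = 20 * 1024 * 1024

# Hypothetical helper: true when the file can be embedded inline.
def inline_eligible?(path)
  File.size(path) < INLINE_LIMIT
end
```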
Defined Under Namespace: Audio, ErrorHandler, Files, Format, Images, ResponseParser
Constant Summary collapse
- HOST =
"generativelanguage.googleapis.com"
Instance Method Summary collapse
-
#models ⇒ Hash<String, LLM::Model>
Returns a hash of available models.
-
#embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response::Embedding
Provides an embedding.
-
#complete(prompt, role = :user, model: "gemini-1.5-flash", **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API.
-
#audio ⇒ Object
Provides an interface to Gemini’s audio API.
-
#images ⇒ see LLM::Gemini::Images
Provides an interface to Gemini’s image generation API.
-
#files ⇒ Object
Provides an interface to Gemini’s file management API.
-
#assistant_role ⇒ String
Returns the role of the assistant in the conversation.
-
#initialize(secret) ⇒ Gemini
constructor
A new instance of Gemini.
Methods inherited from Provider
#chat, #chat!, #inspect, #respond, #respond!, #responses
Instance Method Details
#models ⇒ Hash<String, LLM::Model>
Returns a hash of available models
  # File 'lib/llm/providers/gemini.rb', line 112

  def models
    @models ||= load_models!("gemini")
  end
#embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response::Embedding
Provides an embedding
  # File 'lib/llm/providers/gemini.rb', line 54

  def embed(input, model: "text-embedding-004", **params)
    path = ["/v1beta/models/#{model}", "embedContent?key=#{@secret}"].join(":")
    req = Net::HTTP::Post.new(path, headers)
    req.body = JSON.dump({content: {parts: [{text: input}]}})
    res = request(@http, req)
    Response::Embedding.new(res).extend(response_parser)
  end
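The request built by #embed follows Gemini's `models/{model}:{method}?key={secret}` URL convention. The same path and body construction, reproduced standalone (the secret here is a placeholder, not a real key):

```ruby
require "json"

model  = "text-embedding-004"
secret = "YOUR_API_KEY" # placeholder

# Joining with ":" yields the Gemini-style "model:method" path segment.
path = ["/v1beta/models/#{model}", "embedContent?key=#{secret}"].join(":")
# => "/v1beta/models/text-embedding-004:embedContent?key=YOUR_API_KEY"

# The request body wraps the input text in Gemini's content/parts structure.
body = JSON.dump({content: {parts: [{text: "hello world"}]}})
```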
#complete(prompt, role = :user, model: "gemini-1.5-flash", **params) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API
  # File 'lib/llm/providers/gemini.rb', line 72

  def complete(prompt, role = :user, model: "gemini-1.5-flash", **params)
    path = ["/v1beta/models/#{model}", "generateContent?key=#{@secret}"].join(":")
    req = Net::HTTP::Post.new(path, headers)
    messages = [*(params.delete(:messages) || []), LLM::Message.new(role, prompt)]
    body = JSON.dump({contents: format(messages)}).b
    req.body_stream = StringIO.new(body)
    res = request(@http, req)
    Response::Completion.new(res).extend(response_parser)
  end
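Note how #complete merges any prior conversation passed via `params[:messages]` with the new prompt, appending the prompt last. A plain-hash sketch of that assembly (hashes stand in for LLM::Message, and the Gemini-style role/parts shape shown is an assumption about what the format step produces):

```ruby
require "json"

role, prompt = :user, "Hello"
params = {messages: [{role: "user", parts: [{text: "earlier turn"}]}]}

# Prior turns (if any) come first; the new prompt is appended last.
messages = [*(params.delete(:messages) || []),
            {role: role.to_s, parts: [{text: prompt}]}]
body = JSON.dump({contents: messages})
```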
#audio ⇒ Object
Provides an interface to Gemini’s audio API
  # File 'lib/llm/providers/gemini.rb', line 85

  def audio
    LLM::Gemini::Audio.new(self)
  end
#images ⇒ see LLM::Gemini::Images
Provides an interface to Gemini’s image generation API
  # File 'lib/llm/providers/gemini.rb', line 93

  def images
    LLM::Gemini::Images.new(self)
  end
#files ⇒ Object
Provides an interface to Gemini’s file management API
  # File 'lib/llm/providers/gemini.rb', line 100

  def files
    LLM::Gemini::Files.new(self)
  end
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually "assistant" or "model"; Gemini uses "model".
  # File 'lib/llm/providers/gemini.rb', line 106

  def assistant_role
    "model"
  end