Class: LLM::Gemini
- Defined in:
- lib/llm/providers/gemini.rb,
lib/llm/providers/gemini/audio.rb,
lib/llm/providers/gemini/files.rb,
lib/llm/providers/gemini/format.rb,
lib/llm/providers/gemini/images.rb,
lib/llm/providers/gemini/models.rb,
lib/llm/providers/gemini/error_handler.rb,
lib/llm/providers/gemini/stream_parser.rb,
lib/llm/providers/gemini/response_parser.rb
Overview
The Gemini class implements a provider for Google's Gemini API.
The Gemini provider accepts multiple kinds of input (text, images, audio, and video). Inputs can be provided inline via the prompt for files under 20MB, or through the Gemini Files API for files over 20MB.
Defined Under Namespace
Classes: Audio, Files, Images, Models
Constant Summary
- HOST = "generativelanguage.googleapis.com"
Instance Method Summary
- #default_model ⇒ String
  Returns the default model for chat completions.
- #embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response::Embedding
  Provides an embedding.
- #complete(prompt, params = {}) ⇒ LLM::Response::Completion
  Provides an interface to the chat completions API.
- #audio ⇒ Object
  Provides an interface to Gemini’s audio API.
- #images ⇒ see LLM::Gemini::Images
  Provides an interface to Gemini’s image generation API.
- #files ⇒ Object
  Provides an interface to Gemini’s file management API.
- #models ⇒ Object
  Provides an interface to Gemini’s models API.
- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #initialize ⇒ Gemini (constructor)
  A new instance of Gemini.
Methods inherited from Provider
#chat, #chat!, #inspect, #moderations, #respond, #respond!, #responses, #schema, #with
Constructor Details
#initialize ⇒ Gemini
Returns a new instance of Gemini.
Instance Method Details
#default_model ⇒ String
Returns the default model for chat completions
# File 'lib/llm/providers/gemini.rb', line 131

def default_model
  "gemini-1.5-flash"
end
#embed(input, model: "text-embedding-004", **params) ⇒ LLM::Response::Embedding
Provides an embedding
# File 'lib/llm/providers/gemini.rb', line 58

def embed(input, model: "text-embedding-004", **params)
  model = model.respond_to?(:id) ? model.id : model
  path = ["/v1beta/models/#{model}", "embedContent?key=#{@key}"].join(":")
  req = Net::HTTP::Post.new(path, headers)
  req.body = JSON.dump({content: {parts: [{text: input}]}})
  res = execute(request: req)
  Response::Embedding.new(res).extend(response_parser)
end
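The path is built by joining the model resource and the RPC action with ":", matching Gemini's `models/{model}:embedContent` URL scheme, and the input is wrapped in Gemini's content/parts envelope. A minimal sketch of just that request construction, outside the class (the key and input are placeholders):

```ruby
require "json"

key   = "test-key"             # placeholder API key
model = "text-embedding-004"
input = "Hello, world"

# Join the model resource and the RPC action with ":", as the Gemini API expects.
path = ["/v1beta/models/#{model}", "embedContent?key=#{key}"].join(":")
# => "/v1beta/models/text-embedding-004:embedContent?key=test-key"

# Wrap the input in Gemini's content/parts envelope.
body = JSON.dump({content: {parts: [{text: input}]}})
JSON.parse(body)  # => {"content"=>{"parts"=>[{"text"=>"Hello, world"}]}}
```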
#complete(prompt, params = {}) ⇒ LLM::Response::Completion
Provides an interface to the chat completions API
# File 'lib/llm/providers/gemini.rb', line 77

def complete(prompt, params = {})
  params = {role: :user, model: default_model}.merge!(params)
  params = [params, format_schema(params), format_tools(params)].inject({}, &:merge!).compact
  role, model, stream = [:role, :model, :stream].map { params.delete(_1) }
  action = stream ? "streamGenerateContent?key=#{@key}&alt=sse" : "generateContent?key=#{@key}"
  model = model.respond_to?(:id) ? model.id : model
  path = ["/v1beta/models/#{model}", action].join(":")
  req = Net::HTTP::Post.new(path, headers)
  messages = [*(params.delete(:messages) || []), LLM::Message.new(role, prompt)]
  body = JSON.dump({contents: format(messages)}.merge!(params))
  set_body_stream(req, StringIO.new(body))
  res = execute(request: req, stream:)
  Response::Completion.new(res).extend(response_parser)
end
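#complete folds defaults and caller-supplied params into one hash, then destructively pulls :role, :model, and :stream back out so only request-body parameters remain; :stream switches the action to the server-sent-events endpoint. A runnable sketch of that extraction step using plain hashes (the key and model values are placeholders):

```ruby
key = "test-key"  # placeholder API key

# Defaults merged with caller-supplied params; the caller wins on conflicts.
params = {role: :user, model: "gemini-1.5-flash"}.merge!(stream: true)

# Destructively pull the non-body parameters out of the hash.
role, model, stream = [:role, :model, :stream].map { params.delete(_1) }

# :stream selects the SSE streaming action instead of the plain one.
action = stream ? "streamGenerateContent?key=#{key}&alt=sse" : "generateContent?key=#{key}"
path = ["/v1beta/models/#{model}", action].join(":")

params  # => {} (only body parameters would remain here)
role    # => :user
path    # => "/v1beta/models/gemini-1.5-flash:streamGenerateContent?key=test-key&alt=sse"
```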
#audio ⇒ Object
Provides an interface to Gemini’s audio API
# File 'lib/llm/providers/gemini.rb', line 95

def audio
  LLM::Gemini::Audio.new(self)
end
#images ⇒ see LLM::Gemini::Images
Provides an interface to Gemini’s image generation API
# File 'lib/llm/providers/gemini.rb', line 103

def images
  LLM::Gemini::Images.new(self)
end
#files ⇒ Object
Provides an interface to Gemini’s file management API
# File 'lib/llm/providers/gemini.rb', line 110

def files
  LLM::Gemini::Files.new(self)
end
#models ⇒ Object
Provides an interface to Gemini’s models API
# File 'lib/llm/providers/gemini.rb', line 117

def models
  LLM::Gemini::Models.new(self)
end
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually “assistant” or “model”
# File 'lib/llm/providers/gemini.rb', line 123

def assistant_role
  "model"
end
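The Gemini API labels assistant turns "model" rather than "assistant", which is why this method returns "model". A minimal sketch of normalizing generic chat roles onto Gemini's role names when building a contents array (the constant and helper names are illustrative, not from llm.rb):

```ruby
# Map generic chat roles onto the role names the Gemini API expects.
GEMINI_ROLES = {user: "user", assistant: "model", model: "model"}.freeze

def gemini_role(role)
  GEMINI_ROLES.fetch(role.to_sym)
end

contents = [
  {role: gemini_role(:user), parts: [{text: "Hello"}]},
  {role: gemini_role(:assistant), parts: [{text: "Hi there"}]}
]
contents.map { _1[:role] }  # => ["user", "model"]
```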