Class: LLM::Google
- Defined in:
- lib/llm/providers/google.rb,
lib/llm/providers/google/audio.rb,
lib/llm/providers/google/files.rb,
lib/llm/providers/google/images.rb,
lib/llm/providers/google/models.rb,
lib/llm/providers/google/error_handler.rb,
lib/llm/providers/google/stream_parser.rb,
lib/llm/providers/google/request_adapter.rb,
lib/llm/providers/google/response_adapter.rb
Overview
The Google class implements a provider for Gemini. The Google provider accepts multiple input types (text, images, audio, and video). Inputs under 20MB can be provided inline via the prompt; files over 20MB can be provided via the Gemini Files API.
Defined Under Namespace
Classes: Audio, Files, Images, Models
Constant Summary collapse
- HOST = "generativelanguage.googleapis.com"
Instance Method Summary collapse
- #assistant_role ⇒ String
  Returns the role of the assistant in the conversation.
- #name ⇒ Symbol
  Returns the provider's name.
- #embed(input, model: "gemini-embedding-001", **params) ⇒ LLM::Response
  Provides an embedding.
- #complete(prompt, params = {}) ⇒ LLM::Response
  Provides an interface to the chat completions API.
- #audio ⇒ LLM::Google::Audio
  Provides an interface to Gemini's audio API.
- #images ⇒ LLM::Google::Images
  Provides an interface to Gemini's image generation API.
- #files ⇒ LLM::Google::Files
  Provides an interface to Gemini's file management API.
- #models ⇒ LLM::Google::Models
  Provides an interface to Gemini's models API.
- #default_model ⇒ String
  Returns the default model for chat completions.
- #server_tools ⇒ Hash{Symbol => LLM::ServerTool}
  Returns a hash of server tools, keyed by name.
- #web_search(query:) ⇒ LLM::Response
  A convenience method for performing a web search using the Google Search tool.
- #user_role ⇒ Symbol
  Returns the provider's user role.
- #system_role ⇒ Symbol
  Returns the provider's system role.
- #developer_role ⇒ Symbol
  Returns the provider's developer role.
- #initialize ⇒ Google constructor
  A new instance of Google.
Methods inherited from Provider
#chat, clients, #inspect, #moderations, #persist!, #respond, #responses, #schema, #server_tool, #tool_role, #tracer, #tracer=, #vector_stores, #with
Constructor Details
Instance Method Details
#assistant_role ⇒ String
Returns the role of the assistant in the conversation. Usually "assistant" or "model"
# File 'lib/llm/providers/google.rb', line 172

def assistant_role
  "model"
end
#name ⇒ Symbol
Returns the provider's name
# File 'lib/llm/providers/google.rb', line 43

def name
  :google
end
#embed(input, model: "gemini-embedding-001", **params) ⇒ LLM::Response
Provides an embedding
# File 'lib/llm/providers/google.rb', line 54

def embed(input, model: "gemini-embedding-001", **params)
  model = model.respond_to?(:id) ? model.id : model
  path = ["/v1beta/models/#{model}", "embedContent?key=#{@key}"].join(":")
  req = Net::HTTP::Post.new(path, headers)
  req.body = LLM.json.dump({content: {parts: [{text: input}]}})
  res, span, tracer = execute(request: req, operation: "embeddings", model:)
  res = ResponseAdapter.adapt(res, type: :embedding)
  tracer.on_request_finish(operation: "embeddings", model:, res:, span:)
  res
end
#complete(prompt, params = {}) ⇒ LLM::Response
Provides an interface to the chat completions API
# File 'lib/llm/providers/google.rb', line 75

def complete(prompt, params = {})
  params, stream, tools, role, model = normalize_complete_params(params)
  req = build_complete_request(prompt, params, role, model, stream)
  res, span, tracer = execute(request: req, stream: stream, operation: "chat", model:)
  res = ResponseAdapter.adapt(res, type: :completion)
    .extend(Module.new { define_method(:__tools__) { tools } })
  tracer.on_request_finish(operation: "chat", model:, res:, span:)
  res
end
#audio ⇒ LLM::Google::Audio
Provides an interface to Gemini's audio API
# File 'lib/llm/providers/google.rb', line 89

def audio
  LLM::Google::Audio.new(self)
end
#images ⇒ LLM::Google::Images
Provides an interface to Gemini's image generation API
# File 'lib/llm/providers/google.rb', line 97

def images
  LLM::Google::Images.new(self)
end
#files ⇒ LLM::Google::Files
Provides an interface to Gemini's file management API
# File 'lib/llm/providers/google.rb', line 105

def files
  LLM::Google::Files.new(self)
end
#models ⇒ LLM::Google::Models
Provides an interface to Gemini's models API
# File 'lib/llm/providers/google.rb', line 113

def models
  LLM::Google::Models.new(self)
end
#default_model ⇒ String
Returns the default model for chat completions
# File 'lib/llm/providers/google.rb', line 121

def default_model
  "gemini-2.5-flash"
end
#server_tools ⇒ Hash{Symbol => LLM::ServerTool}
Returns a hash of Gemini's server tools, keyed by name. Some of these tools require configuration through a set of options that are easier to set via the LLM::Provider#server_tool method.
# File 'lib/llm/providers/google.rb', line 132

def server_tools
  {
    google_search: server_tool(:google_search),
    code_execution: server_tool(:code_execution),
    url_context: server_tool(:url_context)
  }
end
#web_search(query:) ⇒ LLM::Response
A convenience method for performing a web search using the Google Search tool.
# File 'lib/llm/providers/google.rb', line 145

def web_search(query:)
  ResponseAdapter.adapt(complete(query, tools: [server_tools[:google_search]]), type: :web_search)
end
#user_role ⇒ Symbol
Returns the provider's user role.
# File 'lib/llm/providers/google.rb', line 152

def user_role
  :user
end
#system_role ⇒ Symbol
Returns the provider's system role.
# File 'lib/llm/providers/google.rb', line 159

def system_role
  :user
end
#developer_role ⇒ Symbol
Returns the provider's developer role.
# File 'lib/llm/providers/google.rb', line 166

def developer_role
  :user
end