Class: LLM::LlamaCpp
Defined in: lib/llm/providers/llamacpp.rb
Overview
The LlamaCpp class implements a provider for llama.cpp through the OpenAI-compatible API exposed by the llama-server binary.
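A minimal sketch of talking to a local server, assuming llama-server is already listening on localhost:8080, that an API key can be passed through the ** splat (key: nil, since llama-server usually requires none), and that the inherited #complete accepts a plain string prompt as on the OpenAI provider:

require "llm"

# key: nil is an assumption forwarded through the ** splat;
# llama-server usually requires no API key.
llm = LLM::LlamaCpp.new(key: nil)
res = llm.complete("Hello!") # inherited from the OpenAI provider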
Constant Summary
Constants inherited from OpenAI
Instance Method Summary
- #initialize(host: "localhost", port: 8080, ssl: false) ⇒ LLM::LlamaCpp (constructor)
- #files ⇒ Object
- #images ⇒ Object
- #audio ⇒ Object
- #default_model ⇒ String: Returns the default model for chat completions.
Methods inherited from OpenAI
#assistant_role, #complete, #embed, #models, #responses
Methods inherited from Provider
#assistant_role, #chat, #chat!, #complete, #embed, #inspect, #models, #respond, #respond!, #responses, #schema, #with
Constructor Details
#initialize(host: "localhost", port: 8080, ssl: false) ⇒ LLM::LlamaCpp
# File 'lib/llm/providers/llamacpp.rb', line 13

def initialize(host: "localhost", port: 8080, ssl: false, **)
  super
end
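Every keyword argument has a default, so only non-default deployments need arguments; any extra options fall through the ** splat to the OpenAI superclass. A hedged example (hostname and port below are placeholders):

require "llm"

# Remote llama-server behind TLS; host and port are hypothetical,
# and key: nil is an assumption forwarded to the superclass.
llm = LLM::LlamaCpp.new(host: "llama.internal.example", port: 8443, ssl: true, key: nil)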
Instance Method Details
#files ⇒ Object
# File 'lib/llm/providers/llamacpp.rb', line 19

def files
  raise NotImplementedError
end
#images ⇒ Object
# File 'lib/llm/providers/llamacpp.rb', line 25

def images
  raise NotImplementedError
end
#audio ⇒ Object
# File 'lib/llm/providers/llamacpp.rb', line 31

def audio
  raise NotImplementedError
end
#default_model ⇒ String
Returns the default model for chat completions.

# File 'lib/llm/providers/llamacpp.rb', line 39

def default_model
  "llama3.2"
end
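Since a llama-server instance serves whichever model it was launched with, "llama3.2" is only the name placed in requests when none is given. A sketch of overriding it per call, assuming the inherited #complete accepts a :model option as on the OpenAI provider (the model name below is a placeholder):

require "llm"

llm = LLM::LlamaCpp.new(key: nil) # key: nil is an assumption; see #initialize
llm.default_model                 #=> "llama3.2"
res = llm.complete("Hello!", model: "my-model") # "my-model" is hypothetical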