Class: LLM::LlamaCpp

Inherits:
OpenAI
Defined in:
lib/llm/providers/llamacpp.rb

Overview

The LlamaCpp class implements a provider for llama.cpp through the OpenAI-compatible API provided by the llama-server binary.
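
A minimal usage sketch, assuming a llama-server instance is listening on localhost:8080 and that the response object exposes its choices the same way the OpenAI provider's does:

require "llm"

llm = LLM::LlamaCpp.new(host: "localhost", port: 8080, ssl: false)
res = llm.complete("Hello, world") # #complete is inherited from OpenAI
print res.choices[0].content, "\n"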

Constant Summary

Constants inherited from OpenAI

OpenAI::HOST

Instance Method Summary

Methods inherited from OpenAI

#assistant_role, #complete, #embed, #models, #responses

Methods inherited from Provider

#assistant_role, #chat, #chat!, #complete, #embed, #inspect, #models, #respond, #respond!, #responses, #schema, #with

Constructor Details

#initialize(host: "localhost", port: 8080, ssl: false, **) ⇒ LLM::LlamaCpp

Parameters:

  • key (String, nil)

    The secret key for authentication

  • host (String) (defaults to: "localhost")

    The host address of the LLM provider

  • port (Integer) (defaults to: 8080)

    The port number

  • timeout (Integer)

    The number of seconds to wait for a response

  • ssl (Boolean) (defaults to: false)

    Whether to use SSL for the connection



# File 'lib/llm/providers/llamacpp.rb', line 13

def initialize(host: "localhost", port: 8080, ssl: false, **)
  super
end
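
Because the parameter list ends in **, extra options such as key: and timeout: (documented above) pass through to the OpenAI superclass. A hedged example; the host below is hypothetical:

llm = LLM::LlamaCpp.new(
  host: "llama.internal.example", # hypothetical host running llama-server
  port: 8443,
  ssl: true,
  key: nil,     # llama-server typically requires no API key
  timeout: 120  # seconds to wait for a response
)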

Instance Method Details

#files ⇒ Object

Raises:

  • (NotImplementedError)


# File 'lib/llm/providers/llamacpp.rb', line 19

def files
  raise NotImplementedError
end

#images ⇒ Object

Raises:

  • (NotImplementedError)


# File 'lib/llm/providers/llamacpp.rb', line 25

def images
  raise NotImplementedError
end

#audio ⇒ Object

Raises:

  • (NotImplementedError)


# File 'lib/llm/providers/llamacpp.rb', line 31

def audio
  raise NotImplementedError
end
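
llama-server does not provide files, images, or audio endpoints, so each of the three methods above raises NotImplementedError. Code that targets multiple providers can guard for this; a brief sketch:

begin
  llm.files
rescue NotImplementedError
  warn "the llama.cpp provider does not implement the files API"
end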

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)


# File 'lib/llm/providers/llamacpp.rb', line 39

def default_model
  "llama3.2"
end
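
The default model applies when a request does not name one explicitly. A hedged sketch, assuming #complete accepts a model: option as the OpenAI provider's does (the model name below is hypothetical):

llm.default_model                   # => "llama3.2"
llm.complete("Hi", model: "qwen2")  # hypothetical name; overrides the default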