Class: LLM::OpenAI::Moderations

Inherits:
  Object
Defined in:
lib/llm/providers/openai/moderations.rb

Overview

The LLM::OpenAI::Moderations class provides a moderations object for interacting with OpenAI’s moderations API. The moderations API can categorize content into different categories, such as hate speech, self-harm, and sexual content. It can also provide a confidence score for each category.

Examples:

Moderate a string of text

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
mod = llm.moderations.create input: "I hate you"
print "categories: #{mod.categories}", "\n"
print "scores: #{mod.scores}", "\n"

Moderate an image URL

#!/usr/bin/env ruby
require "llm"
require "uri"

llm = LLM.openai(key: ENV["KEY"])
mod = llm.moderations.create input: URI.parse("https://example.com/image.png")
print "categories: #{mod.categories}", "\n"
print "scores: #{mod.scores}", "\n"


Instance Method Summary

  • #initialize(provider) ⇒ LLM::OpenAI::Moderations  (constructor)

    Returns a new Moderations object.

  • #create(input:, model: "omni-moderation-latest", **params) ⇒ LLM::Response::ModerationList::Moderation

    Create a moderation.

Constructor Details

#initialize(provider) ⇒ LLM::OpenAI::Moderations

Returns a new Moderations object.

Parameters:

  • provider (LLM::Provider)

    The provider instance

# File 'lib/llm/providers/openai/moderations.rb', line 36

def initialize(provider)
  @provider = provider
end
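
In practice the constructor is rarely called directly; the moderations object is obtained from a provider instance, as in the examples above. A minimal sketch, assuming llm.moderations is equivalent to constructing the object with the provider passed to #initialize:

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])

# Assumed to be equivalent to LLM::OpenAI::Moderations.new(llm),
# i.e. the provider is the argument given to #initialize.
mods = llm.moderations
mods.create input: "I hate you"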

Instance Method Details

#create(input:, model: "omni-moderation-latest", **params) ⇒ LLM::Response::ModerationList::Moderation

Note:

Although OpenAI's documentation mentions an array as a valid input, and says the endpoint can return one or more moderations, in practice the API returns only a single moderation object. We recommend passing a single String or URI as input. Keep in mind that while llm.rb currently returns one Moderation object, it has code in place to return multiple objects in the future (in case the OpenAI documentation ever matches the actual API behavior).

Create a moderation.

Parameters:

  • input (String, URI, Array<String, URI>)

    The input to moderate
  • model (String, LLM::Model) (defaults to: "omni-moderation-latest")

    The model to use

Returns:

  • (LLM::Response::ModerationList::Moderation)




# File 'lib/llm/providers/openai/moderations.rb', line 53

def create(input:, model: "omni-moderation-latest", **params)
  req = Net::HTTP::Post.new("/v1/moderations", headers)
  input = Format::ModerationFormat.new(input).format
  req.body = JSON.dump({input:, model:}.merge!(params))
  res = execute(request: req)
  LLM::Response::ModerationList.new(res).extend(response_parser).first
end
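
Because the extra keyword arguments are merged into the request body, additional request parameters can be forwarded through **params. A minimal sketch; the explicit model below is the documented default, and any other keys are assumed to be sent to the API verbatim:

#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])

# Pass the model explicitly; any additional keys given here would be
# merged into the JSON request body as-is (an assumption based on the
# method body above).
mod = llm.moderations.create(
  input: "I hate you",
  model: "omni-moderation-latest"
)
print "categories: #{mod.categories}", "\n"
print "scores: #{mod.scores}", "\n"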