Class: LLM::OpenAI::Moderations
- Inherits: Object
- Defined in: lib/llm/providers/openai/moderations.rb
Overview
The LLM::OpenAI::Moderations class provides a moderations object for interacting with OpenAI’s moderations API. The moderations API can categorize content into different categories, such as hate speech, self-harm, and sexual content. It can also provide a confidence score for each category.
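For orientation, here is a minimal usage sketch. It assumes the provider is created with LLM.openai, that the provider exposes this class through a #moderations accessor, and that the returned Moderation object responds to #categories and #scores; the API key and input text are placeholders.

require "llm"

# Minimal sketch (assumptions noted above; not an excerpt from this page)
llm = LLM.openai(key: ENV["OPENAI_API_KEY"])
moderation = llm.moderations.create(input: "I hate you")
p moderation.categories # assumed accessor, e.g. ["hate"]
p moderation.scores     # assumed accessor, e.g. {"hate" => 0.77}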
Instance Method Summary
- #initialize(provider) ⇒ LLM::OpenAI::Moderations (constructor)
  Returns a new Moderations object.
- #create(input:, model: "omni-moderation-latest", **params) ⇒ LLM::Response::ModerationList::Moderation
  Create a moderation. Although OpenAI mentions an array as a valid input, and that it can return one or more moderations, in practice the API only returns one moderation object.
Constructor Details
#initialize(provider) ⇒ LLM::OpenAI::Moderations
Returns a new Moderations object.

# File 'lib/llm/providers/openai/moderations.rb', line 36

def initialize(provider)
  @provider = provider
end
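Since the constructor only stores the provider, a Moderations object can also be built directly when a provider instance is already at hand. A small sketch, assuming llm is an LLM::OpenAI provider instance:

# Sketch only: assumes `llm` is an LLM::OpenAI provider instance
mods = LLM::OpenAI::Moderations.new(llm)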
Instance Method Details
#create(input:, model: "omni-moderation-latest", **params) ⇒ LLM::Response::ModerationList::Moderation
Create a moderation. Although OpenAI mentions an array as a valid input, and says it can return one or more moderations, in practice the API only returns one moderation object. We recommend using a single input string or URI, and keeping in mind that llm.rb returns a single Moderation object, although it has code in place to return multiple objects in the future (in case the OpenAI documentation ever matches the actual API).
# File 'lib/llm/providers/openai/moderations.rb', line 53

def create(input:, model: "omni-moderation-latest", **params)
  req = Net::HTTP::Post.new("/v1/moderations", headers)
  input = Format::ModerationFormat.new(input).format
  req.body = JSON.dump({input:, model:}.merge!(params))
  res = execute(request: req)
  LLM::Response::ModerationList.new(res).extend(response_parser).first
end
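To tie the pieces together, the sketch below passes a single URI (per the recommendation above) and inspects the result. The #categories and #scores readers on the returned Moderation are assumptions and are not documented on this page.

# Sketch only: `mods` is an LLM::OpenAI::Moderations instance (see above)
moderation = mods.create(
  input: URI("https://example.com/image.png"),
  model: "omni-moderation-latest"
)
p moderation.categories # assumed accessor
p moderation.scores     # assumed accessor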