static GuardrailContentFilter.Builder
    GuardrailContentFilter.builder()

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.inputAction(String inputAction)
        The action to take when harmful content is detected in the input.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.inputAction(GuardrailContentFilterAction inputAction)
        The action to take when harmful content is detected in the input.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.inputEnabled(Boolean inputEnabled)
        Indicates whether guardrail evaluation is enabled on the input.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.inputModalities(Collection<GuardrailModality> inputModalities)
        The input modalities selected for the guardrail content filter.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.inputModalities(GuardrailModality... inputModalities)
        The input modalities selected for the guardrail content filter.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.inputModalitiesWithStrings(String... inputModalities)
        The input modalities selected for the guardrail content filter.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.inputModalitiesWithStrings(Collection<String> inputModalities)
        The input modalities selected for the guardrail content filter.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.inputStrength(String inputStrength)
        The strength of the content filter to apply to prompts.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.inputStrength(GuardrailFilterStrength inputStrength)
        The strength of the content filter to apply to prompts.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.outputAction(String outputAction)
        The action to take when harmful content is detected in the output.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.outputAction(GuardrailContentFilterAction outputAction)
        The action to take when harmful content is detected in the output.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.outputEnabled(Boolean outputEnabled)
        Indicates whether guardrail evaluation is enabled on the output.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.outputModalities(Collection<GuardrailModality> outputModalities)
        The output modalities selected for the guardrail content filter.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.outputModalities(GuardrailModality... outputModalities)
        The output modalities selected for the guardrail content filter.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.outputModalitiesWithStrings(String... outputModalities)
        The output modalities selected for the guardrail content filter.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.outputModalitiesWithStrings(Collection<String> outputModalities)
        The output modalities selected for the guardrail content filter.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.outputStrength(String outputStrength)
        The strength of the content filter to apply to model responses.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.outputStrength(GuardrailFilterStrength outputStrength)
        The strength of the content filter to apply to model responses.

GuardrailContentFilter.Builder
    GuardrailContentFilter.toBuilder()

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.type(String type)
        The harmful category that the content filter is applied to.

GuardrailContentFilter.Builder
    GuardrailContentFilter.Builder.type(GuardrailContentFilterType type)
        The harmful category that the content filter is applied to.
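The builder methods above can be chained fluently into a single filter configuration. A minimal sketch, assuming the AWS SDK for Java v2 Bedrock module is on the classpath and that enum constants such as GuardrailContentFilterType.HATE, GuardrailFilterStrength.HIGH, GuardrailContentFilterAction.BLOCK, and GuardrailModality.TEXT are available as published; the accessor names on the built object are assumptions following the SDK's usual property-named getter convention:

```java
import software.amazon.awssdk.services.bedrock.model.GuardrailContentFilter;
import software.amazon.awssdk.services.bedrock.model.GuardrailContentFilterAction;
import software.amazon.awssdk.services.bedrock.model.GuardrailContentFilterType;
import software.amazon.awssdk.services.bedrock.model.GuardrailFilterStrength;
import software.amazon.awssdk.services.bedrock.model.GuardrailModality;

public class GuardrailContentFilterExample {
    public static void main(String[] args) {
        // Configure a HATE-category filter: block matching prompts at HIGH
        // strength and model responses at MEDIUM strength, text modality only.
        GuardrailContentFilter filter = GuardrailContentFilter.builder()
                .type(GuardrailContentFilterType.HATE)
                .inputStrength(GuardrailFilterStrength.HIGH)
                .outputStrength(GuardrailFilterStrength.MEDIUM)
                .inputAction(GuardrailContentFilterAction.BLOCK)
                .outputAction(GuardrailContentFilterAction.BLOCK)
                .inputEnabled(true)
                .outputEnabled(true)
                .inputModalities(GuardrailModality.TEXT)
                .outputModalities(GuardrailModality.TEXT)
                .build();

        // toBuilder() starts a new builder from the current property values,
        // so individual fields can be overridden without restating the rest.
        GuardrailContentFilter relaxed = filter.toBuilder()
                .inputStrength(GuardrailFilterStrength.LOW)
                .build();

        System.out.println(relaxed.typeAsString());
    }
}
```

Each String overload (e.g. inputStrength(String)) accepts the raw enum value and is useful when the value arrives from configuration; the typed overloads are preferable in hand-written code because invalid values fail at compile time.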