Uses of Class
software.amazon.awssdk.services.bedrock.model.GuardrailSensitiveInformationAction
Packages that use GuardrailSensitiveInformationAction

  software.amazon.awssdk.services.bedrock.model
Uses of GuardrailSensitiveInformationAction in software.amazon.awssdk.services.bedrock.model
Methods in software.amazon.awssdk.services.bedrock.model that return GuardrailSensitiveInformationAction

  GuardrailPiiEntity.action()
      The configured guardrail action when PII entity is detected.
  GuardrailPiiEntityConfig.action()
      Configure guardrail action when the PII entity is detected.
  GuardrailRegex.action()
      The action taken when a match to the regular expression is detected.
  GuardrailRegexConfig.action()
      The guardrail action to configure when matching regular expression is detected.
  static GuardrailSensitiveInformationAction.fromValue(String value)
      Use this in place of valueOf to convert the raw string returned by the service into the enum value.
  GuardrailPiiEntity.inputAction()
      The action to take when harmful content is detected in the input.
  GuardrailPiiEntityConfig.inputAction()
      Specifies the action to take when harmful content is detected in the input.
  GuardrailRegex.inputAction()
      The action to take when harmful content is detected in the input.
  GuardrailRegexConfig.inputAction()
      Specifies the action to take when harmful content is detected in the input.
  GuardrailPiiEntity.outputAction()
      The action to take when harmful content is detected in the output.
  GuardrailPiiEntityConfig.outputAction()
      Specifies the action to take when harmful content is detected in the output.
  GuardrailRegex.outputAction()
      The action to take when harmful content is detected in the output.
  GuardrailRegexConfig.outputAction()
      Specifies the action to take when harmful content is detected in the output.
  static GuardrailSensitiveInformationAction.valueOf(String name)
      Returns the enum constant of this type with the specified name.
  static GuardrailSensitiveInformationAction.values()
      Returns an array containing the constants of this enum type, in the order they are declared.

Methods in software.amazon.awssdk.services.bedrock.model that return types with arguments of type GuardrailSensitiveInformationAction

  static Set<GuardrailSensitiveInformationAction> GuardrailSensitiveInformationAction.knownValues()

Methods in software.amazon.awssdk.services.bedrock.model with parameters of type GuardrailSensitiveInformationAction

  GuardrailPiiEntity.Builder.action(GuardrailSensitiveInformationAction action)
      The configured guardrail action when PII entity is detected.
  GuardrailPiiEntityConfig.Builder.action(GuardrailSensitiveInformationAction action)
      Configure guardrail action when the PII entity is detected.
  GuardrailRegex.Builder.action(GuardrailSensitiveInformationAction action)
      The action taken when a match to the regular expression is detected.
  GuardrailRegexConfig.Builder.action(GuardrailSensitiveInformationAction action)
      The guardrail action to configure when matching regular expression is detected.
  GuardrailPiiEntity.Builder.inputAction(GuardrailSensitiveInformationAction inputAction)
      The action to take when harmful content is detected in the input.
  GuardrailPiiEntityConfig.Builder.inputAction(GuardrailSensitiveInformationAction inputAction)
      Specifies the action to take when harmful content is detected in the input.
  GuardrailRegex.Builder.inputAction(GuardrailSensitiveInformationAction inputAction)
      The action to take when harmful content is detected in the input.
  GuardrailRegexConfig.Builder.inputAction(GuardrailSensitiveInformationAction inputAction)
      Specifies the action to take when harmful content is detected in the input.
  GuardrailPiiEntity.Builder.outputAction(GuardrailSensitiveInformationAction outputAction)
      The action to take when harmful content is detected in the output.
  GuardrailPiiEntityConfig.Builder.outputAction(GuardrailSensitiveInformationAction outputAction)
      Specifies the action to take when harmful content is detected in the output.
  GuardrailRegex.Builder.outputAction(GuardrailSensitiveInformationAction outputAction)
      The action to take when harmful content is detected in the output.
  GuardrailRegexConfig.Builder.outputAction(GuardrailSensitiveInformationAction outputAction)
      Specifies the action to take when harmful content is detected in the output.
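The methods above can be sketched in a short usage example. This is a minimal sketch, not a definitive implementation: it assumes the AWS SDK for Java v2 `bedrock` module is on the classpath, and the PII entity type string, regex name, and pattern are illustrative placeholders, not values taken from this page.

```java
import java.util.Set;

import software.amazon.awssdk.services.bedrock.model.GuardrailPiiEntityConfig;
import software.amazon.awssdk.services.bedrock.model.GuardrailRegexConfig;
import software.amazon.awssdk.services.bedrock.model.GuardrailSensitiveInformationAction;

public class GuardrailActionExample {
    public static void main(String[] args) {
        // fromValue is preferred over valueOf for strings returned by the
        // service: an unrecognized string maps to UNKNOWN_TO_SDK_VERSION
        // instead of throwing IllegalArgumentException.
        GuardrailSensitiveInformationAction action =
                GuardrailSensitiveInformationAction.fromValue("ANONYMIZE");

        // knownValues() returns the constants this SDK version knows about,
        // excluding UNKNOWN_TO_SDK_VERSION.
        Set<GuardrailSensitiveInformationAction> known =
                GuardrailSensitiveInformationAction.knownValues();
        System.out.println("Known actions: " + known);

        // Configure an action for a PII entity (entity type is illustrative).
        GuardrailPiiEntityConfig piiConfig = GuardrailPiiEntityConfig.builder()
                .type("US_SOCIAL_SECURITY_NUMBER")
                .action(action)
                .build();

        // Configure an action for a custom regex (name/pattern illustrative).
        GuardrailRegexConfig regexConfig = GuardrailRegexConfig.builder()
                .name("employee-id")
                .pattern("EMP-\\d{6}")
                .action(GuardrailSensitiveInformationAction.BLOCK)
                .build();

        System.out.println(piiConfig.action());   // ANONYMIZE
        System.out.println(regexConfig.action()); // BLOCK
    }
}
```

The same `action(...)` builder methods accept either the enum constant or its raw string form; using the enum keeps the configuration checked at compile time.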