Class AmazonTranscribeProcessorConfiguration

    • Method Detail

      • languageCode

        public final CallAnalyticsLanguageCode languageCode()

        The language code that represents the language spoken in your audio.

        If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

        For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

        If the service returns an enum value that is not available in the current SDK version, languageCode will return CallAnalyticsLanguageCode.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from languageCodeAsString().

        Returns:
        The language code that represents the language spoken in your audio.

        If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

        For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

        See Also:
        CallAnalyticsLanguageCode
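        The fallback behavior described above can be sketched as follows. This is a minimal, hypothetical model (class and enum values are stand-ins, not the SDK implementation): the typed accessor maps the raw service string onto the enum and falls back to UNKNOWN_TO_SDK_VERSION for values the current SDK version doesn't know, while the string accessor always preserves the raw value.

        ```java
        // Sketch of the UNKNOWN_TO_SDK_VERSION pattern (hypothetical names,
        // not the SDK class). The typed accessor degrades gracefully for enum
        // values newer than this SDK version; the raw string is never lost.
        public class LanguageCodeSketch {
            enum CallAnalyticsLanguageCode {
                EN_US, ES_US, UNKNOWN_TO_SDK_VERSION;

                static CallAnalyticsLanguageCode fromValue(String raw) {
                    for (CallAnalyticsLanguageCode c : values()) {
                        // Enum names use '_' where service values use '-'
                        if (c.name().replace('_', '-').equalsIgnoreCase(raw)) {
                            return c;
                        }
                    }
                    return UNKNOWN_TO_SDK_VERSION; // newer service value, older SDK
                }
            }

            private final String rawLanguageCode; // as returned by the service

            public LanguageCodeSketch(String rawLanguageCode) {
                this.rawLanguageCode = rawLanguageCode;
            }

            public CallAnalyticsLanguageCode languageCode() {
                return CallAnalyticsLanguageCode.fromValue(rawLanguageCode);
            }

            public String languageCodeAsString() {
                return rawLanguageCode;
            }
        }
        ```

        When a comparison against a known enum constant returns UNKNOWN_TO_SDK_VERSION, fall back to languageCodeAsString() rather than failing.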
      • languageCodeAsString

        public final String languageCodeAsString()

        The language code that represents the language spoken in your audio.

        If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

        For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

        If the service returns an enum value that is not available in the current SDK version, languageCode will return CallAnalyticsLanguageCode.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from languageCodeAsString().

        Returns:
        The language code that represents the language spoken in your audio.

        If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

        For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

        See Also:
        CallAnalyticsLanguageCode
      • vocabularyName

        public final String vocabularyName()

        The name of the custom vocabulary that you specified in your Call Analytics request.

        Length Constraints: Minimum length of 1. Maximum length of 200.

        Returns:
        The name of the custom vocabulary that you specified in your Call Analytics request.

        Length Constraints: Minimum length of 1. Maximum length of 200.

      • vocabularyFilterName

        public final String vocabularyFilterName()

        The name of the custom vocabulary filter that you specified in your Call Analytics request.

        Length Constraints: Minimum length of 1. Maximum length of 200.

        Returns:
        The name of the custom vocabulary filter that you specified in your Call Analytics request.

        Length Constraints: Minimum length of 1. Maximum length of 200.

      • vocabularyFilterMethodAsString

        public final String vocabularyFilterMethodAsString()

        The vocabulary filtering method used in your Call Analytics transcription.

        If the service returns an enum value that is not available in the current SDK version, vocabularyFilterMethod will return VocabularyFilterMethod.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from vocabularyFilterMethodAsString().

        Returns:
        The vocabulary filtering method used in your Call Analytics transcription.
        See Also:
        VocabularyFilterMethod
      • showSpeakerLabel

        public final Boolean showSpeakerLabel()

        Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

        For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide.

        Returns:
        Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

        For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide.

      • enablePartialResultsStabilization

        public final Boolean enablePartialResultsStabilization()

        Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        Returns:
        Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

      • partialResultsStability

        public final PartialResultsStability partialResultsStability()

        The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

        Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        If the service returns an enum value that is not available in the current SDK version, partialResultsStability will return PartialResultsStability.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from partialResultsStabilityAsString().

        Returns:
        The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

        Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        See Also:
        PartialResultsStability
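        The relationship between the two settings can be sketched as follows. This is a hypothetical helper (names are illustrative, not SDK code): the stability level only takes effect when EnablePartialResultsStabilization is on, and the levels trade accuracy for latency as described above.

        ```java
        // Hedged sketch (hypothetical names, not the SDK builder): partial-result
        // stabilization has two knobs, an on/off flag and a stability level.
        // The level is meaningless when the flag is off; LOW favors accuracy,
        // HIGH favors latency.
        public class StabilitySketch {
            enum PartialResultsStability { LOW, MEDIUM, HIGH }

            static String describe(boolean enabled, PartialResultsStability level) {
                if (!enabled) {
                    return "stabilization disabled; level ignored";
                }
                switch (level) {
                    case LOW:  return "highest accuracy, slowest to stabilize";
                    case HIGH: return "fastest to stabilize, slightly lower accuracy";
                    default:   return "balanced accuracy and latency";
                }
            }
        }
        ```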
      • partialResultsStabilityAsString

        public final String partialResultsStabilityAsString()

        The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

        Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        If the service returns an enum value that is not available in the current SDK version, partialResultsStability will return PartialResultsStability.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from partialResultsStabilityAsString().

        Returns:
        The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

        Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        See Also:
        PartialResultsStability
      • contentIdentificationType

        public final ContentType contentIdentificationType()

        Labels all personally identifiable information (PII) identified in your transcript.

        Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

        You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        If the service returns an enum value that is not available in the current SDK version, contentIdentificationType will return ContentType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from contentIdentificationTypeAsString().

        Returns:
        Labels all personally identifiable information (PII) identified in your transcript.

        Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

        You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        See Also:
        ContentType
      • contentIdentificationTypeAsString

        public final String contentIdentificationTypeAsString()

        Labels all personally identifiable information (PII) identified in your transcript.

        Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

        You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        If the service returns an enum value that is not available in the current SDK version, contentIdentificationType will return ContentType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from contentIdentificationTypeAsString().

        Returns:
        Labels all personally identifiable information (PII) identified in your transcript.

        Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

        You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        See Also:
        ContentType
      • contentRedactionType

        public final ContentType contentRedactionType()

        Redacts all personally identifiable information (PII) identified in your transcript.

        Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

        You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        If the service returns an enum value that is not available in the current SDK version, contentRedactionType will return ContentType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from contentRedactionTypeAsString().

        Returns:
        Redacts all personally identifiable information (PII) identified in your transcript.

        Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

        You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        See Also:
        ContentType
      • contentRedactionTypeAsString

        public final String contentRedactionTypeAsString()

        Redacts all personally identifiable information (PII) identified in your transcript.

        Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

        You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        If the service returns an enum value that is not available in the current SDK version, contentRedactionType will return ContentType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from contentRedactionTypeAsString().

        Returns:
        Redacts all personally identifiable information (PII) identified in your transcript.

        Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

        You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

        See Also:
        ContentType
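        The mutual-exclusion constraint above can be enforced client-side before the request is sent. This is a hypothetical validation helper (not part of the SDK); the service itself rejects a request with both fields set via a BadRequestException.

        ```java
        // Hedged sketch (hypothetical helper, not SDK code) of the constraint:
        // ContentIdentificationType and ContentRedactionType are mutually
        // exclusive. Checking locally surfaces the conflict before the
        // service returns a BadRequestException.
        public class PiiModeSketch {
            static void validate(String identificationType, String redactionType) {
                if (identificationType != null && redactionType != null) {
                    throw new IllegalArgumentException(
                        "Set ContentIdentificationType or ContentRedactionType, not both");
                }
            }
        }
        ```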
      • piiEntityTypes

        public final String piiEntityTypes()

        The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

        To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

        Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

        If you leave this parameter empty, the default behavior is equivalent to ALL.

        Returns:
        The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

        To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

        Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

        If you leave this parameter empty, the default behavior is equivalent to ALL.
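        The comma-separated format can be sketched with a small parser. This helper is hypothetical (not part of the SDK); it validates the string against the values listed above and, per the documented default, treats an empty value as ALL.

        ```java
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.Set;
        import java.util.TreeSet;

        // Hedged sketch: PiiEntityTypes is a single comma-separated string.
        // This hypothetical helper splits and validates it; an empty value
        // behaves like ALL, matching the documented default.
        public class PiiEntityTypesSketch {
            static final Set<String> ALLOWED = new TreeSet<>(Arrays.asList(
                "ADDRESS", "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_CVV",
                "CREDIT_DEBIT_EXPIRY", "CREDIT_DEBIT_NUMBER", "EMAIL", "NAME",
                "PHONE", "PIN", "SSN", "ALL"));

            static List<String> parse(String piiEntityTypes) {
                if (piiEntityTypes == null || piiEntityTypes.trim().isEmpty()) {
                    return Arrays.asList("ALL"); // empty defaults to ALL
                }
                List<String> types = new ArrayList<>();
                for (String t : piiEntityTypes.split(",")) {
                    String v = t.trim().toUpperCase();
                    if (!ALLOWED.contains(v)) {
                        throw new IllegalArgumentException("Unsupported PII type: " + v);
                    }
                    types.add(v);
                }
                return types;
            }
        }
        ```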

      • languageModelName

        public final String languageModelName()

        The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

        The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

        For more information, see Custom language models in the Amazon Transcribe Developer Guide.

        Returns:
        The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

        The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

        For more information, see Custom language models in the Amazon Transcribe Developer Guide.

      • filterPartialResults

        public final Boolean filterPartialResults()

        If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

        Returns:
        If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.
      • identifyLanguage

        public final Boolean identifyLanguage()

        Turns language identification on or off.

        Returns:
        Turns language identification on or off.
      • languageOptions

        public final String languageOptions()

        The language options for the transcription, such as automatic language detection.

        Returns:
        The language options for the transcription, such as automatic language detection.
      • vocabularyNames

        public final String vocabularyNames()

        The names of the custom vocabulary or vocabularies used during transcription.

        Returns:
        The names of the custom vocabulary or vocabularies used during transcription.
      • vocabularyFilterNames

        public final String vocabularyFilterNames()

        The names of the custom vocabulary filter or filters used during transcription.

        Returns:
        The names of the custom vocabulary filter or filters used during transcription.
      • hashCode

        public final int hashCode()
        Overrides:
        hashCode in class Object
      • equals

        public final boolean equals​(Object obj)
        Overrides:
        equals in class Object
      • toString

        public final String toString()
        Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be redacted from this string using a placeholder value.
        Overrides:
        toString in class Object
      • getValueForField

        public final <T> Optional<T> getValueForField​(String fieldName,
                                                      Class<T> clazz)
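        The getValueForField contract can be sketched as follows. This is a hypothetical, map-backed model (not SDK internals): the method looks a field up by name and returns a non-empty Optional only when the field exists and is assignable to the requested class.

        ```java
        import java.util.HashMap;
        import java.util.Map;
        import java.util.Optional;

        // Hedged sketch (hypothetical, not SDK internals) of the
        // getValueForField contract seen across SDK model classes: resolve a
        // field by name, returning Optional.empty() when it is absent or not
        // of the requested type.
        public class FieldLookupSketch {
            private final Map<String, Object> fields = new HashMap<>();

            public FieldLookupSketch set(String name, Object value) {
                fields.put(name, value);
                return this;
            }

            public <T> Optional<T> getValueForField(String fieldName, Class<T> clazz) {
                Object value = fields.get(fieldName);
                if (clazz.isInstance(value)) {
                    return Optional.of(clazz.cast(value));
                }
                return Optional.empty(); // absent or wrong type
            }
        }
        ```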