public static final class RecognitionConfig.Builder extends com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder> implements RecognitionConfigOrBuilder
Provides information to the recognizer that specifies how to process the request.

Protobuf type: google.cloud.speech.v1p1beta1.RecognitionConfig

| Modifier and Type | Method and Description |
|---|---|
RecognitionConfig.Builder |
addAllAlternativeLanguageCodes(Iterable<String> values)
A list of up to 3 additional
[BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
listing possible alternative languages of the supplied audio.
|
RecognitionConfig.Builder |
addAllSpeechContexts(Iterable<? extends SpeechContext> values)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
RecognitionConfig.Builder |
addAlternativeLanguageCodes(String value)
A list of up to 3 additional
[BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
listing possible alternative languages of the supplied audio.
|
RecognitionConfig.Builder |
addAlternativeLanguageCodesBytes(com.google.protobuf.ByteString value)
A list of up to 3 additional
[BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
listing possible alternative languages of the supplied audio.
|
RecognitionConfig.Builder |
addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field,
Object value) |
RecognitionConfig.Builder |
addSpeechContexts(int index,
SpeechContext.Builder builderForValue)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
RecognitionConfig.Builder |
addSpeechContexts(int index,
SpeechContext value)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
RecognitionConfig.Builder |
addSpeechContexts(SpeechContext.Builder builderForValue)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
RecognitionConfig.Builder |
addSpeechContexts(SpeechContext value)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
SpeechContext.Builder |
addSpeechContextsBuilder()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
SpeechContext.Builder |
addSpeechContextsBuilder(int index)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
RecognitionConfig |
build() |
RecognitionConfig |
buildPartial() |
RecognitionConfig.Builder |
clear() |
RecognitionConfig.Builder |
clearAdaptation()
Speech adaptation configuration improves the accuracy of speech
recognition.
|
RecognitionConfig.Builder |
clearAlternativeLanguageCodes()
A list of up to 3 additional
[BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
listing possible alternative languages of the supplied audio.
|
RecognitionConfig.Builder |
clearAudioChannelCount()
The number of channels in the input audio data.
|
RecognitionConfig.Builder |
clearDiarizationConfig()
Config to enable speaker diarization and set additional
parameters to make diarization better suited for your application.
|
RecognitionConfig.Builder |
clearDiarizationSpeakerCount()
Deprecated.
google.cloud.speech.v1p1beta1.RecognitionConfig.diarization_speaker_count is
deprecated. See google/cloud/speech/v1p1beta1/cloud_speech.proto;l=406
|
RecognitionConfig.Builder |
clearEnableAutomaticPunctuation()
If 'true', adds punctuation to recognition result hypotheses.
|
RecognitionConfig.Builder |
clearEnableSeparateRecognitionPerChannel()
This needs to be set to `true` explicitly and `audio_channel_count` > 1
to get each channel recognized separately.
|
RecognitionConfig.Builder |
clearEnableSpeakerDiarization()
Deprecated.
google.cloud.speech.v1p1beta1.RecognitionConfig.enable_speaker_diarization is
deprecated. See google/cloud/speech/v1p1beta1/cloud_speech.proto;l=401
|
RecognitionConfig.Builder |
clearEnableSpokenEmojis()
The spoken emoji behavior for the call
If not set, uses default behavior based on model of choice
If 'true', adds spoken emoji formatting for the request.
|
RecognitionConfig.Builder |
clearEnableSpokenPunctuation()
The spoken punctuation behavior for the call
If not set, uses default behavior based on model of choice
e.g.
|
RecognitionConfig.Builder |
clearEnableWordConfidence()
If `true`, the top result includes a list of words and the
confidence for those words.
|
RecognitionConfig.Builder |
clearEnableWordTimeOffsets()
If `true`, the top result includes a list of words and
the start and end time offsets (timestamps) for those words.
|
RecognitionConfig.Builder |
clearEncoding()
Encoding of audio data sent in all `RecognitionAudio` messages.
|
RecognitionConfig.Builder |
clearField(com.google.protobuf.Descriptors.FieldDescriptor field) |
RecognitionConfig.Builder |
clearLanguageCode()
Required.
|
RecognitionConfig.Builder |
clearMaxAlternatives()
Maximum number of recognition hypotheses to be returned.
|
RecognitionConfig.Builder |
clearMetadata()
Metadata regarding this request.
|
RecognitionConfig.Builder |
clearModel()
Which model to select for the given request.
|
RecognitionConfig.Builder |
clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof) |
RecognitionConfig.Builder |
clearProfanityFilter()
If set to `true`, the server will attempt to filter out
profanities, replacing all but the initial character in each filtered word
with asterisks, e.g.
|
RecognitionConfig.Builder |
clearSampleRateHertz()
Sample rate in Hertz of the audio data sent in all
`RecognitionAudio` messages.
|
RecognitionConfig.Builder |
clearSpeechContexts()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
RecognitionConfig.Builder |
clearTranscriptNormalization()
Use transcription normalization to automatically replace parts of the
transcript with phrases of your choosing.
|
RecognitionConfig.Builder |
clearUseEnhanced()
Set to true to use an enhanced model for speech recognition.
|
RecognitionConfig.Builder |
clone() |
SpeechAdaptation |
getAdaptation()
Speech adaptation configuration improves the accuracy of speech
recognition.
|
SpeechAdaptation.Builder |
getAdaptationBuilder()
Speech adaptation configuration improves the accuracy of speech
recognition.
|
SpeechAdaptationOrBuilder |
getAdaptationOrBuilder()
Speech adaptation configuration improves the accuracy of speech
recognition.
|
String |
getAlternativeLanguageCodes(int index)
A list of up to 3 additional
[BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
listing possible alternative languages of the supplied audio.
|
com.google.protobuf.ByteString |
getAlternativeLanguageCodesBytes(int index)
A list of up to 3 additional
[BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
listing possible alternative languages of the supplied audio.
|
int |
getAlternativeLanguageCodesCount()
A list of up to 3 additional
[BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
listing possible alternative languages of the supplied audio.
|
com.google.protobuf.ProtocolStringList |
getAlternativeLanguageCodesList()
A list of up to 3 additional
[BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
listing possible alternative languages of the supplied audio.
|
int |
getAudioChannelCount()
The number of channels in the input audio data.
|
RecognitionConfig |
getDefaultInstanceForType() |
static com.google.protobuf.Descriptors.Descriptor |
getDescriptor() |
com.google.protobuf.Descriptors.Descriptor |
getDescriptorForType() |
SpeakerDiarizationConfig |
getDiarizationConfig()
Config to enable speaker diarization and set additional
parameters to make diarization better suited for your application.
|
SpeakerDiarizationConfig.Builder |
getDiarizationConfigBuilder()
Config to enable speaker diarization and set additional
parameters to make diarization better suited for your application.
|
SpeakerDiarizationConfigOrBuilder |
getDiarizationConfigOrBuilder()
Config to enable speaker diarization and set additional
parameters to make diarization better suited for your application.
|
int |
getDiarizationSpeakerCount()
Deprecated.
google.cloud.speech.v1p1beta1.RecognitionConfig.diarization_speaker_count is
deprecated. See google/cloud/speech/v1p1beta1/cloud_speech.proto;l=406
|
boolean |
getEnableAutomaticPunctuation()
If 'true', adds punctuation to recognition result hypotheses.
|
boolean |
getEnableSeparateRecognitionPerChannel()
This needs to be set to `true` explicitly and `audio_channel_count` > 1
to get each channel recognized separately.
|
boolean |
getEnableSpeakerDiarization()
Deprecated.
google.cloud.speech.v1p1beta1.RecognitionConfig.enable_speaker_diarization is
deprecated. See google/cloud/speech/v1p1beta1/cloud_speech.proto;l=401
|
com.google.protobuf.BoolValue |
getEnableSpokenEmojis()
The spoken emoji behavior for the call
If not set, uses default behavior based on model of choice
If 'true', adds spoken emoji formatting for the request.
|
com.google.protobuf.BoolValue.Builder |
getEnableSpokenEmojisBuilder()
The spoken emoji behavior for the call
If not set, uses default behavior based on model of choice
If 'true', adds spoken emoji formatting for the request.
|
com.google.protobuf.BoolValueOrBuilder |
getEnableSpokenEmojisOrBuilder()
The spoken emoji behavior for the call
If not set, uses default behavior based on model of choice
If 'true', adds spoken emoji formatting for the request.
|
com.google.protobuf.BoolValue |
getEnableSpokenPunctuation()
The spoken punctuation behavior for the call
If not set, uses default behavior based on model of choice
e.g.
|
com.google.protobuf.BoolValue.Builder |
getEnableSpokenPunctuationBuilder()
The spoken punctuation behavior for the call
If not set, uses default behavior based on model of choice
e.g.
|
com.google.protobuf.BoolValueOrBuilder |
getEnableSpokenPunctuationOrBuilder()
The spoken punctuation behavior for the call
If not set, uses default behavior based on model of choice
e.g.
|
boolean |
getEnableWordConfidence()
If `true`, the top result includes a list of words and the
confidence for those words.
|
boolean |
getEnableWordTimeOffsets()
If `true`, the top result includes a list of words and
the start and end time offsets (timestamps) for those words.
|
RecognitionConfig.AudioEncoding |
getEncoding()
Encoding of audio data sent in all `RecognitionAudio` messages.
|
int |
getEncodingValue()
Encoding of audio data sent in all `RecognitionAudio` messages.
|
String |
getLanguageCode()
Required.
|
com.google.protobuf.ByteString |
getLanguageCodeBytes()
Required.
|
int |
getMaxAlternatives()
Maximum number of recognition hypotheses to be returned.
|
RecognitionMetadata |
getMetadata()
Metadata regarding this request.
|
RecognitionMetadata.Builder |
getMetadataBuilder()
Metadata regarding this request.
|
RecognitionMetadataOrBuilder |
getMetadataOrBuilder()
Metadata regarding this request.
|
String |
getModel()
Which model to select for the given request.
|
com.google.protobuf.ByteString |
getModelBytes()
Which model to select for the given request.
|
boolean |
getProfanityFilter()
If set to `true`, the server will attempt to filter out
profanities, replacing all but the initial character in each filtered word
with asterisks, e.g.
|
int |
getSampleRateHertz()
Sample rate in Hertz of the audio data sent in all
`RecognitionAudio` messages.
|
SpeechContext |
getSpeechContexts(int index)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
SpeechContext.Builder |
getSpeechContextsBuilder(int index)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
List<SpeechContext.Builder> |
getSpeechContextsBuilderList()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
int |
getSpeechContextsCount()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
List<SpeechContext> |
getSpeechContextsList()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
SpeechContextOrBuilder |
getSpeechContextsOrBuilder(int index)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
List<? extends SpeechContextOrBuilder> |
getSpeechContextsOrBuilderList()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
TranscriptNormalization |
getTranscriptNormalization()
Use transcription normalization to automatically replace parts of the
transcript with phrases of your choosing.
|
TranscriptNormalization.Builder |
getTranscriptNormalizationBuilder()
Use transcription normalization to automatically replace parts of the
transcript with phrases of your choosing.
|
TranscriptNormalizationOrBuilder |
getTranscriptNormalizationOrBuilder()
Use transcription normalization to automatically replace parts of the
transcript with phrases of your choosing.
|
boolean |
getUseEnhanced()
Set to true to use an enhanced model for speech recognition.
|
boolean |
hasAdaptation()
Speech adaptation configuration improves the accuracy of speech
recognition.
|
boolean |
hasDiarizationConfig()
Config to enable speaker diarization and set additional
parameters to make diarization better suited for your application.
|
boolean |
hasEnableSpokenEmojis()
The spoken emoji behavior for the call
If not set, uses default behavior based on model of choice
If 'true', adds spoken emoji formatting for the request.
|
boolean |
hasEnableSpokenPunctuation()
The spoken punctuation behavior for the call
If not set, uses default behavior based on model of choice
e.g.
|
boolean |
hasMetadata()
Metadata regarding this request.
|
boolean |
hasTranscriptNormalization()
Use transcription normalization to automatically replace parts of the
transcript with phrases of your choosing.
|
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable |
internalGetFieldAccessorTable() |
boolean |
isInitialized() |
RecognitionConfig.Builder |
mergeAdaptation(SpeechAdaptation value)
Speech adaptation configuration improves the accuracy of speech
recognition.
|
RecognitionConfig.Builder |
mergeDiarizationConfig(SpeakerDiarizationConfig value)
Config to enable speaker diarization and set additional
parameters to make diarization better suited for your application.
|
RecognitionConfig.Builder |
mergeEnableSpokenEmojis(com.google.protobuf.BoolValue value)
The spoken emoji behavior for the call
If not set, uses default behavior based on model of choice
If 'true', adds spoken emoji formatting for the request.
|
RecognitionConfig.Builder |
mergeEnableSpokenPunctuation(com.google.protobuf.BoolValue value)
The spoken punctuation behavior for the call
If not set, uses default behavior based on model of choice
e.g.
|
RecognitionConfig.Builder |
mergeFrom(com.google.protobuf.CodedInputStream input,
com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
RecognitionConfig.Builder |
mergeFrom(com.google.protobuf.Message other) |
RecognitionConfig.Builder |
mergeFrom(RecognitionConfig other) |
RecognitionConfig.Builder |
mergeMetadata(RecognitionMetadata value)
Metadata regarding this request.
|
RecognitionConfig.Builder |
mergeTranscriptNormalization(TranscriptNormalization value)
Use transcription normalization to automatically replace parts of the
transcript with phrases of your choosing.
|
RecognitionConfig.Builder |
mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields) |
RecognitionConfig.Builder |
removeSpeechContexts(int index)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
RecognitionConfig.Builder |
setAdaptation(SpeechAdaptation.Builder builderForValue)
Speech adaptation configuration improves the accuracy of speech
recognition.
|
RecognitionConfig.Builder |
setAdaptation(SpeechAdaptation value)
Speech adaptation configuration improves the accuracy of speech
recognition.
|
RecognitionConfig.Builder |
setAlternativeLanguageCodes(int index,
String value)
A list of up to 3 additional
[BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags,
listing possible alternative languages of the supplied audio.
|
RecognitionConfig.Builder |
setAudioChannelCount(int value)
The number of channels in the input audio data.
|
RecognitionConfig.Builder |
setDiarizationConfig(SpeakerDiarizationConfig.Builder builderForValue)
Config to enable speaker diarization and set additional
parameters to make diarization better suited for your application.
|
RecognitionConfig.Builder |
setDiarizationConfig(SpeakerDiarizationConfig value)
Config to enable speaker diarization and set additional
parameters to make diarization better suited for your application.
|
RecognitionConfig.Builder |
setDiarizationSpeakerCount(int value)
Deprecated.
google.cloud.speech.v1p1beta1.RecognitionConfig.diarization_speaker_count is
deprecated. See google/cloud/speech/v1p1beta1/cloud_speech.proto;l=406
|
RecognitionConfig.Builder |
setEnableAutomaticPunctuation(boolean value)
If 'true', adds punctuation to recognition result hypotheses.
|
RecognitionConfig.Builder |
setEnableSeparateRecognitionPerChannel(boolean value)
This needs to be set to `true` explicitly and `audio_channel_count` > 1
to get each channel recognized separately.
|
RecognitionConfig.Builder |
setEnableSpeakerDiarization(boolean value)
Deprecated.
google.cloud.speech.v1p1beta1.RecognitionConfig.enable_speaker_diarization is
deprecated. See google/cloud/speech/v1p1beta1/cloud_speech.proto;l=401
|
RecognitionConfig.Builder |
setEnableSpokenEmojis(com.google.protobuf.BoolValue.Builder builderForValue)
The spoken emoji behavior for the call
If not set, uses default behavior based on model of choice
If 'true', adds spoken emoji formatting for the request.
|
RecognitionConfig.Builder |
setEnableSpokenEmojis(com.google.protobuf.BoolValue value)
The spoken emoji behavior for the call
If not set, uses default behavior based on model of choice
If 'true', adds spoken emoji formatting for the request.
|
RecognitionConfig.Builder |
setEnableSpokenPunctuation(com.google.protobuf.BoolValue.Builder builderForValue)
The spoken punctuation behavior for the call
If not set, uses default behavior based on model of choice
e.g.
|
RecognitionConfig.Builder |
setEnableSpokenPunctuation(com.google.protobuf.BoolValue value)
The spoken punctuation behavior for the call
If not set, uses default behavior based on model of choice
e.g.
|
RecognitionConfig.Builder |
setEnableWordConfidence(boolean value)
If `true`, the top result includes a list of words and the
confidence for those words.
|
RecognitionConfig.Builder |
setEnableWordTimeOffsets(boolean value)
If `true`, the top result includes a list of words and
the start and end time offsets (timestamps) for those words.
|
RecognitionConfig.Builder |
setEncoding(RecognitionConfig.AudioEncoding value)
Encoding of audio data sent in all `RecognitionAudio` messages.
|
RecognitionConfig.Builder |
setEncodingValue(int value)
Encoding of audio data sent in all `RecognitionAudio` messages.
|
RecognitionConfig.Builder |
setField(com.google.protobuf.Descriptors.FieldDescriptor field,
Object value) |
RecognitionConfig.Builder |
setLanguageCode(String value)
Required.
|
RecognitionConfig.Builder |
setLanguageCodeBytes(com.google.protobuf.ByteString value)
Required.
|
RecognitionConfig.Builder |
setMaxAlternatives(int value)
Maximum number of recognition hypotheses to be returned.
|
RecognitionConfig.Builder |
setMetadata(RecognitionMetadata.Builder builderForValue)
Metadata regarding this request.
|
RecognitionConfig.Builder |
setMetadata(RecognitionMetadata value)
Metadata regarding this request.
|
RecognitionConfig.Builder |
setModel(String value)
Which model to select for the given request.
|
RecognitionConfig.Builder |
setModelBytes(com.google.protobuf.ByteString value)
Which model to select for the given request.
|
RecognitionConfig.Builder |
setProfanityFilter(boolean value)
If set to `true`, the server will attempt to filter out
profanities, replacing all but the initial character in each filtered word
with asterisks, e.g.
|
RecognitionConfig.Builder |
setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field,
int index,
Object value) |
RecognitionConfig.Builder |
setSampleRateHertz(int value)
Sample rate in Hertz of the audio data sent in all
`RecognitionAudio` messages.
|
RecognitionConfig.Builder |
setSpeechContexts(int index,
SpeechContext.Builder builderForValue)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
RecognitionConfig.Builder |
setSpeechContexts(int index,
SpeechContext value)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext].
|
RecognitionConfig.Builder |
setTranscriptNormalization(TranscriptNormalization.Builder builderForValue)
Use transcription normalization to automatically replace parts of the
transcript with phrases of your choosing.
|
RecognitionConfig.Builder |
setTranscriptNormalization(TranscriptNormalization value)
Use transcription normalization to automatically replace parts of the
transcript with phrases of your choosing.
|
RecognitionConfig.Builder |
setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields) |
RecognitionConfig.Builder |
setUseEnhanced(boolean value)
Set to true to use an enhanced model for speech recognition.
|
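Before the method details, here is a minimal sketch of typical builder usage, assuming the generated `com.google.cloud.speech.v1p1beta1` classes are on the classpath; the Cloud Storage URI is a placeholder, not a real resource.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionAudio;
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding;
import com.google.cloud.speech.v1p1beta1.RecognizeRequest;

public class RecognitionConfigExample {
  public static void main(String[] args) {
    // Build a config for 16 kHz LINEAR16 (PCM) audio in US English.
    RecognitionConfig config =
        RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.LINEAR16)
            .setSampleRateHertz(16000)
            .setLanguageCode("en-US")
            .setEnableAutomaticPunctuation(true)
            .build();

    // The config is typically paired with RecognitionAudio inside a RecognizeRequest.
    RecognizeRequest request =
        RecognizeRequest.newBuilder()
            .setConfig(config)
            .setAudio(RecognitionAudio.newBuilder()
                .setUri("gs://my-bucket/audio.raw") // placeholder URI
                .build())
            .build();

    System.out.println(request.getConfig().getLanguageCode());
  }
}
```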
Methods inherited from class com.google.protobuf.GeneratedMessageV3.Builder:
getAllFields, getField, getFieldBuilder, getOneofFieldDescriptor, getParentForChildren, getRepeatedField, getRepeatedFieldBuilder, getRepeatedFieldCount, getUnknownFields, getUnknownFieldSetBuilder, hasField, hasOneof, internalGetMapField, internalGetMapFieldReflection, internalGetMutableMapField, internalGetMutableMapFieldReflection, isClean, markClean, mergeUnknownLengthDelimitedField, mergeUnknownVarintField, newBuilderForField, onBuilt, onChanged, parseUnknownField, setUnknownFieldSetBuilder, setUnknownFieldsProto3

Methods inherited from class com.google.protobuf.AbstractMessage.Builder:
findInitializationErrors, getInitializationErrorString, internalMergeFrom, mergeFrom, newUninitializedMessageException, toString

Methods inherited from class com.google.protobuf.AbstractMessageLite.Builder:
addAll, mergeDelimitedFrom, newUninitializedMessageException

Methods inherited from class java.lang.Object:
equals, finalize, getClass, hashCode, notify, notifyAll, wait

public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Overrides: internalGetFieldAccessorTable in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public RecognitionConfig.Builder clear()
Specified by: clear in interface com.google.protobuf.Message.Builder
Specified by: clear in interface com.google.protobuf.MessageLite.Builder
Overrides: clear in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
Specified by: getDescriptorForType in interface com.google.protobuf.Message.Builder
Specified by: getDescriptorForType in interface com.google.protobuf.MessageOrBuilder
Overrides: getDescriptorForType in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public RecognitionConfig getDefaultInstanceForType()
Specified by: getDefaultInstanceForType in interface com.google.protobuf.MessageLiteOrBuilder
Specified by: getDefaultInstanceForType in interface com.google.protobuf.MessageOrBuilder

public RecognitionConfig build()
Specified by: build in interface com.google.protobuf.Message.Builder
Specified by: build in interface com.google.protobuf.MessageLite.Builder

public RecognitionConfig buildPartial()
Specified by: buildPartial in interface com.google.protobuf.Message.Builder
Specified by: buildPartial in interface com.google.protobuf.MessageLite.Builder

public RecognitionConfig.Builder clone()
Specified by: clone in interface com.google.protobuf.Message.Builder
Specified by: clone in interface com.google.protobuf.MessageLite.Builder
Overrides: clone in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public RecognitionConfig.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
Specified by: setField in interface com.google.protobuf.Message.Builder
Overrides: setField in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public RecognitionConfig.Builder clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
Specified by: clearField in interface com.google.protobuf.Message.Builder
Overrides: clearField in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public RecognitionConfig.Builder clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
Specified by: clearOneof in interface com.google.protobuf.Message.Builder
Overrides: clearOneof in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public RecognitionConfig.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
Specified by: setRepeatedField in interface com.google.protobuf.Message.Builder
Overrides: setRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public RecognitionConfig.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
Specified by: addRepeatedField in interface com.google.protobuf.Message.Builder
Overrides: addRepeatedField in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public RecognitionConfig.Builder mergeFrom(com.google.protobuf.Message other)
Specified by: mergeFrom in interface com.google.protobuf.Message.Builder
Overrides: mergeFrom in class com.google.protobuf.AbstractMessage.Builder<RecognitionConfig.Builder>

public RecognitionConfig.Builder mergeFrom(RecognitionConfig other)

public final boolean isInitialized()
Specified by: isInitialized in interface com.google.protobuf.MessageLiteOrBuilder
Overrides: isInitialized in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public RecognitionConfig.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws IOException
Specified by: mergeFrom in interface com.google.protobuf.Message.Builder
Specified by: mergeFrom in interface com.google.protobuf.MessageLite.Builder
Overrides: mergeFrom in class com.google.protobuf.AbstractMessage.Builder<RecognitionConfig.Builder>
Throws: IOException

public int getEncodingValue()
Encoding of audio data sent in all `RecognitionAudio` messages. This field is optional for `FLAC` and `WAV` audio files and required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding].
.google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding encoding = 1;
Specified by: getEncodingValue in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setEncodingValue(int value)
Encoding of audio data sent in all `RecognitionAudio` messages. This field is optional for `FLAC` and `WAV` audio files and required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding].
.google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding encoding = 1;
Parameters: value - The enum numeric value on the wire for encoding to set.

public RecognitionConfig.AudioEncoding getEncoding()
Encoding of audio data sent in all `RecognitionAudio` messages. This field is optional for `FLAC` and `WAV` audio files and required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding].
.google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding encoding = 1;
Specified by: getEncoding in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setEncoding(RecognitionConfig.AudioEncoding value)
Encoding of audio data sent in all `RecognitionAudio` messages. This field is optional for `FLAC` and `WAV` audio files and required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding].
.google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding encoding = 1;
Parameters: value - The encoding to set.

public RecognitionConfig.Builder clearEncoding()
Encoding of audio data sent in all `RecognitionAudio` messages. This field is optional for `FLAC` and `WAV` audio files and required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding].
.google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding encoding = 1;

public int getSampleRateHertz()
Sample rate in Hertz of the audio data sent in all `RecognitionAudio` messages. Valid values are: 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, but is required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding].
int32 sample_rate_hertz = 2;
Specified by: getSampleRateHertz in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setSampleRateHertz(int value)
Sample rate in Hertz of the audio data sent in all `RecognitionAudio` messages. Valid values are: 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, but is required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding].
int32 sample_rate_hertz = 2;
Parameters: value - The sampleRateHertz to set.

public RecognitionConfig.Builder clearSampleRateHertz()
Sample rate in Hertz of the audio data sent in all `RecognitionAudio` messages. Valid values are: 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, but is required for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1p1beta1.RecognitionConfig.AudioEncoding].
int32 sample_rate_hertz = 2;

public int getAudioChannelCount()
The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16, OGG_OPUS and FLAC are `1`-`8`. Valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is only `1`. If `0` or omitted, defaults to one channel (mono). Note: We only recognize the first channel by default. To perform independent recognition on each channel set `enable_separate_recognition_per_channel` to 'true'.
int32 audio_channel_count = 7;
Specified by: getAudioChannelCount in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setAudioChannelCount(int value)
The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16, OGG_OPUS and FLAC are `1`-`8`. Valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is only `1`. If `0` or omitted, defaults to one channel (mono). Note: We only recognize the first channel by default. To perform independent recognition on each channel set `enable_separate_recognition_per_channel` to 'true'.
int32 audio_channel_count = 7;
Parameters: value - The audioChannelCount to set.

public RecognitionConfig.Builder clearAudioChannelCount()
The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16, OGG_OPUS and FLAC are `1`-`8`. Valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is only `1`. If `0` or omitted, defaults to one channel (mono). Note: We only recognize the first channel by default. To perform independent recognition on each channel set `enable_separate_recognition_per_channel` to 'true'.
int32 audio_channel_count = 7;

public boolean getEnableSeparateRecognitionPerChannel()
This needs to be set to `true` explicitly and `audio_channel_count` > 1 to get each channel recognized separately. The recognition result will contain a `channel_tag` field to state which channel that result belongs to. If this is not true, we will only recognize the first channel. The request is billed cumulatively for all channels recognized: `audio_channel_count` multiplied by the length of the audio.
bool enable_separate_recognition_per_channel = 12;
Specified by: getEnableSeparateRecognitionPerChannel in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setEnableSeparateRecognitionPerChannel(boolean value)
This needs to be set to `true` explicitly and `audio_channel_count` > 1 to get each channel recognized separately. The recognition result will contain a `channel_tag` field to state which channel that result belongs to. If this is not true, we will only recognize the first channel. The request is billed cumulatively for all channels recognized: `audio_channel_count` multiplied by the length of the audio.
bool enable_separate_recognition_per_channel = 12;
Parameters: value - The enableSeparateRecognitionPerChannel to set.

public RecognitionConfig.Builder clearEnableSeparateRecognitionPerChannel()
This needs to be set to `true` explicitly and `audio_channel_count` > 1 to get each channel recognized separately. The recognition result will contain a `channel_tag` field to state which channel that result belongs to. If this is not true, we will only recognize the first channel. The request is billed cumulatively for all channels recognized: `audio_channel_count` multiplied by the length of the audio.
bool enable_separate_recognition_per_channel = 12;
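A short sketch of how `audio_channel_count` and `enable_separate_recognition_per_channel` combine for multi-channel audio; imports are assumed to match the earlier example.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;

class StereoConfigSketch {
  static RecognitionConfig stereoConfig() {
    // Two-channel LINEAR16 audio, each channel recognized separately;
    // each result in the response then carries a channel_tag for its channel.
    return RecognitionConfig.newBuilder()
        .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
        .setSampleRateHertz(16000)
        .setLanguageCode("en-US")
        .setAudioChannelCount(2)
        .setEnableSeparateRecognitionPerChannel(true)
        .build();
  }
}
```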
public String getLanguageCode()
Required. The language of the supplied audio as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Example: "en-US". See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes.
string language_code = 3 [(.google.api.field_behavior) = REQUIRED];
Specified by: getLanguageCode in interface RecognitionConfigOrBuilder

public com.google.protobuf.ByteString getLanguageCodeBytes()
Required. The language of the supplied audio as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Example: "en-US". See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes.
string language_code = 3 [(.google.api.field_behavior) = REQUIRED];
Specified by: getLanguageCodeBytes in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setLanguageCode(String value)
Required. The language of the supplied audio as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Example: "en-US". See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes.
string language_code = 3 [(.google.api.field_behavior) = REQUIRED];
Parameters: value - The languageCode to set.

public RecognitionConfig.Builder clearLanguageCode()
Required. The language of the supplied audio as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Example: "en-US". See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes.
string language_code = 3 [(.google.api.field_behavior) = REQUIRED];

public RecognitionConfig.Builder setLanguageCodeBytes(com.google.protobuf.ByteString value)
Required. The language of the supplied audio as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Example: "en-US". See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes.
string language_code = 3 [(.google.api.field_behavior) = REQUIRED];
Parameters: value - The bytes for languageCode to set.

public com.google.protobuf.ProtocolStringList getAlternativeLanguageCodesList()
A list of up to 3 additional [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, listing possible alternative languages of the supplied audio. See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes. If alternative languages are listed, recognition result will contain recognition in the most likely language detected including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
repeated string alternative_language_codes = 18;
Specified by: getAlternativeLanguageCodesList in interface RecognitionConfigOrBuilder

public int getAlternativeLanguageCodesCount()
A list of up to 3 additional [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, listing possible alternative languages of the supplied audio. See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes. If alternative languages are listed, recognition result will contain recognition in the most likely language detected including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
repeated string alternative_language_codes = 18;
Specified by: getAlternativeLanguageCodesCount in interface RecognitionConfigOrBuilder

public String getAlternativeLanguageCodes(int index)
A list of up to 3 additional [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, listing possible alternative languages of the supplied audio. See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes. If alternative languages are listed, recognition result will contain recognition in the most likely language detected including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
repeated string alternative_language_codes = 18;
Specified by: getAlternativeLanguageCodes in interface RecognitionConfigOrBuilder
Parameters: index - The index of the element to return.

public com.google.protobuf.ByteString getAlternativeLanguageCodesBytes(int index)
A list of up to 3 additional [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, listing possible alternative languages of the supplied audio. See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes. If alternative languages are listed, recognition result will contain recognition in the most likely language detected including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
repeated string alternative_language_codes = 18;
Specified by: getAlternativeLanguageCodesBytes in interface RecognitionConfigOrBuilder
Parameters: index - The index of the value to return.

public RecognitionConfig.Builder setAlternativeLanguageCodes(int index, String value)
A list of up to 3 additional [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, listing possible alternative languages of the supplied audio. See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes. If alternative languages are listed, recognition result will contain recognition in the most likely language detected including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
repeated string alternative_language_codes = 18;
Parameters: index - The index to set the value at.
            value - The alternativeLanguageCodes to set.

public RecognitionConfig.Builder addAlternativeLanguageCodes(String value)
A list of up to 3 additional [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, listing possible alternative languages of the supplied audio. See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes. If alternative languages are listed, recognition result will contain recognition in the most likely language detected including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
repeated string alternative_language_codes = 18;
Parameters: value - The alternativeLanguageCodes to add.

public RecognitionConfig.Builder addAllAlternativeLanguageCodes(Iterable<String> values)
A list of up to 3 additional [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, listing possible alternative languages of the supplied audio. See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes. If alternative languages are listed, recognition result will contain recognition in the most likely language detected including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
repeated string alternative_language_codes = 18;
Parameters: values - The alternativeLanguageCodes to add.

public RecognitionConfig.Builder clearAlternativeLanguageCodes()
A list of up to 3 additional [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, listing possible alternative languages of the supplied audio. See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes. If alternative languages are listed, recognition result will contain recognition in the most likely language detected including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
repeated string alternative_language_codes = 18;

public RecognitionConfig.Builder addAlternativeLanguageCodesBytes(com.google.protobuf.ByteString value)
A list of up to 3 additional [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, listing possible alternative languages of the supplied audio. See [Language Support](https://cloud.google.com/speech-to-text/docs/languages) for a list of the currently supported language codes. If alternative languages are listed, recognition result will contain recognition in the most likely language detected including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
repeated string alternative_language_codes = 18;
Parameters: value - The bytes of the alternativeLanguageCodes to add.
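A minimal sketch of supplying alternative language codes alongside the primary `language_code`; the specific language tags are illustrative only.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import java.util.Arrays;

class AlternativeLanguagesSketch {
  static RecognitionConfig multiLanguageConfig() {
    // Primary language plus up to three alternatives; the recognition result
    // reports the language tag the service actually detected in the audio.
    return RecognitionConfig.newBuilder()
        .setLanguageCode("en-US")
        .addAllAlternativeLanguageCodes(Arrays.asList("es-ES", "fr-FR"))
        .build();
  }
}
```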
public int getMaxAlternatives()
Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of `SpeechRecognitionAlternative` messages within each `SpeechRecognitionResult`. The server may return fewer than `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of one. If omitted, will return a maximum of one.
int32 max_alternatives = 4;
Specified by: getMaxAlternatives in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setMaxAlternatives(int value)
Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of `SpeechRecognitionAlternative` messages within each `SpeechRecognitionResult`. The server may return fewer than `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of one. If omitted, will return a maximum of one.
int32 max_alternatives = 4;
Parameters: value - The maxAlternatives to set.

public RecognitionConfig.Builder clearMaxAlternatives()
Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of `SpeechRecognitionAlternative` messages within each `SpeechRecognitionResult`. The server may return fewer than `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of one. If omitted, will return a maximum of one.
int32 max_alternatives = 4;

public boolean getProfanityFilter()
If set to `true`, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to `false` or omitted, profanities won't be filtered out.
bool profanity_filter = 5;
Specified by: getProfanityFilter in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setProfanityFilter(boolean value)
If set to `true`, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to `false` or omitted, profanities won't be filtered out.
bool profanity_filter = 5;
Parameters: value - The profanityFilter to set.

public RecognitionConfig.Builder clearProfanityFilter()
If set to `true`, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to `false` or omitted, profanities won't be filtered out.
bool profanity_filter = 5;
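A brief sketch combining `max_alternatives` and `profanity_filter` on an existing builder; the helper method name is illustrative.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;

class AlternativesAndFilteringSketch {
  static RecognitionConfig.Builder withAlternativesAndFilter(RecognitionConfig.Builder builder) {
    // Ask for up to 3 hypotheses per result and mask profanities (e.g. "f***").
    return builder
        .setMaxAlternatives(3)
        .setProfanityFilter(true);
  }
}
```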
public boolean hasAdaptation()
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation) documentation. When speech adaptation is set it supersedes the `speech_contexts` field.
.google.cloud.speech.v1p1beta1.SpeechAdaptation adaptation = 20;
Specified by: hasAdaptation in interface RecognitionConfigOrBuilder

public SpeechAdaptation getAdaptation()
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation) documentation. When speech adaptation is set it supersedes the `speech_contexts` field.
.google.cloud.speech.v1p1beta1.SpeechAdaptation adaptation = 20;
Specified by: getAdaptation in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setAdaptation(SpeechAdaptation value)
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation) documentation. When speech adaptation is set it supersedes the `speech_contexts` field.
.google.cloud.speech.v1p1beta1.SpeechAdaptation adaptation = 20;

public RecognitionConfig.Builder setAdaptation(SpeechAdaptation.Builder builderForValue)
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation) documentation. When speech adaptation is set it supersedes the `speech_contexts` field.
.google.cloud.speech.v1p1beta1.SpeechAdaptation adaptation = 20;

public RecognitionConfig.Builder mergeAdaptation(SpeechAdaptation value)
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation) documentation. When speech adaptation is set it supersedes the `speech_contexts` field.
.google.cloud.speech.v1p1beta1.SpeechAdaptation adaptation = 20;

public RecognitionConfig.Builder clearAdaptation()
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation) documentation. When speech adaptation is set it supersedes the `speech_contexts` field.
.google.cloud.speech.v1p1beta1.SpeechAdaptation adaptation = 20;

public SpeechAdaptation.Builder getAdaptationBuilder()
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation) documentation. When speech adaptation is set it supersedes the `speech_contexts` field.
.google.cloud.speech.v1p1beta1.SpeechAdaptation adaptation = 20;

public SpeechAdaptationOrBuilder getAdaptationOrBuilder()
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation) documentation. When speech adaptation is set it supersedes the `speech_contexts` field.
.google.cloud.speech.v1p1beta1.SpeechAdaptation adaptation = 20;
Specified by: getAdaptationOrBuilder in interface RecognitionConfigOrBuilder
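A sketch of attaching speech adaptation through a phrase set reference, assuming the `SpeechAdaptation` message exposes `addPhraseSetReferences`; the resource name is hypothetical. Note that setting adaptation supersedes any `speech_contexts` already on the config.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.SpeechAdaptation;

class AdaptationSketch {
  static RecognitionConfig withAdaptation(RecognitionConfig base) {
    // Reference a pre-built phrase set resource (hypothetical name).
    SpeechAdaptation adaptation = SpeechAdaptation.newBuilder()
        .addPhraseSetReferences(
            "projects/my-project/locations/global/phraseSets/my-phrase-set")
        .build();
    // setAdaptation takes precedence over the speech_contexts field.
    return base.toBuilder().setAdaptation(adaptation).build();
  }
}
```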
public boolean hasTranscriptNormalization()
Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
.google.cloud.speech.v1p1beta1.TranscriptNormalization transcript_normalization = 24;
Specified by: hasTranscriptNormalization in interface RecognitionConfigOrBuilder

public TranscriptNormalization getTranscriptNormalization()
Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
.google.cloud.speech.v1p1beta1.TranscriptNormalization transcript_normalization = 24;
Specified by: getTranscriptNormalization in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setTranscriptNormalization(TranscriptNormalization value)
Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
.google.cloud.speech.v1p1beta1.TranscriptNormalization transcript_normalization = 24;
public RecognitionConfig.Builder setTranscriptNormalization(TranscriptNormalization.Builder builderForValue)
Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
.google.cloud.speech.v1p1beta1.TranscriptNormalization transcript_normalization = 24;
public RecognitionConfig.Builder mergeTranscriptNormalization(TranscriptNormalization value)
Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
.google.cloud.speech.v1p1beta1.TranscriptNormalization transcript_normalization = 24;
public RecognitionConfig.Builder clearTranscriptNormalization()
Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
.google.cloud.speech.v1p1beta1.TranscriptNormalization transcript_normalization = 24;
public TranscriptNormalization.Builder getTranscriptNormalizationBuilder()
Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
.google.cloud.speech.v1p1beta1.TranscriptNormalization transcript_normalization = 24;
public TranscriptNormalizationOrBuilder getTranscriptNormalizationOrBuilder()
Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
.google.cloud.speech.v1p1beta1.TranscriptNormalization transcript_normalization = 24;
Specified by: getTranscriptNormalizationOrBuilder in interface RecognitionConfigOrBuilder
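A sketch of configuring transcript normalization, assuming the `TranscriptNormalization.Entry` nested message with `search`, `replace`, and `caseSensitive` fields; the search/replace strings are illustrative.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.TranscriptNormalization;

class NormalizationSketch {
  static RecognitionConfig.Builder withNormalization(RecognitionConfig.Builder builder) {
    // Replace a spoken form with the spelling you want in the transcript.
    TranscriptNormalization normalization = TranscriptNormalization.newBuilder()
        .addEntries(TranscriptNormalization.Entry.newBuilder()
            .setSearch("cloud speech to text")
            .setReplace("Cloud Speech-to-Text")
            .setCaseSensitive(false)
            .build())
        .build();
    return builder.setTranscriptNormalization(normalization);
  }
}
```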
public List<SpeechContext> getSpeechContextsList()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;
Specified by: getSpeechContextsList in interface RecognitionConfigOrBuilder

public int getSpeechContextsCount()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;
Specified by: getSpeechContextsCount in interface RecognitionConfigOrBuilder

public SpeechContext getSpeechContexts(int index)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;
Specified by: getSpeechContexts in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setSpeechContexts(int index, SpeechContext value)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;

public RecognitionConfig.Builder setSpeechContexts(int index, SpeechContext.Builder builderForValue)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;

public RecognitionConfig.Builder addSpeechContexts(SpeechContext value)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;

public RecognitionConfig.Builder addSpeechContexts(int index, SpeechContext value)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;

public RecognitionConfig.Builder addSpeechContexts(SpeechContext.Builder builderForValue)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;

public RecognitionConfig.Builder addSpeechContexts(int index, SpeechContext.Builder builderForValue)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;

public RecognitionConfig.Builder addAllSpeechContexts(Iterable<? extends SpeechContext> values)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;

public RecognitionConfig.Builder clearSpeechContexts()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;

public RecognitionConfig.Builder removeSpeechContexts(int index)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;public SpeechContext.Builder getSpeechContextsBuilder(int index)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;public SpeechContextOrBuilder getSpeechContextsOrBuilder(int index)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;getSpeechContextsOrBuilder in interface RecognitionConfigOrBuilderpublic List<? extends SpeechContextOrBuilder> getSpeechContextsOrBuilderList()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;getSpeechContextsOrBuilderList in interface RecognitionConfigOrBuilderpublic SpeechContext.Builder addSpeechContextsBuilder()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;public SpeechContext.Builder addSpeechContextsBuilder(int index)
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;public List<SpeechContext.Builder> getSpeechContextsBuilderList()
Array of [SpeechContext][google.cloud.speech.v1p1beta1.SpeechContext]. A means to provide context to assist the speech recognition. For more information, see [speech adaptation](https://cloud.google.com/speech-to-text/docs/adaptation).
repeated .google.cloud.speech.v1p1beta1.SpeechContext speech_contexts = 6;public boolean getEnableWordTimeOffsets()
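As an illustration of how the speech_contexts setters above are typically combined with the rest of the builder, here is a minimal, hypothetical sketch. The phrase hints and boost value are made up, and the SpeechContext `addPhrases`/`setBoost` setters are assumed from the same v1p1beta1 package.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.SpeechContext;

// Illustrative sketch: bias recognition toward domain-specific phrases.
SpeechContext context = SpeechContext.newBuilder()
    .addPhrases("weather forecast")   // hypothetical phrase hint
    .addPhrases("wind speed")
    .setBoost(15.0f)                  // optional boost for these phrases
    .build();

RecognitionConfig config = RecognitionConfig.newBuilder()
    .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
    .setSampleRateHertz(16000)
    .setLanguageCode("en-US")
    .addSpeechContexts(context)       // repeated field: can be called multiple times
    .build();
```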
public boolean getEnableWordTimeOffsets()
If `true`, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If `false`, no word-level time offset information is returned. The default is `false`.
bool enable_word_time_offsets = 8;
Specified by: getEnableWordTimeOffsets in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setEnableWordTimeOffsets(boolean value)
If `true`, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If `false`, no word-level time offset information is returned. The default is `false`.
bool enable_word_time_offsets = 8;
Parameters: value - The enableWordTimeOffsets to set.

public RecognitionConfig.Builder clearEnableWordTimeOffsets()
If `true`, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If `false`, no word-level time offset information is returned. The default is `false`.
bool enable_word_time_offsets = 8;
public boolean getEnableWordConfidence()
If `true`, the top result includes a list of words and the confidence for those words. If `false`, no word-level confidence information is returned. The default is `false`.
bool enable_word_confidence = 15;
Specified by: getEnableWordConfidence in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setEnableWordConfidence(boolean value)
If `true`, the top result includes a list of words and the confidence for those words. If `false`, no word-level confidence information is returned. The default is `false`.
bool enable_word_confidence = 15;
Parameters: value - The enableWordConfidence to set.

public RecognitionConfig.Builder clearEnableWordConfidence()
If `true`, the top result includes a list of words and the confidence for those words. If `false`, no word-level confidence information is returned. The default is `false`.
bool enable_word_confidence = 15;
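A minimal sketch of how the two word-level flags are usually enabled together and read back from a result. The `SpeechRecognitionAlternative` and `WordInfo` accessors are assumed from the same v1p1beta1 package; the printing helper is purely illustrative.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.SpeechRecognitionAlternative;
import com.google.cloud.speech.v1p1beta1.WordInfo;

class WordLevelOutputSketch {
  // Config requesting word-level timestamps and confidences.
  static RecognitionConfig wordLevelConfig() {
    return RecognitionConfig.newBuilder()
        .setLanguageCode("en-US")
        .setEnableWordTimeOffsets(true)
        .setEnableWordConfidence(true)
        .build();
  }

  // Reading the word-level fields back from a recognition alternative.
  static void printWords(SpeechRecognitionAlternative alternative) {
    for (WordInfo word : alternative.getWordsList()) {
      System.out.printf("%s [%ss - %ss] confidence=%.2f%n",
          word.getWord(),
          word.getStartTime().getSeconds(),
          word.getEndTime().getSeconds(),
          word.getConfidence());
    }
  }
}
```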
public boolean getEnableAutomaticPunctuation()
If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect at all. The default 'false' value does not add punctuation to result hypotheses.
bool enable_automatic_punctuation = 11;
Specified by: getEnableAutomaticPunctuation in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setEnableAutomaticPunctuation(boolean value)
If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect at all. The default 'false' value does not add punctuation to result hypotheses.
bool enable_automatic_punctuation = 11;
Parameters: value - The enableAutomaticPunctuation to set.

public RecognitionConfig.Builder clearEnableAutomaticPunctuation()
If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect at all. The default 'false' value does not add punctuation to result hypotheses.
bool enable_automatic_punctuation = 11;
public boolean hasEnableSpokenPunctuation()
The spoken punctuation behavior for the call. If not set, uses default behavior based on model of choice, e.g. command_and_search will enable spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.
.google.protobuf.BoolValue enable_spoken_punctuation = 22;
Specified by: hasEnableSpokenPunctuation in interface RecognitionConfigOrBuilder

public com.google.protobuf.BoolValue getEnableSpokenPunctuation()
The spoken punctuation behavior for the call. If not set, uses default behavior based on model of choice, e.g. command_and_search will enable spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.
.google.protobuf.BoolValue enable_spoken_punctuation = 22;
Specified by: getEnableSpokenPunctuation in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setEnableSpokenPunctuation(com.google.protobuf.BoolValue value)
The spoken punctuation behavior for the call. If not set, uses default behavior based on model of choice, e.g. command_and_search will enable spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.
.google.protobuf.BoolValue enable_spoken_punctuation = 22;

public RecognitionConfig.Builder setEnableSpokenPunctuation(com.google.protobuf.BoolValue.Builder builderForValue)
The spoken punctuation behavior for the call. If not set, uses default behavior based on model of choice, e.g. command_and_search will enable spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.
.google.protobuf.BoolValue enable_spoken_punctuation = 22;

public RecognitionConfig.Builder mergeEnableSpokenPunctuation(com.google.protobuf.BoolValue value)
The spoken punctuation behavior for the call. If not set, uses default behavior based on model of choice, e.g. command_and_search will enable spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.
.google.protobuf.BoolValue enable_spoken_punctuation = 22;

public RecognitionConfig.Builder clearEnableSpokenPunctuation()
The spoken punctuation behavior for the call. If not set, uses default behavior based on model of choice, e.g. command_and_search will enable spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.
.google.protobuf.BoolValue enable_spoken_punctuation = 22;

public com.google.protobuf.BoolValue.Builder getEnableSpokenPunctuationBuilder()
The spoken punctuation behavior for the call. If not set, uses default behavior based on model of choice, e.g. command_and_search will enable spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.
.google.protobuf.BoolValue enable_spoken_punctuation = 22;

public com.google.protobuf.BoolValueOrBuilder getEnableSpokenPunctuationOrBuilder()
The spoken punctuation behavior for the call. If not set, uses default behavior based on model of choice, e.g. command_and_search will enable spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.
.google.protobuf.BoolValue enable_spoken_punctuation = 22;
Specified by: getEnableSpokenPunctuationOrBuilder in interface RecognitionConfigOrBuilder
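Because this field is a `google.protobuf.BoolValue` wrapper rather than a plain `bool`, an unset value (service default) is distinguishable from an explicit `false`. The following sketch shows statements as they would appear inside a method body; `BoolValue.of` is assumed to be available in the protobuf-java version in use (`BoolValue.newBuilder().setValue(true)` is the long form).

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.protobuf.BoolValue;

// Explicitly request spoken punctuation; leaving the field unset keeps the
// model's default behavior instead.
RecognitionConfig config = RecognitionConfig.newBuilder()
    .setLanguageCode("en-US")
    .setEnableSpokenPunctuation(BoolValue.of(true))
    .build();

// hasEnableSpokenPunctuation() reports whether the wrapper was set at all.
boolean explicitlySet = config.hasEnableSpokenPunctuation();
boolean enabled = explicitlySet && config.getEnableSpokenPunctuation().getValue();
```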
public boolean hasEnableSpokenEmojis()
The spoken emoji behavior for the call. If not set, uses default behavior based on model of choice. If 'true', adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
.google.protobuf.BoolValue enable_spoken_emojis = 23;
Specified by: hasEnableSpokenEmojis in interface RecognitionConfigOrBuilder

public com.google.protobuf.BoolValue getEnableSpokenEmojis()
The spoken emoji behavior for the call. If not set, uses default behavior based on model of choice. If 'true', adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
.google.protobuf.BoolValue enable_spoken_emojis = 23;
Specified by: getEnableSpokenEmojis in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setEnableSpokenEmojis(com.google.protobuf.BoolValue value)
The spoken emoji behavior for the call. If not set, uses default behavior based on model of choice. If 'true', adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
.google.protobuf.BoolValue enable_spoken_emojis = 23;

public RecognitionConfig.Builder setEnableSpokenEmojis(com.google.protobuf.BoolValue.Builder builderForValue)
The spoken emoji behavior for the call. If not set, uses default behavior based on model of choice. If 'true', adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
.google.protobuf.BoolValue enable_spoken_emojis = 23;

public RecognitionConfig.Builder mergeEnableSpokenEmojis(com.google.protobuf.BoolValue value)
The spoken emoji behavior for the call. If not set, uses default behavior based on model of choice. If 'true', adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
.google.protobuf.BoolValue enable_spoken_emojis = 23;

public RecognitionConfig.Builder clearEnableSpokenEmojis()
The spoken emoji behavior for the call. If not set, uses default behavior based on model of choice. If 'true', adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
.google.protobuf.BoolValue enable_spoken_emojis = 23;

public com.google.protobuf.BoolValue.Builder getEnableSpokenEmojisBuilder()
The spoken emoji behavior for the call. If not set, uses default behavior based on model of choice. If 'true', adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
.google.protobuf.BoolValue enable_spoken_emojis = 23;

public com.google.protobuf.BoolValueOrBuilder getEnableSpokenEmojisOrBuilder()
The spoken emoji behavior for the call. If not set, uses default behavior based on model of choice. If 'true', adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
.google.protobuf.BoolValue enable_spoken_emojis = 23;
Specified by: getEnableSpokenEmojisOrBuilder in interface RecognitionConfigOrBuilder
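A short, illustrative sketch of the builder-valued overload for this wrapper field; as with spoken punctuation, clearing the field (rather than setting `false`) keeps the model's default behavior.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.protobuf.BoolValue;

// Request that spoken emoji phrases be rendered as Unicode symbols.
RecognitionConfig.Builder builder = RecognitionConfig.newBuilder()
    .setLanguageCode("en-US")
    .setEnableSpokenEmojis(BoolValue.newBuilder().setValue(true));

// To fall back to the model's default instead:
// builder.clearEnableSpokenEmojis();

RecognitionConfig config = builder.build();
```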
@Deprecated public boolean getEnableSpeakerDiarization()
If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result using a speaker_tag provided in the WordInfo. Note: Use diarization_config instead.
bool enable_speaker_diarization = 16 [deprecated = true];
Specified by: getEnableSpeakerDiarization in interface RecognitionConfigOrBuilder

@Deprecated public RecognitionConfig.Builder setEnableSpeakerDiarization(boolean value)
If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result using a speaker_tag provided in the WordInfo. Note: Use diarization_config instead.
bool enable_speaker_diarization = 16 [deprecated = true];
Parameters: value - The enableSpeakerDiarization to set.

@Deprecated public RecognitionConfig.Builder clearEnableSpeakerDiarization()
If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result using a speaker_tag provided in the WordInfo. Note: Use diarization_config instead.
bool enable_speaker_diarization = 16 [deprecated = true];
@Deprecated public int getDiarizationSpeakerCount()
If set, specifies the estimated number of speakers in the conversation. Defaults to '2'. Ignored unless enable_speaker_diarization is set to true. Note: Use diarization_config instead.
int32 diarization_speaker_count = 17 [deprecated = true];
Specified by: getDiarizationSpeakerCount in interface RecognitionConfigOrBuilder

@Deprecated public RecognitionConfig.Builder setDiarizationSpeakerCount(int value)
If set, specifies the estimated number of speakers in the conversation. Defaults to '2'. Ignored unless enable_speaker_diarization is set to true. Note: Use diarization_config instead.
int32 diarization_speaker_count = 17 [deprecated = true];
Parameters: value - The diarizationSpeakerCount to set.

@Deprecated public RecognitionConfig.Builder clearDiarizationSpeakerCount()
If set, specifies the estimated number of speakers in the conversation. Defaults to '2'. Ignored unless enable_speaker_diarization is set to true. Note: Use diarization_config instead.
int32 diarization_speaker_count = 17 [deprecated = true];
public boolean hasDiarizationConfig()
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
.google.cloud.speech.v1p1beta1.SpeakerDiarizationConfig diarization_config = 19;
Specified by: hasDiarizationConfig in interface RecognitionConfigOrBuilder

public SpeakerDiarizationConfig getDiarizationConfig()
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
.google.cloud.speech.v1p1beta1.SpeakerDiarizationConfig diarization_config = 19;
Specified by: getDiarizationConfig in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setDiarizationConfig(SpeakerDiarizationConfig value)
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
.google.cloud.speech.v1p1beta1.SpeakerDiarizationConfig diarization_config = 19;

public RecognitionConfig.Builder setDiarizationConfig(SpeakerDiarizationConfig.Builder builderForValue)
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
.google.cloud.speech.v1p1beta1.SpeakerDiarizationConfig diarization_config = 19;

public RecognitionConfig.Builder mergeDiarizationConfig(SpeakerDiarizationConfig value)
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
.google.cloud.speech.v1p1beta1.SpeakerDiarizationConfig diarization_config = 19;

public RecognitionConfig.Builder clearDiarizationConfig()
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
.google.cloud.speech.v1p1beta1.SpeakerDiarizationConfig diarization_config = 19;

public SpeakerDiarizationConfig.Builder getDiarizationConfigBuilder()
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
.google.cloud.speech.v1p1beta1.SpeakerDiarizationConfig diarization_config = 19;

public SpeakerDiarizationConfigOrBuilder getDiarizationConfigOrBuilder()
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
.google.cloud.speech.v1p1beta1.SpeakerDiarizationConfig diarization_config = 19;
Specified by: getDiarizationConfigOrBuilder in interface RecognitionConfigOrBuilder
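A minimal sketch of the preferred replacement for the deprecated enable_speaker_diarization and diarization_speaker_count fields. The SpeakerDiarizationConfig min/max speaker-count setters are assumed from the same v1p1beta1 package, and the bounds chosen here are illustrative.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.SpeakerDiarizationConfig;

// Configure diarization in a single message instead of the deprecated fields.
SpeakerDiarizationConfig diarization = SpeakerDiarizationConfig.newBuilder()
    .setEnableSpeakerDiarization(true)
    .setMinSpeakerCount(2)   // illustrative bounds on the expected speaker count
    .setMaxSpeakerCount(4)
    .build();

RecognitionConfig config = RecognitionConfig.newBuilder()
    .setLanguageCode("en-US")
    .setDiarizationConfig(diarization)
    .build();
```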
public boolean hasMetadata()
Metadata regarding this request.
.google.cloud.speech.v1p1beta1.RecognitionMetadata metadata = 9;
Specified by: hasMetadata in interface RecognitionConfigOrBuilder

public RecognitionMetadata getMetadata()
Metadata regarding this request.
.google.cloud.speech.v1p1beta1.RecognitionMetadata metadata = 9;
Specified by: getMetadata in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setMetadata(RecognitionMetadata value)
Metadata regarding this request.
.google.cloud.speech.v1p1beta1.RecognitionMetadata metadata = 9;

public RecognitionConfig.Builder setMetadata(RecognitionMetadata.Builder builderForValue)
Metadata regarding this request.
.google.cloud.speech.v1p1beta1.RecognitionMetadata metadata = 9;

public RecognitionConfig.Builder mergeMetadata(RecognitionMetadata value)
Metadata regarding this request.
.google.cloud.speech.v1p1beta1.RecognitionMetadata metadata = 9;

public RecognitionConfig.Builder clearMetadata()
Metadata regarding this request.
.google.cloud.speech.v1p1beta1.RecognitionMetadata metadata = 9;

public RecognitionMetadata.Builder getMetadataBuilder()
Metadata regarding this request.
.google.cloud.speech.v1p1beta1.RecognitionMetadata metadata = 9;

public RecognitionMetadataOrBuilder getMetadataOrBuilder()
Metadata regarding this request.
.google.cloud.speech.v1p1beta1.RecognitionMetadata metadata = 9;
Specified by: getMetadataOrBuilder in interface RecognitionConfigOrBuilder
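A hedged sketch of attaching request metadata. The `RecognitionMetadata` message and its nested enums are assumed from the same v1p1beta1 package, and the specific values chosen are illustrative.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;
import com.google.cloud.speech.v1p1beta1.RecognitionMetadata;
import com.google.cloud.speech.v1p1beta1.RecognitionMetadata.InteractionType;
import com.google.cloud.speech.v1p1beta1.RecognitionMetadata.MicrophoneDistance;

// Describe the audio being sent; the values below are illustrative.
RecognitionMetadata metadata = RecognitionMetadata.newBuilder()
    .setInteractionType(InteractionType.PHONE_CALL)
    .setMicrophoneDistance(MicrophoneDistance.NEARFIELD)
    .build();

RecognitionConfig config = RecognitionConfig.newBuilder()
    .setLanguageCode("en-US")
    .setMetadata(metadata)
    .build();
```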
public String getModel()
Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.

| Model | Description |
|---|---|
| `latest_long` | Best for long form content like media or conversation. |
| `latest_short` | Best for short form content like commands or single shot directed speech. |
| `command_and_search` | Best for short queries such as voice commands or voice search. |
| `phone_call` | Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate). |
| `video` | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16 kHz or greater sampling rate. This is a premium model that costs more than the standard rate. |
| `default` | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16 kHz or greater sampling rate. |
| `medical_conversation` | Best for audio that originated from a conversation between a medical provider and patient. |
| `medical_dictation` | Best for audio that originated from dictation notes by a medical provider. |

string model = 13;
Specified by: getModel in interface RecognitionConfigOrBuilder
public com.google.protobuf.ByteString getModelBytes()
Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.

| Model | Description |
|---|---|
| `latest_long` | Best for long form content like media or conversation. |
| `latest_short` | Best for short form content like commands or single shot directed speech. |
| `command_and_search` | Best for short queries such as voice commands or voice search. |
| `phone_call` | Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate). |
| `video` | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16 kHz or greater sampling rate. This is a premium model that costs more than the standard rate. |
| `default` | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16 kHz or greater sampling rate. |
| `medical_conversation` | Best for audio that originated from a conversation between a medical provider and patient. |
| `medical_dictation` | Best for audio that originated from dictation notes by a medical provider. |

string model = 13;
Specified by: getModelBytes in interface RecognitionConfigOrBuilder
public RecognitionConfig.Builder setModel(String value)
Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.

| Model | Description |
|---|---|
| `latest_long` | Best for long form content like media or conversation. |
| `latest_short` | Best for short form content like commands or single shot directed speech. |
| `command_and_search` | Best for short queries such as voice commands or voice search. |
| `phone_call` | Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate). |
| `video` | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16 kHz or greater sampling rate. This is a premium model that costs more than the standard rate. |
| `default` | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16 kHz or greater sampling rate. |
| `medical_conversation` | Best for audio that originated from a conversation between a medical provider and patient. |
| `medical_dictation` | Best for audio that originated from dictation notes by a medical provider. |

string model = 13;
Parameters: value - The model to set.
public RecognitionConfig.Builder clearModel()
Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.

| Model | Description |
|---|---|
| `latest_long` | Best for long form content like media or conversation. |
| `latest_short` | Best for short form content like commands or single shot directed speech. |
| `command_and_search` | Best for short queries such as voice commands or voice search. |
| `phone_call` | Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate). |
| `video` | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16 kHz or greater sampling rate. This is a premium model that costs more than the standard rate. |
| `default` | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16 kHz or greater sampling rate. |
| `medical_conversation` | Best for audio that originated from a conversation between a medical provider and patient. |
| `medical_dictation` | Best for audio that originated from dictation notes by a medical provider. |

string model = 13;
public RecognitionConfig.Builder setModelBytes(com.google.protobuf.ByteString value)
Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.

| Model | Description |
|---|---|
| `latest_long` | Best for long form content like media or conversation. |
| `latest_short` | Best for short form content like commands or single shot directed speech. |
| `command_and_search` | Best for short queries such as voice commands or voice search. |
| `phone_call` | Best for audio that originated from a phone call (typically recorded at an 8 kHz sampling rate). |
| `video` | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16 kHz or greater sampling rate. This is a premium model that costs more than the standard rate. |
| `default` | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16 kHz or greater sampling rate. |
| `medical_conversation` | Best for audio that originated from a conversation between a medical provider and patient. |
| `medical_dictation` | Best for audio that originated from dictation notes by a medical provider. |

string model = 13;
Parameters: value - The bytes for model to set.
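A brief, illustrative sketch of selecting a model explicitly; the `phone_call` identifier comes from the table above, while the encoding, sample rate, and language settings are made up for the example.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;

// Pick the model best suited to the audio; here, telephony audio at 8 kHz.
RecognitionConfig config = RecognitionConfig.newBuilder()
    .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
    .setSampleRateHertz(8000)
    .setLanguageCode("en-US")
    .setModel("phone_call")
    .build();
```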
public boolean getUseEnhanced()
Set to true to use an enhanced model for speech recognition. If `use_enhanced` is set to true and the `model` field is not set, then an appropriate enhanced model is chosen if an enhanced model exists for the audio. If `use_enhanced` is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.
bool use_enhanced = 14;
Specified by: getUseEnhanced in interface RecognitionConfigOrBuilder

public RecognitionConfig.Builder setUseEnhanced(boolean value)
Set to true to use an enhanced model for speech recognition. If `use_enhanced` is set to true and the `model` field is not set, then an appropriate enhanced model is chosen if an enhanced model exists for the audio. If `use_enhanced` is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.
bool use_enhanced = 14;
Parameters: value - The useEnhanced to set.

public RecognitionConfig.Builder clearUseEnhanced()
Set to true to use an enhanced model for speech recognition. If `use_enhanced` is set to true and the `model` field is not set, then an appropriate enhanced model is chosen if an enhanced model exists for the audio. If `use_enhanced` is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.
bool use_enhanced = 14;
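A minimal sketch of requesting an enhanced model. With `model` left unset, an appropriate enhanced model is chosen automatically when one exists for the audio; pairing it with an explicit model falls back to the standard version of that model if no enhanced variant is available.

```java
import com.google.cloud.speech.v1p1beta1.RecognitionConfig;

// Prefer an enhanced model where one is available for this audio.
RecognitionConfig config = RecognitionConfig.newBuilder()
    .setLanguageCode("en-US")
    .setUseEnhanced(true)
    .build();
```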
public final RecognitionConfig.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
Specified by: setUnknownFields in interface com.google.protobuf.Message.Builder
Overrides: setUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

public final RecognitionConfig.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
Specified by: mergeUnknownFields in interface com.google.protobuf.Message.Builder
Overrides: mergeUnknownFields in class com.google.protobuf.GeneratedMessageV3.Builder<RecognitionConfig.Builder>

Copyright © 2024 Google LLC. All rights reserved.