public interface StreamingRecognizeResponseOrBuilder
extends com.google.protobuf.MessageOrBuilder
| Modifier and Type | Method and Description |
|---|---|
| `com.google.rpc.Status` | `getError()`: If set, returns a [google.rpc.Status][google.rpc.Status] message that specifies the error for the operation. |
| `com.google.rpc.StatusOrBuilder` | `getErrorOrBuilder()`: If set, returns a [google.rpc.Status][google.rpc.Status] message that specifies the error for the operation. |
| `long` | `getRequestId()`: The ID associated with the request. |
| `StreamingRecognitionResult` | `getResults(int index)`: This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. |
| `int` | `getResultsCount()`: This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. |
| `List<StreamingRecognitionResult>` | `getResultsList()`: This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. |
| `StreamingRecognitionResultOrBuilder` | `getResultsOrBuilder(int index)`: This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. |
| `List<? extends StreamingRecognitionResultOrBuilder>` | `getResultsOrBuilderList()`: This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. |
| `SpeechAdaptationInfo` | `getSpeechAdaptationInfo()`: Provides information on adaptation behavior in the response. |
| `SpeechAdaptationInfoOrBuilder` | `getSpeechAdaptationInfoOrBuilder()`: Provides information on adaptation behavior in the response. |
| `com.google.protobuf.Duration` | `getSpeechEventTime()`: Time offset between the beginning of the audio and event emission. |
| `com.google.protobuf.DurationOrBuilder` | `getSpeechEventTimeOrBuilder()`: Time offset between the beginning of the audio and event emission. |
| `StreamingRecognizeResponse.SpeechEventType` | `getSpeechEventType()`: Indicates the type of speech event. |
| `int` | `getSpeechEventTypeValue()`: Indicates the type of speech event. |
| `com.google.protobuf.Duration` | `getTotalBilledTime()`: When available, billed audio seconds for the stream. |
| `com.google.protobuf.DurationOrBuilder` | `getTotalBilledTimeOrBuilder()`: When available, billed audio seconds for the stream. |
| `boolean` | `hasError()`: If set, returns a [google.rpc.Status][google.rpc.Status] message that specifies the error for the operation. |
| `boolean` | `hasSpeechAdaptationInfo()`: Provides information on adaptation behavior in the response. |
| `boolean` | `hasSpeechEventTime()`: Time offset between the beginning of the audio and event emission. |
| `boolean` | `hasTotalBilledTime()`: When available, billed audio seconds for the stream. |
Methods inherited from interface `com.google.protobuf.MessageOrBuilder`: `findInitializationErrors`, `getAllFields`, `getDefaultInstanceForType`, `getDescriptorForType`, `getField`, `getInitializationErrorString`, `getOneofFieldDescriptor`, `getRepeatedField`, `getRepeatedFieldCount`, `getUnknownFields`, `hasField`, `hasOneof`

`boolean hasError()`
If set, returns a [google.rpc.Status][google.rpc.Status] message that specifies the error for the operation.

`.google.rpc.Status error = 1;`

`com.google.rpc.Status getError()`

If set, returns a [google.rpc.Status][google.rpc.Status] message that specifies the error for the operation.

`.google.rpc.Status error = 1;`

`com.google.rpc.StatusOrBuilder getErrorOrBuilder()`

If set, returns a [google.rpc.Status][google.rpc.Status] message that specifies the error for the operation.

`.google.rpc.Status error = 1;`

`List<StreamingRecognitionResult> getResultsList()`
This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one `is_final=true` result (the newly settled portion), followed by zero or more `is_final=false` results (the interim results).

`repeated .google.cloud.speech.v1.StreamingRecognitionResult results = 2;`

`StreamingRecognitionResult getResults(int index)`

This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one `is_final=true` result (the newly settled portion), followed by zero or more `is_final=false` results (the interim results).

`repeated .google.cloud.speech.v1.StreamingRecognitionResult results = 2;`

`int getResultsCount()`

This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one `is_final=true` result (the newly settled portion), followed by zero or more `is_final=false` results (the interim results).

`repeated .google.cloud.speech.v1.StreamingRecognitionResult results = 2;`

`List<? extends StreamingRecognitionResultOrBuilder> getResultsOrBuilderList()`

This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one `is_final=true` result (the newly settled portion), followed by zero or more `is_final=false` results (the interim results).

`repeated .google.cloud.speech.v1.StreamingRecognitionResult results = 2;`

`StreamingRecognitionResultOrBuilder getResultsOrBuilder(int index)`

This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one `is_final=true` result (the newly settled portion), followed by zero or more `is_final=false` results (the interim results).

`repeated .google.cloud.speech.v1.StreamingRecognitionResult results = 2;`

`int getSpeechEventTypeValue()`
Indicates the type of speech event.

`.google.cloud.speech.v1.StreamingRecognizeResponse.SpeechEventType speech_event_type = 4;`

`StreamingRecognizeResponse.SpeechEventType getSpeechEventType()`

Indicates the type of speech event.

`.google.cloud.speech.v1.StreamingRecognizeResponse.SpeechEventType speech_event_type = 4;`
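A speech event (for example `END_OF_SINGLE_UTTERANCE`) is paired with `speech_event_time`, a protobuf `Duration` holding whole seconds plus a nanosecond remainder. A minimal sketch of turning that pair into a single millisecond offset, using plain primitives rather than the real `com.google.protobuf.Duration` class:

```java
class EventTimeSketch {
    // Converts a protobuf-style Duration (whole seconds + nanosecond
    // remainder) into a millisecond offset from the start of the audio.
    static long toMillis(long seconds, int nanos) {
        return seconds * 1_000L + nanos / 1_000_000;
    }

    public static void main(String[] args) {
        // An event emitted 2.75 s into the audio: seconds=2, nanos=750_000_000.
        System.out.println(toMillis(2, 750_000_000)); // 2750
    }
}
```

With the real classes, the same arithmetic applies to `getSpeechEventTime().getSeconds()` and `getSpeechEventTime().getNanos()`, and likewise to `total_billed_time` below.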
`boolean hasSpeechEventTime()`

Time offset between the beginning of the audio and event emission.

`.google.protobuf.Duration speech_event_time = 8;`

`com.google.protobuf.Duration getSpeechEventTime()`

Time offset between the beginning of the audio and event emission.

`.google.protobuf.Duration speech_event_time = 8;`

`com.google.protobuf.DurationOrBuilder getSpeechEventTimeOrBuilder()`

Time offset between the beginning of the audio and event emission.

`.google.protobuf.Duration speech_event_time = 8;`

`boolean hasTotalBilledTime()`
When available, billed audio seconds for the stream. Set only if this is the last response in the stream.

`.google.protobuf.Duration total_billed_time = 5;`

`com.google.protobuf.Duration getTotalBilledTime()`

When available, billed audio seconds for the stream. Set only if this is the last response in the stream.

`.google.protobuf.Duration total_billed_time = 5;`

`com.google.protobuf.DurationOrBuilder getTotalBilledTimeOrBuilder()`

When available, billed audio seconds for the stream. Set only if this is the last response in the stream.

`.google.protobuf.Duration total_billed_time = 5;`

`boolean hasSpeechAdaptationInfo()`
Provides information on adaptation behavior in the response.

`.google.cloud.speech.v1.SpeechAdaptationInfo speech_adaptation_info = 9;`

`SpeechAdaptationInfo getSpeechAdaptationInfo()`

Provides information on adaptation behavior in the response.

`.google.cloud.speech.v1.SpeechAdaptationInfo speech_adaptation_info = 9;`

`SpeechAdaptationInfoOrBuilder getSpeechAdaptationInfoOrBuilder()`

Provides information on adaptation behavior in the response.

`.google.cloud.speech.v1.SpeechAdaptationInfo speech_adaptation_info = 9;`

`long getRequestId()`
The ID associated with the request. This is a unique ID specific only to the given request.

`int64 request_id = 10;`

Copyright © 2024 Google LLC. All rights reserved.