public static final class PredictionServiceGrpc.PredictionServiceStub extends io.grpc.stub.AbstractStub<PredictionServiceGrpc.PredictionServiceStub>
AutoML Prediction API. On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
| Modifier and Type | Method and Description |
|---|---|
| `void` | `batchPredict(BatchPredictRequest request, io.grpc.stub.StreamObserver<com.google.longrunning.Operation> responseObserver)`<br>Perform a batch prediction. |
| `protected PredictionServiceGrpc.PredictionServiceStub` | `build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)` |
| `void` | `predict(PredictRequest request, io.grpc.stub.StreamObserver<PredictResponse> responseObserver)`<br>Perform an online prediction. |
Methods inherited from class io.grpc.stub.AbstractStub: `getCallOptions`, `getChannel`, `newStub`, `withCallCredentials`, `withChannel`, `withCompression`, `withDeadline`, `withDeadlineAfter`, `withExecutor`, `withInterceptors`, `withMaxInboundMessageSize`, `withMaxOutboundMessageSize`, `withOption`, `withWaitForReady`

protected PredictionServiceGrpc.PredictionServiceStub build(io.grpc.Channel channel, io.grpc.CallOptions callOptions)
Specified by: `build` in class `io.grpc.stub.AbstractStub<PredictionServiceGrpc.PredictionServiceStub>`

public void predict(PredictRequest request, io.grpc.stub.StreamObserver<PredictResponse> responseObserver)
Perform an online prediction. The prediction result will be directly
returned in the response.
Available for the following ML problems, with their expected request payloads:
* Image Classification - Image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
* Image Object Detection - Image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.
* Text Classification - TextSnippet, content up to 60,000 characters, UTF-8 encoded.
* Text Extraction - TextSnippet, content up to 30,000 characters, UTF-8 NFC encoded.
* Translation - TextSnippet, content up to 25,000 characters, UTF-8 encoded.
* Tables - Row, with column values matching the columns of the model, up to 5MB. Not available for FORECASTING [prediction_type][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type].
* Text Sentiment - TextSnippet, content up to 500 characters, UTF-8 encoded.
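As a sketch, an asynchronous online prediction call through this stub might look like the following. The endpoint, project, location, and model names are placeholder assumptions, and a real request would also attach an `ExamplePayload`; the stub delivers the result through the `StreamObserver` callbacks rather than a return value.

```java
import com.google.cloud.automl.v1beta1.PredictRequest;
import com.google.cloud.automl.v1beta1.PredictResponse;
import com.google.cloud.automl.v1beta1.PredictionServiceGrpc;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class PredictExample {
  public static void main(String[] args) {
    // Channel to the AutoML service; production code would also configure
    // call credentials on the stub.
    final ManagedChannel channel =
        ManagedChannelBuilder.forTarget("automl.googleapis.com:443").build();
    PredictionServiceGrpc.PredictionServiceStub stub =
        PredictionServiceGrpc.newStub(channel);

    // Placeholder resource name; a complete request would also set a payload
    // matching the model type (image, text snippet, row, ...).
    PredictRequest request = PredictRequest.newBuilder()
        .setName("projects/my-project/locations/us-central1/models/my-model")
        .build();

    // The async stub returns immediately; the response (or error) arrives
    // on the observer.
    stub.predict(request, new StreamObserver<PredictResponse>() {
      @Override public void onNext(PredictResponse response) {
        System.out.println("Prediction: " + response);
      }
      @Override public void onError(Throwable t) {
        System.err.println("Predict failed: " + t.getMessage());
      }
      @Override public void onCompleted() {
        channel.shutdown();
      }
    });
  }
}
```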
public void batchPredict(BatchPredictRequest request, io.grpc.stub.StreamObserver<com.google.longrunning.Operation> responseObserver)
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1beta1.PredictionService.Predict], the batch prediction result is not immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1beta1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML problems:
* Image Classification
* Image Object Detection
* Video Classification
* Video Object Tracking
* Text Extraction
* Tables
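A minimal sketch of starting a batch prediction, assuming a `stub` and a pre-built `BatchPredictRequest` (input/output configs omitted as placeholders). The observer receives the long-running `Operation`, whose name can then be polled via the Operations service until `getDone()` is true:

```java
import com.google.cloud.automl.v1beta1.BatchPredictRequest;
import com.google.cloud.automl.v1beta1.PredictionServiceGrpc;
import com.google.longrunning.Operation;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class BatchPredictExample {
  public static void main(String[] args) {
    ManagedChannel channel =
        ManagedChannelBuilder.forTarget("automl.googleapis.com:443").build();
    PredictionServiceGrpc.PredictionServiceStub stub =
        PredictionServiceGrpc.newStub(channel);

    // Placeholder name; a real request also sets input_config and
    // output_config pointing at GCS or BigQuery locations.
    BatchPredictRequest request = BatchPredictRequest.newBuilder()
        .setName("projects/my-project/locations/us-central1/models/my-model")
        .build();

    stub.batchPredict(request, new StreamObserver<Operation>() {
      @Override public void onNext(Operation operation) {
        // Only the operation handle arrives here; poll GetOperation with
        // operation.getName() until getDone() is true, then read the
        // BatchPredictResult from the response field.
        System.out.println("Started operation: " + operation.getName());
      }
      @Override public void onError(Throwable t) {
        System.err.println("BatchPredict failed: " + t.getMessage());
      }
      @Override public void onCompleted() { }
    });
  }
}
```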
Copyright © 2020 Google LLC. All rights reserved.