Interface GetModelInvocationJobResponse.Builder
All Superinterfaces:
AwsResponse.Builder, BedrockResponse.Builder, Buildable, CopyableBuilder<GetModelInvocationJobResponse.Builder,GetModelInvocationJobResponse>, SdkBuilder<GetModelInvocationJobResponse.Builder,GetModelInvocationJobResponse>, SdkPojo, SdkResponse.Builder
Enclosing class:
GetModelInvocationJobResponse
@Mutable @NotThreadSafe public static interface GetModelInvocationJobResponse.Builder extends BedrockResponse.Builder, SdkPojo, CopyableBuilder<GetModelInvocationJobResponse.Builder,GetModelInvocationJobResponse>
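Response builders are normally populated by the SDK when it unmarshals a GetModelInvocationJob result, but they can also be used directly, for example to stub a Bedrock call in a unit test. A minimal sketch (the ARN and job name are placeholder values):

    import java.time.Instant;
    import software.amazon.awssdk.services.bedrock.model.GetModelInvocationJobResponse;
    import software.amazon.awssdk.services.bedrock.model.ModelInvocationJobStatus;

    // Construct a response by hand, e.g. for a stubbed Bedrock client in tests.
    GetModelInvocationJobResponse response = GetModelInvocationJobResponse.builder()
            .jobArn("arn:aws:bedrock:us-east-1:111122223333:model-invocation-job/abc123") // placeholder
            .jobName("my-batch-job") // placeholder
            .status(ModelInvocationJobStatus.IN_PROGRESS)
            .submitTime(Instant.now())
            .build();

    // Because the builder is a CopyableBuilder, a modified copy can be derived
    // from an existing instance without mutating it.
    GetModelInvocationJobResponse completed = response.toBuilder()
            .status(ModelInvocationJobStatus.COMPLETED)
            .endTime(Instant.now())
            .build();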
Method Summary
Modifier and Type | Method | Description
GetModelInvocationJobResponse.Builder | clientRequestToken(String clientRequestToken) | A unique, case-sensitive identifier to ensure that the API request completes no more than one time.
GetModelInvocationJobResponse.Builder | endTime(Instant endTime) | The time at which the batch inference job ended.
default GetModelInvocationJobResponse.Builder | inputDataConfig(Consumer<ModelInvocationJobInputDataConfig.Builder> inputDataConfig) | Details about the location of the input to the batch inference job.
GetModelInvocationJobResponse.Builder | inputDataConfig(ModelInvocationJobInputDataConfig inputDataConfig) | Details about the location of the input to the batch inference job.
GetModelInvocationJobResponse.Builder | jobArn(String jobArn) | The Amazon Resource Name (ARN) of the batch inference job.
GetModelInvocationJobResponse.Builder | jobExpirationTime(Instant jobExpirationTime) | The time at which the batch inference job times out, or the time at which it timed out.
GetModelInvocationJobResponse.Builder | jobName(String jobName) | The name of the batch inference job.
GetModelInvocationJobResponse.Builder | lastModifiedTime(Instant lastModifiedTime) | The time at which the batch inference job was last modified.
GetModelInvocationJobResponse.Builder | message(String message) | If the batch inference job failed, this field contains a message describing why the job failed.
GetModelInvocationJobResponse.Builder | modelId(String modelId) | The unique identifier of the foundation model used for model inference.
default GetModelInvocationJobResponse.Builder | outputDataConfig(Consumer<ModelInvocationJobOutputDataConfig.Builder> outputDataConfig) | Details about the location of the output of the batch inference job.
GetModelInvocationJobResponse.Builder | outputDataConfig(ModelInvocationJobOutputDataConfig outputDataConfig) | Details about the location of the output of the batch inference job.
GetModelInvocationJobResponse.Builder | roleArn(String roleArn) | The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference.
GetModelInvocationJobResponse.Builder | status(String status) | The status of the batch inference job.
GetModelInvocationJobResponse.Builder | status(ModelInvocationJobStatus status) | The status of the batch inference job.
GetModelInvocationJobResponse.Builder | submitTime(Instant submitTime) | The time at which the batch inference job was submitted.
GetModelInvocationJobResponse.Builder | timeoutDurationInHours(Integer timeoutDurationInHours) | The number of hours after which the batch inference job was set to time out.
default GetModelInvocationJobResponse.Builder | vpcConfig(Consumer<VpcConfig.Builder> vpcConfig) | The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job.
GetModelInvocationJobResponse.Builder | vpcConfig(VpcConfig vpcConfig) | The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job.
-
Methods inherited from interface software.amazon.awssdk.services.bedrock.model.BedrockResponse.Builder
build, responseMetadata, responseMetadata
-
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder
copy
-
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder
applyMutation, build
-
Methods inherited from interface software.amazon.awssdk.core.SdkPojo
equalsBySdkFields, sdkFieldNameToField, sdkFields
-
Methods inherited from interface software.amazon.awssdk.core.SdkResponse.Builder
sdkHttpResponse, sdkHttpResponse
-
Method Detail
-
jobArn
GetModelInvocationJobResponse.Builder jobArn(String jobArn)
The Amazon Resource Name (ARN) of the batch inference job.
- Parameters:
jobArn - The Amazon Resource Name (ARN) of the batch inference job.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
jobName
GetModelInvocationJobResponse.Builder jobName(String jobName)
The name of the batch inference job.
- Parameters:
jobName - The name of the batch inference job.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
modelId
GetModelInvocationJobResponse.Builder modelId(String modelId)
The unique identifier of the foundation model used for model inference.
- Parameters:
modelId - The unique identifier of the foundation model used for model inference.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
clientRequestToken
GetModelInvocationJobResponse.Builder clientRequestToken(String clientRequestToken)
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
- Parameters:
clientRequestToken - A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
roleArn
GetModelInvocationJobResponse.Builder roleArn(String roleArn)
The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
- Parameters:
roleArn - The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
status
GetModelInvocationJobResponse.Builder status(String status)
The status of the batch inference job.
The following statuses are possible:
- Submitted – This job has been submitted to a queue for validation.
- Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
  - Your IAM service role has access to the Amazon S3 buckets containing your files.
  - Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.
  - Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
- Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
- Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
- InProgress – This job has begun. You can start viewing the results in the output S3 location.
- Completed – This job has successfully completed. View the output files in the output S3 location.
- PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
- Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
- Stopped – This job was stopped by a user.
- Stopping – This job is being stopped by a user.
- Parameters:
status - The status of the batch inference job. The following statuses are possible:
- Submitted – This job has been submitted to a queue for validation.
- Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
  - Your IAM service role has access to the Amazon S3 buckets containing your files.
  - Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.
  - Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
- Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
- Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
- InProgress – This job has begun. You can start viewing the results in the output S3 location.
- Completed – This job has successfully completed. View the output files in the output S3 location.
- PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
- Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
- Stopped – This job was stopped by a user.
- Stopping – This job is being stopped by a user.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
ModelInvocationJobStatus
-
status
GetModelInvocationJobResponse.Builder status(ModelInvocationJobStatus status)
The status of the batch inference job.
The following statuses are possible:
- Submitted – This job has been submitted to a queue for validation.
- Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
  - Your IAM service role has access to the Amazon S3 buckets containing your files.
  - Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.
  - Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
- Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
- Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
- InProgress – This job has begun. You can start viewing the results in the output S3 location.
- Completed – This job has successfully completed. View the output files in the output S3 location.
- PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
- Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
- Stopped – This job was stopped by a user.
- Stopping – This job is being stopped by a user.
- Parameters:
status - The status of the batch inference job. The following statuses are possible:
- Submitted – This job has been submitted to a queue for validation.
- Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:
  - Your IAM service role has access to the Amazon S3 buckets containing your files.
  - Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.
  - Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.
- Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.
- Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.
- InProgress – This job has begun. You can start viewing the results in the output S3 location.
- Completed – This job has successfully completed. View the output files in the output S3 location.
- PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.
- Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Amazon Web Services Support Center.
- Stopped – This job was stopped by a user.
- Stopping – This job is being stopped by a user.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
ModelInvocationJobStatus
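Both overloads set the same underlying field; which one to use is mostly a matter of type safety. A short sketch, where builder is any GetModelInvocationJobResponse.Builder instance and the unknown-value behavior is the standard one for SDK enum-backed string fields:

    // Preferred: the enum overload is checked at compile time.
    builder.status(ModelInvocationJobStatus.PARTIALLY_COMPLETED);

    // The String overload also accepts values this SDK version does not know about.
    // Reading such a value back as an enum yields UNKNOWN_TO_SDK_VERSION, while
    // statusAsString() on the built response preserves the original text.
    builder.status("PartiallyCompleted");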
-
message
GetModelInvocationJobResponse.Builder message(String message)
If the batch inference job failed, this field contains a message describing why the job failed.
- Parameters:
message - If the batch inference job failed, this field contains a message describing why the job failed.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
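Since this field is only populated on failure, a typical caller pairs it with the Failed status when inspecting a retrieved job. A usage sketch, where response is a GetModelInvocationJobResponse:

    // Surface the failure reason for a job that did not complete.
    if (response.status() == ModelInvocationJobStatus.FAILED) {
        System.err.println("Batch inference job failed: " + response.message());
    }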
-
submitTime
GetModelInvocationJobResponse.Builder submitTime(Instant submitTime)
The time at which the batch inference job was submitted.
- Parameters:
submitTime - The time at which the batch inference job was submitted.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
lastModifiedTime
GetModelInvocationJobResponse.Builder lastModifiedTime(Instant lastModifiedTime)
The time at which the batch inference job was last modified.
- Parameters:
lastModifiedTime - The time at which the batch inference job was last modified.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
endTime
GetModelInvocationJobResponse.Builder endTime(Instant endTime)
The time at which the batch inference job ended.
- Parameters:
endTime - The time at which the batch inference job ended.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
inputDataConfig
GetModelInvocationJobResponse.Builder inputDataConfig(ModelInvocationJobInputDataConfig inputDataConfig)
Details about the location of the input to the batch inference job.
- Parameters:
inputDataConfig - Details about the location of the input to the batch inference job.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
inputDataConfig
default GetModelInvocationJobResponse.Builder inputDataConfig(Consumer<ModelInvocationJobInputDataConfig.Builder> inputDataConfig)
Details about the location of the input to the batch inference job.
This is a convenience method that creates an instance of the ModelInvocationJobInputDataConfig.Builder, avoiding the need to create one manually via ModelInvocationJobInputDataConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to inputDataConfig(ModelInvocationJobInputDataConfig).
- Parameters:
inputDataConfig - a consumer that will call methods on ModelInvocationJobInputDataConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
inputDataConfig(ModelInvocationJobInputDataConfig)
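The two overloads are equivalent; the Consumer variant just saves the explicit builder boilerplate. A sketch, assuming the nested s3InputDataConfig and s3Uri members of ModelInvocationJobInputDataConfig and a placeholder bucket name:

    // Consumer shorthand:
    builder.inputDataConfig(c -> c.s3InputDataConfig(s3 -> s3.s3Uri("s3://amzn-s3-demo-bucket/input/")));

    // ...expands to the explicit form:
    builder.inputDataConfig(ModelInvocationJobInputDataConfig.builder()
            .s3InputDataConfig(ModelInvocationJobS3InputDataConfig.builder()
                    .s3Uri("s3://amzn-s3-demo-bucket/input/")
                    .build())
            .build());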
-
outputDataConfig
GetModelInvocationJobResponse.Builder outputDataConfig(ModelInvocationJobOutputDataConfig outputDataConfig)
Details about the location of the output of the batch inference job.
- Parameters:
outputDataConfig - Details about the location of the output of the batch inference job.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
outputDataConfig
default GetModelInvocationJobResponse.Builder outputDataConfig(Consumer<ModelInvocationJobOutputDataConfig.Builder> outputDataConfig)
Details about the location of the output of the batch inference job.
This is a convenience method that creates an instance of the ModelInvocationJobOutputDataConfig.Builder, avoiding the need to create one manually via ModelInvocationJobOutputDataConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to outputDataConfig(ModelInvocationJobOutputDataConfig).
- Parameters:
outputDataConfig - a consumer that will call methods on ModelInvocationJobOutputDataConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
outputDataConfig(ModelInvocationJobOutputDataConfig)
-
vpcConfig
GetModelInvocationJobResponse.Builder vpcConfig(VpcConfig vpcConfig)
The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
- Parameters:
vpcConfig - The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
vpcConfig
default GetModelInvocationJobResponse.Builder vpcConfig(Consumer<VpcConfig.Builder> vpcConfig)
The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
This is a convenience method that creates an instance of the VpcConfig.Builder, avoiding the need to create one manually via VpcConfig.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to vpcConfig(VpcConfig).
- Parameters:
vpcConfig - a consumer that will call methods on VpcConfig.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
vpcConfig(VpcConfig)
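A sketch of the Consumer variant with placeholder IDs, assuming the subnetIds and securityGroupIds members that VpcConfig carries in this model:

    builder.vpcConfig(v -> v
            .subnetIds("subnet-0123456789abcdef0")      // placeholder
            .securityGroupIds("sg-0123456789abcdef0")); // placeholder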
-
timeoutDurationInHours
GetModelInvocationJobResponse.Builder timeoutDurationInHours(Integer timeoutDurationInHours)
The number of hours after which the batch inference job was set to time out.
- Parameters:
timeoutDurationInHours - The number of hours after which the batch inference job was set to time out.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
jobExpirationTime
GetModelInvocationJobResponse.Builder jobExpirationTime(Instant jobExpirationTime)
The time at which the batch inference job times out, or the time at which it timed out.
- Parameters:
jobExpirationTime - The time at which the batch inference job times out, or the time at which it timed out.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
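Together with timeoutDurationInHours, this field lets a caller report how long a still-running job has before it expires. A usage sketch, where response is a GetModelInvocationJobResponse:

    import java.time.Duration;
    import java.time.Instant;

    // Hours remaining until the job times out, per the returned expiration time.
    Duration remaining = Duration.between(Instant.now(), response.jobExpirationTime());
    System.out.println("Job times out in " + remaining.toHours()
            + " hours (configured limit: " + response.timeoutDurationInHours() + " hours)");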