Class GetModelInvocationJobResponse

    • Method Detail

      • jobArn

        public final String jobArn()

        The Amazon Resource Name (ARN) of the batch inference job.

        Returns:
        The Amazon Resource Name (ARN) of the batch inference job.
      • jobName

        public final String jobName()

        The name of the batch inference job.

        Returns:
        The name of the batch inference job.
      • modelId

        public final String modelId()

        The unique identifier of the foundation model used for model inference.

        Returns:
        The unique identifier of the foundation model used for model inference.
      • clientRequestToken

        public final String clientRequestToken()

        A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.

        Returns:
        A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
      • roleArn

        public final String roleArn()

        The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.

        Returns:
        The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
      • status

        public final ModelInvocationJobStatus status()

        The status of the batch inference job.

        The following statuses are possible:

        • Submitted – This job has been submitted to a queue for validation.

        • Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:

          • Your IAM service role has access to the Amazon S3 buckets containing your files.

          • Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.

          • Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.

        • Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.

        • Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.

        • InProgress – This job has begun. You can start viewing the results in the output S3 location.

        • Completed – This job has successfully completed. View the output files in the output S3 location.

        • PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.

        • Failed – This job has failed. Check the failure message for details. For further assistance, reach out to the Amazon Web Services Support Center.

        • Stopped – This job was stopped by a user.

        • Stopping – This job is being stopped by a user.

        If the service returns an enum value that is not available in the current SDK version, status will return ModelInvocationJobStatus.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from statusAsString().

        Returns:
        The status of the batch inference job.

        The following statuses are possible:

        • Submitted – This job has been submitted to a queue for validation.

        • Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:

          • Your IAM service role has access to the Amazon S3 buckets containing your files.

          • Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.

          • Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.

        • Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.

        • Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.

        • InProgress – This job has begun. You can start viewing the results in the output S3 location.

        • Completed – This job has successfully completed. View the output files in the output S3 location.

        • PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.

        • Failed – This job has failed. Check the failure message for details. For further assistance, reach out to the Amazon Web Services Support Center.

        • Stopped – This job was stopped by a user.

        • Stopping – This job is being stopped by a user.

        See Also:
        ModelInvocationJobStatus
      • statusAsString

        public final String statusAsString()

        The status of the batch inference job.

        The following statuses are possible:

        • Submitted – This job has been submitted to a queue for validation.

        • Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:

          • Your IAM service role has access to the Amazon S3 buckets containing your files.

          • Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.

          • Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.

        • Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.

        • Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.

        • InProgress – This job has begun. You can start viewing the results in the output S3 location.

        • Completed – This job has successfully completed. View the output files in the output S3 location.

        • PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.

        • Failed – This job has failed. Check the failure message for details. For further assistance, reach out to the Amazon Web Services Support Center.

        • Stopped – This job was stopped by a user.

        • Stopping – This job is being stopped by a user.

        If the service returns an enum value that is not available in the current SDK version, status will return ModelInvocationJobStatus.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from statusAsString().

        Returns:
        The status of the batch inference job.

        The following statuses are possible:

        • Submitted – This job has been submitted to a queue for validation.

        • Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:

          • Your IAM service role has access to the Amazon S3 buckets containing your files.

          • Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.

          • Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.

        • Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.

        • Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.

        • InProgress – This job has begun. You can start viewing the results in the output S3 location.

        • Completed – This job has successfully completed. View the output files in the output S3 location.

        • PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.

        • Failed – This job has failed. Check the failure message for details. For further assistance, reach out to the Amazon Web Services Support Center.

        • Stopped – This job was stopped by a user.

        • Stopping – This job is being stopped by a user.

        See Also:
        ModelInvocationJobStatus
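        The enum-safe handling described above can be sketched as follows. This is an illustrative example, not part of the generated reference: the StatusCheck class and isTerminal helper are hypothetical names, and the response is built locally with the builder where real code would receive it from BedrockClient.getModelInvocationJob(...).

        ```java
        import software.amazon.awssdk.services.bedrock.model.GetModelInvocationJobResponse;
        import software.amazon.awssdk.services.bedrock.model.ModelInvocationJobStatus;

        public class StatusCheck {

            // Returns true once the job can no longer change state.
            public static boolean isTerminal(GetModelInvocationJobResponse response) {
                ModelInvocationJobStatus status = response.status();
                // Statuses introduced after this SDK version map to
                // UNKNOWN_TO_SDK_VERSION; inspect the raw string instead of
                // misclassifying them.
                if (status == ModelInvocationJobStatus.UNKNOWN_TO_SDK_VERSION) {
                    System.out.println("Unrecognized status: " + response.statusAsString());
                    return false;
                }
                switch (status) {
                    case COMPLETED:
                    case PARTIALLY_COMPLETED:
                    case FAILED:
                    case STOPPED:
                    case EXPIRED:
                        return true;
                    default: // Submitted, Validating, Scheduled, InProgress, Stopping
                        return false;
                }
            }

            public static void main(String[] args) {
                // Built locally for illustration only.
                GetModelInvocationJobResponse job = GetModelInvocationJobResponse.builder()
                        .jobName("my-batch-job")
                        .status(ModelInvocationJobStatus.IN_PROGRESS)
                        .build();
                System.out.println(job.jobName() + " terminal? " + isTerminal(job));
            }
        }
        ```

        Checking for UNKNOWN_TO_SDK_VERSION before the switch keeps a polling loop from treating a status added in a newer service version as non-terminal forever without at least surfacing its raw value.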
      • message

        public final String message()

        If the batch inference job failed, this field contains a message describing why the job failed.

        Returns:
        If the batch inference job failed, this field contains a message describing why the job failed.
      • submitTime

        public final Instant submitTime()

        The time at which the batch inference job was submitted.

        Returns:
        The time at which the batch inference job was submitted.
      • lastModifiedTime

        public final Instant lastModifiedTime()

        The time at which the batch inference job was last modified.

        Returns:
        The time at which the batch inference job was last modified.
      • endTime

        public final Instant endTime()

        The time at which the batch inference job ended.

        Returns:
        The time at which the batch inference job ended.
      • inputDataConfig

        public final ModelInvocationJobInputDataConfig inputDataConfig()

        Details about the location of the input to the batch inference job.

        Returns:
        Details about the location of the input to the batch inference job.
      • outputDataConfig

        public final ModelInvocationJobOutputDataConfig outputDataConfig()

        Details about the location of the output of the batch inference job.

        Returns:
        Details about the location of the output of the batch inference job.
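        As an illustrative sketch (not part of the generated reference), the S3 output location can be read from the returned config. This assumes the S3 variant of the output config, which is what batch inference jobs use; OutputLocation and outputS3Uri are hypothetical names, and the URI is a placeholder.

        ```java
        import software.amazon.awssdk.services.bedrock.model.GetModelInvocationJobResponse;
        import software.amazon.awssdk.services.bedrock.model.ModelInvocationJobOutputDataConfig;
        import software.amazon.awssdk.services.bedrock.model.ModelInvocationJobS3OutputDataConfig;

        public class OutputLocation {

            // Extract the S3 URI the job writes results to, or null if the
            // response carries no S3 output configuration.
            public static String outputS3Uri(GetModelInvocationJobResponse job) {
                ModelInvocationJobOutputDataConfig config = job.outputDataConfig();
                if (config == null || config.s3OutputDataConfig() == null) {
                    return null;
                }
                return config.s3OutputDataConfig().s3Uri();
            }

            public static void main(String[] args) {
                // Built locally for illustration; the bucket URI is a placeholder.
                GetModelInvocationJobResponse job = GetModelInvocationJobResponse.builder()
                        .outputDataConfig(ModelInvocationJobOutputDataConfig.builder()
                                .s3OutputDataConfig(ModelInvocationJobS3OutputDataConfig.builder()
                                        .s3Uri("s3://amzn-s3-demo-bucket/output/")
                                        .build())
                                .build())
                        .build();
                System.out.println(outputS3Uri(job));
            }
        }
        ```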
      • timeoutDurationInHours

        public final Integer timeoutDurationInHours()

        The number of hours after which the batch inference job was set to time out.

        Returns:
        The number of hours after which the batch inference job was set to time out.
      • jobExpirationTime

        public final Instant jobExpirationTime()

        The time at which the batch inference job will time out, or the time at which it timed out.

        Returns:
        The time at which the batch inference job will time out, or the time at which it timed out.
      • toString

        public final String toString()
        Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be redacted from this string using a placeholder value.
        Overrides:
        toString in class Object
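        Putting the accessors together, a minimal retrieval sketch might look like the following. The region resolution relies on the default provider chain, the job ARN is a placeholder for the ARN returned by CreateModelInvocationJob, and JobInspector and summarize are illustrative names, not part of the SDK.

        ```java
        import software.amazon.awssdk.services.bedrock.BedrockClient;
        import software.amazon.awssdk.services.bedrock.model.GetModelInvocationJobRequest;
        import software.amazon.awssdk.services.bedrock.model.GetModelInvocationJobResponse;

        public class JobInspector {

            // Condense the key fields of a job response into one line.
            public static String summarize(GetModelInvocationJobResponse job) {
                return String.format("%s [%s] model=%s submitted=%s",
                        job.jobName(), job.statusAsString(), job.modelId(), job.submitTime());
            }

            public static void main(String[] args) {
                try (BedrockClient bedrock = BedrockClient.create()) {
                    // Substitute the ARN returned when the job was created.
                    GetModelInvocationJobResponse job = bedrock.getModelInvocationJob(
                            GetModelInvocationJobRequest.builder()
                                    .jobIdentifier("arn:aws:bedrock:us-east-1:111122223333:model-invocation-job/abc123")
                                    .build());
                    System.out.println(summarize(job));
                }
            }
        }
        ```

        Because the response's toString() redacts sensitive fields, a helper like summarize is mainly a convenience for logging exactly the fields you care about.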