Interface TransformInput.Builder
-
- All Superinterfaces:
Buildable, CopyableBuilder<TransformInput.Builder,TransformInput>, SdkBuilder<TransformInput.Builder,TransformInput>, SdkPojo
- Enclosing class:
- TransformInput
public static interface TransformInput.Builder extends SdkPojo, CopyableBuilder<TransformInput.Builder,TransformInput>
-
Method Summary
- TransformInput.Builder compressionType(String compressionType)
  If your transform data is compressed, specify the compression type.
- TransformInput.Builder compressionType(CompressionType compressionType)
  If your transform data is compressed, specify the compression type.
- TransformInput.Builder contentType(String contentType)
  The multipurpose internet mail extension (MIME) type of the data.
- default TransformInput.Builder dataSource(Consumer<TransformDataSource.Builder> dataSource)
  Describes the location of the channel data: the S3 location of the input data that the model can consume.
- TransformInput.Builder dataSource(TransformDataSource dataSource)
  Describes the location of the channel data: the S3 location of the input data that the model can consume.
- TransformInput.Builder splitType(String splitType)
  The method to use to split the transform job's data files into smaller batches.
- TransformInput.Builder splitType(SplitType splitType)
  The method to use to split the transform job's data files into smaller batches.
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder
copy
-
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder
applyMutation, build
-
Methods inherited from interface software.amazon.awssdk.core.SdkPojo
equalsBySdkFields, sdkFields
-
Method Detail
-
dataSource
TransformInput.Builder dataSource(TransformDataSource dataSource)
Describes the location of the channel data: the S3 location of the input data that the model can consume.
- Parameters:
dataSource - Describes the location of the channel data: the S3 location of the input data that the model can consume.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
dataSource
default TransformInput.Builder dataSource(Consumer<TransformDataSource.Builder> dataSource)
Describes the location of the channel data: the S3 location of the input data that the model can consume.
This is a convenience method that creates an instance of the TransformDataSource.Builder, avoiding the need to create one manually via TransformDataSource.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to dataSource(TransformDataSource).
- Parameters:
dataSource - a Consumer that will call methods on TransformDataSource.Builder
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
dataSource(TransformDataSource)
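The two overloads above can be sketched as follows. This is a minimal example, assuming the AWS SDK for Java v2 SageMaker module (software.amazon.awssdk.services.sagemaker) is on the classpath; the bucket name and prefix are placeholders.

```java
import software.amazon.awssdk.services.sagemaker.model.TransformDataSource;
import software.amazon.awssdk.services.sagemaker.model.TransformInput;

public class DataSourceExample {
    public static void main(String[] args) {
        // Explicit overload: build the nested TransformDataSource yourself.
        TransformDataSource source = TransformDataSource.builder()
                .s3DataSource(s3 -> s3
                        .s3DataType("S3Prefix")                // treat the URI as a key prefix
                        .s3Uri("s3://example-bucket/input/"))  // placeholder location
                .build();
        TransformInput explicit = TransformInput.builder()
                .dataSource(source)
                .build();

        // Consumer overload: the SDK creates the nested builder, applies the
        // lambda, and calls build() for you.
        TransformInput viaConsumer = TransformInput.builder()
                .dataSource(ds -> ds.s3DataSource(s3 -> s3
                        .s3DataType("S3Prefix")
                        .s3Uri("s3://example-bucket/input/")))
                .build();

        // Both forms should produce equal POJOs.
        System.out.println(explicit.equals(viaConsumer));
    }
}
```

The consumer overload is purely a convenience; it avoids the boilerplate of calling TransformDataSource.builder() and build() explicitly.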
-
contentType
TransformInput.Builder contentType(String contentType)
The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
- Parameters:
contentType - The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
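For example, CSV input might be declared as below. This is a sketch, assuming the AWS SDK for Java v2 SageMaker module is on the classpath; the MIME type must match what the model container expects.

```java
import software.amazon.awssdk.services.sagemaker.model.TransformInput;

public class ContentTypeExample {
    public static void main(String[] args) {
        // Declare the payload format sent with each HTTP request to the model.
        TransformInput input = TransformInput.builder()
                .contentType("text/csv")  // e.g. "application/jsonlines" for JSON Lines input
                .build();
        System.out.println(input.contentType());
    }
}
```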
-
compressionType
TransformInput.Builder compressionType(String compressionType)
If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- Parameters:
compressionType - If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
CompressionType
-
compressionType
TransformInput.Builder compressionType(CompressionType compressionType)
If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- Parameters:
compressionType - If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
CompressionType
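Both overloads can be sketched as follows, assuming the AWS SDK for Java v2 SageMaker module is on the classpath. The String form is useful when the value comes from configuration; the enum form is type-safe.

```java
import software.amazon.awssdk.services.sagemaker.model.CompressionType;
import software.amazon.awssdk.services.sagemaker.model.TransformInput;

public class CompressionTypeExample {
    public static void main(String[] args) {
        // Enum overload: type-safe.
        TransformInput gzipped = TransformInput.builder()
                .compressionType(CompressionType.GZIP)
                .build();

        // String overload: e.g. when the value arrives from external config.
        TransformInput fromConfig = TransformInput.builder()
                .compressionType("Gzip")
                .build();

        // The getter resolves the stored string back to the enum, so both
        // forms should report the same compression type.
        System.out.println(gzipped.compressionType());
        System.out.println(gzipped.compressionType() == fromConfig.compressionType());
    }
}
```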
-
splitType
TransformInput.Builder splitType(String splitType)
The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.
Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.
For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
- Parameters:
splitType - The method to use to split the transform job's data files into smaller batches.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
SplitType
-
splitType
TransformInput.Builder splitType(SplitType splitType)
The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.
Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.
For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
- Parameters:
splitType - The method to use to split the transform job's data files into smaller batches.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
SplitType
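Putting the setters together, a line-split CSV input might be configured as below. This is a sketch, assuming the AWS SDK for Java v2 SageMaker module is on the classpath; the bucket and prefix are placeholders, and the actual batching behavior also depends on the BatchStrategy and MaxPayloadInMB settings on the transform job itself.

```java
import software.amazon.awssdk.services.sagemaker.model.SplitType;
import software.amazon.awssdk.services.sagemaker.model.TransformInput;

public class SplitTypeExample {
    public static void main(String[] args) {
        // Line-delimited input: records split on newline boundaries can be
        // grouped into mini-batches per BatchStrategy/MaxPayloadInMB.
        TransformInput input = TransformInput.builder()
                .dataSource(ds -> ds.s3DataSource(s3 -> s3
                        .s3DataType("S3Prefix")
                        .s3Uri("s3://example-bucket/input/")))  // placeholder location
                .contentType("text/csv")
                .splitType(SplitType.LINE)  // or SplitType.RECORD_IO / SplitType.TF_RECORD
                .build();
        System.out.println(input.splitType());
    }
}
```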