Interface ParquetSerDe.Builder
- All Superinterfaces:
  Buildable, CopyableBuilder&lt;ParquetSerDe.Builder,ParquetSerDe&gt;, SdkBuilder&lt;ParquetSerDe.Builder,ParquetSerDe&gt;, SdkPojo
- Enclosing class:
  ParquetSerDe

public static interface ParquetSerDe.Builder extends SdkPojo, CopyableBuilder&lt;ParquetSerDe.Builder,ParquetSerDe&gt;
-
Method Summary
All methods are abstract instance methods returning ParquetSerDe.Builder.

- blockSizeBytes(Integer blockSizeBytes)
  The Hadoop Distributed File System (HDFS) block size.
- compression(String compression)
  The compression code to use over data blocks.
- compression(ParquetCompression compression)
  The compression code to use over data blocks.
- enableDictionaryCompression(Boolean enableDictionaryCompression)
  Indicates whether to enable dictionary compression.
- maxPaddingBytes(Integer maxPaddingBytes)
  The maximum amount of padding to apply.
- pageSizeBytes(Integer pageSizeBytes)
  The Parquet page size.
- writerVersion(String writerVersion)
  Indicates the version of row format to output.
- writerVersion(ParquetWriterVersion writerVersion)
  Indicates the version of row format to output.
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder
copy
-
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder
applyMutation, build
-
Methods inherited from interface software.amazon.awssdk.core.SdkPojo
equalsBySdkFields, sdkFields
-
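The builder methods summarized above are designed to be chained before a final build() call. A minimal sketch, assuming the AWS SDK for Java 2.x Firehose module is on the classpath; the chosen values simply restate the documented defaults except for the compression codec:

```java
import software.amazon.awssdk.services.firehose.model.ParquetCompression;
import software.amazon.awssdk.services.firehose.model.ParquetSerDe;
import software.amazon.awssdk.services.firehose.model.ParquetWriterVersion;

public class ParquetSerDeExample {
    public static void main(String[] args) {
        ParquetSerDe serDe = ParquetSerDe.builder()
                .blockSizeBytes(268_435_456)           // 256 MiB, the documented default
                .pageSizeBytes(1_048_576)              // 1 MiB, the documented default
                .compression(ParquetCompression.GZIP)  // favor compression ratio over speed
                .enableDictionaryCompression(true)
                .maxPaddingBytes(0)                    // the documented default
                .writerVersion(ParquetWriterVersion.V1)
                .build();
        System.out.println(serDe);
    }
}
```

The built ParquetSerDe is immutable; any later change goes through a fresh builder.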
Method Detail
-
blockSizeBytes
ParquetSerDe.Builder blockSizeBytes(Integer blockSizeBytes)
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
- Parameters:
  blockSizeBytes - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
-
pageSizeBytes
ParquetSerDe.Builder pageSizeBytes(Integer pageSizeBytes)
The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- Parameters:
  pageSizeBytes - The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
-
compression
ParquetSerDe.Builder compression(String compression)
The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- Parameters:
  compression - The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
- See Also:
  ParquetCompression
-
compression
ParquetSerDe.Builder compression(ParquetCompression compression)
The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- Parameters:
  compression - The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
- See Also:
  ParquetCompression
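Both compression overloads set the same field. The enum overload is the type-safe choice; the String overload is the usual escape hatch in AWS SDK for Java 2.x generated builders for values not yet modeled in ParquetCompression. A sketch of a method body, assuming the Firehose module is on the classpath:

```java
import software.amazon.awssdk.services.firehose.model.ParquetCompression;
import software.amazon.awssdk.services.firehose.model.ParquetSerDe;

public class CompressionOverloads {
    public static void main(String[] args) {
        // Equivalent ways to select SNAPPY; prefer the enum so typos fail at compile time.
        ParquetSerDe viaEnum = ParquetSerDe.builder()
                .compression(ParquetCompression.SNAPPY)
                .build();

        ParquetSerDe viaString = ParquetSerDe.builder()
                .compression("SNAPPY")
                .build();

        System.out.println(viaEnum.equals(viaString));
    }
}
```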
-
enableDictionaryCompression
ParquetSerDe.Builder enableDictionaryCompression(Boolean enableDictionaryCompression)
Indicates whether to enable dictionary compression.
- Parameters:
  enableDictionaryCompression - Indicates whether to enable dictionary compression.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
-
maxPaddingBytes
ParquetSerDe.Builder maxPaddingBytes(Integer maxPaddingBytes)
The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- Parameters:
  maxPaddingBytes - The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
-
writerVersion
ParquetSerDe.Builder writerVersion(String writerVersion)
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- Parameters:
  writerVersion - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
- See Also:
  ParquetWriterVersion
-
writerVersion
ParquetSerDe.Builder writerVersion(ParquetWriterVersion writerVersion)
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- Parameters:
  writerVersion - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
- See Also:
  ParquetWriterVersion
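Because ParquetSerDe is immutable, changing the writer version of an existing configuration means round-tripping through the builder. A sketch, assuming the Firehose module is on the classpath and the toBuilder() copy pattern the SDK's ToCopyableBuilder interface provides:

```java
import software.amazon.awssdk.services.firehose.model.ParquetSerDe;
import software.amazon.awssdk.services.firehose.model.ParquetWriterVersion;

public class WriterVersionCopy {
    public static void main(String[] args) {
        ParquetSerDe v1SerDe = ParquetSerDe.builder()
                .writerVersion(ParquetWriterVersion.V1) // the default
                .build();

        // Copy all existing settings and change only the writer version.
        ParquetSerDe v2SerDe = v1SerDe.toBuilder()
                .writerVersion(ParquetWriterVersion.V2)
                .build();

        System.out.println(v2SerDe.writerVersion());
    }
}
```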