Class ParquetSerDe
- java.lang.Object
  - software.amazon.awssdk.services.firehose.model.ParquetSerDe
- All Implemented Interfaces:
Serializable, SdkPojo, ToCopyableBuilder<ParquetSerDe.Builder,ParquetSerDe>
@Generated("software.amazon.awssdk:codegen") public final class ParquetSerDe extends Object implements SdkPojo, Serializable, ToCopyableBuilder<ParquetSerDe.Builder,ParquetSerDe>
A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
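As a usage illustration (not part of the reference itself), a ParquetSerDe is typically assembled through its builder; the specific setting values below are arbitrary examples chosen to stay within the documented minimums and defaults:

```java
import software.amazon.awssdk.services.firehose.model.ParquetCompression;
import software.amazon.awssdk.services.firehose.model.ParquetSerDe;
import software.amazon.awssdk.services.firehose.model.ParquetWriterVersion;

public class ParquetSerDeExample {
    public static void main(String[] args) {
        // Build a ParquetSerDe with explicit (non-default) settings.
        ParquetSerDe serDe = ParquetSerDe.builder()
                .compression(ParquetCompression.GZIP) // favor ratio over speed
                .blockSizeBytes(64 * 1024 * 1024)     // 64 MiB, the documented minimum
                .pageSizeBytes(1024 * 1024)           // 1 MiB, the documented default
                .enableDictionaryCompression(true)
                .writerVersion(ParquetWriterVersion.V2)
                .build();

        // The ...AsString() accessors expose the raw service strings.
        System.out.println(serDe.compressionAsString());
        System.out.println(serDe.blockSizeBytes());
    }
}
```

In practice this object is attached to a delivery stream's output format configuration; that wiring is outside the scope of this class.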
- See Also:
- Serialized Form
Nested Class Summary
Modifier and Type    Class
static interface     ParquetSerDe.Builder
-
Method Summary
All Methods | Static Methods | Instance Methods | Concrete Methods

Integer blockSizeBytes()
    The Hadoop Distributed File System (HDFS) block size.
static ParquetSerDe.Builder builder()
ParquetCompression compression()
    The compression code to use over data blocks.
String compressionAsString()
    The compression code to use over data blocks.
Boolean enableDictionaryCompression()
    Indicates whether to enable dictionary compression.
boolean equals(Object obj)
boolean equalsBySdkFields(Object obj)
<T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
int hashCode()
Integer maxPaddingBytes()
    The maximum amount of padding to apply.
Integer pageSizeBytes()
    The Parquet page size.
List<SdkField<?>> sdkFields()
static Class<? extends ParquetSerDe.Builder> serializableBuilderClass()
ParquetSerDe.Builder toBuilder()
String toString()
    Returns a string representation of this object.
ParquetWriterVersion writerVersion()
    Indicates the version of row format to output.
String writerVersionAsString()
    Indicates the version of row format to output.
-
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
-
Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuilder
copy
-
Method Detail
-
blockSizeBytes
public final Integer blockSizeBytes()
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
- Returns:
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
-
pageSizeBytes
public final Integer pageSizeBytes()
The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- Returns:
- The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
-
compression
public final ParquetCompression compression()
The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
If the service returns an enum value that is not available in the current SDK version, compression will return ParquetCompression.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from compressionAsString().
- Returns:
- The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- See Also:
ParquetCompression
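The typed getter and its ...AsString() counterpart pair naturally into a defensive check for values newer than the SDK in use. A brief sketch; the helper name and fallback format are illustrative, not part of the SDK:

```java
import software.amazon.awssdk.services.firehose.model.ParquetCompression;
import software.amazon.awssdk.services.firehose.model.ParquetSerDe;

public class CompressionCheck {
    // Returns the modeled compression name, falling back to the raw string
    // when the service sent a value this SDK version does not model.
    static String describeCompression(ParquetSerDe serDe) {
        if (serDe.compression() == ParquetCompression.UNKNOWN_TO_SDK_VERSION) {
            return "unmodeled: " + serDe.compressionAsString();
        }
        return serDe.compression().toString();
    }

    public static void main(String[] args) {
        // The builder also accepts the raw string form of the enum.
        ParquetSerDe serDe = ParquetSerDe.builder()
                .compression("SNAPPY")
                .build();
        System.out.println(describeCompression(serDe));
    }
}
```

The same pattern applies to writerVersion() and writerVersionAsString() below.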
-
compressionAsString
public final String compressionAsString()
The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
If the service returns an enum value that is not available in the current SDK version, compression will return ParquetCompression.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from compressionAsString().
- Returns:
- The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- See Also:
ParquetCompression
-
enableDictionaryCompression
public final Boolean enableDictionaryCompression()
Indicates whether to enable dictionary compression.
- Returns:
- Indicates whether to enable dictionary compression.
-
maxPaddingBytes
public final Integer maxPaddingBytes()
The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- Returns:
- The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
-
writerVersion
public final ParquetWriterVersion writerVersion()
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
If the service returns an enum value that is not available in the current SDK version, writerVersion will return ParquetWriterVersion.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from writerVersionAsString().
- Returns:
- Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- See Also:
ParquetWriterVersion
-
writerVersionAsString
public final String writerVersionAsString()
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
If the service returns an enum value that is not available in the current SDK version, writerVersion will return ParquetWriterVersion.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from writerVersionAsString().
- Returns:
- Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- See Also:
ParquetWriterVersion
-
toBuilder
public ParquetSerDe.Builder toBuilder()
- Specified by:
toBuilder in interface ToCopyableBuilder<ParquetSerDe.Builder,ParquetSerDe>
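toBuilder() supports the copy-and-modify idiom that ToCopyableBuilder provides for SDK v2 model classes. A minimal sketch, with arbitrary example values:

```java
import software.amazon.awssdk.services.firehose.model.ParquetCompression;
import software.amazon.awssdk.services.firehose.model.ParquetSerDe;

public class ToBuilderExample {
    public static void main(String[] args) {
        ParquetSerDe original = ParquetSerDe.builder()
                .compression(ParquetCompression.SNAPPY)
                .maxPaddingBytes(0)
                .build();

        // Copy all fields from the immutable original, then override
        // just the compression; the original is left unchanged.
        ParquetSerDe modified = original.toBuilder()
                .compression(ParquetCompression.GZIP)
                .build();

        System.out.println(original.compressionAsString());
        System.out.println(modified.compressionAsString());
        System.out.println(modified.maxPaddingBytes()); // carried over from original
    }
}
```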
-
builder
public static ParquetSerDe.Builder builder()
-
serializableBuilderClass
public static Class<? extends ParquetSerDe.Builder> serializableBuilderClass()
-
equalsBySdkFields
public final boolean equalsBySdkFields(Object obj)
- Specified by:
equalsBySdkFields in interface SdkPojo
-
toString
public final String toString()
Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be redacted from this string using a placeholder value.
-