| Package | Description |
|---|---|
| org.apache.parquet.filter2.bloomfilterlevel | |
| org.apache.parquet.filter2.dictionarylevel | |
| org.apache.parquet.filter2.statisticslevel | |
| org.apache.parquet.format.converter | |
| org.apache.parquet.hadoop | Provides classes to store and use Parquet files in Hadoop in a MapReduce job. |
| org.apache.parquet.hadoop.metadata | |
| Modifier and Type | Method and Description |
|---|---|
| static boolean | BloomFilterImpl.canDrop(FilterPredicate pred, List<ColumnChunkMetaData> columns, BloomFilterReader bloomFilterReader) |
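As context for the entry above, here is a minimal sketch of row-group pruning via Bloom filters. The `getBloomFilterDataReader` accessor on ParquetFileReader, the FilterApi predicate builders, and the column name `user_id` are assumptions for illustration, not part of this table:

```java
import java.util.List;

import org.apache.parquet.filter2.bloomfilterlevel.BloomFilterImpl;
import org.apache.parquet.filter2.predicate.FilterApi;
import org.apache.parquet.filter2.predicate.FilterPredicate;
import org.apache.parquet.hadoop.BloomFilterReader;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
import org.apache.parquet.io.api.Binary;

public class BloomFilterPruning {
  /** True when the Bloom filters prove this row group cannot contain user_id == "alice". */
  static boolean canSkip(ParquetFileReader reader, BlockMetaData block) {
    FilterPredicate pred =
        FilterApi.eq(FilterApi.binaryColumn("user_id"), Binary.fromString("alice"));
    List<ColumnChunkMetaData> columns = block.getColumns();
    // Obtain a BloomFilterReader scoped to this row group (accessor assumed available).
    BloomFilterReader bloomFilterReader = reader.getBloomFilterDataReader(block);
    return BloomFilterImpl.canDrop(pred, columns, bloomFilterReader);
  }
}
```

Note that a `true` result is definitive (the value cannot be present), while `false` only means the Bloom filter could not rule the row group out.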
| Modifier and Type | Method and Description |
|---|---|
| static boolean | DictionaryFilter.canDrop(FilterPredicate pred, List<ColumnChunkMetaData> columns, DictionaryPageReadStore dictionaries) |
| Modifier and Type | Method and Description |
|---|---|
| static boolean | StatisticsFilter.canDrop(FilterPredicate pred, List<ColumnChunkMetaData> columns) |
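Statistics-level pruning is the simplest of the three canDrop variants, since it needs only the column chunk metadata. A hedged sketch (the FilterApi builders and the column name `id` are assumptions for illustration):

```java
import java.util.List;

import org.apache.parquet.filter2.predicate.FilterApi;
import org.apache.parquet.filter2.predicate.FilterPredicate;
import org.apache.parquet.filter2.statisticslevel.StatisticsFilter;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;

public class StatisticsPruning {
  /**
   * True when the min/max statistics in the column chunk metadata prove
   * this row group cannot contain id == 42.
   */
  static boolean canSkip(BlockMetaData block) {
    FilterPredicate pred = FilterApi.eq(FilterApi.intColumn("id"), 42);
    List<ColumnChunkMetaData> columns = block.getColumns();
    return StatisticsFilter.canDrop(pred, columns);
  }
}
```

Because this check reads no pages, it is typically the first pruning step, with dictionary- and Bloom-filter-level checks applied only to row groups that survive it.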
| Modifier and Type | Method and Description |
|---|---|
| ColumnChunkMetaData | ParquetMetadataConverter.buildColumnChunkMetaData(ColumnMetaData metaData, org.apache.parquet.hadoop.metadata.ColumnPath columnPath, PrimitiveType type, String createdBy) |
| Modifier and Type | Method and Description |
|---|---|
| BloomFilter | BloomFilterReader.readBloomFilter(ColumnChunkMetaData meta) |
| BloomFilter | ParquetFileReader.readBloomFilter(ColumnChunkMetaData meta): Reads Bloom filter data for the given column chunk. |
| ColumnIndex | ParquetFileReader.readColumnIndex(ColumnChunkMetaData column) |
| OffsetIndex | ParquetFileReader.readOffsetIndex(ColumnChunkMetaData column) |
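The three ParquetFileReader accessors above take the same ColumnChunkMetaData argument, so they compose naturally into a per-chunk inspection helper. A sketch, assuming each accessor returns null when the corresponding structure was not written for the chunk (worth verifying against the library version in use):

```java
import java.io.IOException;

import org.apache.parquet.column.values.bloomfilter.BloomFilter;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
import org.apache.parquet.internal.column.columnindex.ColumnIndex;
import org.apache.parquet.internal.column.columnindex.OffsetIndex;

public class ChunkIndexes {
  /** Reports which auxiliary structures exist for one column chunk. */
  static void inspect(ParquetFileReader reader, ColumnChunkMetaData column)
      throws IOException {
    BloomFilter bloom = reader.readBloomFilter(column);
    ColumnIndex columnIndex = reader.readColumnIndex(column);
    OffsetIndex offsetIndex = reader.readOffsetIndex(column);
    System.out.println("bloom=" + (bloom != null)
        + " columnIndex=" + (columnIndex != null)
        + " offsetIndex=" + (offsetIndex != null));
  }
}
```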
| Modifier and Type | Method and Description |
|---|---|
| static ColumnChunkMetaData | ColumnChunkMetaData.get(org.apache.parquet.hadoop.metadata.ColumnPath path, PrimitiveType.PrimitiveTypeName type, org.apache.parquet.hadoop.metadata.CompressionCodecName codec, EncodingStats encodingStats, Set<Encoding> encodings, Statistics statistics, long firstDataPage, long dictionaryPageOffset, long valueCount, long totalSize, long totalUncompressedSize): Deprecated, will be removed in 2.0.0. Use get(ColumnPath, PrimitiveType, CompressionCodecName, EncodingStats, Set, Statistics, long, long, long, long, long) instead. |
| static ColumnChunkMetaData | ColumnChunkMetaData.get(org.apache.parquet.hadoop.metadata.ColumnPath path, PrimitiveType.PrimitiveTypeName type, org.apache.parquet.hadoop.metadata.CompressionCodecName codec, Set<Encoding> encodings, long firstDataPage, long dictionaryPageOffset, long valueCount, long totalSize, long totalUncompressedSize): Deprecated. |
| static ColumnChunkMetaData | ColumnChunkMetaData.get(org.apache.parquet.hadoop.metadata.ColumnPath path, PrimitiveType.PrimitiveTypeName type, org.apache.parquet.hadoop.metadata.CompressionCodecName codec, Set<Encoding> encodings, Statistics statistics, long firstDataPage, long dictionaryPageOffset, long valueCount, long totalSize, long totalUncompressedSize): Deprecated. |
| static ColumnChunkMetaData | ColumnChunkMetaData.get(org.apache.parquet.hadoop.metadata.ColumnPath path, PrimitiveType type, org.apache.parquet.hadoop.metadata.CompressionCodecName codec, EncodingStats encodingStats, Set<Encoding> encodings, Statistics statistics, long firstDataPage, long dictionaryPageOffset, long valueCount, long totalSize, long totalUncompressedSize) |
| static ColumnChunkMetaData | ColumnChunkMetaData.getWithEncryptedMetadata(ParquetMetadataConverter parquetMetadataConverter, org.apache.parquet.hadoop.metadata.ColumnPath path, PrimitiveType type, byte[] encryptedMetadata, byte[] columnKeyMetadata, InternalFileDecryptor fileDecryptor, int rowGroupOrdinal, int columnOrdinal, String createdBy) |
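The deprecation notes above steer callers from the PrimitiveTypeName overloads to the PrimitiveType one. A hedged sketch of that non-deprecated factory call; the column name, codec choice, and all numeric offsets and sizes here are made-up illustrative values, and the Types schema builder is assumed from the companion parquet-column module:

```java
import java.util.Set;

import org.apache.parquet.column.Encoding;
import org.apache.parquet.column.EncodingStats;
import org.apache.parquet.column.statistics.Statistics;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
import org.apache.parquet.hadoop.metadata.ColumnPath;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.PrimitiveType;
import org.apache.parquet.schema.Types;

public class ChunkMetadataFactory {
  /** Builds metadata for a hypothetical INT64 "id" column chunk. */
  static ColumnChunkMetaData describeChunk(EncodingStats encodingStats,
                                           Set<Encoding> encodings,
                                           Statistics<?> statistics) {
    PrimitiveType type =
        Types.required(PrimitiveType.PrimitiveTypeName.INT64).named("id");
    return ColumnChunkMetaData.get(
        ColumnPath.get("id"),      // dot-path of the column in the schema
        type,
        CompressionCodecName.SNAPPY,
        encodingStats,
        encodings,
        statistics,
        4L,                        // firstDataPage offset (illustrative)
        0L,                        // dictionaryPageOffset (illustrative)
        1000L,                     // valueCount (illustrative)
        512L,                      // totalSize, compressed (illustrative)
        2048L);                    // totalUncompressedSize (illustrative)
  }
}
```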
| Modifier and Type | Method and Description |
|---|---|
| List<ColumnChunkMetaData> | BlockMetaData.getColumns() |
| Modifier and Type | Method and Description |
|---|---|
| void | BlockMetaData.addColumn(ColumnChunkMetaData column) |
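BlockMetaData.getColumns() pairs naturally with the per-chunk accessors on ColumnChunkMetaData. A small sketch; the getTotalSize() accessor is assumed from the same class:

```java
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;

public class RowGroupSize {
  /** Sums the on-disk (compressed) size of a row group over its column chunks. */
  static long totalCompressedBytes(BlockMetaData block) {
    long total = 0L;
    for (ColumnChunkMetaData column : block.getColumns()) {
      total += column.getTotalSize();
    }
    return total;
  }
}
```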
Copyright © 2021 The Apache Software Foundation. All rights reserved.