public class OrcSplit extends org.apache.hadoop.mapred.FileSplit implements ColumnarSplit, LlapAwareSplit
| Modifier and Type | Class and Description |
|---|---|
| `static class` | `OrcSplit.OffsetAndBucketProperty`<br>Used for generating synthetic ROW__IDs for reading "original" files. |
| Modifier | Constructor and Description |
|---|---|
| `protected` | `OrcSplit()` |
|  | `OrcSplit(org.apache.hadoop.fs.Path path, Object fileId, long offset, long length, String[] hosts, org.apache.orc.impl.OrcTail orcTail, boolean isOriginal, boolean hasBase, List<AcidInputFormat.DeltaMetaData> deltas, long projectedDataSize, long fileLen, org.apache.hadoop.fs.Path rootDir, OrcSplit.OffsetAndBucketProperty syntheticAcidProps)` |
| Modifier and Type | Method and Description |
|---|---|
| `boolean` | `canUseLlapIo(org.apache.hadoop.conf.Configuration conf)` |
| `int` | `getBucketId()`<br>Note: this is the bucket number as seen in the file name that contains this split. |
| `long` | `getColumnarProjectionSize()`<br>Returns the estimated size of the column projections that will be read from this split. |
| `List<AcidInputFormat.DeltaMetaData>` | `getDeltas()` |
| `Object` | `getFileKey()` |
| `long` | `getFileLength()` |
| `org.apache.orc.impl.OrcTail` | `getOrcTail()` |
| `long` | `getProjectedColumnsUncompressedSize()` |
| `org.apache.hadoop.fs.Path` | `getRootDir()` |
| `int` | `getStatementId()` |
| `OrcSplit.OffsetAndBucketProperty` | `getSyntheticAcidProps()` |
| `long` | `getWriteId()`<br>Note: this is the write id as seen in the file name that contains this split. For files that have a min/max writeId, this is the starting one. |
| `boolean` | `hasBase()` |
| `boolean` | `hasFooter()` |
| `boolean` | `isAcid()`<br>If this method returns true, the split is definitely ACID. |
| `boolean` | `isOriginal()` |
| `void` | `parse(org.apache.hadoop.conf.Configuration conf)` |
| `void` | `parse(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootPath)` |
| `void` | `readFields(DataInput in)` |
| `String` | `toString()` |
| `void` | `write(DataOutput out)` |
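The `getWriteId()` and `getBucketId()` notes above refer to information encoded in Hive ACID file and directory names. As an illustrative sketch only (the `delta_<min>_<max>[_<stmt>]` and `bucket_<n>` patterns below reflect typical Hive ACID layouts; `AcidNameSketch` and its methods are hypothetical helpers, not code from `OrcSplit`):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AcidNameSketch {
    // Assumed layout: delta_<minWriteId>_<maxWriteId>[_<statementId>]
    private static final Pattern DELTA = Pattern.compile("delta_(\\d+)_(\\d+)(?:_(\\d+))?");
    // Assumed layout: bucket_<bucketNumber>
    private static final Pattern BUCKET = Pattern.compile("bucket_(\\d+)");

    /** Returns the minimum (starting) write id encoded in a delta directory name, or -1. */
    public static long writeIdOf(String dirName) {
        Matcher m = DELTA.matcher(dirName);
        return m.matches() ? Long.parseLong(m.group(1)) : -1L;
    }

    /** Returns the bucket number encoded in a bucket file name, or -1. */
    public static int bucketIdOf(String fileName) {
        Matcher m = BUCKET.matcher(fileName);
        return m.matches() ? Integer.parseInt(m.group(1)) : -1;
    }

    public static void main(String[] args) {
        System.out.println(writeIdOf("delta_0000005_0000005_0000")); // prints 5
        System.out.println(bucketIdOf("bucket_00003"));              // prints 3
    }
}
```

Note that `OrcSplit` itself exposes only the starting write id for min/max delta ranges, which is why the sketch returns the first captured group.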
protected OrcSplit()
public OrcSplit(org.apache.hadoop.fs.Path path,
Object fileId,
long offset,
long length,
String[] hosts,
org.apache.orc.impl.OrcTail orcTail,
boolean isOriginal,
boolean hasBase,
List<AcidInputFormat.DeltaMetaData> deltas,
long projectedDataSize,
long fileLen,
org.apache.hadoop.fs.Path rootDir,
OrcSplit.OffsetAndBucketProperty syntheticAcidProps)
public void write(DataOutput out) throws IOException

Specified by:
    write in interface org.apache.hadoop.io.Writable
Overrides:
    write in class org.apache.hadoop.mapred.FileSplit
Throws:
    IOException

public void readFields(DataInput in) throws IOException

Specified by:
    readFields in interface org.apache.hadoop.io.Writable
Overrides:
    readFields in class org.apache.hadoop.mapred.FileSplit
Throws:
    IOException

public org.apache.orc.impl.OrcTail getOrcTail()
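The `write()`/`readFields()` pair follows Hadoop's `Writable` contract: whatever `write()` emits to the `DataOutput` must be read back, field for field and in the same order, by `readFields()`. A minimal self-contained sketch of that contract, using a made-up `DemoSplit` stand-in rather than `OrcSplit` itself (which would require the Hive and Hadoop jars):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class DemoSplit {
    String path;
    long offset;
    long length;

    DemoSplit() {}  // no-arg constructor, as deserialization requires
    DemoSplit(String path, long offset, long length) {
        this.path = path; this.offset = offset; this.length = length;
    }

    public void write(DataOutput out) throws IOException {
        out.writeUTF(path);
        out.writeLong(offset);
        out.writeLong(length);
    }

    public void readFields(DataInput in) throws IOException {
        path = in.readUTF();    // must mirror write() field-for-field, same order
        offset = in.readLong();
        length = in.readLong();
    }

    /** Serializes a split to bytes and deserializes it into a fresh instance. */
    public static DemoSplit roundTrip(DemoSplit s) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            s.write(new DataOutputStream(buf));
            DemoSplit copy = new DemoSplit();
            copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
            return copy;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        DemoSplit copy = roundTrip(new DemoSplit("/warehouse/t/bucket_00000", 0L, 4096L));
        System.out.println(copy.path + " " + copy.length); // prints /warehouse/t/bucket_00000 4096
    }
}
```

This round-trip is exactly what happens when a split is shipped from the split generator to a task: the receiving side starts from an empty instance and repopulates it via `readFields()`.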
public boolean hasFooter()

public boolean isOriginal()

Returns:
    true if the file schema doesn't have ACID metadata columns.
    Such a file may be in a delta_x_y/ or base_x directory due to being added via the
    "load data" command. It could be at the partition or table root due to the table
    having been converted from a non-ACID to an ACID table. It could even be something like
    "warehouse/t/HIVE_UNION_SUBDIR_15/000000_0" if it was written by an
    "insert into t select ... from A union all select ... from B".

public boolean hasBase()
public org.apache.hadoop.fs.Path getRootDir()
public List<AcidInputFormat.DeltaMetaData> getDeltas()
public long getFileLength()
public boolean isAcid()
public long getProjectedColumnsUncompressedSize()
public Object getFileKey()
public OrcSplit.OffsetAndBucketProperty getSyntheticAcidProps()
public long getColumnarProjectionSize()
Specified by:
    getColumnarProjectionSize in interface ColumnarSplit

public boolean canUseLlapIo(org.apache.hadoop.conf.Configuration conf)

Specified by:
    canUseLlapIo in interface LlapAwareSplit

public long getWriteId()

Note: this is the write id as seen in the file name that contains this split.
For files that have a min/max writeId, this is the starting one.

public int getStatementId()

public int getBucketId()

Note: this is the bucket number as seen in the file name that contains this split.
See BucketCodec.V1 for details.

public void parse(org.apache.hadoop.conf.Configuration conf)
           throws IOException

Throws:
    IOException

public void parse(org.apache.hadoop.conf.Configuration conf,
                  org.apache.hadoop.fs.Path rootPath)
           throws IOException

Throws:
    IOException

public String toString()

Overrides:
    toString in class org.apache.hadoop.mapred.FileSplit

Copyright © 2024 The Apache Software Foundation. All rights reserved.