| Package | Description |
|---|---|
| org.apache.hadoop.hive.ql | |
| org.apache.hadoop.hive.ql.exec | Hive QL execution tasks, operators, functions and other handlers. |
| org.apache.hadoop.hive.ql.io | |
| org.apache.hadoop.hive.ql.metadata | |
| org.apache.hadoop.hive.ql.optimizer | |
| org.apache.hadoop.hive.ql.plan | |
| Modifier and Type | Method and Description |
|---|---|
| Map<String,List<org.apache.hadoop.fs.Path>> | QueryPlan.getDynamicPartitionSpecs(long writeId, String moveTaskId, AcidUtils.Operation acidOperation, org.apache.hadoop.fs.Path path) Gets the dynamic partition specifications inserted by the FileSinkOperator that a particular MoveTask belongs to. |
| Integer | QueryPlan.getStatementIdForAcidWriteType(long writeId, String moveTaskId, AcidUtils.Operation acidOperation, org.apache.hadoop.fs.Path path, int originalStatementId) Gets the proper statementId for the FileSinkOperator that a particular MoveTask belongs to. |
| Modifier and Type | Method and Description |
|---|---|
| static org.apache.hadoop.fs.Path[] | Utilities.getDirectInsertDirectoryCandidates(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, int dpLevels, org.apache.hadoop.fs.PathFilter filter, long writeId, int stmtId, org.apache.hadoop.conf.Configuration conf, Boolean isBaseDir, AcidUtils.Operation acidOperation) |
| static void | Utilities.handleDirectInsertTableFinalPath(org.apache.hadoop.fs.Path specPath, String unionSuffix, org.apache.hadoop.conf.Configuration hconf, boolean success, int dpLevels, int lbLevels, Utilities.MissingBucketsContext mbc, long writeId, int stmtId, org.apache.hadoop.mapred.Reporter reporter, boolean isMmTable, boolean isMmCtas, boolean isInsertOverwrite, boolean isDirectInsert, String staticSpec, AcidUtils.Operation acidOperation, FileSinkDesc conf) |
| Constructor and Description |
|---|
| FSPaths(org.apache.hadoop.fs.Path specPath, boolean isMmTable, boolean isDirectInsert, boolean isInsertOverwrite, AcidUtils.Operation acidOperation) |
| Modifier and Type | Method and Description |
|---|---|
| static AcidUtils.Operation | AcidUtils.Operation.valueOf(String name) Returns the enum constant of this type with the specified name. |
| static AcidUtils.Operation[] | AcidUtils.Operation.values() Returns an array containing the constants of this enum type, in the order they are declared. |
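These two methods follow standard Java enum semantics: values() returns the constants in declaration order, and valueOf(String) resolves an exact constant name or throws IllegalArgumentException. A minimal sketch, using a local stand-in enum whose constant names are assumed to mirror AcidUtils.Operation (this is not the Hive class itself):

```java
// Stand-in for org.apache.hadoop.hive.ql.io.AcidUtils.Operation;
// the constant names here are an assumption for illustration.
enum Operation { NOT_ACID, INSERT, UPDATE, DELETE }

class OperationDemo {
    public static void main(String[] args) {
        // values() yields constants in declaration order.
        for (Operation op : Operation.values()) {
            System.out.println(op.name());
        }
        // valueOf resolves a constant by its exact name.
        System.out.println(Operation.valueOf("INSERT") == Operation.INSERT);
        // Unknown names throw IllegalArgumentException.
        try {
            Operation.valueOf("MERGE");
        } catch (IllegalArgumentException e) {
            System.out.println("no such constant");
        }
    }
}
```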
| Modifier and Type | Method and Description |
|---|---|
| static org.apache.hadoop.hive.metastore.api.DataOperationType | AcidUtils.toDataOperationType(AcidUtils.Operation op) Logically this should have been defined in Operation, but that would create a dependency on the metastore package from the exec jar (which is shipped to the cluster), which is not allowed. |
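The note explains why the translation lives in a utility method rather than on the enum itself: Operation cannot reference DataOperationType without dragging the metastore package into the exec jar. A hedged sketch of such a cross-enum translation, using two local stand-in enums with assumed constant names (the real DataOperationType has more constants than shown here):

```java
// Local stand-ins for illustration only; not the Hive/metastore classes.
enum Operation { NOT_ACID, INSERT, UPDATE, DELETE }
enum DataOperationType { UNSET, INSERT, UPDATE, DELETE }

class AcidUtilsSketch {
    // Keeping the mapping in a utility class means Operation itself
    // never imports the metastore type, avoiding the jar dependency.
    static DataOperationType toDataOperationType(Operation op) {
        switch (op) {
            case NOT_ACID: return DataOperationType.UNSET;
            case INSERT:   return DataOperationType.INSERT;
            case UPDATE:   return DataOperationType.UPDATE;
            case DELETE:   return DataOperationType.DELETE;
            default: throw new IllegalArgumentException("Unexpected: " + op);
        }
    }
}
```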
| Modifier and Type | Method and Description |
|---|---|
| Map<Map<String,String>,Partition> | Hive.loadDynamicPartitions(LoadTableDesc tbd, int numLB, boolean isAcid, long writeId, int stmtId, boolean resetStatistics, AcidUtils.Operation operation, Map<org.apache.hadoop.fs.Path,Utilities.PartitionDetails> partitionDetailsMap) Given a source directory name of the load path, load all dynamically generated partitions into the specified table and return a list of strings that represent the dynamic partition paths. |
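The result of loadDynamicPartitions is keyed by partition specs, i.e. ordered column-to-value maps. That key shape can be illustrated by parsing a Hive-style dynamic partition path; the helper below is a local illustration under that assumption, not a Hive API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class PartitionSpecDemo {
    // Parses a relative partition path such as "year=2024/month=01" into
    // an ordered column -> value map, matching the Map<String,String>
    // key shape of loadDynamicPartitions' result.
    static Map<String, String> toPartitionSpec(String relativePath) {
        Map<String, String> spec = new LinkedHashMap<>();
        for (String component : relativePath.split("/")) {
            int eq = component.indexOf('=');
            if (eq < 0) {
                throw new IllegalArgumentException("Not a partition component: " + component);
            }
            spec.put(component.substring(0, eq), component.substring(eq + 1));
        }
        return spec;
    }
}
```

For example, `toPartitionSpec("year=2024/month=01")` yields a two-entry map with keys `year` and `month` in path order.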
| Modifier and Type | Method and Description |
|---|---|
| static Task<MoveWork> | GenMapRedUtils.findMoveTaskForFsopOutput(List<Task<MoveWork>> mvTasks, org.apache.hadoop.fs.Path fsopFinalDir, boolean isMmFsop, boolean isDirectInsert, String fsoMoveTaskId, AcidUtils.Operation acidOperation) |
| Modifier and Type | Method and Description |
|---|---|
| AcidUtils.Operation | FileSinkDesc.getAcidOperation() |
| AcidUtils.Operation | ReduceSinkDesc.getWriteType() |
| AcidUtils.Operation | FileSinkDesc.getWriteType() |
| AcidUtils.Operation | LoadDesc.getWriteType() |
| Modifier and Type | Method and Description |
|---|---|
| static ReduceSinkDesc | PlanUtils.getReduceSinkDesc(List<ExprNodeDesc> keyCols, int numKeys, List<ExprNodeDesc> valueCols, List<List<Integer>> distinctColIndices, List<String> outputKeyColumnNames, List<String> outputValueColumnNames, boolean includeKey, int tag, int numPartitionFields, int numReducers, AcidUtils.Operation writeType, NullOrdering defaultNullOrder) Create the reduce sink descriptor. |
| static ReduceSinkDesc | PlanUtils.getReduceSinkDesc(List<ExprNodeDesc> keyCols, int numKeys, List<ExprNodeDesc> valueCols, List<List<Integer>> distinctColIndices, List<String> outputKeyColumnNames, List<String> outputValueColumnNames, boolean includeKeyCols, int tag, List<ExprNodeDesc> partitionCols, String order, String nullOrder, NullOrdering defaultNullOrder, int numReducers, AcidUtils.Operation writeType) Create the reduce sink descriptor. |
| static ReduceSinkDesc | PlanUtils.getReduceSinkDesc(List<ExprNodeDesc> keyCols, List<ExprNodeDesc> valueCols, List<String> outputColumnNames, boolean includeKey, int tag, int numPartitionFields, int numReducers, AcidUtils.Operation writeType, NullOrdering defaultNullOrder) Create the reduce sink descriptor. |
| static ReduceSinkDesc | PlanUtils.getReduceSinkDesc(List<ExprNodeDesc> keyCols, List<ExprNodeDesc> valueCols, List<String> outputColumnNames, boolean includeKeyCols, int tag, List<ExprNodeDesc> partitionCols, String order, String nullOrder, NullOrdering defaultNullOrder, int numReducers, AcidUtils.Operation writeType, boolean isCompaction) Create the reduce sink descriptor. |
| void | FileSinkDesc.setAcidOperation(AcidUtils.Operation acidOperation) |
| void | FileSinkDesc.setWriteType(AcidUtils.Operation type) |
| Constructor and Description |
|---|
| FileSinkDesc(org.apache.hadoop.fs.Path dirName, TableDesc tableInfo, boolean compressed, int destTableId, boolean multiFileSpray, boolean canBeMerged, int numFiles, int totalFiles, List<ExprNodeDesc> partitionCols, DynamicPartitionCtx dpCtx, org.apache.hadoop.fs.Path destPath, Long mmWriteId, boolean isMmCtas, boolean isInsertOverwrite, boolean isQuery, boolean isCTASorCM, boolean isDirectInsert, AcidUtils.Operation acidOperation, boolean deleteOfSplitUpdate) |
| LoadFileDesc(CreateTableDesc createTableDesc, CreateMaterializedViewDesc createViewDesc, org.apache.hadoop.fs.Path sourcePath, org.apache.hadoop.fs.Path targetDir, boolean isDfsDir, String columns, String columnTypes, AcidUtils.Operation writeType, boolean isMmCtas) |
| LoadTableDesc(org.apache.hadoop.fs.Path sourcePath, TableDesc table, DynamicPartitionCtx dpCtx, AcidUtils.Operation writeType, boolean isReplace, Long writeId) |
| LoadTableDesc(org.apache.hadoop.fs.Path sourcePath, TableDesc table, Map<String,String> partitionSpec, AcidUtils.Operation writeType, Long currentWriteId) |
| LoadTableDesc(org.apache.hadoop.fs.Path sourcePath, TableDesc table, Map<String,String> partitionSpec, LoadTableDesc.LoadFileType loadFileType, AcidUtils.Operation writeType, Long currentWriteId) |
| ReduceSinkDesc(List<ExprNodeDesc> keyCols, int numDistributionKeys, List<ExprNodeDesc> valueCols, List<String> outputKeyColumnNames, List<List<Integer>> distinctColumnIndices, List<String> outputValueColumnNames, int tag, List<ExprNodeDesc> partitionCols, int numReducers, TableDesc keySerializeInfo, TableDesc valueSerializeInfo, AcidUtils.Operation writeType) |
Copyright © 2024 The Apache Software Foundation. All rights reserved.