| Package | Description |
|---|---|
| org.apache.hudi.execution | |
| org.apache.hudi.io | |
| org.apache.hudi.table | |
| org.apache.hudi.table.action.commit | |
| Modifier and Type | Field and Description |
|---|---|
| protected WriteHandleFactory | HoodieLazyInsertIterable.writeHandleFactory |
| Constructor and Description |
|---|
| CopyOnWriteInsertHandler(HoodieWriteConfig config, String instantTime, boolean areRecordsSorted, HoodieTable hoodieTable, String idPrefix, TaskContextSupplier taskContextSupplier, WriteHandleFactory writeHandleFactory) |
| HoodieLazyInsertIterable(Iterator<HoodieRecord<T>> recordItr, boolean areRecordsSorted, HoodieWriteConfig config, String instantTime, HoodieTable hoodieTable, String idPrefix, TaskContextSupplier taskContextSupplier, WriteHandleFactory writeHandleFactory) |
| Modifier and Type | Class and Description |
|---|---|
| class | AppendHandleFactory<T,I,K,O> |
| class | CreateHandleFactory<T,I,K,O> |
| class | SingleFileHandleCreateFactory<T,I,K,O>: used to write all data in the Spark partition into a single data file. |
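The contrast between CreateHandleFactory and SingleFileHandleCreateFactory can be sketched with hypothetical stand-in classes (FileHandle, FreshHandleFactory, SingleFileFactory below are illustrative names, not the real Hudi types): a per-call factory opens a fresh handle each time, while a single-file factory caches and reuses one handle so the whole partition's data lands in one file.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins (not the real Hudi classes) illustrating the
// behavioral difference between the factory styles listed above.
interface WriteHandle { void write(String record); }

class FileHandle implements WriteHandle {
    final String fileName;
    final List<String> contents = new ArrayList<>();
    FileHandle(String fileName) { this.fileName = fileName; }
    public void write(String record) { contents.add(record); }
}

interface HandleFactory { WriteHandle create(String fileIdPrefix); }

// Analogous to CreateHandleFactory: each call opens a fresh handle,
// so records may be spread across multiple files.
class FreshHandleFactory implements HandleFactory {
    private int counter = 0;
    public WriteHandle create(String fileIdPrefix) {
        return new FileHandle(fileIdPrefix + "-" + counter++);
    }
}

// Analogous to SingleFileHandleCreateFactory: the first handle is cached
// and reused, so all records land in a single file.
class SingleFileFactory implements HandleFactory {
    private WriteHandle handle;
    public WriteHandle create(String fileIdPrefix) {
        if (handle == null) {
            handle = new FileHandle(fileIdPrefix + "-0");
        }
        return handle;
    }
}

public class FactorySketch {
    public static void main(String[] args) {
        HandleFactory single = new SingleFileFactory();
        // Two create() calls, but both writes go to the same file.
        single.create("part").write("a");
        single.create("part").write("b");
        System.out.println(((FileHandle) single.create("part")).contents); // [a, b]
    }
}
```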
| Modifier and Type | Method and Description |
|---|---|
| default Option<WriteHandleFactory> | BulkInsertPartitioner.getWriteHandleFactory(int partitionId): returns the write handle factory for the given partition. |
| Option<WriteHandleFactory> | BucketIndexBulkInsertPartitioner.getWriteHandleFactory(int idx) |
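A minimal sketch of the pattern above, using hypothetical stand-in types and java.util.Optional in place of Hudi's own Option: the interface's default method returns no factory, so callers fall back to a global one, while a bucket-aware implementation (here a made-up BucketAwarePartitioner) supplies a factory per partition.

```java
import java.util.Optional;

// Hypothetical stand-in for the real Hudi type, for illustration only.
interface WriteHandleFactory {}

// Mirrors the shape of BulkInsertPartitioner#getWriteHandleFactory: the
// default supplies no per-partition factory.
interface BulkInsertPartitioner {
    default Optional<WriteHandleFactory> getWriteHandleFactory(int partitionId) {
        return Optional.empty();
    }
}

// Illustrative partitioner that, like BucketIndexBulkInsertPartitioner,
// overrides the default to supply its own factory per partition.
class BucketAwarePartitioner implements BulkInsertPartitioner {
    @Override
    public Optional<WriteHandleFactory> getWriteHandleFactory(int idx) {
        return Optional.of(new WriteHandleFactory() {});
    }
}

public class PartitionerSketch {
    public static void main(String[] args) {
        BulkInsertPartitioner plain = new BulkInsertPartitioner() {};
        System.out.println(plain.getWriteHandleFactory(0).isPresent());  // false
        System.out.println(new BucketAwarePartitioner()
                .getWriteHandleFactory(0).isPresent());                  // true
    }
}
```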
| Modifier and Type | Method and Description |
|---|---|
| abstract O | BaseBulkInsertHelper.bulkInsert(I inputRecords, String instantTime, HoodieTable<T,I,K,O> table, HoodieWriteConfig config, boolean performDedupe, BulkInsertPartitioner partitioner, boolean addMetadataFields, int parallelism, WriteHandleFactory writeHandleFactory): only writes the input records. |
Copyright © 2023 The Apache Software Foundation. All rights reserved.