| Modifier and Type | Method and Description |
|---|---|
| static HiveColumnHandle | HiveColumnHandle.bucketColumnHandle()<br>The column indicating the bucket id. |
| static HiveColumnHandle | HiveColumnHandle.createBaseColumn(String topLevelColumnName, int topLevelColumnIndex, HiveType hiveType, Type type, HiveColumnHandle.ColumnType columnType, Optional<String> comment) |
| static HiveColumnHandle | HiveColumnHandle.fileModifiedTimeColumnHandle() |
| static HiveColumnHandle | HiveColumnHandle.fileSizeColumnHandle() |
| HiveColumnHandle | HiveColumnHandle.getBaseColumn() |
| HiveColumnHandle | HivePageSourceProvider.ColumnMapping.getHiveColumnHandle() |
| static HiveColumnHandle | HiveColumnHandle.pathColumnHandle() |
| HiveColumnHandle | ReaderProjections.readerColumnForHiveColumnAt(int index)<br>For a column required by the HivePageSource, returns the column read by the delegate page source or record cursor. |
| static HiveColumnHandle | HiveColumnHandle.updateRowIdHandle() |
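The factory methods above distinguish declared table columns (createBaseColumn) from the connector's synthetic hidden columns (path, file size, file modified time, bucket id). A minimal sketch of both, assuming the presto-hive module is on the classpath; HiveType.HIVE_STRING, VarcharType.VARCHAR, and ColumnType.REGULAR are assumed constants from that module, not shown on this page:

```
// Sketch only: a regular top-level data column.
// HIVE_STRING / VARCHAR / ColumnType.REGULAR are assumed names from presto-hive.
HiveColumnHandle name = HiveColumnHandle.createBaseColumn(
        "name",                              // top-level column name
        0,                                   // ordinal position in the table schema
        HiveType.HIVE_STRING,                // physical Hive type
        VarcharType.VARCHAR,                 // engine (SQL) type
        HiveColumnHandle.ColumnType.REGULAR, // regular data column
        Optional.empty());                   // no column comment

// Hidden columns come from their dedicated factories, not createBaseColumn:
HiveColumnHandle path = HiveColumnHandle.pathColumnHandle();
HiveColumnHandle size = HiveColumnHandle.fileSizeColumnHandle();
```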
| Modifier and Type | Method and Description |
|---|---|
| List<HiveColumnHandle> | HiveSplit.BucketConversion.getBucketColumnHandles() |
| List<HiveColumnHandle> | BackgroundHiveSplitLoader.BucketSplitInfo.getBucketColumns() |
| List<HiveColumnHandle> | HiveBucketHandle.getColumns() |
| TupleDomain<HiveColumnHandle> | HivePartitionResult.getCompactEffectivePredicate() |
| TupleDomain<HiveColumnHandle> | HiveTableHandle.getCompactEffectivePredicate() |
| List<HiveColumnHandle> | HiveWritableTableHandle.getInputColumns() |
| List<HiveColumnHandle> | HivePartitionResult.getPartitionColumns() |
| List<HiveColumnHandle> | HiveTableHandle.getPartitionColumns() |
| List<HiveColumnHandle> | ReaderProjections.getReaderColumns()<br>Returns the actual list of columns read by the underlying page source or record cursor, in order. |
| static List<HiveColumnHandle> | HivePageSourceProvider.ColumnMapping.toColumnHandles(List<HivePageSourceProvider.ColumnMapping> regularColumnMappings, boolean doCoercion, TypeManager typeManager) |
| Modifier and Type | Method and Description |
|---|---|
| static HivePageSourceProvider.ColumnMapping | HivePageSourceProvider.ColumnMapping.empty(HiveColumnHandle hiveColumnHandle) |
| static HivePageSourceProvider.ColumnMapping | HivePageSourceProvider.ColumnMapping.interim(HiveColumnHandle hiveColumnHandle, int index, Optional<HiveType> baseTypeCoercionFrom) |
| static boolean | HiveColumnHandle.isBucketColumnHandle(HiveColumnHandle column) |
| static boolean | HiveColumnHandle.isFileModifiedTimeColumnHandle(HiveColumnHandle column) |
| static boolean | HiveColumnHandle.isFileSizeColumnHandle(HiveColumnHandle column) |
| static boolean | HiveColumnHandle.isPathColumnHandle(HiveColumnHandle column) |
| static HivePageSourceProvider.ColumnMapping | HivePageSourceProvider.ColumnMapping.prefilled(HiveColumnHandle hiveColumnHandle, String prefilledValue, Optional<HiveType> baseTypeCoercionFrom) |
| static HivePageSourceProvider.ColumnMapping | HivePageSourceProvider.ColumnMapping.regular(HiveColumnHandle hiveColumnHandle, int index, Optional<HiveType> baseTypeCoercionFrom) |
| Modifier and Type | Method and Description |
|---|---|
| static List<HivePageSourceProvider.ColumnMapping> | HivePageSourceProvider.ColumnMapping.buildColumnMappings(List<HivePartitionKey> partitionKeys, List<HiveColumnHandle> columns, List<HiveColumnHandle> requiredInterimColumns, TableToPartitionMapping tableToPartitionMapping, org.apache.hadoop.fs.Path path, OptionalInt bucketNumber, long fileSize, long fileModifiedTime) |
| String | IonSqlQueryBuilder.buildSql(List<HiveColumnHandle> columns, TupleDomain<HiveColumnHandle> tupleDomain) |
| static Optional<ConnectorPageSource> | HivePageSourceProvider.createHivePageSource(Set<HivePageSourceFactory> pageSourceFactories, Set<HiveRecordCursorProvider> cursorProviders, org.apache.hadoop.conf.Configuration configuration, ConnectorSession session, org.apache.hadoop.fs.Path path, OptionalInt bucketNumber, long start, long length, long fileSize, long fileModifiedTime, Properties schema, TupleDomain<HiveColumnHandle> effectivePredicate, List<HiveColumnHandle> columns, List<HivePartitionKey> partitionKeys, org.joda.time.DateTimeZone hiveStorageTimeZone, TypeManager typeManager, TableToPartitionMapping tableToPartitionMapping, Optional<HiveSplit.BucketConversion> bucketConversion, boolean s3SelectPushdownEnabled, Optional<DeleteDeltaLocations> deleteDeltaLocations) |
| Optional<HivePageSourceFactory.ReaderPageSourceWithProjections> | HivePageSourceFactory.createPageSource(org.apache.hadoop.conf.Configuration configuration, ConnectorSession session, org.apache.hadoop.fs.Path path, long start, long length, long fileSize, Properties schema, List<HiveColumnHandle> columns, TupleDomain<HiveColumnHandle> effectivePredicate, org.joda.time.DateTimeZone hiveStorageTimeZone, Optional<DeleteDeltaLocations> deleteDeltaLocations) |
| Optional<HiveRecordCursorProvider.ReaderRecordCursorWithProjections> | HiveRecordCursorProvider.createRecordCursor(org.apache.hadoop.conf.Configuration configuration, ConnectorSession session, org.apache.hadoop.fs.Path path, long start, long length, long fileSize, Properties schema, List<HiveColumnHandle> columns, TupleDomain<HiveColumnHandle> effectivePredicate, org.joda.time.DateTimeZone hiveStorageTimeZone, TypeManager typeManager, boolean s3SelectPushdownEnabled) |
| Optional<HiveRecordCursorProvider.ReaderRecordCursorWithProjections> | GenericHiveRecordCursorProvider.createRecordCursor(org.apache.hadoop.conf.Configuration configuration, ConnectorSession session, org.apache.hadoop.fs.Path path, long start, long length, long fileSize, Properties schema, List<HiveColumnHandle> columns, TupleDomain<HiveColumnHandle> effectivePredicate, org.joda.time.DateTimeZone hiveStorageTimeZone, TypeManager typeManager, boolean s3SelectPushdownEnabled) |
| static HivePartition | HivePartitionManager.parsePartition(SchemaTableName tableName, String partitionName, List<HiveColumnHandle> partitionColumns, List<Type> partitionColumnTypes, org.joda.time.DateTimeZone timeZone) |
| static Optional<ReaderProjections> | ReaderProjections.projectBaseColumns(List<HiveColumnHandle> columns)<br>Creates a mapping between the input and base columns, if required. |
| static Optional<ReaderProjections> | ReaderProjections.projectSufficientColumns(List<HiveColumnHandle> columns)<br>Creates a set of sufficient columns for the input projected columns and prepares a mapping between the two. |
| void | HiveStorageFormat.validateColumns(List<HiveColumnHandle> handles) |
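The two ReaderProjections factories support projection pushdown: a file reader that only understands top-level columns can be handed the base columns, with the returned ReaderProjections recording how to map the reader's output back to the requested (possibly dereferenced) columns. A hedged sketch using only methods listed on this page (`columns` is an assumed local of type List&lt;HiveColumnHandle&gt;):

```
// Sketch: adapt possibly-projected columns for a reader that only
// supports top-level columns.
Optional<ReaderProjections> projections = ReaderProjections.projectBaseColumns(columns);

List<HiveColumnHandle> columnsForReader = projections
        .map(ReaderProjections::getReaderColumns) // base columns actually read
        .orElse(columns);                         // no projections: pass through as-is
```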
| Constructor and Description |
|---|
| BucketConversion(HiveBucketing.BucketingVersion bucketingVersion, int tableBucketCount, int partitionBucketCount, List<HiveColumnHandle> bucketColumnHandles) |
| GenericHiveRecordCursor(org.apache.hadoop.conf.Configuration configuration, org.apache.hadoop.fs.Path path, org.apache.hadoop.mapred.RecordReader<K,V> recordReader, long totalBytes, Properties splitSchema, List<HiveColumnHandle> columns, org.joda.time.DateTimeZone hiveStorageTimeZone) |
| HiveBucketHandle(List<HiveColumnHandle> columns, HiveBucketing.BucketingVersion bucketingVersion, int tableBucketCount, int readBucketCount) |
| HiveInsertTableHandle(String schemaName, String tableName, List<HiveColumnHandle> inputColumns, HivePageSinkMetadata pageSinkMetadata, LocationHandle locationHandle, Optional<HiveBucketProperty> bucketProperty, HiveStorageFormat tableStorageFormat, HiveStorageFormat partitionStorageFormat) |
| HiveOutputTableHandle(String schemaName, String tableName, List<HiveColumnHandle> inputColumns, HivePageSinkMetadata pageSinkMetadata, LocationHandle locationHandle, HiveStorageFormat tableStorageFormat, HiveStorageFormat partitionStorageFormat, List<String> partitionedBy, Optional<HiveBucketProperty> bucketProperty, String tableOwner, Map<String,String> additionalTableParameters) |
| HivePageSink(HiveWriterFactory writerFactory, List<HiveColumnHandle> inputColumns, Optional<HiveBucketProperty> bucketProperty, PageIndexerFactory pageIndexerFactory, HdfsEnvironment hdfsEnvironment, int maxOpenWriters, com.google.common.util.concurrent.ListeningExecutorService writeVerificationExecutor, io.airlift.json.JsonCodec<PartitionUpdate> partitionUpdateCodec, ConnectorSession session) |
| HivePartitionResult(List<HiveColumnHandle> partitionColumns, Iterable<HivePartition> partitions, TupleDomain<HiveColumnHandle> compactEffectivePredicate, TupleDomain<ColumnHandle> unenforcedConstraint, TupleDomain<ColumnHandle> enforcedConstraint, Optional<HiveBucketHandle> bucketHandle, Optional<HiveBucketing.HiveBucketFilter> bucketFilter) |
| HiveTableHandle(String schemaName, String tableName, List<HiveColumnHandle> partitionColumns, TupleDomain<HiveColumnHandle> compactEffectivePredicate, TupleDomain<ColumnHandle> enforcedConstraint, Optional<HiveBucketHandle> bucketHandle, Optional<HiveBucketing.HiveBucketFilter> bucketFilter, Optional<List<List<String>>> analyzePartitionValues, Optional<Set<String>> analyzeColumnNames) |
| HiveTableHandle(String schemaName, String tableName, Map<String,String> tableParameters, List<HiveColumnHandle> partitionColumns, Optional<HiveBucketHandle> bucketHandle) |
| HiveTableHandle(String schemaName, String tableName, Optional<Map<String,String>> tableParameters, List<HiveColumnHandle> partitionColumns, Optional<List<HivePartition>> partitions, TupleDomain<HiveColumnHandle> compactEffectivePredicate, TupleDomain<ColumnHandle> enforcedConstraint, Optional<HiveBucketHandle> bucketHandle, Optional<HiveBucketing.HiveBucketFilter> bucketFilter, Optional<List<List<String>>> analyzePartitionValues, Optional<Set<String>> analyzeColumnNames, Optional<Set<ColumnHandle>> constraintColumns) |
| HiveWritableTableHandle(String schemaName, String tableName, List<HiveColumnHandle> inputColumns, HivePageSinkMetadata pageSinkMetadata, LocationHandle locationHandle, Optional<HiveBucketProperty> bucketProperty, HiveStorageFormat tableStorageFormat, HiveStorageFormat partitionStorageFormat) |
| HiveWriterFactory(Set<HiveFileWriterFactory> fileWriterFactories, String schemaName, String tableName, boolean isCreateTable, List<HiveColumnHandle> inputColumns, HiveStorageFormat tableStorageFormat, HiveStorageFormat partitionStorageFormat, Map<String,String> additionalTableParameters, OptionalInt bucketCount, List<SortingColumn> sortedBy, LocationHandle locationHandle, LocationService locationService, String queryId, HivePageSinkMetadataProvider pageSinkMetadataProvider, TypeManager typeManager, HdfsEnvironment hdfsEnvironment, PageSorter pageSorter, io.airlift.units.DataSize sortBufferSize, int maxOpenSortFiles, boolean immutablePartitions, ConnectorSession session, NodeManager nodeManager, io.airlift.event.client.EventClient eventClient, HiveSessionProperties hiveSessionProperties, HiveWriterStats hiveWriterStats) |
| ReaderProjectionsAdapter(List<HiveColumnHandle> expectedHiveColumns, ReaderProjections readerProjections) |
| Modifier and Type | Method and Description |
|---|---|
| Optional<HivePageSourceFactory.ReaderPageSourceWithProjections> | OrcPageSourceFactory.createPageSource(org.apache.hadoop.conf.Configuration configuration, ConnectorSession session, org.apache.hadoop.fs.Path path, long start, long length, long fileSize, Properties schema, List<HiveColumnHandle> columns, TupleDomain<HiveColumnHandle> effectivePredicate, org.joda.time.DateTimeZone hiveStorageTimeZone, Optional<DeleteDeltaLocations> deleteDeltaLocations) |
| Modifier and Type | Method and Description |
|---|---|
| static Optional<org.apache.parquet.schema.Type> | ParquetPageSourceFactory.getColumnType(HiveColumnHandle column, org.apache.parquet.schema.MessageType messageType, boolean useParquetColumnNames) |
| static Optional<org.apache.parquet.schema.Type> | ParquetPageSourceFactory.getParquetType(org.apache.parquet.schema.GroupType groupType, boolean useParquetColumnNames, HiveColumnHandle column) |
| Modifier and Type | Method and Description |
|---|---|
| Optional<HivePageSourceFactory.ReaderPageSourceWithProjections> | ParquetPageSourceFactory.createPageSource(org.apache.hadoop.conf.Configuration configuration, ConnectorSession session, org.apache.hadoop.fs.Path path, long start, long length, long fileSize, Properties schema, List<HiveColumnHandle> columns, TupleDomain<HiveColumnHandle> effectivePredicate, org.joda.time.DateTimeZone hiveStorageTimeZone, Optional<DeleteDeltaLocations> deleteDeltaLocations) |
| static HivePageSourceFactory.ReaderPageSourceWithProjections | ParquetPageSourceFactory.createPageSource(org.apache.hadoop.fs.Path path, long start, long length, long fileSize, List<HiveColumnHandle> columns, TupleDomain<HiveColumnHandle> effectivePredicate, boolean useColumnNames, HdfsEnvironment hdfsEnvironment, org.apache.hadoop.conf.Configuration configuration, String user, FileFormatDataSourceStats stats, ParquetReaderOptions options)<br>This method is available for other callers to use directly. |
| static TupleDomain<org.apache.parquet.column.ColumnDescriptor> | ParquetPageSourceFactory.getParquetTupleDomain(Map<List<String>,RichColumnDescriptor> descriptorsByPath, TupleDomain<HiveColumnHandle> effectivePredicate) |
| Modifier and Type | Method and Description |
|---|---|
| Optional<HivePageSourceFactory.ReaderPageSourceWithProjections> | RcFilePageSourceFactory.createPageSource(org.apache.hadoop.conf.Configuration configuration, ConnectorSession session, org.apache.hadoop.fs.Path path, long start, long length, long fileSize, Properties schema, List<HiveColumnHandle> columns, TupleDomain<HiveColumnHandle> effectivePredicate, org.joda.time.DateTimeZone hiveStorageTimeZone, Optional<DeleteDeltaLocations> deleteDeltaLocations) |
| Constructor and Description |
|---|
| RcFilePageSource(RcFileReader rcFileReader, List<HiveColumnHandle> columns) |
| Modifier and Type | Method and Description |
|---|---|
| Optional<HiveRecordCursorProvider.ReaderRecordCursorWithProjections> | S3SelectRecordCursorProvider.createRecordCursor(org.apache.hadoop.conf.Configuration configuration, ConnectorSession session, org.apache.hadoop.fs.Path path, long start, long length, long fileSize, Properties schema, List<HiveColumnHandle> columns, TupleDomain<HiveColumnHandle> effectivePredicate, org.joda.time.DateTimeZone hiveStorageTimeZone, TypeManager typeManager, boolean s3SelectPushdownEnabled) |
| Modifier and Type | Method and Description |
|---|---|
| static List<HiveColumnHandle> | HiveUtil.getPartitionKeyColumnHandles(Table table, TypeManager typeManager) |
| static List<HiveColumnHandle> | HiveUtil.getRegularColumnHandles(Table table, TypeManager typeManager) |
| static List<HiveColumnHandle> | HiveUtil.hiveColumnHandles(Table table, TypeManager typeManager) |
| Modifier and Type | Method and Description |
|---|---|
| static String | HiveUtil.getPrefilledColumnValue(HiveColumnHandle columnHandle, HivePartitionKey partitionKey, org.apache.hadoop.fs.Path path, OptionalInt bucketNumber, long fileSize, long fileModifiedTime) |
| Modifier and Type | Method and Description |
|---|---|
| static org.apache.hadoop.mapred.RecordReader<?,?> | HiveUtil.createRecordReader(org.apache.hadoop.conf.Configuration configuration, org.apache.hadoop.fs.Path path, long start, long length, Properties schema, List<HiveColumnHandle> columns) |
| Constructor and Description |
|---|
| InternalHiveSplitFactory(org.apache.hadoop.fs.FileSystem fileSystem, String partitionName, org.apache.hadoop.mapred.InputFormat<?,?> inputFormat, Properties schema, List<HivePartitionKey> partitionKeys, TupleDomain<HiveColumnHandle> effectivePredicate, TableToPartitionMapping tableToPartitionMapping, Optional<HiveSplit.BucketConversion> bucketConversion, boolean forceLocalScheduling, boolean s3SelectPushdownEnabled) |
Copyright © 2012–2020. All rights reserved.