Package io.trino.plugin.hive
Class HiveTableHandle
java.lang.Object
io.trino.plugin.hive.HiveTableHandle
- All Implemented Interfaces:
ConnectorTableHandle
-
Constructor Summary
Constructors:
- HiveTableHandle(String schemaName, String tableName, List<HiveColumnHandle> partitionColumns, List<HiveColumnHandle> dataColumns, TupleDomain<HiveColumnHandle> compactEffectivePredicate, TupleDomain<ColumnHandle> enforcedConstraint, Optional<HiveBucketHandle> bucketHandle, Optional<HiveBucketing.HiveBucketFilter> bucketFilter, Optional<List<List<String>>> analyzePartitionValues, AcidTransaction transaction)
- HiveTableHandle(String schemaName, String tableName, Map<String, String> tableParameters, List<HiveColumnHandle> partitionColumns, List<HiveColumnHandle> dataColumns, Optional<HiveBucketHandle> bucketHandle)
- HiveTableHandle(String schemaName, String tableName, Optional<Map<String, String>> tableParameters, List<HiveColumnHandle> partitionColumns, List<HiveColumnHandle> dataColumns, Optional<List<String>> partitionNames, Optional<List<HivePartition>> partitions, TupleDomain<HiveColumnHandle> compactEffectivePredicate, TupleDomain<ColumnHandle> enforcedConstraint, Optional<HiveBucketHandle> bucketHandle, Optional<HiveBucketing.HiveBucketFilter> bucketFilter, Optional<List<List<String>>> analyzePartitionValues, Set<ColumnHandle> constraintColumns, Set<ColumnHandle> projectedColumns, AcidTransaction transaction, boolean recordScannedFiles, Optional<Long> maxSplitFileSize)
-
Method Summary
Methods:
- boolean equals(Object o)
- int hashCode()
- String toString()
- boolean isAcidMerge()
- boolean isInAcidTransaction()
- boolean isRecordScannedFiles()
- long getWriteId()
- getPartitionNames(): represents raw partition information as String
- getPartitions(): represents parsed partition information, derived from the raw partition string
- withAnalyzePartitionValues(List<List<String>> analyzePartitionValues)
- withMaxScannedFileSize(Optional<Long> maxScannedFileSize)
- withProjectedColumns(Set<ColumnHandle> projectedColumns)
- withRecordScannedFiles(boolean recordScannedFiles)
- withTransaction(AcidTransaction transaction)
-
Constructor Details
-
HiveTableHandle
public HiveTableHandle(String schemaName, String tableName, List<HiveColumnHandle> partitionColumns, List<HiveColumnHandle> dataColumns, TupleDomain<HiveColumnHandle> compactEffectivePredicate, TupleDomain<ColumnHandle> enforcedConstraint, Optional<HiveBucketHandle> bucketHandle, Optional<HiveBucketing.HiveBucketFilter> bucketFilter, Optional<List<List<String>>> analyzePartitionValues, AcidTransaction transaction) -
HiveTableHandle
public HiveTableHandle(String schemaName, String tableName, Map<String, String> tableParameters, List<HiveColumnHandle> partitionColumns, List<HiveColumnHandle> dataColumns, Optional<HiveBucketHandle> bucketHandle) -
HiveTableHandle
public HiveTableHandle(String schemaName, String tableName, Optional<Map<String, String>> tableParameters, List<HiveColumnHandle> partitionColumns, List<HiveColumnHandle> dataColumns, Optional<List<String>> partitionNames, Optional<List<HivePartition>> partitions, TupleDomain<HiveColumnHandle> compactEffectivePredicate, TupleDomain<ColumnHandle> enforcedConstraint, Optional<HiveBucketHandle> bucketHandle, Optional<HiveBucketing.HiveBucketFilter> bucketFilter, Optional<List<List<String>>> analyzePartitionValues, Set<ColumnHandle> constraintColumns, Set<ColumnHandle> projectedColumns, AcidTransaction transaction, boolean recordScannedFiles, Optional<Long> maxSplitFileSize)
-
-
Method Details
-
withAnalyzePartitionValues
-
withTransaction
-
withProjectedColumns
-
withRecordScannedFiles
-
withMaxScannedFileSize
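The with* methods above carry no descriptions, but their names and parameters suggest the copy-on-write style commonly used for immutable handle objects: each call returns a new handle with a single field replaced, leaving the original unchanged. A minimal self-contained sketch of that pattern; the Handle class and its fields are illustrative stand-ins, not Trino's actual implementation:

```java
import java.util.Optional;

// Illustrative immutable handle; NOT Trino's real HiveTableHandle.
final class Handle {
    private final String tableName;
    private final boolean recordScannedFiles;
    private final Optional<Long> maxScannedFileSize;

    Handle(String tableName, boolean recordScannedFiles, Optional<Long> maxScannedFileSize) {
        this.tableName = tableName;
        this.recordScannedFiles = recordScannedFiles;
        this.maxScannedFileSize = maxScannedFileSize;
    }

    // Each with* method copies every field except the one being replaced
    // and returns a fresh instance.
    Handle withRecordScannedFiles(boolean recordScannedFiles) {
        return new Handle(tableName, recordScannedFiles, maxScannedFileSize);
    }

    Handle withMaxScannedFileSize(Optional<Long> maxScannedFileSize) {
        return new Handle(tableName, recordScannedFiles, maxScannedFileSize);
    }

    String getTableName() { return tableName; }
    boolean isRecordScannedFiles() { return recordScannedFiles; }
    Optional<Long> getMaxScannedFileSize() { return maxScannedFileSize; }
}
```

Because every call produces a new instance, a handle can be refined step by step (for example by the planner) without mutating the handle other components already hold.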
-
getSchemaName
-
getTableName
-
getTableParameters
-
getPartitionColumns
-
getDataColumns
-
getPartitionNames
Represents raw partition information as String. These partitions are partially satisfied by the table filter criteria. This is set to `Optional#empty` once the parsed partition information has been loaded. Serialization is skipped because the partition names are not needed on workers. -
getPartitions
Represents parsed partition information, derived from the raw partition strings. These partitions are fully satisfied by the table filter criteria. Serialization is skipped because the partitions are not needed on workers. -
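The raw-versus-parsed distinction above can be illustrated with Hive's partition-name convention, where a name such as ds=2024-01-01/country=US encodes one value per partition column. A hypothetical simplified parser, not Trino's actual code (real parsers also unescape percent-encoded characters and coerce values to the partition columns' Hive types):

```java
import java.util.LinkedHashMap;
import java.util.Map;

final class PartitionNames {
    // Parses a raw Hive-style partition name ("key=value/key=value")
    // into an ordered map of partition-column values.
    static Map<String, String> parse(String partitionName) {
        Map<String, String> values = new LinkedHashMap<>();
        for (String part : partitionName.split("/")) {
            int eq = part.indexOf('=');
            if (eq < 0) {
                throw new IllegalArgumentException("Malformed partition name: " + partitionName);
            }
            values.put(part.substring(0, eq), part.substring(eq + 1));
        }
        return values;
    }
}
```

In these terms, getPartitionNames() corresponds to the unparsed "ds=2024-01-01/country=US" strings, while getPartitions() corresponds to the structured per-column values derived from them.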
getCompactEffectivePredicate
-
getEnforcedConstraint
-
getBucketHandle
-
getBucketFilter
-
getAnalyzePartitionValues
-
getTransaction
-
getConstraintColumns
-
getProjectedColumns
-
getSchemaTableName
-
isAcidMerge
public boolean isAcidMerge() -
isInAcidTransaction
public boolean isInAcidTransaction() -
getWriteId
public long getWriteId() -
isRecordScannedFiles
public boolean isRecordScannedFiles() -
getMaxScannedFileSize
-
equals
-
hashCode
public int hashCode() -
toString
-