- call(InternalRow) - Method in class org.apache.iceberg.spark.procedures.AncestorsOfProcedure
-
- call(InternalRow) - Method in class org.apache.iceberg.spark.procedures.CreateChangelogViewProcedure
-
- call(InternalRow) - Method in class org.apache.iceberg.spark.procedures.ExpireSnapshotsProcedure
-
- call(InternalRow) - Method in class org.apache.iceberg.spark.procedures.RemoveOrphanFilesProcedure
-
- call(InternalRow) - Method in interface org.apache.spark.sql.connector.iceberg.catalog.Procedure
-
Executes this procedure.
- canDeleteWhere(Filter[]) - Method in class org.apache.iceberg.spark.source.SparkTable
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.BucketFunction.BucketBinary
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.BucketFunction.BucketDecimal
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.BucketFunction.BucketInt
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.BucketFunction.BucketLong
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.BucketFunction.BucketString
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.TruncateFunction.TruncateBigInt
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.TruncateFunction.TruncateBinary
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.TruncateFunction.TruncateDecimal
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.TruncateFunction.TruncateInt
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.TruncateFunction.TruncateSmallInt
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.TruncateFunction.TruncateString
-
- canonicalName() - Method in class org.apache.iceberg.spark.functions.TruncateFunction.TruncateTinyInt
-
- capabilities() - Method in class org.apache.iceberg.spark.RollbackStagedTable
-
- capabilities() - Method in class org.apache.iceberg.spark.source.SparkChangelogTable
-
- capabilities() - Method in class org.apache.iceberg.spark.source.SparkTable
-
- caseSensitive(boolean) - Method in class org.apache.iceberg.spark.source.SparkScanBuilder
-
- caseSensitive() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- caseSensitive(SparkSession) - Static method in class org.apache.iceberg.spark.SparkUtil
-
- catalog() - Method in class org.apache.iceberg.spark.Spark3Util.CatalogAndIdentifier
-
- catalogAndIdentifier(SparkSession, String) - Static method in class org.apache.iceberg.spark.Spark3Util
-
- catalogAndIdentifier(SparkSession, String, CatalogPlugin) - Static method in class org.apache.iceberg.spark.Spark3Util
-
- catalogAndIdentifier(String, SparkSession, String) - Static method in class org.apache.iceberg.spark.Spark3Util
-
- catalogAndIdentifier(String, SparkSession, String, CatalogPlugin) - Static method in class org.apache.iceberg.spark.Spark3Util
-
- catalogAndIdentifier(SparkSession, List<String>) - Static method in class org.apache.iceberg.spark.Spark3Util
-
- catalogAndIdentifier(SparkSession, List<String>, CatalogPlugin) - Static method in class org.apache.iceberg.spark.Spark3Util
-
A modified version of Spark's LookupCatalog.CatalogAndIdentifier.unapply. Attempts to find the
catalog and identifier that a multipart identifier represents.
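The resolution idea can be sketched as follows. This is a simplified, hypothetical illustration: the real method consults Spark's CatalogManager for registered catalogs, while here a fixed set stands in for it, and the catalog names used are made up for the example.

```java
import java.util.Arrays;
import java.util.Set;

public class CatalogAndIdentifierSketch {
    // Hypothetical set of registered catalog names; the real code asks Spark's CatalogManager.
    static final Set<String> CATALOGS = Set.of("spark_catalog", "iceberg");

    // Splits a multipart identifier into (catalog, namespace..., name).
    // If the first part names a registered catalog, it is consumed as the catalog;
    // otherwise the current (default) catalog is assumed.
    static String[] resolve(String[] parts, String defaultCatalog) {
        if (parts.length > 1 && CATALOGS.contains(parts[0])) {
            return parts; // first part is the catalog
        }
        String[] result = new String[parts.length + 1];
        result[0] = defaultCatalog;
        System.arraycopy(parts, 0, result, 1, parts.length);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(resolve(new String[] {"iceberg", "db", "t"}, "spark_catalog")));
        // [iceberg, db, t]
        System.out.println(Arrays.toString(resolve(new String[] {"db", "t"}, "spark_catalog")));
        // [spark_catalog, db, t]
    }
}
```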
- CatalogAndIdentifier(CatalogPlugin, Identifier) - Constructor for class org.apache.iceberg.spark.Spark3Util.CatalogAndIdentifier
-
- CatalogAndIdentifier(Pair<CatalogPlugin, Identifier>) - Constructor for class org.apache.iceberg.spark.Spark3Util.CatalogAndIdentifier
-
- catalogAndIdentifier(List<String>, Function<String, C>, BiFunction<String[], String, T>, C, String[]) - Static method in class org.apache.iceberg.spark.SparkUtil
-
A modified version of Spark's LookupCatalog.CatalogAndIdentifier.unapply. Attempts to find the
catalog and identifier that a multipart identifier represents.
- ChangelogIterator - Class in org.apache.iceberg.spark
-
An iterator that transforms rows from changelog tables within a single Spark task.
- ChangelogIterator(Iterator<Row>, StructType) - Constructor for class org.apache.iceberg.spark.ChangelogIterator
-
- changeTypeIndex() - Method in class org.apache.iceberg.spark.ChangelogIterator
-
- CHECK_NULLABILITY - Static variable in class org.apache.iceberg.spark.SparkSQLProperties
-
- CHECK_NULLABILITY - Static variable in class org.apache.iceberg.spark.SparkWriteOptions
-
- CHECK_NULLABILITY_DEFAULT - Static variable in class org.apache.iceberg.spark.SparkSQLProperties
-
- CHECK_ORDERING - Static variable in class org.apache.iceberg.spark.SparkSQLProperties
-
- CHECK_ORDERING - Static variable in class org.apache.iceberg.spark.SparkWriteOptions
-
- CHECK_ORDERING_DEFAULT - Static variable in class org.apache.iceberg.spark.SparkSQLProperties
-
- checkNullability() - Method in class org.apache.iceberg.spark.SparkWriteConf
-
- checkOrdering() - Method in class org.apache.iceberg.spark.SparkWriteConf
-
- checkSourceCatalog(CatalogPlugin) - Method in class org.apache.iceberg.spark.actions.MigrateTableSparkAction
-
- checkSourceCatalog(CatalogPlugin) - Method in class org.apache.iceberg.spark.actions.SnapshotTableSparkAction
-
- clearRewrite(Table, String) - Method in class org.apache.iceberg.spark.FileRewriteCoordinator
-
- close() - Method in class org.apache.iceberg.spark.data.vectorized.DeletedColumnVector
-
- close() - Method in class org.apache.iceberg.spark.data.vectorized.IcebergArrowColumnVector
-
- close() - Method in class org.apache.iceberg.spark.data.vectorized.RowPositionColumnVector
-
- ColumnarBatchReader - Class in org.apache.iceberg.spark.data.vectorized
-
VectorizedReader that returns Spark's ColumnarBatch to support Spark's vectorized
read path.
- ColumnarBatchReader(List<VectorizedReader<?>>) - Constructor for class org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader
-
- columnSizes() - Method in class org.apache.iceberg.spark.SparkDataFile
-
- ColumnVectorWithFilter - Class in org.apache.iceberg.spark.data.vectorized
-
- ColumnVectorWithFilter(VectorHolder, int[]) - Constructor for class org.apache.iceberg.spark.data.vectorized.ColumnVectorWithFilter
-
- command() - Method in interface org.apache.spark.sql.connector.iceberg.write.RowLevelOperation
-
Returns the actual SQL operation being performed.
- command() - Method in interface org.apache.spark.sql.connector.iceberg.write.RowLevelOperationInfo
-
Returns the SQL command (e.g. DELETE, UPDATE, MERGE).
- commit(Offset) - Method in class org.apache.iceberg.spark.source.SparkMicroBatchStream
-
- CommitMetadata - Class in org.apache.iceberg.spark
-
Utility class that accepts thread-local commit properties.
- commitProperties() - Static method in class org.apache.iceberg.spark.CommitMetadata
-
- commitStagedChanges() - Method in class org.apache.iceberg.spark.RollbackStagedTable
-
- commitStagedChanges() - Method in class org.apache.iceberg.spark.source.StagedSparkTable
-
- compareToFileList(Dataset<Row>) - Method in class org.apache.iceberg.spark.actions.DeleteOrphanFilesSparkAction
-
- COMPRESSION_FACTOR - Static variable in class org.apache.iceberg.spark.actions.SparkSortStrategy
-
The number of shuffle partitions, and consequently the number of output files created by the
Spark sort, is based on the size of the input data files used in this rewrite operation.
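The sizing relationship can be sketched as below. This is a hypothetical simplification, assuming the compression factor scales the on-disk input size to an expected shuffle size and that each shuffle partition produces roughly one output file near the target size; the names and the exact formula are illustrative, not the action's implementation.

```java
// Hypothetical sketch of how a sort rewrite might size its shuffle.
public class ShufflePartitionSketch {
    static int numShufflePartitions(long inputSizeBytes, long targetFileSizeBytes, double compressionFactor) {
        // compressionFactor scales the on-disk input size to the expected shuffle size.
        double shuffleSize = inputSizeBytes * compressionFactor;
        return (int) Math.max(1, Math.ceil(shuffleSize / targetFileSizeBytes));
    }

    public static void main(String[] args) {
        // 10 GiB of input, 512 MiB target files, factor 1.0 -> 20 partitions
        System.out.println(numShufflePartitions(10L << 30, 512L << 20, 1.0));
    }
}
```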
- ComputeUpdateIterator - Class in org.apache.iceberg.spark
-
An iterator that finds delete/insert rows which represent an update, and converts them into
update records from changelog tables within a single Spark task.
- computeUpdates(Iterator<Row>, StructType, String[]) - Static method in class org.apache.iceberg.spark.ChangelogIterator
-
Creates an iterator composing
RemoveCarryoverIterator and
ComputeUpdateIterator
to remove carry-over rows and compute update rows.
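The pairing logic can be sketched as follows. This is a hypothetical illustration on plain records rather than Spark Rows: a DELETE immediately followed by an INSERT with the same key and identical values is a carry-over and is dropped; if the values differ, the pair becomes an UPDATE_BEFORE/UPDATE_AFTER pair. The `Row` type, change-type strings, and adjacency assumption are simplifications for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of pairing DELETE/INSERT changelog rows into updates
// and dropping carry-over rows; the real iterators operate on Spark Rows.
public class ChangelogPairingSketch {
    record Row(String changeType, String key, String value) {}

    static List<Row> computeUpdates(List<Row> rows) {
        List<Row> out = new ArrayList<>();
        for (int i = 0; i < rows.size(); i++) {
            Row row = rows.get(i);
            Row next = i + 1 < rows.size() ? rows.get(i + 1) : null;
            boolean paired = row.changeType().equals("DELETE")
                && next != null && next.changeType().equals("INSERT")
                && next.key().equals(row.key());
            if (paired && next.value().equals(row.value())) {
                i++; // carry-over: the row is unchanged, drop both halves
            } else if (paired) {
                out.add(new Row("UPDATE_BEFORE", row.key(), row.value()));
                out.add(new Row("UPDATE_AFTER", next.key(), next.value()));
                i++;
            } else {
                out.add(row); // plain insert or delete passes through
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("DELETE", "1", "a"),
            new Row("INSERT", "1", "b"),   // update: a -> b
            new Row("DELETE", "2", "c"),
            new Row("INSERT", "2", "c"),   // carry-over: dropped
            new Row("INSERT", "3", "d"));  // plain insert
        computeUpdates(rows).forEach(System.out::println);
    }
}
```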
- contains(String) - Method in class org.apache.iceberg.spark.SparkTableCache
-
- content() - Method in class org.apache.iceberg.spark.actions.ManifestFileBean
-
- convert(SortOrder) - Static method in class org.apache.iceberg.spark.SparkDistributionAndOrderingUtil
-
- convert(Filter[]) - Static method in class org.apache.iceberg.spark.SparkFilters
-
- convert(Filter) - Static method in class org.apache.iceberg.spark.SparkFilters
-
- convert(Schema) - Static method in class org.apache.iceberg.spark.SparkSchemaUtil
-
Convert a Schema to a Spark type.
- convert(Type) - Static method in class org.apache.iceberg.spark.SparkSchemaUtil
-
Convert a Type to a Spark type.
- convert(StructType) - Static method in class org.apache.iceberg.spark.SparkSchemaUtil
-
Convert a Spark struct to a Schema with new field ids.
- convert(StructType, boolean) - Static method in class org.apache.iceberg.spark.SparkSchemaUtil
-
Convert a Spark struct to a Schema with new field ids.
- convert(DataType) - Static method in class org.apache.iceberg.spark.SparkSchemaUtil
-
Convert a Spark struct to a Type with new field ids.
- convert(Schema, StructType) - Static method in class org.apache.iceberg.spark.SparkSchemaUtil
-
Convert a Spark struct to a Schema based on the given schema.
- convert(Schema, Row) - Static method in class org.apache.iceberg.spark.SparkValueConverter
-
- convert(Type, Object) - Static method in class org.apache.iceberg.spark.SparkValueConverter
-
- convertWithFreshIds(Schema, StructType) - Static method in class org.apache.iceberg.spark.SparkSchemaUtil
-
Convert a Spark struct to a Schema based on the given schema.
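The "fresh field ids" idea in the conversions above can be illustrated with a minimal sketch. This is a hypothetical simplification: Spark struct fields carry no ids, so a conversion to an Iceberg schema must mint one per field; the real code walks nested Spark types, while this example only numbers top-level field names.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of "fresh field ids": each Spark field, which has no id,
// is assigned the next unused id during conversion to an Iceberg-style schema.
public class FreshIdsSketch {
    static Map<String, Integer> assignFreshIds(String[] fieldNames) {
        Map<String, Integer> ids = new LinkedHashMap<>();
        int nextId = 1;
        for (String name : fieldNames) {
            ids.put(name, nextId++);
        }
        return ids;
    }

    public static void main(String[] args) {
        System.out.println(assignFreshIds(new String[] {"id", "data", "ts"}));
        // {id=1, data=2, ts=3}
    }
}
```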
- copy() - Method in class org.apache.iceberg.spark.actions.ManifestFileBean
-
- copy() - Method in class org.apache.iceberg.spark.actions.SetAccumulator
-
- copy() - Method in class org.apache.iceberg.spark.SparkDataFile
-
- copyOf(Table) - Static method in class org.apache.iceberg.spark.source.SerializableTableWithSize
-
- copyOnWriteMergeDistributionMode() - Method in class org.apache.iceberg.spark.SparkWriteConf
-
- copyWithoutStats() - Method in class org.apache.iceberg.spark.SparkDataFile
-
- createBatchWriterFactory(PhysicalWriteInfo) - Method in interface org.apache.spark.sql.connector.iceberg.write.DeltaBatchWrite
-
- CreateChangelogViewProcedure - Class in org.apache.iceberg.spark.procedures
-
A procedure that creates a view for changed rows.
- createNamespace(String[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkCatalog
-
- createNamespace(String[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkSessionCatalog
-
- createReaderFactory() - Method in class org.apache.iceberg.spark.source.SparkMicroBatchStream
-
- createTable(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkCachedTableCatalog
-
- createTable(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkCatalog
-
- createTable(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkSessionCatalog
-
- createWriter(int, long) - Method in interface org.apache.spark.sql.connector.iceberg.write.DeltaWriterFactory
-
- currentPath() - Method in class org.apache.iceberg.spark.data.ParquetWithSparkSchemaVisitor
-
- schema(Schema, Supplier<Type>) - Method in class org.apache.iceberg.spark.PruneColumnsWithoutReordering
-
- schema(Schema, Supplier<Type>) - Method in class org.apache.iceberg.spark.PruneColumnsWithReordering
-
- schema() - Method in class org.apache.iceberg.spark.RollbackStagedTable
-
- schema() - Method in class org.apache.iceberg.spark.source.SparkChangelogTable
-
- schema() - Method in class org.apache.iceberg.spark.source.SparkTable
-
- schema(Schema, String) - Method in class org.apache.iceberg.spark.Spark3Util.DescribeSchemaVisitor
-
- schemaForTable(SparkSession, String) - Static method in class org.apache.iceberg.spark.SparkSchemaUtil
-
Returns a Schema for the given table with fresh field ids.
- self() - Method in class org.apache.iceberg.spark.actions.DeleteOrphanFilesSparkAction
-
- self() - Method in class org.apache.iceberg.spark.actions.DeleteReachableFilesSparkAction
-
- self() - Method in class org.apache.iceberg.spark.actions.ExpireSnapshotsSparkAction
-
- self() - Method in class org.apache.iceberg.spark.actions.MigrateTableSparkAction
-
- self() - Method in class org.apache.iceberg.spark.actions.RewriteDataFilesSparkAction
-
- self() - Method in class org.apache.iceberg.spark.actions.RewriteManifestsSparkAction
-
- self() - Method in class org.apache.iceberg.spark.actions.SnapshotTableSparkAction
-
- sequenceNumber() - Method in class org.apache.iceberg.spark.actions.ManifestFileBean
-
- serializableFileIO(Table) - Static method in class org.apache.iceberg.spark.SparkUtil
-
- SerializableMetadataTableWithSize(BaseMetadataTable) - Constructor for class org.apache.iceberg.spark.source.SerializableTableWithSize.SerializableMetadataTableWithSize
-
- SerializableTableWithSize - Class in org.apache.iceberg.spark.source
-
This class provides a serializable table with a known size estimate.
- SerializableTableWithSize(Table) - Constructor for class org.apache.iceberg.spark.source.SerializableTableWithSize
-
- SerializableTableWithSize.SerializableMetadataTableWithSize - Class in org.apache.iceberg.spark.source
-
- set(int, T) - Method in class org.apache.iceberg.spark.SparkStructLike
-
- SetAccumulator<T> - Class in org.apache.iceberg.spark.actions
-
- SetAccumulator() - Constructor for class org.apache.iceberg.spark.actions.SetAccumulator
-
- setAddedSnapshotId(Long) - Method in class org.apache.iceberg.spark.actions.ManifestFileBean
-
- setAuthority(String) - Method in class org.apache.iceberg.spark.actions.DeleteOrphanFilesSparkAction.FileURI
-
- setBatchContext(long) - Method in class org.apache.iceberg.spark.data.SparkOrcReader
-
- setContent(Integer) - Method in class org.apache.iceberg.spark.actions.ManifestFileBean
-
- setDelegateCatalog(CatalogPlugin) - Method in class org.apache.iceberg.spark.SparkSessionCatalog
-
- setDeleteFilter(DeleteFilter<InternalRow>) - Method in class org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader
-
- setJobGroupInfo(SparkContext, JobGroupInfo) - Static method in class org.apache.iceberg.spark.JobGroupUtils
-
- setLength(Long) - Method in class org.apache.iceberg.spark.actions.ManifestFileBean
-
- setOption(String, String, CaseInsensitiveStringMap) - Static method in class org.apache.iceberg.spark.Spark3Util
-
- setPartitionSpecId(Integer) - Method in class org.apache.iceberg.spark.actions.ManifestFileBean
-
- setPath(String) - Method in class org.apache.iceberg.spark.actions.DeleteOrphanFilesSparkAction.FileURI
-
- setPath(String) - Method in class org.apache.iceberg.spark.actions.FileInfo
-
- setPath(String) - Method in class org.apache.iceberg.spark.actions.ManifestFileBean
-
- setRowGroupInfo(PageReadStore, Map<ColumnPath, ColumnChunkMetaData>, long) - Method in class org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader
-
- setRowPositionSupplier(Supplier<Long>) - Method in class org.apache.iceberg.spark.data.SparkAvroReader
-
- setSchema(Schema) - Method in class org.apache.iceberg.spark.data.SparkAvroReader
-
- setSchema(Schema) - Method in class org.apache.iceberg.spark.data.SparkAvroWriter
-
- setScheme(String) - Method in class org.apache.iceberg.spark.actions.DeleteOrphanFilesSparkAction.FileURI
-
- setType(String) - Method in class org.apache.iceberg.spark.actions.FileInfo
-
- setUriAsString(String) - Method in class org.apache.iceberg.spark.actions.DeleteOrphanFilesSparkAction.FileURI
-
- shortName() - Method in class org.apache.iceberg.spark.source.IcebergSource
-
- size() - Method in class org.apache.iceberg.spark.SparkStructLike
-
- size() - Method in class org.apache.iceberg.spark.SparkTableCache
-
- sizeEstimateMultiple() - Method in class org.apache.iceberg.spark.actions.SparkSortStrategy
-
- SNAPSHOT_ID - Static variable in class org.apache.iceberg.spark.SparkReadOptions
-
- SNAPSHOT_PROPERTY_PREFIX - Static variable in class org.apache.iceberg.spark.SparkWriteOptions
-
- snapshotId() - Method in class org.apache.iceberg.spark.actions.ManifestFileBean
-
- snapshotId() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- snapshotTable(String) - Method in class org.apache.iceberg.spark.actions.SparkActions
-
- SnapshotTableSparkAction - Class in org.apache.iceberg.spark.actions
-
Creates a new Iceberg table based on a source Spark table.
- sort(SortOrder) - Method in class org.apache.iceberg.spark.actions.RewriteDataFilesSparkAction
-
- sort() - Method in class org.apache.iceberg.spark.actions.RewriteDataFilesSparkAction
-
- sortOrder() - Method in class org.apache.iceberg.spark.actions.SparkZOrderStrategy
-
- sortOrderId() - Method in class org.apache.iceberg.spark.SparkDataFile
-
- sortPlan(Distribution, SortOrder[], LogicalPlan, SQLConf) - Method in class org.apache.iceberg.spark.actions.SparkSortStrategy
-
- spark() - Method in class org.apache.iceberg.spark.actions.SparkSortStrategy
-
- Spark3Util - Class in org.apache.iceberg.spark
-
- Spark3Util.CatalogAndIdentifier - Class in org.apache.iceberg.spark
-
This mimics a class that is private inside Spark's LookupCatalog.
- Spark3Util.DescribeSchemaVisitor - Class in org.apache.iceberg.spark
-
- SPARK_MERGE_SCHEMA - Static variable in class org.apache.iceberg.spark.SparkWriteOptions
-
- SparkActions - Class in org.apache.iceberg.spark.actions
-
An implementation of ActionsProvider for Spark.
- SparkAvroReader - Class in org.apache.iceberg.spark.data
-
- SparkAvroReader(Schema, Schema) - Constructor for class org.apache.iceberg.spark.data.SparkAvroReader
-
- SparkAvroReader(Schema, Schema, Map<Integer, ?>) - Constructor for class org.apache.iceberg.spark.data.SparkAvroReader
-
- SparkAvroWriter - Class in org.apache.iceberg.spark.data
-
- SparkAvroWriter(StructType) - Constructor for class org.apache.iceberg.spark.data.SparkAvroWriter
-
- SparkBinPackStrategy - Class in org.apache.iceberg.spark.actions
-
- SparkBinPackStrategy(Table, SparkSession) - Constructor for class org.apache.iceberg.spark.actions.SparkBinPackStrategy
-
- SparkCachedTableCatalog - Class in org.apache.iceberg.spark
-
An internal table catalog that is capable of loading tables from a cache.
- SparkCachedTableCatalog() - Constructor for class org.apache.iceberg.spark.SparkCachedTableCatalog
-
- SparkCatalog - Class in org.apache.iceberg.spark
-
A Spark TableCatalog implementation that wraps an Iceberg Catalog.
- SparkCatalog() - Constructor for class org.apache.iceberg.spark.SparkCatalog
-
- SparkChangelogTable - Class in org.apache.iceberg.spark.source
-
- SparkChangelogTable(Table, boolean) - Constructor for class org.apache.iceberg.spark.source.SparkChangelogTable
-
- SparkDataFile - Class in org.apache.iceberg.spark
-
- SparkDataFile(Types.StructType, StructType) - Constructor for class org.apache.iceberg.spark.SparkDataFile
-
- SparkDataFile(Types.StructType, Types.StructType, StructType) - Constructor for class org.apache.iceberg.spark.SparkDataFile
-
- SparkDistributionAndOrderingUtil - Class in org.apache.iceberg.spark
-
- SparkExceptionUtil - Class in org.apache.iceberg.spark
-
- SparkFilters - Class in org.apache.iceberg.spark
-
- SparkFunctions - Class in org.apache.iceberg.spark.functions
-
- SparkMetadataColumn - Class in org.apache.iceberg.spark.source
-
- SparkMetadataColumn(String, DataType, boolean) - Constructor for class org.apache.iceberg.spark.source.SparkMetadataColumn
-
- SparkMicroBatchStream - Class in org.apache.iceberg.spark.source
-
- SparkOrcReader - Class in org.apache.iceberg.spark.data
-
Converts the OrcIterator, which returns ORC's VectorizedRowBatch, to a set of Spark's UnsafeRows.
- SparkOrcReader(Schema, TypeDescription) - Constructor for class org.apache.iceberg.spark.data.SparkOrcReader
-
- SparkOrcReader(Schema, TypeDescription, Map<Integer, ?>) - Constructor for class org.apache.iceberg.spark.data.SparkOrcReader
-
- SparkOrcValueReaders - Class in org.apache.iceberg.spark.data
-
- SparkOrcWriter - Class in org.apache.iceberg.spark.data
-
This class acts as an adaptor from an OrcFileAppender to a FileAppender<InternalRow>.
- SparkOrcWriter(Schema, TypeDescription) - Constructor for class org.apache.iceberg.spark.data.SparkOrcWriter
-
- SparkParquetReaders - Class in org.apache.iceberg.spark.data
-
- SparkParquetWriters - Class in org.apache.iceberg.spark.data
-
- SparkPartition(Map<String, String>, String, String) - Constructor for class org.apache.iceberg.spark.SparkTableUtil.SparkPartition
-
- SparkPartitionedFanoutWriter - Class in org.apache.iceberg.spark.source
-
- SparkPartitionedFanoutWriter(PartitionSpec, FileFormat, FileAppenderFactory<InternalRow>, OutputFileFactory, FileIO, long, Schema, StructType) - Constructor for class org.apache.iceberg.spark.source.SparkPartitionedFanoutWriter
-
- SparkPartitionedWriter - Class in org.apache.iceberg.spark.source
-
- SparkPartitionedWriter(PartitionSpec, FileFormat, FileAppenderFactory<InternalRow>, OutputFileFactory, FileIO, long, Schema, StructType) - Constructor for class org.apache.iceberg.spark.source.SparkPartitionedWriter
-
- SparkProcedures - Class in org.apache.iceberg.spark.procedures
-
- SparkProcedures.ProcedureBuilder - Interface in org.apache.iceberg.spark.procedures
-
- SparkReadConf - Class in org.apache.iceberg.spark
-
A class for common Iceberg configs for Spark reads.
- SparkReadConf(SparkSession, Table, Map<String, String>) - Constructor for class org.apache.iceberg.spark.SparkReadConf
-
- SparkReadOptions - Class in org.apache.iceberg.spark
-
Spark DataFrame read options.
- SparkScanBuilder - Class in org.apache.iceberg.spark.source
-
- SparkSchemaUtil - Class in org.apache.iceberg.spark
-
Helper methods for working with Spark/Hive metadata.
- SparkSessionCatalog<T extends org.apache.spark.sql.connector.catalog.TableCatalog & org.apache.spark.sql.connector.catalog.SupportsNamespaces> - Class in org.apache.iceberg.spark
-
A Spark catalog that can also load non-Iceberg tables.
- SparkSessionCatalog() - Constructor for class org.apache.iceberg.spark.SparkSessionCatalog
-
- SparkSortStrategy - Class in org.apache.iceberg.spark.actions
-
- SparkSortStrategy(Table, SparkSession) - Constructor for class org.apache.iceberg.spark.actions.SparkSortStrategy
-
- SparkSQLProperties - Class in org.apache.iceberg.spark
-
- SparkStructLike - Class in org.apache.iceberg.spark
-
- SparkStructLike(Types.StructType) - Constructor for class org.apache.iceberg.spark.SparkStructLike
-
- SparkTable - Class in org.apache.iceberg.spark.source
-
- SparkTable(Table, boolean) - Constructor for class org.apache.iceberg.spark.source.SparkTable
-
- SparkTable(Table, Long, boolean) - Constructor for class org.apache.iceberg.spark.source.SparkTable
-
- SparkTableCache - Class in org.apache.iceberg.spark
-
- SparkTableCache() - Constructor for class org.apache.iceberg.spark.SparkTableCache
-
- SparkTableUtil - Class in org.apache.iceberg.spark
-
Java version of the original SparkTableUtil.scala:
https://github.com/apache/iceberg/blob/apache-iceberg-0.8.0-incubating/spark/src/main/scala/org/apache/iceberg/spark/SparkTableUtil.scala
- SparkTableUtil.SparkPartition - Class in org.apache.iceberg.spark
-
Class representing a table partition.
- SparkUtil - Class in org.apache.iceberg.spark
-
- SparkValueConverter - Class in org.apache.iceberg.spark
-
A utility class that converts Spark values to Iceberg's internal representation.
- SparkValueReaders - Class in org.apache.iceberg.spark.data
-
- SparkValueWriters - Class in org.apache.iceberg.spark.data
-
- SparkWriteConf - Class in org.apache.iceberg.spark
-
A class for common Iceberg configs for Spark writes.
- SparkWriteConf(SparkSession, Table, Map<String, String>) - Constructor for class org.apache.iceberg.spark.SparkWriteConf
-
- SparkWriteOptions - Class in org.apache.iceberg.spark
-
Spark DataFrame write options.
- SparkZOrderStrategy - Class in org.apache.iceberg.spark.actions
-
- SparkZOrderStrategy(Table, SparkSession, List<String>) - Constructor for class org.apache.iceberg.spark.actions.SparkZOrderStrategy
-
- specForTable(SparkSession, String) - Static method in class org.apache.iceberg.spark.SparkSchemaUtil
-
Returns a PartitionSpec for the given table.
- specId(int) - Method in class org.apache.iceberg.spark.actions.RewriteManifestsSparkAction
-
- specId() - Method in class org.apache.iceberg.spark.SparkDataFile
-
- SPLIT_SIZE - Static variable in class org.apache.iceberg.spark.SparkReadOptions
-
- splitLookback() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- splitLookbackOption() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- splitOffsets() - Method in class org.apache.iceberg.spark.SparkDataFile
-
- splitOpenFileCost() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- splitOpenFileCostOption() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- splitSize() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- splitSizeOption() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- stageCreate(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkCatalog
-
- stageCreate(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkSessionCatalog
-
- stageCreateOrReplace(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkCatalog
-
- stageCreateOrReplace(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkSessionCatalog
-
- StagedSparkTable - Class in org.apache.iceberg.spark.source
-
- StagedSparkTable(Transaction) - Constructor for class org.apache.iceberg.spark.source.StagedSparkTable
-
- stageReplace(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkCatalog
-
- stageReplace(Identifier, StructType, Transform[], Map<String, String>) - Method in class org.apache.iceberg.spark.SparkSessionCatalog
-
- stageRewrite(Table, String, Set<DataFile>) - Method in class org.apache.iceberg.spark.FileRewriteCoordinator
-
Called to persist the output of a rewrite action for a specific group.
- stageTasks(Table, String, List<FileScanTask>) - Method in class org.apache.iceberg.spark.FileScanTaskSetManager
-
- stagingLocation(String) - Method in class org.apache.iceberg.spark.actions.RewriteManifestsSparkAction
-
- START_SNAPSHOT_ID - Static variable in class org.apache.iceberg.spark.SparkReadOptions
-
- START_TIMESTAMP - Static variable in class org.apache.iceberg.spark.SparkReadOptions
-
- startSnapshotId() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- startTimestamp() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- stop() - Method in class org.apache.iceberg.spark.source.SparkMicroBatchStream
-
- STREAM_FROM_TIMESTAMP - Static variable in class org.apache.iceberg.spark.SparkReadOptions
-
- STREAM_RESULTS - Static variable in class org.apache.iceberg.spark.actions.DeleteReachableFilesSparkAction
-
- STREAM_RESULTS - Static variable in class org.apache.iceberg.spark.actions.ExpireSnapshotsSparkAction
-
- STREAM_RESULTS_DEFAULT - Static variable in class org.apache.iceberg.spark.actions.DeleteReachableFilesSparkAction
-
- STREAM_RESULTS_DEFAULT - Static variable in class org.apache.iceberg.spark.actions.ExpireSnapshotsSparkAction
-
- streamFromTimestamp() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- STREAMING_SKIP_DELETE_SNAPSHOTS - Static variable in class org.apache.iceberg.spark.SparkReadOptions
-
- STREAMING_SKIP_DELETE_SNAPSHOTS_DEFAULT - Static variable in class org.apache.iceberg.spark.SparkReadOptions
-
- STREAMING_SKIP_OVERWRITE_SNAPSHOTS - Static variable in class org.apache.iceberg.spark.SparkReadOptions
-
- STREAMING_SKIP_OVERWRITE_SNAPSHOTS_DEFAULT - Static variable in class org.apache.iceberg.spark.SparkReadOptions
-
- streamingSkipDeleteSnapshots() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- streamingSkipOverwriteSnapshots() - Method in class org.apache.iceberg.spark.SparkReadConf
-
- struct(StructType, GroupType, List<T>) - Method in class org.apache.iceberg.spark.data.ParquetWithSparkSchemaVisitor
-
- struct(Types.StructType, Iterable<Type>) - Method in class org.apache.iceberg.spark.PruneColumnsWithoutReordering
-
- struct(Types.StructType, Iterable<Type>) - Method in class org.apache.iceberg.spark.PruneColumnsWithReordering
-
- struct(Types.StructType, List<String>) - Method in class org.apache.iceberg.spark.Spark3Util.DescribeSchemaVisitor
-
- SupportsDelta - Interface in org.apache.spark.sql.connector.iceberg.write
-
A mix-in interface for RowLevelOperation.
- supportsExternalMetadata() - Method in class org.apache.iceberg.spark.source.IcebergSource
-
- SupportsRowLevelOperations - Interface in org.apache.spark.sql.connector.iceberg.catalog
-
A mix-in interface for row-level operations support.