object DeltaLog extends DeltaLogging
Linear Supertypes: DeltaLogging, DatabricksLogging, DeltaProgressReporter, LoggingShims, Logging, AnyRef, Any
Type Members
- implicit class LogStringContext extends AnyRef
- Definition Classes
- LoggingShims
- type CacheKey = (Path, Map[String, String])
We create only a single DeltaLog for any given DeltaLogCacheKey to avoid wasted work in reconstructing the log.
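The single-instance-per-key scheme above can be sketched in plain Scala. This is an illustrative simplification, not Delta's actual implementation (which uses a Guava cache keyed by the canonicalized log path plus file-system options); `Path`, `FakeDeltaLog`, and `DeltaLogCache` here are stand-in names.

```scala
import java.util.concurrent.ConcurrentHashMap

// Stand-ins for Hadoop's Path and the real DeltaLog (illustrative only).
case class Path(value: String)
class FakeDeltaLog(val dataPath: Path)

object DeltaLogCache {
  // Mirrors DeltaLog.CacheKey: the table path plus file-system options.
  type CacheKey = (Path, Map[String, String])

  private val cache = new ConcurrentHashMap[CacheKey, FakeDeltaLog]()

  // Return the cached instance for this key, constructing it at most once.
  def forTable(path: Path, options: Map[String, String]): FakeDeltaLog =
    cache.computeIfAbsent((path, options), key => new FakeDeltaLog(key._1))

  // Mirrors DeltaLog.invalidateCache: drop all entries for a path.
  def invalidate(path: Path): Unit =
    cache.keySet.removeIf(_._1 == path)
}

val a = DeltaLogCache.forTable(Path("/data/t"), Map.empty)
val b = DeltaLogCache.forTable(Path("/data/t"), Map.empty)
println(a eq b) // same instance, so the log is reconstructed only once
```

Keying on both the path and the options matters: the same table accessed with different file-system credentials must not share a cached instance.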
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def assertRemovable(snapshot: Snapshot): Unit
Checks whether this table only accepts appends. If so, it will throw an error in operations that can remove data, such as DELETE/UPDATE/MERGE.
- def clearCache(): Unit
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- def deltaAssert(check: => Boolean, name: String, msg: String, deltaLog: DeltaLog = null, data: AnyRef = null, path: Option[Path] = None): Unit
Helper method to check invariants in Delta code. Fails when running in tests; otherwise it records a delta assertion event and logs a warning.
- Attributes
- protected
- Definition Classes
- DeltaLogging
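The strict-in-tests, lenient-in-production behavior of `deltaAssert` can be sketched as follows. This is a hypothetical simplification: the real helper derives the test/production flag from the runtime environment and records a usage event rather than printing.

```scala
// Sketch of an invariant helper that is strict in tests and lenient
// in production. Names and the isTesting flag are illustrative.
object DeltaAssertions {
  var isTesting: Boolean = true

  def deltaAssert(check: => Boolean, name: String, msg: String): Unit = {
    if (!check) {
      if (isTesting) {
        // In tests: fail loudly so the invariant violation is caught early.
        throw new AssertionError(s"Delta assertion '$name' failed: $msg")
      } else {
        // In production: record the event and warn instead of failing.
        println(s"WARN delta assertion '$name' failed: $msg")
      }
    }
  }
}
```

This pattern lets the same invariant check guard correctness in CI without turning a benign anomaly into a production outage.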
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def filterFileList(partitionSchema: StructType, files: DataFrame, partitionFilters: Seq[Expression], partitionColumnPrefixes: Seq[String] = Nil, shouldRewritePartitionFilters: Boolean = true): DataFrame
Filters the given Dataset by the given partitionFilters, returning those that match.
- files
The active files in the DeltaLog state, which contains the partition value information
- partitionFilters
Filters on the partition columns
- partitionColumnPrefixes
The path to the partitionValues column, if it's nested
- shouldRewritePartitionFilters
Whether to rewrite partitionFilters to be over the AddFile schema
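Conceptually, this filtering operates on the partitionValues map carried by each AddFile action. The plain-Scala analogue below illustrates the idea; the real method works on a Catalyst DataFrame with Expression filters, and the `AddFile` case class here is a minimal stand-in.

```scala
// Minimal stand-in for AddFile: a file path plus its partition values.
case class AddFile(path: String, partitionValues: Map[String, String])

// Analogue of filterFileList: keep files whose partition values satisfy
// every filter. Delta compiles Catalyst Expressions instead of closures.
def filterFileList(
    files: Seq[AddFile],
    partitionFilters: Seq[Map[String, String] => Boolean]): Seq[AddFile] =
  files.filter(f => partitionFilters.forall(p => p(f.partitionValues)))

val files = Seq(
  AddFile("part-0", Map("date" -> "2024-01-01")),
  AddFile("part-1", Map("date" -> "2024-01-02")))

// Filter equivalent to `date = '2024-01-02'`.
val matched = filterFileList(files, Seq(_.get("date").contains("2024-01-02")))
println(matched.map(_.path)) // List(part-1)
```

Because the filters run against metadata only, partition pruning happens without touching any data files.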
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- def forTable(spark: SparkSession, table: CatalogTable, clock: Clock): DeltaLog
Helper for creating a log for the table.
- def forTable(spark: SparkSession, table: CatalogTable, options: Map[String, String]): DeltaLog
Helper for creating a log for the table.
- def forTable(spark: SparkSession, tableName: TableIdentifier, clock: Clock): DeltaLog
Helper for creating a log for the table.
- def forTable(spark: SparkSession, table: CatalogTable): DeltaLog
Helper for creating a log for the table.
- def forTable(spark: SparkSession, tableName: TableIdentifier): DeltaLog
Helper for creating a log for the table.
- def forTable(spark: SparkSession, dataPath: Path, clock: Clock): DeltaLog
Helper for creating a log when it is stored at the root of the data.
- def forTable(spark: SparkSession, dataPath: Path, options: Map[String, String]): DeltaLog
Helper for creating a log when it is stored at the root of the data.
- def forTable(spark: SparkSession, dataPath: Path): DeltaLog
Helper for creating a log when it is stored at the root of the data.
- def forTable(spark: SparkSession, dataPath: String): DeltaLog
Helper for creating a log when it is stored at the root of the data.
- def forTableWithSnapshot(spark: SparkSession, table: CatalogTable, options: Map[String, String]): (DeltaLog, Snapshot)
Helper for getting a log, as well as the latest snapshot, of the table.
- def forTableWithSnapshot(spark: SparkSession, dataPath: Path, options: Map[String, String]): (DeltaLog, Snapshot)
Helper for getting a log, as well as the latest snapshot, of the table.
- def forTableWithSnapshot(spark: SparkSession, tableName: TableIdentifier): (DeltaLog, Snapshot)
Helper for getting a log, as well as the latest snapshot, of the table.
- def forTableWithSnapshot(spark: SparkSession, dataPath: Path): (DeltaLog, Snapshot)
Helper for getting a log, as well as the latest snapshot, of the table.
- def forTableWithSnapshot(spark: SparkSession, dataPath: String): (DeltaLog, Snapshot)
Helper for getting a log, as well as the latest snapshot, of the table.
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def getCommonTags(deltaLog: DeltaLog, tahoeId: String): Map[TagDefinition, String]
- Definition Classes
- DeltaLogging
- def getErrorData(e: Throwable): Map[String, Any]
- Definition Classes
- DeltaLogging
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def indexToRelation(spark: SparkSession, index: DeltaLogFileIndex, additionalOptions: Map[String, String], schema: StructType = Action.logSchema): LogicalRelation
Creates a LogicalRelation for a given DeltaLogFileIndex, with all necessary file source options taken from the Delta Log. All reads of Delta metadata files should use this method.
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def invalidateCache(spark: SparkSession, dataPath: Path): Unit
Invalidate the cached DeltaLog object for the given dataPath.
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- val jsonCommitParseOption: Map[String, String]
- def log: Logger
- Attributes
- protected
- Definition Classes
- Logging
- def logConsole(line: String): Unit
- Definition Classes
- DatabricksLogging
- def logDebug(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logDebug(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logDebug(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logError(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logError(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logInfo(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logInfo(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logName: String
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logTrace(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logTrace(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logWarning(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logWarning(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def minSetTransactionRetentionInterval(metadata: Metadata): Option[Long]
How long to keep around SetTransaction actions before physically deleting them.
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def recordDeltaEvent(deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty, data: AnyRef = null, path: Option[Path] = None): Unit
Used to record the occurrence of a single event or to report detailed, operation-specific statistics.
- path
Used to log the path of the Delta table when deltaLog is null.
- Attributes
- protected
- Definition Classes
- DeltaLogging
- def recordDeltaOperation[A](deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: => A): A
Used to report the duration as well as the success or failure of an operation on a deltaLog.
- Attributes
- protected
- Definition Classes
- DeltaLogging
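The duration-and-outcome reporting done by `recordDeltaOperation` is a by-name thunk wrapper. The sketch below shows the pattern with illustrative names; the real method additionally attaches tags and the table's identity, and emits a usage event rather than printing.

```scala
// Run a thunk, measure its duration, and report success or failure,
// rethrowing any exception so callers see the original error.
def recordOperation[A](opType: String)(thunk: => A): A = {
  val start = System.nanoTime()
  try {
    val result = thunk
    val millis = (System.nanoTime() - start) / 1000000
    println(s"$opType succeeded in ${millis}ms")
    result
  } catch {
    case e: Throwable =>
      println(s"$opType failed: ${e.getMessage}")
      throw e
  }
}

val n = recordOperation("delta.example.sum") { (1 to 10).sum }
```

Taking the body as a by-name parameter (`thunk: => A`) is what lets the wrapper time arbitrary operations without the call site changing shape.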
- def recordDeltaOperationForTablePath[A](tablePath: String, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: => A): A
Used to report the duration as well as the success or failure of an operation on a tahoePath.
- Attributes
- protected
- Definition Classes
- DeltaLogging
- def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
- Definition Classes
- DatabricksLogging
- def recordFrameProfile[T](group: String, name: String)(thunk: => T): T
- Attributes
- protected
- Definition Classes
- DeltaLogging
- def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = METRIC_OPERATION_DURATION, silent: Boolean = true)(thunk: => S): S
- Definition Classes
- DatabricksLogging
- def recordProductEvent(metric: MetricDefinition with CentralizableMetric, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
- Definition Classes
- DatabricksLogging
- def recordProductUsage(metric: MetricDefinition with CentralizableMetric, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
- Definition Classes
- DatabricksLogging
- def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
- Definition Classes
- DatabricksLogging
- def rewritePartitionFilters(partitionSchema: StructType, resolver: Resolver, partitionFilters: Seq[Expression], partitionColumnPrefixes: Seq[String] = Nil): Seq[Expression]
Rewrite the given partitionFilters to be used for filtering partition values. We need to explicitly resolve the partitioning columns here because the partition columns are stored as keys of a Map type instead of as attributes in the AddFile schema, and thus cannot be resolved automatically.
- partitionFilters
Filters on the partition columns
- partitionColumnPrefixes
The path to the partitionValues column, if it's nested
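The resolution problem described above can be shown in isolation: a filter that references e.g. `DATE` must be matched against the stored map key using the session's resolver, because map keys are not resolved by the analyzer. A plain-Scala analogue of that lookup, with a hypothetical case-insensitive resolver:

```scala
// A Resolver decides whether a filter's column name matches a stored key
// (Spark's Resolver has the same (String, String) => Boolean shape).
type Resolver = (String, String) => Boolean
val caseInsensitive: Resolver = (a, b) => a.equalsIgnoreCase(b)

// Find the actual partitionValues key that a filter's column refers to;
// the real rewrite then points the Expression at that map entry.
def resolvePartitionColumn(
    partitionKeys: Seq[String],
    column: String,
    resolver: Resolver): Option[String] =
  partitionKeys.find(k => resolver(k, column))

println(resolvePartitionColumn(Seq("date", "region"), "DATE", caseInsensitive))
// Some(date)
```

Using the session's resolver rather than plain string equality keeps the rewrite consistent with the analyzer's case-sensitivity setting.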
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- def tombstoneRetentionMillis(metadata: Metadata): Long
How long to keep around logically deleted files before physically deleting them.
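How this retention value is typically applied: a RemoveFile action (tombstone) becomes eligible for physical deletion once its deletion timestamp falls outside the retention window (by default roughly one week, configurable per table). A hedged sketch with a stand-in `RemoveFile`:

```scala
// Minimal stand-in for Delta's RemoveFile action (a tombstone).
case class RemoveFile(path: String, deletionTimestamp: Long)

// Keep only tombstones still within the retention window; older ones
// may be physically deleted (e.g. by VACUUM).
def pruneTombstones(
    tombstones: Seq[RemoveFile],
    nowMillis: Long,
    retentionMillis: Long): Seq[RemoveFile] =
  tombstones.filter(_.deletionTimestamp >= nowMillis - retentionMillis)

val weekMillis = 7L * 24 * 60 * 60 * 1000
val now = 10 * weekMillis
val kept = pruneTombstones(
  Seq(RemoveFile("old", now - 2 * weekMillis), RemoveFile("new", now - 1000)),
  now, weekMillis)
println(kept.map(_.path)) // List(new)
```

The window exists so that concurrent readers of an older snapshot can still find the files it references before they are physically removed.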
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- def withStatusCode[T](statusCode: String, defaultMessage: String, data: Map[String, Any] = Map.empty)(body: => T): T
Report a log entry to indicate that some command is running.
- Definition Classes
- DeltaProgressReporter