object CoordinatedCommitsUtils extends DeltaLogging
Linear Supertypes: DeltaLogging, DatabricksLogging, DeltaProgressReporter, LoggingShims, Logging, AnyRef, Any
Type Members
- implicit class LogStringContext extends AnyRef
  - Definition Classes: LoggingShims
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- val ICT_TABLE_PROPERTY_CONFS: Seq[DeltaConfig[_ >: Option[Long] with Boolean]]
- val ICT_TABLE_PROPERTY_KEYS: Seq[String]
  The main ICT table properties used as dependencies for Coordinated Commits.
- val TABLE_PROPERTY_CONFS: Seq[DeltaConfig[_ >: Map[String, String] with Option[String] <: Equals]]
- val TABLE_PROPERTY_KEYS: Seq[String]
  The main table properties used to instantiate a TableCommitCoordinatorClient.
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def backfillWhenCoordinatedCommitsDisabled(snapshot: Snapshot): Unit
  Backfills any unbackfilled delta files when coordinated commits is not enabled on the table (i.e. no commit-coordinator is present) but unbackfilled delta files remain. This can happen if an error occurred during the CC -> FS commit: the commit-coordinator registered the downgrade commit but failed to backfill it. This method must be invoked before the next commit, as otherwise there would be a gap in the backfilled commit sequence.
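  The "gap" invariant this method protects can be illustrated with a small sketch in plain Scala, independent of Delta's actual classes; `hasBackfillGap` is a hypothetical helper, not the real implementation:

  ```scala
  // Hypothetical sketch of the invariant: before the next file-system commit
  // at version `next`, every version below `next` must already exist as a
  // backfilled delta file, i.e. the backfilled sequence continues without a hole.
  def hasBackfillGap(backfilledVersions: Seq[Long], next: Long): Boolean = {
    val expectedNext = backfilledVersions.sorted.lastOption.map(_ + 1).getOrElse(0L)
    next > expectedNext
  }
  ```

  With versions 0..2 backfilled, committing version 3 is safe; if version 2 was registered but never backfilled, committing 3 would leave a hole, which is exactly the state this method repairs first.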
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- def commitFilesIterator(deltaLog: DeltaLog, startVersion: Long): Iterator[(FileStatus, Long)]
  Returns an iterator of commit files starting from startVersion. If the iterator is consumed beyond what the file system listing shows, this method does a deltaLog.update() to find the latest version and returns listing results up to that version.
  - returns: an iterator of (file status, version) pairs corresponding to commit files
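  The (file status, version) pairing implies deriving a version from each commit file's name. A minimal sketch of that pairing, assuming the Delta log's zero-padded `<version>.json` naming convention; `deltaVersion` and `withVersions` are illustrative helpers, not the real implementation:

  ```scala
  // Backfilled Delta commits are named with a zero-padded version,
  // e.g. "00000000000000000005.json". Parse the version out of such a name.
  val DeltaFileRegex = """(\d+)\.json""".r

  def deltaVersion(fileName: String): Option[Long] = fileName match {
    case DeltaFileRegex(v) => Some(v.toLong)
    case _                 => None
  }

  // Pair each file name with its parsed version, skipping non-commit files,
  // mirroring the (file, version) shape of commitFilesIterator's result.
  def withVersions(fileNames: Seq[String]): Iterator[(String, Long)] =
    fileNames.iterator.flatMap(name => deltaVersion(name).map(name -> _))
  ```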
- def deltaAssert(check: ⇒ Boolean, name: String, msg: String, deltaLog: DeltaLog = null, data: AnyRef = null, path: Option[Path] = None): Unit
  Helper method to check invariants in Delta code. Fails when running in tests; otherwise records a delta assertion event and logs a warning.
  - Attributes: protected
  - Definition Classes: DeltaLogging
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def getCommitCoordinatorClient(spark: SparkSession, deltaLog: DeltaLog, metadata: Metadata, protocol: Protocol, failIfImplUnavailable: Boolean): Option[CommitCoordinatorClient]
- def getCommitsFromCommitCoordinatorWithUsageLogs(deltaLog: DeltaLog, tableCommitCoordinatorClient: TableCommitCoordinatorClient, catalogTableOpt: Option[CatalogTable], startVersion: Long, versionToLoad: Option[Long], isAsyncRequest: Boolean): GetCommitsResponse
  Returns the CommitCoordinatorClient.getCommits response for the given startVersion and versionToLoad.
- def getCommonTags(deltaLog: DeltaLog, tahoeId: String): Map[TagDefinition, String]
  - Definition Classes: DeltaLogging
- def getCoordinatedCommitsConfs(metadata: Metadata): (Option[String], Map[String, String])
- def getDefaultCCConfigurations(spark: SparkSession, withDefaultKey: Boolean = false): Map[String, String]
  Fetches the SparkSession default configurations for Coordinated Commits. The withDefaultKey flag controls whether the keys in the returned map carry the default prefix. For example, if the property 'coordinatedCommits.commitCoordinator-preview' is set to 'dynamodb' in the SparkSession defaults, then:
  - getDefaultCCConfigurations(spark) => Map("delta.coordinatedCommits.commitCoordinator-preview" -> "dynamodb")
  - getDefaultCCConfigurations(spark, withDefaultKey = true) => Map("spark.databricks.delta.properties.defaults.coordinatedCommits.commitCoordinator-preview" -> "dynamodb")
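  The key handling in the example above can be sketched in plain Scala. The prefix strings are taken from the example keys in the description; this is an illustrative sketch over a plain Map, not the real implementation:

  ```scala
  // Sketch of the key rewriting: session defaults live under the
  // "spark.databricks.delta.properties.defaults." prefix; without
  // withDefaultKey, keys are rewritten to plain "delta."-prefixed
  // table properties.
  val SessionDefaultPrefix = "spark.databricks.delta.properties.defaults."
  val TablePropertyPrefix = "delta."

  def defaultCCConfigurations(
      sessionConfs: Map[String, String],
      withDefaultKey: Boolean): Map[String, String] =
    sessionConfs.collect {
      case (key, value) if key.startsWith(SessionDefaultPrefix + "coordinatedCommits.") =>
        val outKey =
          if (withDefaultKey) key
          else TablePropertyPrefix + key.stripPrefix(SessionDefaultPrefix)
        outKey -> value
    }
  ```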
- def getErrorData(e: Throwable): Map[String, Any]
  - Definition Classes: DeltaLogging
- def getExplicitCCConfigurations(properties: Map[String, String]): Map[String, String]
  Extracts the Coordinated Commits configurations from the provided properties.
- def getExplicitICTConfigurations(properties: Map[String, String]): Map[String, String]
  Extracts the ICT configurations from the provided properties.
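  Both extractors amount to selecting a fixed set of keys from a properties map (presumably TABLE_PROPERTY_KEYS and ICT_TABLE_PROPERTY_KEYS respectively). A minimal sketch, with an illustrative key name rather than the real key list:

  ```scala
  // Sketch: keep only the entries of `properties` whose keys appear in `keys`.
  def extractConfigurations(
      properties: Map[String, String],
      keys: Set[String]): Map[String, String] =
    properties.filter { case (k, _) => keys.contains(k) }
  ```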
- def getLastBackfilledFile(deltas: Seq[FileStatus]): Option[FileStatus]
  Returns the last backfilled file in the given list of deltas, if it exists. This could be 1. a backfilled delta, or 2. a minor compaction.
- def getTableCommitCoordinator(spark: SparkSession, deltaLog: DeltaLog, snapshotDescriptor: SnapshotDescriptor, failIfImplUnavailable: Boolean): Option[TableCommitCoordinatorClient]
  Gets the table commit coordinator client from the provided snapshot descriptor. Returns None if this is not a coordinated-commits table. Also returns None when failIfImplUnavailable is false and the commit-coordinator implementation is not available.
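  The two None-returning paths can be sketched as an Option pipeline. All names below, and the String stand-ins for the coordinator and client types, are illustrative, not Delta's actual API:

  ```scala
  // Sketch of the lookup semantics: a missing coordinator name on the table
  // yields None; a registered name with no available implementation yields
  // None only when failIfImplUnavailable is false, otherwise it fails.
  def resolveCoordinator(
      coordinatorNameOnTable: Option[String],
      availableImpls: Map[String, String],
      failIfImplUnavailable: Boolean): Option[String] =
    coordinatorNameOnTable.flatMap { name =>
      availableImpls.get(name) match {
        case found @ Some(_) => found
        case None if failIfImplUnavailable =>
          sys.error(s"Commit coordinator implementation '$name' is unavailable")
        case None => None
      }
    }
  ```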
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
  - Attributes: protected
  - Definition Classes: Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
  - Attributes: protected
  - Definition Classes: Logging
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- def isTraceEnabled(): Boolean
  - Attributes: protected
  - Definition Classes: Logging
- def log: Logger
  - Attributes: protected
  - Definition Classes: Logging
- def logConsole(line: String): Unit
  - Definition Classes: DatabricksLogging
- def logDebug(entry: LogEntry, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: LoggingShims
- def logDebug(entry: LogEntry): Unit
  - Attributes: protected
  - Definition Classes: LoggingShims
- def logDebug(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logDebug(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logError(entry: LogEntry, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: LoggingShims
- def logError(entry: LogEntry): Unit
  - Attributes: protected
  - Definition Classes: LoggingShims
- def logError(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logError(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logInfo(entry: LogEntry, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: LoggingShims
- def logInfo(entry: LogEntry): Unit
  - Attributes: protected
  - Definition Classes: LoggingShims
- def logInfo(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logInfo(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logName: String
  - Attributes: protected
  - Definition Classes: Logging
- def logTrace(entry: LogEntry, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: LoggingShims
- def logTrace(entry: LogEntry): Unit
  - Attributes: protected
  - Definition Classes: LoggingShims
- def logTrace(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logTrace(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logWarning(entry: LogEntry, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: LoggingShims
- def logWarning(entry: LogEntry): Unit
  - Attributes: protected
  - Definition Classes: LoggingShims
- def logWarning(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logWarning(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- def recordDeltaEvent(deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty, data: AnyRef = null, path: Option[Path] = None): Unit
  Used to record the occurrence of a single event or report detailed, operation-specific statistics.
  - path: Used to log the path of the delta table when deltaLog is null.
  - Attributes: protected
  - Definition Classes: DeltaLogging
- def recordDeltaOperation[A](deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A
  Used to report the duration as well as the success or failure of an operation on a deltaLog.
  - Attributes: protected
  - Definition Classes: DeltaLogging
- def recordDeltaOperationForTablePath[A](tablePath: String, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A
  Used to report the duration as well as the success or failure of an operation on a tahoePath.
  - Attributes: protected
  - Definition Classes: DeltaLogging
- def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
  - Definition Classes: DatabricksLogging
- def recordFrameProfile[T](group: String, name: String)(thunk: ⇒ T): T
  - Attributes: protected
  - Definition Classes: DeltaLogging
- def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = METRIC_OPERATION_DURATION, silent: Boolean = true)(thunk: ⇒ S): S
  - Definition Classes: DatabricksLogging
- def recordProductEvent(metric: MetricDefinition with CentralizableMetric, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
  - Definition Classes: DatabricksLogging
- def recordProductUsage(metric: MetricDefinition with CentralizableMetric, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
  - Definition Classes: DatabricksLogging
- def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
  - Definition Classes: DatabricksLogging
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def tablePropertiesPresent(metadata: Metadata): Boolean
  Returns true if any Coordinated Commits-related table properties are present in the metadata.
- def toCCTableIdentifier(catalystTableIdentifierOpt: Option[TableIdentifier]): Optional[TableIdentifier]
  Converts a given Spark Catalyst TableIdentifier to a Coordinated Commits TableIdentifier.
- def toString(): String
  - Definition Classes: AnyRef → Any
- def unbackfilledCommitsPresent(snapshot: Snapshot): Boolean
  Returns true if the snapshot is backed by unbackfilled commits.
- def validateConfigurationsForAlterTableSetPropertiesDeltaCommand(existingConfs: Map[String, String], propertyOverrides: Map[String, String]): Unit
  Validates the Coordinated Commits configurations in explicit command overrides for AlterTableSetPropertiesDeltaCommand. If the table already has Coordinated Commits configurations present, then we do not allow users to override them via ALTER TABLE t SET TBLPROPERTIES .... Users must downgrade the table and then upgrade it with the new Coordinated Commits configurations. If the table is a Coordinated Commits table, or will become one via this ALTER command, then we do not allow users to disable any ICT properties that Coordinated Commits depends on.
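  The first rule above amounts to rejecting any overlap between the table's existing Coordinated Commits keys and the keys being set. A hedged sketch in plain Scala; the key names are illustrative stand-ins, not the real TABLE_PROPERTY_KEYS:

  ```scala
  // Sketch of the rule: once the table carries Coordinated Commits properties,
  // ALTER TABLE ... SET TBLPROPERTIES may not touch them again.
  val ccKeys = Set( // illustrative stand-ins for the real property keys
    "delta.coordinatedCommits.commitCoordinator-preview",
    "delta.coordinatedCommits.commitCoordinatorConf-preview")

  def validateSetProperties(
      existingConfs: Map[String, String],
      propertyOverrides: Map[String, String]): Unit = {
    val tableIsCoordinated = existingConfs.keySet.exists(ccKeys)
    val overridesCC = propertyOverrides.keySet.exists(ccKeys)
    require(!(tableIsCoordinated && overridesCC),
      "Cannot override Coordinated Commits configurations via SET TBLPROPERTIES; " +
        "downgrade the table, then upgrade with the new configurations.")
  }
  ```

  A first-time upgrade (no existing CC keys) passes; re-setting a CC key on an already-coordinated table fails.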
- def validateConfigurationsForAlterTableUnsetPropertiesDeltaCommand(existingConfs: Map[String, String], propKeysToUnset: Seq[String]): Unit
  Validates the configurations to unset for AlterTableUnsetPropertiesDeltaCommand. If the table already has Coordinated Commits configurations present, then we do not allow users to unset them via ALTER TABLE t UNSET TBLPROPERTIES .... Users can only downgrade the table via ALTER TABLE t DROP FEATURE .... We also do not allow users to unset any ICT properties that Coordinated Commits depends on.
- def validateConfigurationsForCreateDeltaTableCommand(spark: SparkSession, tableExists: Boolean, query: Option[LogicalPlan], catalogTableProperties: Map[String, String]): Unit
  Validates the Coordinated Commits configurations in explicit command overrides and default SparkSession properties for CreateDeltaTableCommand. See validateConfigurationsForCreateDeltaTableCommandImpl for details.
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- def withStatusCode[T](statusCode: String, defaultMessage: String, data: Map[String, Any] = Map.empty)(body: ⇒ T): T
  Report a log to indicate some command is running.
  - Definition Classes: DeltaProgressReporter