
org.apache.spark.sql.delta.coordinatedcommits

CoordinatedCommitsUtils

object CoordinatedCommitsUtils extends DeltaLogging

Linear Supertypes
DeltaLogging, DatabricksLogging, DeltaProgressReporter, LoggingShims, Logging, AnyRef, Any

Type Members

  1. implicit class LogStringContext extends AnyRef
    Definition Classes
    LoggingShims

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val ICT_TABLE_PROPERTY_CONFS: Seq[DeltaConfig[_ >: Option[Long] with Boolean]]
  5. val ICT_TABLE_PROPERTY_KEYS: Seq[String]

    The main ICT table properties used as dependencies for Coordinated Commits.

  6. val TABLE_PROPERTY_CONFS: Seq[DeltaConfig[_ >: Map[String, String] with Option[String] <: Equals]]
  7. val TABLE_PROPERTY_KEYS: Seq[String]

    The main table properties used to instantiate a TableCommitCoordinatorClient.

  8. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  9. def backfillWhenCoordinatedCommitsDisabled(snapshot: Snapshot): Unit

    This method takes care of backfilling any unbackfilled delta files when coordinated commits is not enabled on the table (i.e. the commit-coordinator is not present) but unbackfilled delta files still exist. This can happen if an error occurred during a CC -> FS commit where the commit-coordinator was able to register the downgrade commit but failed to backfill it. This method must be invoked before the next commit, as otherwise there would be a gap in the backfilled commit sequence.
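The contiguity requirement above can be sketched in plain Scala. This is an illustrative check, not the Delta implementation: given the set of already-backfilled versions, it reports whether a gap would remain if the next commit proceeded.

```scala
// Hypothetical sketch (names are illustrative, not the Delta source):
// before committing, verify the backfilled versions form a contiguous run
// from 0 to the latest version, so a failed CC -> FS downgrade backfill
// is detected and repaired first.
def hasBackfillGap(backfilledVersions: Seq[Long], latestVersion: Long): Boolean = {
  val backfilled = backfilledVersions.toSet
  // every version up to the latest must already be backfilled
  (0L to latestVersion).exists(v => !backfilled.contains(v))
}
```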

  10. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  11. def commitFilesIterator(deltaLog: DeltaLog, startVersion: Long): Iterator[(FileStatus, Long)]

    Returns an iterator of commit files starting from startVersion. If the iterator is consumed beyond what the file system listing shows, this method does a deltaLog.update() to find the latest version and returns listing results up to that version.

    returns

    an iterator of (file status, version) pair corresponding to commit files
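The version half of each (file status, version) pair comes from the commit file's name. As a hedged sketch, using plain strings in place of Hadoop FileStatus: Delta names backfilled commit files as a 20-digit zero-padded version plus ".json".

```scala
// Illustrative helpers pairing a commit file name with its version,
// mirroring the (FileStatus, Long) tuples this iterator yields.
// The 20-digit padding matches Delta's on-disk naming convention.
def deltaFileName(version: Long): String = f"$version%020d.json"

def versionOf(fileName: String): Long =
  fileName.stripSuffix(".json").toLong
```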

  12. def deltaAssert(check: ⇒ Boolean, name: String, msg: String, deltaLog: DeltaLog = null, data: AnyRef = null, path: Option[Path] = None): Unit

    Helper method to check invariants in Delta code. Fails when running in tests; otherwise records a delta assertion event and logs a warning.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  13. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  14. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  15. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  16. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  17. def getCommitCoordinatorClient(spark: SparkSession, deltaLog: DeltaLog, metadata: Metadata, protocol: Protocol, failIfImplUnavailable: Boolean): Option[CommitCoordinatorClient]
  18. def getCommitsFromCommitCoordinatorWithUsageLogs(deltaLog: DeltaLog, tableCommitCoordinatorClient: TableCommitCoordinatorClient, catalogTableOpt: Option[CatalogTable], startVersion: Long, versionToLoad: Option[Long], isAsyncRequest: Boolean): GetCommitsResponse

    Returns the CommitCoordinatorClient.getCommits response for the given startVersion and versionToLoad, recording usage logs around the call.

  19. def getCommonTags(deltaLog: DeltaLog, tahoeId: String): Map[TagDefinition, String]
    Definition Classes
    DeltaLogging
  20. def getCoordinatedCommitsConfs(metadata: Metadata): (Option[String], Map[String, String])
  21. def getDefaultCCConfigurations(spark: SparkSession, withDefaultKey: Boolean = false): Map[String, String]

    Fetches the SparkSession default configurations for Coordinated Commits. The withDefaultKey flag controls whether the keys in the returned map carry the default prefix. For example, if the property 'coordinatedCommits.commitCoordinator-preview' is set to 'dynamodb' in the SparkSession defaults, then

    • getDefaultCCConfigurations(spark) => Map("delta.coordinatedCommits.commitCoordinator-preview" -> "dynamodb")
    • getDefaultCCConfigurations(spark, withDefaultKey = true) => Map("spark.databricks.delta.properties.defaults.coordinatedCommits.commitCoordinator-preview" -> "dynamodb")
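The key rewriting in the examples above can be sketched without a SparkSession. The prefixes below are assumptions read off the example keys, not pulled from the Delta source:

```scala
// Hedged sketch of the withDefaultKey behavior: session defaults live under
// "spark.databricks.delta.properties.defaults.", and table properties under
// "delta." (both prefixes are assumptions based on this page's examples).
val sessionDefaultPrefix = "spark.databricks.delta.properties.defaults."
val tablePropertyPrefix  = "delta."

def defaultCCConfs(sessionConfs: Map[String, String],
                   withDefaultKey: Boolean = false): Map[String, String] =
  sessionConfs.collect {
    case (k, v) if k.startsWith(sessionDefaultPrefix + "coordinatedCommits.") =>
      val suffix = k.stripPrefix(sessionDefaultPrefix)
      val key = if (withDefaultKey) k else tablePropertyPrefix + suffix
      key -> v
  }
```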
  22. def getErrorData(e: Throwable): Map[String, Any]
    Definition Classes
    DeltaLogging
  23. def getExplicitCCConfigurations(properties: Map[String, String]): Map[String, String]

    Extracts the Coordinated Commits configurations from the provided properties.
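A minimal sketch of this extraction: select only the Coordinated Commits table properties from a full property map. The "delta.coordinatedCommits." prefix is an assumption based on the property keys shown elsewhere on this page.

```scala
// Illustrative filter, not the Delta implementation: keep only entries
// whose keys belong to the Coordinated Commits property namespace.
def explicitCCConfs(properties: Map[String, String]): Map[String, String] =
  properties.filter { case (k, _) => k.startsWith("delta.coordinatedCommits.") }
```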

  24. def getExplicitICTConfigurations(properties: Map[String, String]): Map[String, String]

    Extracts the ICT configurations from the provided properties.

  25. def getLastBackfilledFile(deltas: Seq[FileStatus]): Option[FileStatus]

    Returns the last backfilled file in the given list of deltas, if it exists. This could be either a backfilled delta or a minor compaction file.
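As a hedged sketch using plain file names instead of Hadoop FileStatus: a backfilled delta is named "&lt;20-digit version&gt;.json", and a minor compaction spans a version range. The compaction naming pattern here is an assumption for illustration only.

```scala
// Illustrative scan for the last name that looks backfilled; unbackfilled
// (staged, UUID-suffixed) commit files match neither pattern and are skipped.
val backfilledDelta = raw"\d{20}\.json".r
val minorCompaction = raw"\d{20}\.\d{20}\.compacted\.json".r

def lastBackfilledName(names: Seq[String]): Option[String] =
  names.reverse.find {
    case backfilledDelta() | minorCompaction() => true
    case _ => false
  }
```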

  26. def getTableCommitCoordinator(spark: SparkSession, deltaLog: DeltaLog, snapshotDescriptor: SnapshotDescriptor, failIfImplUnavailable: Boolean): Option[TableCommitCoordinatorClient]

    Get the table commit coordinator client from the provided snapshot descriptor. Returns None if this is not a coordinated-commits table, or if failIfImplUnavailable is false and the commit-coordinator implementation is not available.

  27. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  28. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  29. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  30. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  31. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  32. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  33. def logConsole(line: String): Unit
    Definition Classes
    DatabricksLogging
  34. def logDebug(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  35. def logDebug(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  36. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  38. def logError(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  39. def logError(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  40. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  41. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def logInfo(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  43. def logInfo(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  44. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  45. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  46. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  47. def logTrace(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  48. def logTrace(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  49. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  50. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  51. def logWarning(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  52. def logWarning(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  53. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  54. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  55. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  56. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  57. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  58. def recordDeltaEvent(deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty, data: AnyRef = null, path: Option[Path] = None): Unit

    Used to record the occurrence of a single event or to report detailed, operation-specific statistics.

    path

    Used to log the path of the delta table when deltaLog is null.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  59. def recordDeltaOperation[A](deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A

    Used to report the duration as well as the success or failure of an operation on a deltaLog.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  60. def recordDeltaOperationForTablePath[A](tablePath: String, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A

    Used to report the duration as well as the success or failure of an operation on a tahoePath.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  61. def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  62. def recordFrameProfile[T](group: String, name: String)(thunk: ⇒ T): T
    Attributes
    protected
    Definition Classes
    DeltaLogging
  63. def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = METRIC_OPERATION_DURATION, silent: Boolean = true)(thunk: ⇒ S): S
    Definition Classes
    DatabricksLogging
  64. def recordProductEvent(metric: MetricDefinition with CentralizableMetric, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  65. def recordProductUsage(metric: MetricDefinition with CentralizableMetric, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  66. def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  67. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  68. def tablePropertiesPresent(metadata: Metadata): Boolean

    Returns true if any CoordinatedCommits-related table property is present in the metadata.

  69. def toCCTableIdentifier(catalystTableIdentifierOpt: Option[TableIdentifier]): Optional[TableIdentifier]

    Converts a given Spark CatalystTableIdentifier to a Coordinated Commits TableIdentifier.

  70. def toString(): String
    Definition Classes
    AnyRef → Any
  71. def unbackfilledCommitsPresent(snapshot: Snapshot): Boolean

    Returns true if the snapshot is backed by unbackfilled commits.

  72. def validateConfigurationsForAlterTableSetPropertiesDeltaCommand(existingConfs: Map[String, String], propertyOverrides: Map[String, String]): Unit

    Validates the Coordinated Commits configurations in explicit command overrides for AlterTableSetPropertiesDeltaCommand.

    If the table already has Coordinated Commits configurations present, then we do not allow users to override them via ALTER TABLE t SET TBLPROPERTIES .... Users must downgrade the table and then upgrade it with the new Coordinated Commits configurations. If the table is a Coordinated Commits table or will be one via this ALTER command, then we do not allow users to disable any ICT properties that Coordinated Commits depends on.
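The first rule above can be sketched as a standalone check. The property prefix and exception type are illustrative assumptions, not the Delta implementation:

```scala
// Hedged sketch: reject an ALTER ... SET TBLPROPERTIES that touches
// Coordinated Commits keys on a table that already has them; the user
// must downgrade and re-upgrade instead.
val ccPrefix = "delta.coordinatedCommits."

def validateSetProperties(existingConfs: Map[String, String],
                          propertyOverrides: Map[String, String]): Unit = {
  val tableHasCC  = existingConfs.keys.exists(_.startsWith(ccPrefix))
  val overridesCC = propertyOverrides.keys.exists(_.startsWith(ccPrefix))
  if (tableHasCC && overridesCC) {
    throw new IllegalArgumentException(
      "Cannot override Coordinated Commits configurations via ALTER TABLE; " +
        "downgrade the table first, then upgrade with the new configurations.")
  }
}
```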

  73. def validateConfigurationsForAlterTableUnsetPropertiesDeltaCommand(existingConfs: Map[String, String], propKeysToUnset: Seq[String]): Unit

    Validates the configurations to unset for AlterTableUnsetPropertiesDeltaCommand.

    If the table already has Coordinated Commits configurations present, then we do not allow users to unset them via ALTER TABLE t UNSET TBLPROPERTIES .... Users could only downgrade the table via ALTER TABLE t DROP FEATURE .... We also do not allow users to unset any ICT properties that Coordinated Commits depends on.

  74. def validateConfigurationsForCreateDeltaTableCommand(spark: SparkSession, tableExists: Boolean, query: Option[LogicalPlan], catalogTableProperties: Map[String, String]): Unit

    Validates the Coordinated Commits configurations in explicit command overrides and default SparkSession properties for CreateDeltaTableCommand. See validateConfigurationsForCreateDeltaTableCommandImpl for details.

  75. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  76. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  77. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  78. def withStatusCode[T](statusCode: String, defaultMessage: String, data: Map[String, Any] = Map.empty)(body: ⇒ T): T

    Report a log to indicate some command is running.

    Definition Classes
    DeltaProgressReporter
