org.apache.spark.sql.delta

DeltaConfigsBase

trait DeltaConfigsBase extends DeltaLogging

Contains the list of reservoir (Delta table) configurations and validation checks.

Linear Supertypes

DeltaLogging, DatabricksLogging, DeltaProgressReporter, LoggingShims, Logging, AnyRef, Any

Type Members

  1. implicit class LogStringContext extends AnyRef
    Definition Classes
    LoggingShims

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val AUTO_COMPACT: DeltaConfig[Option[String]]

    Enable auto compaction for a Delta table. When enabled, we will check if files already written to a Delta table can leverage compaction after a commit. If so, we run a post-commit hook to compact the files. It can be enabled by setting the table property to true. Note that the behavior from the table property can be overridden by the SQL config org.apache.spark.sql.delta.sources.DeltaSQLConf.DELTA_AUTO_COMPACT_ENABLED.

  5. val AUTO_OPTIMIZE: DeltaConfig[Option[Boolean]]

    Whether this table will automatically optimize the layout of files during writes.

  6. val CHANGE_DATA_FEED: DeltaConfig[Boolean]

    Enable change data feed output. When enabled, DELETE, UPDATE, and MERGE INTO operations will need to do additional work to output their change data in an efficiently readable format.

  7. val CHECKPOINT_INTERVAL: DeltaConfig[Int]

    How often to checkpoint the delta log.

  8. val CHECKPOINT_POLICY: DeltaConfig[Policy]

    Policy to decide what kind of checkpoint to write to a table.

  9. val CHECKPOINT_RETENTION_DURATION: DeltaConfig[CalendarInterval]

    The shortest duration we have to keep checkpoint files around before deleting them. Note that we'll never delete the most recent checkpoint. We may keep checkpoint files beyond this duration until the next calendar day.

  10. val CHECKPOINT_WRITE_STATS_AS_JSON: DeltaConfig[Boolean]

    When enabled, we will write file statistics in the checkpoint in JSON format as the "stats" column.

  11. val CHECKPOINT_WRITE_STATS_AS_STRUCT: DeltaConfig[Boolean]

    When enabled, we will write file statistics in the checkpoint in the struct format in the "stats_parsed" column. We will also write partition values as a struct as "partitionValues_parsed".

  12. val COLUMN_MAPPING_MAX_ID: DeltaConfig[Long]

    Maximum columnId used in the schema so far for column mapping. Internal property that cannot be set by users.

  13. val COLUMN_MAPPING_MODE: DeltaConfig[DeltaColumnMappingMode]
  14. val COORDINATED_COMMITS_COORDINATOR_CONF: DeltaConfig[Map[String, String]]
  15. val COORDINATED_COMMITS_COORDINATOR_NAME: DeltaConfig[Option[String]]
  16. val COORDINATED_COMMITS_TABLE_CONF: DeltaConfig[Map[String, String]]
  17. val CREATE_TABLE_IGNORE_PROTOCOL_DEFAULTS: DeltaConfig[Boolean]

    Ignore protocol-related configs set in SQL config. When set to true, CREATE TABLE and REPLACE TABLE commands will not consider default protocol versions and table features in the current Spark session.

  18. val DATA_SKIPPING_NUM_INDEXED_COLS: DeltaConfig[Int]

    The number of columns to collect stats on for data skipping. A value of -1 means collecting stats for all columns. Updating this conf does not trigger stats re-collection, but redefines the stats schema of the table, i.e., it will change the behavior of future stats collection (e.g., in append and OPTIMIZE) as well as data skipping (e.g., the column stats beyond this number will be ignored even when they exist).

  19. val DATA_SKIPPING_STATS_COLUMNS: DeltaConfig[Option[String]]

    The names of specific columns to collect stats on for data skipping. If present, it takes precedence over the dataSkippingNumIndexedCols config, and the system will only collect stats for columns that exactly match those specified. If a nested column is specified, the system will collect stats for all leaf fields of that column. If a non-existent column is specified, it will be ignored. Updating this conf does not trigger stats re-collection, but redefines the stats schema of the table, i.e., it will change the behavior of future stats collection (e.g., in append and OPTIMIZE) as well as data skipping (e.g., the column stats not mentioned by this config will be ignored even if they exist).

  20. final val DELTA_UNIVERSAL_FORMAT_CONFIG_PREFIX: String("delta.universalformat.config.")

    The prefix for a category of special configs for Delta universal format, supporting the user-facing config naming convention for different table formats: "delta.universalFormat.config.[iceberg/hudi].[config_name]". Note that config_name can be arbitrary.
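
    As a hedged illustration of this naming convention, the following sketch decomposes such a key into its format and config name. The object and method names here are hypothetical, not part of the Delta API:

    ```scala
    // Sketch only: decompose "delta.universalFormat.config.<format>.<config_name>"
    // keys. The prefix value matches the constant documented above; the helper
    // itself is illustrative.
    object UniFormKeys {
      val DELTA_UNIVERSAL_FORMAT_CONFIG_PREFIX = "delta.universalformat.config."

      /** Split a universal-format key into (format, configName), case-insensitively. */
      def decompose(key: String): Option[(String, String)] = {
        if (!key.toLowerCase.startsWith(DELTA_UNIVERSAL_FORMAT_CONFIG_PREFIX)) None
        else {
          val rest = key.substring(DELTA_UNIVERSAL_FORMAT_CONFIG_PREFIX.length)
          // config_name can itself contain dots, so split on the first dot only.
          rest.split("\\.", 2) match {
            case Array(fmt, conf) => Some((fmt.toLowerCase, conf))
            case _ => None
          }
        }
      }
    }
    ```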

  21. final val DELTA_UNIVERSAL_FORMAT_ICEBERG_CONFIG_PREFIX: String
  22. val ENABLE_DELETION_VECTORS_CREATION: DeltaConfig[Boolean]

    Whether commands modifying this Delta table are allowed to create new deletion vectors.

  23. val ENABLE_EXPIRED_LOG_CLEANUP: DeltaConfig[Boolean]

    Whether to clean up expired checkpoints and delta logs.

  24. val ENABLE_FULL_RETENTION_ROLLBACK: DeltaConfig[Boolean]

    If true, a delta table can be rolled back to any point within LOG_RETENTION. Leaving this on requires converting the oldest delta file we have into a checkpoint, which we do once a day. If doing that operation is too expensive, it can be turned off, but the table can only be rolled back CHECKPOINT_RETENTION_DURATION ago instead of LOG_RETENTION ago.

  25. val ENABLE_TYPE_WIDENING: DeltaConfig[Boolean]

    Whether widening the type of an existing column or field is allowed, either manually using ALTER TABLE CHANGE COLUMN or automatically if automatic schema evolution is enabled.

  26. val ICEBERG_COMPAT_V1_ENABLED: DeltaConfig[Option[Boolean]]
  27. val ICEBERG_COMPAT_V2_ENABLED: DeltaConfig[Option[Boolean]]
  28. val IN_COMMIT_TIMESTAMPS_ENABLED: DeltaConfig[Boolean]
  29. val IN_COMMIT_TIMESTAMP_ENABLEMENT_TIMESTAMP: DeltaConfig[Option[Long]]

    This table property is used to track the timestamp at which inCommitTimestamps were enabled. More specifically, it is the inCommitTimestamp of the commit with the version specified in IN_COMMIT_TIMESTAMP_ENABLEMENT_VERSION.

  30. val IN_COMMIT_TIMESTAMP_ENABLEMENT_VERSION: DeltaConfig[Option[Long]]

    This table property is used to track the version of the table at which inCommitTimestamps were enabled.

  31. val ISOLATION_LEVEL: DeltaConfig[IsolationLevel]

    The isolation level of a table defines the degree to which a transaction must be isolated from modifications made by concurrent transactions. Delta currently supports one isolation level: Serializable.

  32. val IS_APPEND_ONLY: DeltaConfig[Boolean]

    Whether this Delta table is append-only. When append-only, files can't be deleted and existing values can't be updated.

  33. val LOG_RETENTION: DeltaConfig[CalendarInterval]

    The shortest duration we have to keep delta files around before deleting them. We can only delete delta files that are before a compaction. We may keep files beyond this duration until the next calendar day.

  34. val METASTORE_LAST_COMMIT_TIMESTAMP: String
  35. val METASTORE_LAST_UPDATE_VERSION: String
  36. val MIN_READER_VERSION: DeltaConfig[Int]

    The protocol reader version modelled as a table property. This property is *not* stored as a table property in the Metadata action; it is stored as its own action. Having it modelled as a table property makes it easier to upgrade and to view the version.

  37. val MIN_WRITER_VERSION: DeltaConfig[Int]

    The protocol writer version modelled as a table property. This property is *not* stored as a table property in the Metadata action; it is stored as its own action. Having it modelled as a table property makes it easier to upgrade and to view the version.

  38. val OPTIMIZE_WRITE: DeltaConfig[Option[Boolean]]

    Enable optimized writes into a Delta table. Optimized writes add an adaptive shuffle before the write, so that compacted files are written to the Delta table.

  39. val RANDOMIZE_FILE_PREFIXES: DeltaConfig[Boolean]

    Whether to use a random prefix in a file path instead of partition information. This is required for very high volume S3 calls to better be partitioned across S3 servers.

  40. val RANDOM_PREFIX_LENGTH: DeltaConfig[Int]

    The length of the random prefix to use in a file path instead of partition information, when randomized file prefixes are enabled. Longer prefixes allow very high volume S3 calls to be better partitioned across S3 servers.

  41. val REDIRECT_READER_WRITER: DeltaConfig[Option[String]]

    This is the property that describes the table redirection detail. It is a JSON string in the format of the TableRedirectConfiguration class, which includes the following attributes:

    - type (String): The type of redirection.
    - state (String): The current state of the redirection: ENABLE-REDIRECT-IN-PROGRESS, REDIRECT-READY, DROP-REDIRECT-IN-PROGRESS.
    - spec (JSON String): The specification for accessing the redirect destination table. This is a free-form JSON object; each Delta service provider can customize its own implementation.

  42. val REDIRECT_WRITER_ONLY: DeltaConfig[Option[String]]

    This table feature is the same as REDIRECT_READER_WRITER except that it is a writer-only table feature.

  43. val REQUIRE_CHECKPOINT_PROTECTION_BEFORE_VERSION: DeltaConfig[Long]

    This property is used by CheckpointProtectionTableFeature and denotes the version up to which the checkpoints are required to be cleaned up only together with the corresponding commits. If this is not possible, and metadata cleanup creates a new checkpoint prior to requireCheckpointProtectionBeforeVersion, it should validate write support against all protocols included in the commits that are being removed, or else abort. This is needed to make sure that the writer understands how to correctly create a checkpoint for the historic commit.

    Note, this is an internal config and should never be manually altered.

  44. val ROW_TRACKING_ENABLED: DeltaConfig[Boolean]

    Indicates whether Row Tracking is enabled on the table. When this flag is turned on, all rows are guaranteed to have Row IDs and Row Commit Versions assigned to them, and writers are expected to preserve them by materializing them to hidden columns in the data files.

  45. val SAMPLE_RETENTION: DeltaConfig[CalendarInterval]

    The shortest duration we have to keep delta sample files around before deleting them.

  46. val SYMLINK_FORMAT_MANIFEST_ENABLED: DeltaConfig[Boolean]
  47. val TABLE_FEATURE_DROP_TRUNCATE_HISTORY_LOG_RETENTION: DeltaConfig[CalendarInterval]

    The logRetention period to be used in the DROP FEATURE ... TRUNCATE HISTORY command. The value should represent the expected duration of the longest running transaction. Setting this to a lower value than the longest running transaction may corrupt the table.

  48. val TOMBSTONE_RETENTION: DeltaConfig[CalendarInterval]

    The shortest duration we have to keep logically deleted data files around before deleting them physically. This is to prevent failures in stale readers after compactions or partition overwrites.

    Note: this value should be large enough:

    - It should be larger than the longest possible duration of a job if you decide to run "VACUUM" when there are concurrent readers or writers accessing the table.
    - If you are running a streaming query reading from the table, you should make sure the query doesn't stop longer than this value. Otherwise, the query may not be able to restart, as it still needs to read old files.

  49. val TRANSACTION_ID_RETENTION_DURATION: DeltaConfig[Option[CalendarInterval]]

    The shortest duration within which new Snapshots will retain transaction identifiers (i.e. SetTransactions). When a new Snapshot sees a transaction identifier older than or equal to the specified TRANSACTION_ID_RETENTION_DURATION, it considers it expired and ignores it.

  50. val UNIVERSAL_FORMAT_ENABLED_FORMATS: DeltaConfig[Seq[String]]

    Convert the table's metadata into other storage formats after each Delta commit. Only Iceberg is supported for now.

  51. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  52. def buildConfig[T](key: String, defaultValue: String, fromString: (String) ⇒ T, validationFunction: (T) ⇒ Boolean, helpMessage: String, userConfigurable: Boolean = true, alternateConfs: Seq[DeltaConfig[T]] = Seq.empty): DeltaConfig[T]
    Attributes
    protected
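    The buildConfig signature above pairs a parser (fromString) with a validator (validationFunction). The following is a minimal, self-contained sketch of that pattern, using simplified stand-in names rather than the real DeltaConfig machinery:

    ```scala
    // Sketch only: a config entry that knows how to parse and validate its
    // string value. DeltaConfigSketch is a hypothetical stand-in.
    final case class DeltaConfigSketch[T](
        key: String,
        defaultValue: String,
        fromString: String => T,
        validationFunction: T => Boolean,
        helpMessage: String) {

      /** Parse a raw string (falling back to the default) and validate it. */
      def fromStringValue(raw: Option[String]): T = {
        val v = fromString(raw.getOrElse(defaultValue))
        require(validationFunction(v), s"$key $helpMessage")
        v
      }
    }

    object DeltaConfigSketch {
      // Example entry mirroring a positive-int config such as randomPrefixLength.
      val RandomPrefixLength: DeltaConfigSketch[Int] = DeltaConfigSketch(
        key = "delta.randomPrefixLength",
        defaultValue = "2",
        fromString = _.toInt,
        validationFunction = _ > 0,
        helpMessage = "needs to be greater than 0.")
    }
    ```

    An invalid value fails validation at read time (require throws IllegalArgumentException), which is the general shape of the validation checks this trait provides.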
  53. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  54. def deltaAssert(check: ⇒ Boolean, name: String, msg: String, deltaLog: DeltaLog = null, data: AnyRef = null, path: Option[Path] = None): Unit

    Helper method to check invariants in Delta code. Fails when running in tests, records a delta assertion event and logs a warning otherwise.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  55. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  56. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  57. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  58. def getAllConfigs: Map[String, DeltaConfig[_]]

    Return all Delta configurations, including both set and unset ones.

  59. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  60. def getCommonTags(deltaLog: DeltaLog, tahoeId: String): Map[TagDefinition, String]
    Definition Classes
    DeltaLogging
  61. def getErrorData(e: Throwable): Map[String, Any]
    Definition Classes
    DeltaLogging
  62. def getMilliSeconds(i: CalendarInterval): Long
  63. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  64. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  65. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  66. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  67. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  68. def isValidIntervalConfigValue(i: CalendarInterval): Boolean

    For configs accepting an interval, the user-specified string must obey the following:

    - It must not use months or years, since an interval like this is not deterministic.
    - The microseconds parsed from the string value must be non-negative.

    The method returns whether a CalendarInterval satisfies these requirements.
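
    A minimal sketch of these two rules, assuming a simplified stand-in for Spark's CalendarInterval (months, days, microseconds); this is illustrative, not the actual Delta implementation:

    ```scala
    // Simplified stand-in for org.apache.spark.sql.catalyst.util.CalendarInterval.
    final case class CalendarInterval(months: Int, days: Int, microseconds: Long)

    object IntervalCheck {
      val MICROS_PER_DAY: Long = 24L * 60 * 60 * 1000 * 1000

      // Mirrors the documented rules: no months/years, and the total
      // microseconds (days converted, plus the microsecond part) must be >= 0.
      def isValidIntervalConfigValue(i: CalendarInterval): Boolean =
        i.months == 0 && i.days * MICROS_PER_DAY + i.microseconds >= 0
    }
    ```

    So "interval 7 days" would pass, while "interval 1 month" would be rejected as non-deterministic.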

  69. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  70. def logConsole(line: String): Unit
    Definition Classes
    DatabricksLogging
  71. def logDebug(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  72. def logDebug(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  73. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  74. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  75. def logError(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  76. def logError(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  77. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  78. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  79. def logInfo(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  80. def logInfo(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  81. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  82. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  83. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  84. def logTrace(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  85. def logTrace(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  86. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  87. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  88. def logWarning(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  89. def logWarning(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  90. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  91. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  92. def mergeGlobalConfigs(sqlConfs: SQLConf, tableConf: Map[String, String], ignoreProtocolConfsOpt: Option[Boolean] = None): Map[String, String]

    Table properties for new tables can be specified through SQL configurations using the sqlConfPrefix and TableFeatureProtocolUtils.DEFAULT_FEATURE_PROP_PREFIX. This method checks whether any of those configurations exist among the SQL configurations and merges them with the user-provided configurations. User-provided configs take precedence.

    When ignoreProtocolConfsOpt is true (or false), this method will not (or will) copy protocol-related configs. If ignoreProtocolConfsOpt is None, whether to copy protocol-related configs depends on the presence of DeltaConfigs.CREATE_TABLE_IGNORE_PROTOCOL_DEFAULTS (delta.ignoreProtocolDefaults) in the SQL or table configs.

    "Protocol-related configs" includes `delta.minReaderVersion`, `delta.minWriterVersion`, `delta.ignoreProtocolDefaults`, and anything that starts with `delta.feature.`.

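    The precedence rule can be sketched as follows. The object name and the simplification (session defaults as a plain Map, no protocol-config handling) are assumptions for illustration only:

    ```scala
    // Sketch only: merge session-level defaults with table-level configs,
    // letting user-provided (table-level) configs win.
    object ConfigMerge {
      val sqlConfPrefix = "spark.databricks.delta.properties.defaults."

      def mergeGlobalConfigs(
          sqlConfs: Map[String, String],
          tableConf: Map[String, String]): Map[String, String] = {
        // Translate session defaults into table-property keys...
        val globalDefaults = sqlConfs.collect {
          case (k, v) if k.startsWith(sqlConfPrefix) =>
            "delta." + k.stripPrefix(sqlConfPrefix) -> v
        }
        // ...then overlay the user-provided configs, which take precedence.
        globalDefaults ++ tableConf
      }
    }
    ```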
  93. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  94. def normalizeConfigKey(propKey: Option[String]): Option[String]

    Normalize the specified property key if the key is for a Delta config.

  95. def normalizeConfigKeys(propKeys: Seq[String]): Seq[String]

    Normalize the specified property keys if a key is for a Delta config.
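
    A hedged sketch of the normalization idea, assuming a small canonical-key table; the real methods consult the registered DeltaConfig entries rather than a hard-coded map:

    ```scala
    // Sketch only: map case-insensitive variants of known Delta config keys
    // back to their canonical spelling, leaving unknown keys untouched.
    object KeyNormalize {
      private val canonical: Map[String, String] =
        Seq("delta.appendOnly", "delta.randomPrefixLength")
          .map(k => k.toLowerCase -> k).toMap

      def normalizeConfigKey(propKey: Option[String]): Option[String] =
        propKey.map(k => canonical.getOrElse(k.toLowerCase, k))

      def normalizeConfigKeys(propKeys: Seq[String]): Seq[String] =
        propKeys.map(k => canonical.getOrElse(k.toLowerCase, k))
    }
    ```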

  96. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  97. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  98. def parseCalendarInterval(s: String): CalendarInterval

    Convert a string to CalendarInterval. This method is case-insensitive and will throw IllegalArgumentException when the input string is not a valid interval.

    TODO Remove this method and use CalendarInterval.fromCaseInsensitiveString instead when upgrading Spark. This is a forked version of CalendarInterval.fromCaseInsensitiveString, which will be available in the next Spark release (see SPARK-27735).

    Exceptions thrown

    IllegalArgumentException if the string is not a valid interval.
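
    For illustration only, here is a toy case-insensitive parser in the spirit of the description. The unit grammar and conversions are assumptions, and the real Spark/Delta parsing is considerably richer:

    ```scala
    // Toy sketch: parse strings like "interval 2 weeks" case-insensitively
    // into total microseconds, throwing IllegalArgumentException otherwise.
    object IntervalParse {
      private val MicrosPerUnit: Map[String, Long] = Map(
        "second" -> 1000000L,
        "minute" -> 60L * 1000000,
        "hour"   -> 3600L * 1000000,
        "day"    -> 86400L * 1000000,
        "week"   -> 7L * 86400 * 1000000)

      def parseCalendarInterval(s: String): Long = {
        // (?i) makes the whole pattern case-insensitive; the trailing s? tolerates plurals.
        val pattern = """(?i)interval\s+(\d+)\s+(\w+?)s?""".r
        s.trim match {
          case pattern(n, unit) if MicrosPerUnit.contains(unit.toLowerCase) =>
            n.toLong * MicrosPerUnit(unit.toLowerCase)
          case _ =>
            throw new IllegalArgumentException(s"Invalid interval string: $s")
        }
      }
    }
    ```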

  99. def recordDeltaEvent(deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty, data: AnyRef = null, path: Option[Path] = None): Unit

    Used to record the occurrence of a single event or report detailed, operation specific statistics.

    path

    Used to log the path of the delta table when deltaLog is null.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  100. def recordDeltaOperation[A](deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A

    Used to report the duration as well as the success or failure of an operation on a deltaLog.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  101. def recordDeltaOperationForTablePath[A](tablePath: String, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A

    Used to report the duration as well as the success or failure of an operation on a tahoePath.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  102. def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  103. def recordFrameProfile[T](group: String, name: String)(thunk: ⇒ T): T
    Attributes
    protected
    Definition Classes
    DeltaLogging
  104. def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = METRIC_OPERATION_DURATION, silent: Boolean = true)(thunk: ⇒ S): S
    Definition Classes
    DatabricksLogging
  105. def recordProductEvent(metric: MetricDefinition with CentralizableMetric, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  106. def recordProductUsage(metric: MetricDefinition with CentralizableMetric, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  107. def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  108. val sqlConfPrefix: String

    A global default value set as a SQLConf will overwrite the default value of a DeltaConfig. For example, a user can run:

    set spark.databricks.delta.properties.defaults.randomPrefixLength = 5

    This setting will be populated to a Delta table at creation time and overwrites the default value of delta.randomPrefixLength.

    We accept these SQLConfs as strings and only perform validation in DeltaConfig. All the DeltaConfigs set in SQLConf should adopt the same prefix.

  109. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  110. def toString(): String
    Definition Classes
    AnyRef → Any
  111. def validateConfigurations(configurations: Map[String, String]): Map[String, String]

    Validates specified configurations and returns the normalized key -> value map.

  112. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  113. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  114. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  115. def withStatusCode[T](statusCode: String, defaultMessage: String, data: Map[String, Any] = Map.empty)(body: ⇒ T): T

    Report a log message to indicate that some command is running.

    Definition Classes
    DeltaProgressReporter
