object DeltaConfigs extends DeltaLogging
Contains a list of reservoir configs and validation checks.
Linear supertypes: DeltaLogging, DatabricksLogging, DeltaProgressReporter, Logging, AnyRef, Any
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- val AUTO_OPTIMIZE: DeltaConfig[Boolean]
  Whether this table will automagically optimize the layout of files during writes.
- val CHECKPOINT_INTERVAL: DeltaConfig[Int]
  How often to checkpoint the delta log.
- val CHECKPOINT_RETENTION_DURATION: DeltaConfig[CalendarInterval]
  The shortest duration we have to keep checkpoint files around before deleting them. Note that we'll never delete the most recent checkpoint. We may keep checkpoint files beyond this duration until the next calendar day.
- val DATA_SKIPPING_NUM_INDEXED_COLS: DeltaConfig[Int]
  The number of columns to collect stats on for data skipping. A value of -1 means collecting stats for all columns. Updating this conf does not trigger stats re-collection, but it redefines the stats schema of the table, i.e., it will change the behavior of future stats collection (e.g., in append and OPTIMIZE) as well as data skipping (e.g., the column stats beyond this number will be ignored even when they exist).
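As a minimal sketch of how such a column cap could be interpreted, the snippet below selects the columns eligible for stats collection. `DataSkippingSketch` and `indexedColumns` are illustrative names, not Delta APIs; only the -1 semantics come from the description above.

```scala
// Sketch (not the Delta implementation): how a numIndexedCols value selects
// the columns eligible for stats collection. A value of -1 means all columns.
object DataSkippingSketch {
  def indexedColumns(allColumns: Seq[String], numIndexedCols: Int): Seq[String] =
    if (numIndexedCols < 0) allColumns       // -1: stats on every column
    else allColumns.take(numIndexedCols)     // otherwise: only the first N columns
}
```

Columns beyond the cap keep any previously collected stats on disk, but, per the description, those stats are ignored by data skipping.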
- val ENABLE_EXPIRED_LOG_CLEANUP: DeltaConfig[Boolean]
  Whether to clean up expired checkpoints and delta logs.
- val ENABLE_FULL_RETENTION_ROLLBACK: DeltaConfig[Boolean]
  If true, a delta table can be rolled back to any point within LOG_RETENTION. Leaving this on requires converting the oldest delta file we have into a checkpoint, which we do once a day. If that operation is too expensive, it can be turned off, but then the table can only be rolled back as far as CHECKPOINT_RETENTION_DURATION instead of LOG_RETENTION.
- val IS_APPEND_ONLY: DeltaConfig[Boolean]
  Whether this Delta table is append-only. Files can't be deleted, and values can't be updated.
- val LOG_RETENTION: DeltaConfig[CalendarInterval]
  The shortest duration we have to keep delta files around before deleting them. We can only delete delta files that are before a compaction. We may keep files beyond this duration until the next calendar day.
- val RANDOMIZE_FILE_PREFIXES: DeltaConfig[Boolean]
  Whether to use a random prefix in a file path instead of partition information. This is required for very high volume S3 workloads so that calls are better partitioned across S3 servers.
- val RANDOM_PREFIX_LENGTH: DeltaConfig[Int]
  The length of the random prefix in a file path, used when RANDOMIZE_FILE_PREFIXES is enabled.
- val SAMPLE_RETENTION: DeltaConfig[CalendarInterval]
  The shortest duration we have to keep delta sample files around before deleting them.
- val SYMLINK_FORMAT_MANIFEST_ENABLED: DeltaConfig[Boolean]
- val TOMBSTONE_RETENTION: DeltaConfig[CalendarInterval]
  The shortest duration we have to keep logically deleted data files around before deleting them physically. This is to prevent failures in stale readers after compactions or partition overwrites.
  Note: this value should be large enough:
  - It should be larger than the longest possible duration of a job if you decide to run "VACUUM" when there are concurrent readers or writers accessing the table.
  - If you are running a streaming query reading from the table, you should make sure the query doesn't stop for longer than this value. Otherwise, the query may not be able to restart, as it still needs to read old files.
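The retention rule above can be sketched as a simple timestamp comparison. This is illustrative only, not Delta's VACUUM code; `TombstoneSketch` and its parameter names are assumptions.

```scala
// Sketch (illustrative): a logically deleted data file may only be physically
// removed once its deletion timestamp is older than the retention window.
object TombstoneSketch {
  def eligibleForVacuum(deletionTimestampMs: Long,
                        nowMs: Long,
                        retentionMs: Long): Boolean =
    nowMs - deletionTimestampMs > retentionMs
}
```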
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def getMilliSeconds(i: CalendarInterval): Long
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
  - Attributes: protected
  - Definition Classes: Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
  - Attributes: protected
  - Definition Classes: Logging
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- def isTraceEnabled(): Boolean
  - Attributes: protected
  - Definition Classes: Logging
- def isValidIntervalConfigValue(i: CalendarInterval): Boolean
  For configs accepting an interval, the user-specified string must obey:
  - It doesn't use months or years, since an interval like this is not deterministic.
  - The microseconds parsed from the string value must be non-negative.
  The method returns whether a CalendarInterval satisfies these requirements.
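The two rules above can be sketched without Spark by using a stand-in for CalendarInterval, which stores months and microseconds separately. `Interval` and `IntervalCheckSketch` are illustrative names, not the Delta or Spark types.

```scala
// Stand-in for Spark's CalendarInterval: months and microseconds are kept
// separately because a month has no fixed length in microseconds.
case class Interval(months: Int, microseconds: Long)

object IntervalCheckSketch {
  // Valid iff it uses no months/years (non-deterministic) and is non-negative.
  def isValid(i: Interval): Boolean =
    i.months == 0 && i.microseconds >= 0
}
```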
- def log: Logger
  - Attributes: protected
  - Definition Classes: Logging
- def logConsole(line: String): Unit
  - Definition Classes: DatabricksLogging
- def logDebug(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logDebug(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logError(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logError(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logInfo(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logInfo(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logName: String
  - Attributes: protected
  - Definition Classes: Logging
- def logTrace(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logTrace(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logWarning(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logWarning(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def mergeGlobalConfigs(sqlConfs: SQLConf, tableConf: Map[String, String], protocol: Protocol): Map[String, String]
  Fetch global default values from SQLConf.
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def normalizeConfigKey(propKey: Option[String]): Option[String]
  Normalize the specified property key if the key is for a Delta config.
- def normalizeConfigKeys(propKeys: Seq[String]): Seq[String]
  Normalize the specified property keys for any key that is a Delta config.
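One plausible shape for this normalization is sketched below: keys are matched case-insensitively on the "delta." prefix, and matching keys are rewritten with the canonical lowercase prefix. The exact canonical form is defined by DeltaConfig; `KeyNormalizeSketch` and its behavior are assumptions, not the Delta implementation.

```scala
// Sketch (illustrative): normalize a property key if it belongs to Delta,
// i.e. if it carries the "delta." prefix in any casing.
object KeyNormalizeSketch {
  private val prefix = "delta."

  def normalize(propKey: Option[String]): Option[String] =
    propKey.map { key =>
      if (key.toLowerCase(java.util.Locale.ROOT).startsWith(prefix))
        prefix + key.substring(prefix.length)  // canonical prefix, rest unchanged
      else key                                 // not a Delta config: leave as-is
    }
}
```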
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- def parseCalendarInterval(s: String): CalendarInterval
  Convert a string to CalendarInterval. This method is case-insensitive and will throw IllegalArgumentException when the input string is not a valid interval.
  TODO: Remove this method and use CalendarInterval.fromCaseInsensitiveString instead when upgrading Spark. This is a forked version of CalendarInterval.fromCaseInsensitiveString, which will be available in the next Spark release (see SPARK-27735).
  - Exceptions thrown
    - IllegalArgumentException if the string is not a valid interval.
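A toy version of such a case-insensitive interval parser, supporting only day/hour/minute/second units and returning microseconds, might look like the following. This is not Spark's parser; `IntervalParseSketch` and its restricted grammar are assumptions for illustration.

```scala
// Sketch (illustrative): parse strings like "INTERVAL 2 hours 30 minutes"
// case-insensitively into microseconds, throwing IllegalArgumentException
// on anything that doesn't fit the restricted grammar.
object IntervalParseSketch {
  private val unitMicros = Map(
    "second" -> 1000000L,
    "minute" -> 60L * 1000000L,
    "hour"   -> 3600L * 1000000L,
    "day"    -> 86400L * 1000000L)

  def parseMicros(s: String): Long = {
    val tokens = s.trim.toLowerCase(java.util.Locale.ROOT).split("\\s+").toList
    tokens match {
      case "interval" :: rest if rest.nonEmpty =>
        rest.grouped(2).map {
          case List(n, unit) =>
            val u = unit.stripSuffix("s")  // accept "day" and "days"
            unitMicros.get(u) match {
              case Some(m) => n.toLong * m
              case None    => throw new IllegalArgumentException(s"bad unit: $unit")
            }
          case other => throw new IllegalArgumentException(s"dangling tokens: $other")
        }.sum
      case _ => throw new IllegalArgumentException("must start with 'interval'")
    }
  }
}
```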
- def recordDeltaEvent(deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty, data: AnyRef = null): Unit
  Used to record the occurrence of a single event or report detailed, operation specific statistics.
  - Attributes: protected
  - Definition Classes: DeltaLogging
- def recordDeltaOperation[A](deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A
  Used to report the duration as well as the success or failure of an operation.
  - Attributes: protected
  - Definition Classes: DeltaLogging
- def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
  - Definition Classes: DatabricksLogging
- def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = null, silent: Boolean = true)(thunk: ⇒ S): S
  - Definition Classes: DatabricksLogging
- def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
  - Definition Classes: DatabricksLogging
- val sqlConfPrefix: String
  A global default value set as a SQLConf will overwrite the default value of a DeltaConfig. For example, a user can run:
  set spark.databricks.delta.properties.defaults.randomPrefixLength = 5
  This setting will be populated to a Delta table at creation time and overrides the default value of delta.randomPrefixLength.
  We accept these SQLConfs as strings and only perform validation in DeltaConfig. All the DeltaConfigs set in SQLConf should adopt the same prefix.
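The prefix relationship described above can be sketched as a pure key mapping: a SQLConf default key maps to the corresponding table property. The prefix string and the randomPrefixLength example come from the text; `DefaultsPrefixSketch` and `toTableProperty` are illustrative names, not Delta APIs.

```scala
// Sketch (illustrative): map a "defaults" SQLConf key to the table property
// it seeds at table creation time.
object DefaultsPrefixSketch {
  val sqlConfPrefix = "spark.databricks.delta.properties.defaults."

  def toTableProperty(sqlConfKey: String): Option[String] =
    if (sqlConfKey.startsWith(sqlConfPrefix))
      Some("delta." + sqlConfKey.stripPrefix(sqlConfPrefix))
    else None  // not a Delta defaults key
}
```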
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toString(): String
  - Definition Classes: AnyRef → Any
- def validateConfigurations(configurations: Map[String, String]): Map[String, String]
  Validates the specified configurations and returns the normalized key -> value map.
- def verifyProtocolVersionRequirements(configurations: Map[String, String], current: Protocol): Unit
  Verify that the protocol version of the table satisfies the version requirements of all the configurations to be set.
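The check above amounts to comparing the table's current reader/writer protocol versions against the minimum versions a configuration requires. The sketch below uses a stand-in `Protocol` case class; the field names and the comparison are assumptions for illustration, not Delta's implementation.

```scala
// Stand-in for the table protocol: minimum reader and writer versions.
case class Protocol(minReaderVersion: Int, minWriterVersion: Int)

object ProtocolCheckSketch {
  // A table satisfies a config's requirement iff both versions are new enough.
  def satisfies(current: Protocol, required: Protocol): Boolean =
    current.minReaderVersion >= required.minReaderVersion &&
    current.minWriterVersion >= required.minWriterVersion
}
```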
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- def withStatusCode[T](statusCode: String, defaultMessage: String, data: Map[String, Any] = Map.empty)(body: ⇒ T): T
  Report a log to indicate some command is running.
  - Definition Classes: DeltaProgressReporter