org.apache.spark.sql.delta

DeltaConfigs

object DeltaConfigs extends DeltaLogging

Contains a list of reservoir configs and validation checks.

Linear Supertypes

DeltaLogging, DatabricksLogging, DeltaProgressReporter, Logging, AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val AUTO_OPTIMIZE: DeltaConfig[Boolean]

    Whether this table will automatically optimize the layout of files during writes.

  5. val CHECKPOINT_INTERVAL: DeltaConfig[Int]

    How often to checkpoint the delta log.
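As an illustration of how a per-table config like this is typically set, here is a hedged sketch. It assumes a running SparkSession with Delta Lake configured, an existing Delta table named `events` (hypothetical), and the standard table property key `delta.checkpointInterval`:

```scala
// Sketch: checkpoint the delta log every 5 commits instead of the default.
// "events" is a hypothetical table name; "delta.checkpointInterval" is
// assumed to be the property key backing CHECKPOINT_INTERVAL.
spark.sql("""
  ALTER TABLE events
  SET TBLPROPERTIES ('delta.checkpointInterval' = '5')
""")
```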

  6. val CHECKPOINT_RETENTION_DURATION: DeltaConfig[CalendarInterval]

    The shortest duration we have to keep checkpoint files around before deleting them. Note that we'll never delete the most recent checkpoint. We may keep checkpoint files beyond this duration until the next calendar day.

  7. val DATA_SKIPPING_NUM_INDEXED_COLS: DeltaConfig[Int]

    The number of columns to collect stats on for data skipping. A value of -1 means collecting stats for all columns. Updating this conf does not trigger stats re-collection, but redefines the stats schema of table, i.e., it will change the behavior of future stats collection (e.g., in append and OPTIMIZE) as well as data skipping (e.g., the column stats beyond this number will be ignored even when they exist).

  8. val ENABLE_EXPIRED_LOG_CLEANUP: DeltaConfig[Boolean]

    Whether to clean up expired checkpoints and delta logs.

  9. val ENABLE_FULL_RETENTION_ROLLBACK: DeltaConfig[Boolean]

    If true, a delta table can be rolled back to any point within LOG_RETENTION. Leaving this on requires converting the oldest delta file we have into a checkpoint, which we do once a day. If doing that operation is too expensive, it can be turned off, but the table can only be rolled back CHECKPOINT_RETENTION_DURATION ago instead of LOG_RETENTION ago.

  10. val IS_APPEND_ONLY: DeltaConfig[Boolean]

    Whether this Delta table is append-only. Files can't be deleted, and values can't be updated.
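A hedged sketch of enabling this config on an existing table, assuming the standard property key `delta.appendOnly` and a hypothetical table named `events`:

```scala
// Sketch: make the table append-only. "delta.appendOnly" is assumed to be
// the property key backing IS_APPEND_ONLY.
spark.sql("""
  ALTER TABLE events
  SET TBLPROPERTIES ('delta.appendOnly' = 'true')
""")
// After this, appends (INSERT INTO events ...) continue to work, while
// operations that delete or update existing files are expected to fail.
```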

  11. val LOG_RETENTION: DeltaConfig[CalendarInterval]

    The shortest duration we have to keep delta files around before deleting them. We can only delete delta files that are before a compaction. We may keep files beyond this duration until the next calendar day.

  12. val RANDOMIZE_FILE_PREFIXES: DeltaConfig[Boolean]

    Whether to use a random prefix in a file path instead of partition information. This is required for very high volume S3 calls to better be partitioned across S3 servers.

  13. val RANDOM_PREFIX_LENGTH: DeltaConfig[Int]

    The length of the random prefix to use in a file path instead of partition information. This is required for very high volume S3 calls to better be partitioned across S3 servers.

  14. val SAMPLE_RETENTION: DeltaConfig[CalendarInterval]

    The shortest duration we have to keep delta sample files around before deleting them.

  15. val SYMLINK_FORMAT_MANIFEST_ENABLED: DeltaConfig[Boolean]
  16. val TOMBSTONE_RETENTION: DeltaConfig[CalendarInterval]

    The shortest duration we have to keep logically deleted data files around before deleting them physically. This is to prevent failures in stale readers after compactions or partition overwrites.

    Note: this value should be large enough:

    - It should be larger than the longest possible duration of a job if you decide to run VACUUM when there are concurrent readers or writers accessing the table.
    - If you are running a streaming query reading from the table, you should make sure the query doesn't stop for longer than this value. Otherwise, the query may not be able to restart, as it still needs to read old files.
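A hedged sketch of lengthening this retention before vacuuming, assuming the standard property key `delta.deletedFileRetentionDuration` and a hypothetical table named `events`:

```scala
// Sketch: keep tombstoned (logically deleted) files for 14 days so that
// long-running or restarted readers can still resolve old snapshots.
// "delta.deletedFileRetentionDuration" is assumed to be the property key
// backing TOMBSTONE_RETENTION.
spark.sql("""
  ALTER TABLE events
  SET TBLPROPERTIES ('delta.deletedFileRetentionDuration' = 'interval 14 days')
""")
// A subsequent VACUUM would then physically remove only files that were
// logically deleted more than 14 days ago.
spark.sql("VACUUM events")
```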

  17. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  18. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  19. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  20. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  21. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  22. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  23. def getMilliSeconds(i: CalendarInterval): Long
  24. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  25. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  26. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  27. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  28. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  29. def isValidIntervalConfigValue(i: CalendarInterval): Boolean

    For configs accepting an interval, we require that the user-specified string obeys:

    - It doesn't use months or years, since an interval like that is not deterministic.
    - The microseconds parsed from the string value must be non-negative.

    The method returns whether a CalendarInterval satisfies these requirements.
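A hedged sketch of the two rules above, assuming the Delta and Spark jars are on the classpath:

```scala
// Sketch: checking interval values against the rules documented above.
import org.apache.spark.sql.delta.DeltaConfigs

// Days are deterministic and non-negative, so this should be accepted.
val ok = DeltaConfigs.parseCalendarInterval("interval 7 days")
DeltaConfigs.isValidIntervalConfigValue(ok)   // expected: true

// Months are not deterministic, so this should be rejected.
val bad = DeltaConfigs.parseCalendarInterval("interval 1 month")
DeltaConfigs.isValidIntervalConfigValue(bad)  // expected: false
```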

  30. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  31. def logConsole(line: String): Unit
    Definition Classes
    DatabricksLogging
  32. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  35. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  36. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  38. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  39. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  40. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  41. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  43. def mergeGlobalConfigs(sqlConfs: SQLConf, tableConf: Map[String, String], protocol: Protocol): Map[String, String]

    Fetch global default values from SQLConf.

  44. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  45. def normalizeConfigKey(propKey: Option[String]): Option[String]

    Normalize the specified property key if the key is for a Delta config.

  46. def normalizeConfigKeys(propKeys: Seq[String]): Seq[String]

    Normalize the specified property keys if they are for Delta configs.

  47. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  48. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  49. def parseCalendarInterval(s: String): CalendarInterval

    Convert a string to CalendarInterval. This method is case-insensitive and will throw IllegalArgumentException when the input string is not a valid interval.

    TODO Remove this method and use CalendarInterval.fromCaseInsensitiveString instead when upgrading Spark. This is a fork version of CalendarInterval.fromCaseInsensitiveString which will be available in the next Spark release (See SPARK-27735).

    Exceptions thrown

    IllegalArgumentException if the string is not a valid interval.
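A hedged usage sketch, assuming the Delta and Spark jars are on the classpath:

```scala
// Sketch: parsing interval strings with the case-insensitive parser.
import org.apache.spark.sql.delta.DeltaConfigs
import org.apache.spark.unsafe.types.CalendarInterval

val i: CalendarInterval = DeltaConfigs.parseCalendarInterval("interval 2 hours")
// Case-insensitive: "INTERVAL 2 HOURS" parses to the same value.
// A malformed string such as "2 hours" (missing the "interval" keyword)
// throws IllegalArgumentException.
```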

  50. def recordDeltaEvent(deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty, data: AnyRef = null): Unit

    Used to record the occurrence of a single event or report detailed, operation-specific statistics.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  51. def recordDeltaOperation[A](deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A

    Used to report the duration as well as the success or failure of an operation.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  52. def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  53. def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = null, silent: Boolean = true)(thunk: ⇒ S): S
    Definition Classes
    DatabricksLogging
  54. def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  55. val sqlConfPrefix: String

    A global default value set as a SQLConf will overwrite the default value of a DeltaConfig. For example, a user can run:

        set spark.databricks.delta.properties.defaults.randomPrefixLength = 5

    This setting will be populated to a Delta table during its creation time and overwrites the default value of delta.randomPrefixLength.

    We accept these SQLConfs as strings and only perform validation in DeltaConfig. All the DeltaConfigs set in SQLConf should adopt the same prefix.
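A hedged sketch of the mechanism described above, assuming a running SparkSession with Delta Lake configured and a hypothetical table name `t`:

```scala
// Sketch: a session-level default that is baked into tables at creation time.
// The prefix "spark.databricks.delta.properties.defaults." is the sqlConfPrefix
// described above; "randomPrefixLength" maps to delta.randomPrefixLength.
spark.conf.set("spark.databricks.delta.properties.defaults.randomPrefixLength", "5")

// Tables created in this session start with delta.randomPrefixLength = 5,
// unless the property is set explicitly in the CREATE TABLE statement.
spark.sql("CREATE TABLE t (id LONG) USING delta")
```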

  56. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  57. def toString(): String
    Definition Classes
    AnyRef → Any
  58. def validateConfigurations(configurations: Map[String, String]): Map[String, String]

    Validates specified configurations and returns the normalized key -> value map.

  59. def verifyProtocolVersionRequirements(configurations: Map[String, String], current: Protocol): Unit

    Verify that the protocol version of the table satisfies the version requirements of all the configurations to be set.

  60. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  61. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  62. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  63. def withStatusCode[T](statusCode: String, defaultMessage: String, data: Map[String, Any] = Map.empty)(body: ⇒ T): T

    Report a log to indicate some command is running.

    Definition Classes
    DeltaProgressReporter

Inherited from DeltaLogging

Inherited from DatabricksLogging

Inherited from DeltaProgressReporter

Inherited from Logging

Inherited from AnyRef

Inherited from Any
