object DeltaDataSource extends DatabricksLogging

Linear Supertypes
DatabricksLogging, AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final val CDC_ENABLED_KEY: String("readChangeFeed")
  5. final val CDC_ENABLED_KEY_LEGACY: String("readChangeData")
  6. final val CDC_END_TIMESTAMP_KEY: String("endingTimestamp")
  7. final val CDC_END_VERSION_KEY: String("endingVersion")
  8. final val CDC_START_TIMESTAMP_KEY: String("startingTimestamp")
  9. final val CDC_START_VERSION_KEY: String("startingVersion")
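    Taken together, the six CDC keys above configure a Change Data Feed read through DataFrameReader options. A usage sketch, not a runnable standalone program: it assumes a live SparkSession named spark, Delta Lake on the classpath, and a CDF-enabled table at the illustrative path below.

```scala
// Usage sketch only: `spark`, the table path, and the version range are
// assumptions for illustration; requires a CDF-enabled Delta table.
val changes = spark.read
  .format("delta")
  .option("readChangeFeed", "true") // CDC_ENABLED_KEY
  .option("startingVersion", "5")   // CDC_START_VERSION_KEY
  .option("endingVersion", "10")    // CDC_END_VERSION_KEY
  .load("/tmp/delta/events")
```

    The legacy key readChangeData and the timestamp-based bounds (startingTimestamp, endingTimestamp) follow the same option-passing pattern.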
  10. final val TIME_TRAVEL_SOURCE_KEY: String("__time_travel_source__")
  11. final val TIME_TRAVEL_TIMESTAMP_KEY: String("timestampAsOf")

    The option key for time traveling using a timestamp. The timestamp should be a valid timestamp string which can be cast to a timestamp type.

  12. final val TIME_TRAVEL_VERSION_KEY: String("versionAsOf")

    The option key for time traveling using a version of a table. This value should be castable to a long.
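    Both keys above are passed as DataFrameReader options and are alternative ways of selecting a table snapshot. A usage sketch, not a runnable standalone program: the paths, version, and timestamp are illustrative, and a SparkSession named spark with Delta Lake on the classpath is assumed.

```scala
// Usage sketch only: `spark` and the paths/values below are assumptions
// for illustration.
val asOfVersion = spark.read
  .format("delta")
  .option("versionAsOf", "1234")                  // TIME_TRAVEL_VERSION_KEY
  .load("/tmp/delta/events")

val asOfTimestamp = spark.read
  .format("delta")
  .option("timestampAsOf", "2024-01-01 00:00:00") // TIME_TRAVEL_TIMESTAMP_KEY
  .load("/tmp/delta/events")
```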

  13. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  14. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  15. def decodePartitioningColumns(str: String): Seq[String]
  16. def encodePartitioningColumns(columns: Seq[String]): String
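    The encode/decode pair above round-trips a sequence of partition column names through a single string. The actual wire format used by DeltaDataSource is an internal detail; the following is a hypothetical escape-then-join sketch of one reversible encoding, for illustration only, not the real implementation.

```scala
// Hypothetical sketch (not DeltaDataSource's actual format): encode a
// Seq[String] into one string by escaping, then joining on commas.
object PartitionColumnsCodec {
  // Escape backslashes first, then commas, so the join is unambiguous.
  def encode(columns: Seq[String]): String =
    columns.map(_.replace("\\", "\\\\").replace(",", "\\,")).mkString(",")

  // Walk the string, honoring escapes; an unescaped comma starts a new column.
  def decode(str: String): Seq[String] = {
    val out = scala.collection.mutable.ListBuffer(new StringBuilder)
    var i = 0
    while (i < str.length) {
      str(i) match {
        case '\\' if i + 1 < str.length =>
          out.last.append(str(i + 1)); i += 2
        case ',' =>
          out += new StringBuilder; i += 1
        case c =>
          out.last.append(c); i += 1
      }
    }
    out.map(_.toString).toList
  }
}
```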
  17. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  18. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  19. def extractSchemaTrackingLocationConfig(spark: SparkSession, parameters: Map[String, String]): Option[String]

    Extracts the schema tracking location from options.
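    In streaming reads, that location is supplied as a source option. A usage sketch, not a runnable standalone program: the option name schemaTrackingLocation follows Delta's streaming documentation, the paths are illustrative, and a SparkSession named spark with Delta Lake on the classpath is assumed.

```scala
// Usage sketch only: `spark` and the paths below are assumptions for
// illustration; the schema log is typically placed under the checkpoint.
val stream = spark.readStream
  .format("delta")
  .option("schemaTrackingLocation", "/tmp/ckpt/_schema_log")
  .load("/tmp/delta/events")
```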

  20. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  21. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  22. def getMetadataTrackingLogForDeltaSource(spark: SparkSession, sourceSnapshot: SnapshotDescriptor, parameters: Map[String, String], sourceMetadataPathOpt: Option[String] = None, mergeConsecutiveSchemaChanges: Boolean = false): Option[DeltaSourceMetadataTrackingLog]

    Creates a schema log for the Delta streaming source, if possible.

  23. def getTimeTravelVersion(parameters: Map[String, String]): Option[DeltaTimeTravelSpec]

    Extracts whether users provided the option to time travel a relation.
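    For example, a versionAsOf option yields a time travel spec. A sketch only: the package name below is an assumption based on Delta Lake's source tree, and delta-spark must be on the classpath.

```scala
// Sketch: the import path is an assumption from Delta Lake's source tree;
// requires delta-spark on the classpath.
import org.apache.spark.sql.delta.sources.DeltaDataSource

val spec = DeltaDataSource.getTimeTravelVersion(Map("versionAsOf" -> "1234"))
// An options map without time travel keys would yield None.
```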

  24. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  25. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  26. def logConsole(line: String): Unit
    Definition Classes
    DatabricksLogging
  27. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  28. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  29. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  30. def parsePathIdentifier(spark: SparkSession, userPath: String, options: Map[String, String]): (Path, Seq[(String, String)], Option[DeltaTimeTravelSpec])

    For Delta, we allow certain magic to be performed through the paths that are provided by users. Normally, a user-specified path should point to the root of a Delta table. However, some users are used to providing specific partition values through the path, because of how expensive it was to perform partition discovery before. We treat these partition values as logical partition filters, if a table does not exist at the provided path.

    In addition, we allow users to provide time travel specifications through the path. The specification follows an @ symbol at the end of the path: either a timestamp in yyyyMMddHHmmssSSS format, or a version number preceded by a v.

    This method parses these specifications and returns these modifiers only if a path does not actually exist at the provided path. We first parse out the time travel specification, and then the partition filters. For example, a path specified as /some/path/partition=1@v1234 will be parsed into /some/path with the filter partition=1 and a time travel spec of version 1234.

    returns

    A tuple of the root path of the Delta table, partition filters, and time travel options
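    The parsing described above can be approximated in isolation. A minimal sketch under stated assumptions: the names are hypothetical, and it ignores the file-system existence check and timestamp specifications that the real method handles.

```scala
// Hypothetical sketch (names are made up): approximates the path parsing
// described above, minus file-system checks and timestamp handling.
object PathParseSketch {
  // Trailing "@v<version>" or "@<17-digit timestamp>" time travel suffix.
  private val TimeTravelSuffix = """(.*)@(v\d+|\d{17})""".r

  /** Returns (table root path, partition filters, optional version). */
  def parse(userPath: String): (String, Seq[(String, String)], Option[Long]) = {
    val (path, version) = userPath match {
      case TimeTravelSuffix(p, spec) if spec.startsWith("v") =>
        (p, Some(spec.drop(1).toLong))
      case TimeTravelSuffix(p, _) =>
        (p, Option.empty[Long]) // timestamp spec: not modeled in this sketch
      case _ =>
        (userPath, Option.empty[Long])
    }
    // Trailing "key=value" segments become logical partition filters.
    val (filterSegs, rootSegs) = path.split("/").reverse.span(_.contains("="))
    val filters = filterSegs.reverse.toSeq.map { s =>
      val Array(k, v) = s.split("=", 2)
      (k, v)
    }
    (rootSegs.reverse.mkString("/"), filters, version)
  }
}
```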

  31. def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  32. def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = METRIC_OPERATION_DURATION, silent: Boolean = true)(thunk: ⇒ S): S
    Definition Classes
    DatabricksLogging
  33. def recordProductEvent(metric: MetricDefinition with CentralizableMetric, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  34. def recordProductUsage(metric: MetricDefinition with CentralizableMetric, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  35. def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  36. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  37. def toString(): String
    Definition Classes
    AnyRef → Any
  38. def verifyAndCreatePartitionFilters(userPath: String, snapshot: Snapshot, partitionFilters: Seq[(String, String)]): Seq[Expression]

    Verifies that the provided partition filters are valid and returns the corresponding expressions.

  39. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  40. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  41. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
