trait DeltaSourceMetadataEvolutionSupport extends DeltaSourceBase

Helper functions for metadata-evolution handling in DeltaSource. A metadata change is one of:
  1. A schema change
  2. A Delta table configuration change
  3. A Delta protocol change
The documentation below uses schema changes as the running example.

To achieve schema evolution, we intercept different stages of the normal streaming process to:
  1. Capture all schema changes inside a stream
  2. Stop latestOffset from crossing the schema change boundary
  3. Ensure the batch prior to the schema change can still be served correctly
  4. Ensure the stream fails if and only if the prior batch is served successfully
  5. Write the new schema to the schema tracking log prior to stream failure, so that the next time the stream restarts it will use the updated schema

Specifically:

  1. During latestOffset calls, if we detect a schema change at version V, we generate a special barrier DeltaSourceOffset X with ver=V and index=INDEX_METADATA_CHANGE. (We first generate an IndexedFile at this index, which is then converted into an equivalent DeltaSourceOffset.) INDEX_METADATA_CHANGE comes after INDEX_VERSION_BASE (the first offset index that exists for any reservoir version) and before the offsets that represent data changes. This ensures that we apply the schema change before processing the data that uses that schema.
  2. When we see a schema change offset X, it is treated as a barrier that ends the current batch. The remaining data is effectively unavailable until all the source data before the schema change has been committed.
  3. When commit is invoked on the schema change barrier offset X, we can officially write the new schema into the schema tracking log and fail the stream. commit is only called after the batch ending at X has completed, so it is safe to fail there.
  4. Between when offset X is generated and when it is committed, there can be an arbitrary number of calls to latestOffset attempting to fetch a new latest offset. These calls must not generate new offsets until the schema change barrier offset has been committed, the new schema has been written to the schema tracking log, and the stream has been aborted and restarted. A nuance here: the streaming engine won't commit until it sees a new offset that is semantically different, which is why we generate not only the offset X with index INDEX_METADATA_CHANGE but also a second barrier offset X' immediately following it with index INDEX_POST_SCHEMA_CHANGE. In this way we ensure that:
     a) The offset with index INDEX_METADATA_CHANGE is (typically) always committed.
     b) Even if the streaming engine changed its behavior and ONLY the offset with index INDEX_POST_SCHEMA_CHANGE were committed, we could still tell that this is a schema change barrier with a schema change ready to be evolved.
     c) Whenever latestOffset sees a startOffset with a schema change barrier index, we can easily tell that we should not progress past the schema change unless the schema change has actually happened.

When a stream is restarted after a schema evolution (not initialization), it is guaranteed to have >= 2 entries in the schema log. To prevent users from shooting themselves in the foot by blindly restarting a stream without considering the implications for downstream tables, by default we do not allow the stream to restart unless the user sets a magic SQL conf allowing non-additive schema changes to propagate. We detect such non-additive schema changes during stream start by comparing the last schema log entry with the current one.
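
The minimal, self-contained sketch below illustrates the two-barrier offset protocol described above. The types, index constants, and helper names are simplified stand-ins introduced here for illustration only; they are not the actual DeltaSourceOffset / IndexedFile machinery or its real constant values.

{{{
// Minimal, self-contained sketch (illustrative only) of the two-barrier offset
// protocol described above. The Offset case class, the index constants, and the
// helper below are simplified stand-ins, not the real implementation.
object SchemaChangeBarrierSketch {

  // Simplified offset: a reservoir (table) version plus an index within it.
  final case class Offset(version: Long, index: Long)

  // Illustrative ordering: base index < metadata-change barrier
  // < post-schema-change barrier < indexes of data files.
  val INDEX_VERSION_BASE: Long = -100L
  val INDEX_METADATA_CHANGE: Long = -20L
  val INDEX_POST_SCHEMA_CHANGE: Long = -19L

  def isSchemaChangeBarrier(o: Offset): Boolean =
    o.index == INDEX_METADATA_CHANGE || o.index == INDEX_POST_SCHEMA_CHANGE

  // How latestOffset behaves while a schema change barrier is pending:
  // emit X (INDEX_METADATA_CHANGE), then the semantically different X'
  // (INDEX_POST_SCHEMA_CHANGE), then hold until the schema has been evolved.
  def nextOffsetGivenBarrier(start: Offset, schemaEvolved: Boolean): Option[Offset] =
    start.index match {
      case INDEX_METADATA_CHANGE =>
        // Emit the second barrier so the streaming engine has something new to commit.
        Some(start.copy(index = INDEX_POST_SCHEMA_CHANGE))
      case INDEX_POST_SCHEMA_CHANGE if !schemaEvolved =>
        // Hold at the barrier: no new offsets until the schema tracking log has
        // been updated and the stream restarted with the evolved schema.
        Some(start)
      case _ =>
        // No pending barrier (or the schema has already evolved): the caller
        // computes the next offset in the normal way.
        None
    }
}
}}}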

Self Type
DeltaSource
Linear Supertypes
DeltaSourceBase, DeltaLogging, DatabricksLogging, DeltaProgressReporter, LoggingShims, Logging, SupportsTriggerAvailableNow, SupportsAdmissionControl, Source, SparkDataStream, AnyRef, Any

Type Members

  1. implicit class LogStringContext extends AnyRef
    Definition Classes
    LoggingShims

Abstract Value Members

  1. abstract def getBatch(start: Option[Offset], end: Offset): DataFrame
    Definition Classes
    Source
  2. abstract def getOffset: Option[Offset]
    Definition Classes
    Source
  3. abstract def latestOffset(arg0: Offset, arg1: ReadLimit): Offset
    Definition Classes
    SupportsAdmissionControl
  4. abstract def latestOffsetInternal(startOffset: Option[DeltaSourceOffset], limit: ReadLimit): Option[DeltaSourceOffset]

Internal variant of latestOffset used to compute the latest offset for this source.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  5. abstract def stop(): Unit
    Definition Classes
    SparkDataStream

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. lazy val allowUnsafeStreamingReadOnColumnMappingSchemaChanges: Boolean

Flag that allows the user to force-enable unsafe streaming reads on a Delta table with column mapping enabled AND drop/rename actions.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  5. lazy val allowUnsafeStreamingReadOnPartitionColumnChanges: Boolean
    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  6. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  7. def checkReadIncompatibleSchemaChangeOnStreamStartOnce(batchStartVersion: Long, batchEndVersionOpt: Option[Long] = None): Unit

Check for read-incompatible schema changes during stream (re)start so we can fail fast.

This only needs to be called ONCE in the lifecycle of a stream, either at the very first latestOffset or at the very first getBatch, to make sure we have detected any incompatible schema change. Typically the verifyStreamHygiene call is good enough to detect these schema changes, but there are cases it cannot handle, e.g. consider this sequence:
  1. The user starts a new stream @ startingVersion 1.
  2. latestOffset is called before getBatch() because there were no previous commits, so getBatch won't be called as a recovery mechanism. Suppose there is a single rename/drop/nullability change S while computing the next offset; S looks exactly the same as the latest schema, so verifyStreamHygiene would not catch it.
  3. latestOffset would then return a new offset crossing the schema boundary.

If a schema log is already initialized, we don't need to run the initialization or the schema checks again.

    batchStartVersion

    Start version we want to verify read compatibility against

    batchEndVersionOpt

Optionally, if we are checking against an existing constructed batch during streaming initialization, we also verify all schema changes in between before lazily initializing the schema log if needed.
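
A minimal sketch of the once-per-stream guard this implies, assuming a hypothetical verify callback (illustrative only, not the actual implementation):

{{{
// Illustrative once-per-stream-start guard, in the spirit of the
// hasCheckedReadIncompatibleSchemaChangesOnStreamStart flag; the `verify`
// callback is a hypothetical stand-in for the actual compatibility check.
class StreamStartCheckSketch(verify: (Long, Option[Long]) => Unit) {
  @volatile private var checkedOnStreamStart = false

  // Invoked from both the first latestOffset and the first getBatch;
  // only the first caller actually runs the verification.
  def checkOnce(batchStartVersion: Long, batchEndVersionOpt: Option[Long] = None): Unit = {
    if (!checkedOnStreamStart) {
      verify(batchStartVersion, batchEndVersionOpt)
      checkedOnStreamStart = true
    }
  }
}
}}}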

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  8. def checkReadIncompatibleSchemaChanges(metadata: Metadata, version: Long, batchStartVersion: Long, batchEndVersionOpt: Option[Long] = None, validatedDuringStreamStart: Boolean = false): Unit

Narrow waist to verify a metadata action for read-incompatible schema changes, specifically:
  1. Any column mapping related schema changes (renamed / dropped columns)
  2. Standard read-compatibility changes, including:
     a) No missing columns
     b) No data type changes
     c) No read-incompatible nullability changes
If the check fails, we throw an exception to exit the stream. If lazy log initialization is required, we also run a one-time scan to safely initialize the metadata tracking log upon any non-additive schema change failure.

    metadata

    Metadata that contains a potential schema change

    version

    Version for the metadata action

    validatedDuringStreamStart

    Whether this check is being done during stream start.
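
As a rough illustration of the shape of these checks, here is a simplified sketch using only Spark's StructType; the real check additionally handles column mapping (rename / drop) metadata and other edge cases, and is not reproduced here.

{{{
import org.apache.spark.sql.types.StructType

// Simplified sketch of a read-compatibility check of the shape described above:
// no missing columns, no data type changes, no read-incompatible nullability
// changes. Illustrative only; not the actual Delta implementation.
object ReadCompatSketch {
  // Can files written with `dataSchema` be read using `readSchema`?
  def isReadCompatible(readSchema: StructType, dataSchema: StructType): Boolean =
    readSchema.forall { readField =>
      dataSchema.fields.find(_.name == readField.name) match {
        case None => false // a column in the read schema is missing from the data
        case Some(dataField) =>
          readField.dataType == dataField.dataType &&   // no data type changes
            (readField.nullable || !dataField.nullable) // non-nullable read field cannot read nullable data
      }
    }
}
}}}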

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  9. def cleanUpSnapshotResources(): Unit
    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  10. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  11. def collectMetadataActions(startVersion: Long, endVersion: Long): Seq[(Long, Metadata)]
    Attributes
    protected
  12. def collectProtocolActions(startVersion: Long, endVersion: Long): Seq[(Long, Protocol)]
    Attributes
    protected
  13. def commit(end: Offset): Unit
    Definition Classes
    Source → SparkDataStream
  14. def commit(end: Offset): Unit
    Definition Classes
    Source
  15. def createDataFrame(indexedFiles: Iterator[IndexedFile]): DataFrame

Given an iterator of file actions, create a DataFrame representing the files added to a table. Only AddFile actions will be used to create the DataFrame.

    indexedFiles

    actions iterator from which to generate the DataFrame.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  16. def createDataFrameBetweenOffsets(startVersion: Long, startIndex: Long, isInitialSnapshot: Boolean, startOffsetOption: Option[DeltaSourceOffset], endOffset: DeltaSourceOffset): DataFrame

Return the DataFrame between start and end offset.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  17. def deltaAssert(check: ⇒ Boolean, name: String, msg: String, deltaLog: DeltaLog = null, data: AnyRef = null, path: Option[Path] = None): Unit

Helper method to check invariants in Delta code. Fails when running in tests, records a delta assertion event and logs a warning otherwise.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  18. def deserializeOffset(json: String): Offset
    Definition Classes
    Source → SparkDataStream
  19. val emptyDataFrame: DataFrame
    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  20. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  21. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  22. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  23. lazy val forceEnableStreamingReadOnReadIncompatibleSchemaChangesDuringStreamStart: Boolean

Flag that allows the user to disable the read-compatibility check during stream start, which protects against a corner case that verifyStreamHygiene cannot detect. The check is a bug fix but also a potential behavior change, so we add a flag to allow falling back.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  24. lazy val forceEnableUnsafeReadOnNullabilityChange: Boolean

Flag that allows the user to fall back to the legacy behavior, in which a nullable=false read schema is allowed to read nullable=true data. That behavior is incorrect, but fixing it is a behavior change regardless, hence the flag.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  25. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  26. def getCommonTags(deltaLog: DeltaLog, tahoeId: String): Map[TagDefinition, String]
    Definition Classes
    DeltaLogging
  27. def getDefaultReadLimit(): ReadLimit
    Definition Classes
    SupportsAdmissionControl
  28. def getErrorData(e: Throwable): Map[String, Any]
    Definition Classes
    DeltaLogging
  29. def getFileChangesAndCreateDataFrame(startVersion: Long, startIndex: Long, isInitialSnapshot: Boolean, endOffset: DeltaSourceOffset): DataFrame

Get the changes from (startVersion, startIndex) to the end offset.

    startVersion

    - calculated starting version

    startIndex

    - calculated starting index

    isInitialSnapshot

    - whether the stream has to return the initial snapshot or not

    endOffset

    - Offset that signifies the end of the stream.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  30. def getFileChangesWithRateLimit(fromVersion: Long, fromIndex: Long, isInitialSnapshot: Boolean, limits: Option[AdmissionLimits] = Some(AdmissionLimits())): ClosableIterator[IndexedFile]
    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  31. def getMetadataOrProtocolChangeIndexedFileIterator(metadataChangeOpt: Option[Metadata], protocolChangeOpt: Option[Protocol], version: Long): ClosableIterator[IndexedFile]

If the current stream metadata is not equal to the metadata change in metadataChangeOpt, return a metadata change barrier IndexedFile. Only returns something if trackingMetadataChange is true.

    Attributes
    protected
  32. def getNextOffsetFromPreviousOffset(previousOffset: DeltaSourceOffset, limits: Option[AdmissionLimits]): Option[DeltaSourceOffset]

Return the next offset when a previous offset exists.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  33. def getNextOffsetFromPreviousOffsetIfPendingSchemaChange(previousOffset: DeltaSourceOffset): Option[DeltaSourceOffset]

If the given previous Delta source offset is a schema change offset, returns the appropriate next offset. This should be called before trying any other means of determining the next offset. If this returns None, then there is no schema change, and the caller should determine the next offset in the normal way.
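
A hypothetical caller pattern (illustrative only) that follows this contract, with Off standing in for DeltaSourceOffset and both function parameters standing in for the corresponding DeltaSource internals:

{{{
// Consult the pending-schema-change path first; only if it returns None is the
// next offset computed in the normal way.
def nextOffsetSketch[Off](
    previousOffset: Option[Off],
    ifPendingSchemaChange: Off => Option[Off],
    normalNextOffset: Option[Off] => Option[Off]): Option[Off] =
  previousOffset.flatMap(ifPendingSchemaChange).orElse(normalNextOffset(previousOffset))
}}}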

    Attributes
    protected
  34. def getStartingOffsetFromSpecificDeltaVersion(fromVersion: Long, isInitialSnapshot: Boolean, limits: Option[AdmissionLimits]): Option[DeltaSourceOffset]

Returns the offset that starts from a specific delta table version. This function is called when starting a new stream query.

    fromVersion

    The version of the delta table to calculate the offset from.

    isInitialSnapshot

    Whether the delta version is for the initial snapshot or not.

    limits

    Indicates how much data can be processed by a micro batch.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  35. val hasCheckedReadIncompatibleSchemaChangesOnStreamStart: Boolean

A global flag to mark whether we have done a per-stream start check for column mapping schema changes (rename / drop).

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
    Annotations
    @volatile()
  36. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  37. def initForTriggerAvailableNowIfNeeded(startOffsetOpt: Option[DeltaSourceOffset]): Unit

Initialize the internal state for AvailableNow if this method is called for the first time after prepareForTriggerAvailableNow.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  38. def initLastOffsetForTriggerAvailableNow(startOffsetOpt: Option[DeltaSourceOffset]): Unit
    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  39. def initialOffset(): Offset
    Definition Classes
    Source → SparkDataStream
  40. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  41. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def initializeMetadataTrackingAndExitStream(batchStartVersion: Long, batchEndVersionOpt: Option[Long] = None, alwaysFailUponLogInitialized: Boolean = false): Unit

Initialize the schema tracking log if an empty schema tracking log is provided. This method also checks the range between batchStartVersion and batchEndVersion to ensure that a safe schema is initialized in the log.

    batchStartVersion

Start version of the batch of data to be processed; its schema should typically be safe for processing the incoming data.

    batchEndVersionOpt

Optionally, if we are looking at a constructed batch with an existing end offset, we also verify that there are no read-incompatible schema changes within the batch range.

    alwaysFailUponLogInitialized

    Whether we should always fail with the schema evolution exception.

    Attributes
    protected
  43. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  44. val isStreamingFromColumnMappingTable: Boolean

Whether we are streaming from a table with column mapping enabled.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  45. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  46. val lastOffsetForTriggerAvailableNow: Option[DeltaSourceOffset]

When AvailableNow is used, this offset is the upper bound up to which this run of the query will process. We may run multiple micro-batches, but the query will stop itself when it reaches this offset.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  47. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  48. def logConsole(line: String): Unit
    Definition Classes
    DatabricksLogging
  49. def logDebug(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  50. def logDebug(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  51. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  52. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  53. def logError(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  54. def logError(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  55. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  56. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  57. def logInfo(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  58. def logInfo(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  59. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  60. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  61. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  62. def logTrace(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  63. def logTrace(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  64. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  65. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  66. def logWarning(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  67. def logWarning(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  68. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  69. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  70. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  71. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  72. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  73. val persistedMetadataAtSourceInit: Option[PersistedMetadata]

The persisted schema from the schema log that must be used to read data files in this Delta streaming source.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  74. def prepareForTriggerAvailableNow(): Unit
    Definition Classes
    DeltaSourceBase → SupportsTriggerAvailableNow
  75. val readConfigurationsAtSourceInit: Map[String, String]
    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  76. val readPartitionSchemaAtSourceInit: StructType
    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  77. val readProtocolAtSourceInit: Protocol
    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  78. val readSchemaAtSourceInit: StructType

The read schema for this source during initialization, taking the schema log (SchemaLog) into account.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  79. lazy val readSnapshotDescriptor: SnapshotDescriptor

Create a snapshot descriptor, customizing its metadata using metadata tracking if necessary.

    Attributes
    protected
    Definition Classes
    DeltaSourceBase
  80. def readyToInitializeMetadataTrackingEagerly: Boolean

Whether a schema tracking log is provided (and is empty), so we can initialize it eagerly. This should only be used for the first write to the schema log; after that, schema tracking should no longer rely on this state.

    Attributes
    protected
  81. def recordDeltaEvent(deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty, data: AnyRef = null, path: Option[Path] = None): Unit

Used to record the occurrence of a single event or to report detailed, operation-specific statistics.

    path

    Used to log the path of the delta table when deltaLog is null.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  82. def recordDeltaOperation[A](deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A

Used to report the duration as well as the success or failure of an operation on a deltaLog.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  83. def recordDeltaOperationForTablePath[A](tablePath: String, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A

Used to report the duration as well as the success or failure of an operation on a tahoePath.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  84. def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  85. def recordFrameProfile[T](group: String, name: String)(thunk: ⇒ T): T
    Attributes
    protected
    Definition Classes
    DeltaLogging
  86. def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = METRIC_OPERATION_DURATION, silent: Boolean = true)(thunk: ⇒ S): S
    Definition Classes
    DatabricksLogging
  87. def recordProductEvent(metric: MetricDefinition with CentralizableMetric, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  88. def recordProductUsage(metric: MetricDefinition with CentralizableMetric, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  89. def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  90. def reportLatestOffset(): Offset
    Definition Classes
    SupportsAdmissionControl
  91. val schema: StructType
    Definition Classes
    DeltaSourceBase → Source
  92. def stopIndexedFileIteratorAtSchemaChangeBarrier(fileActionScanIter: ClosableIterator[IndexedFile]): ClosableIterator[IndexedFile]

This is called from getFileChangesWithRateLimit() during latestOffset().

    Attributes
    protected
  93. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  94. def toString(): String
    Definition Classes
    AnyRef → Any
  95. def trackingMetadataChange: Boolean

Whether this DeltaSource is using a schema log entry as its read schema.

If the user explicitly turns on the flag to fall back to reading with the latest schema (i.e. the legacy mode), we ignore the schema log.

    Attributes
    protected
  96. def updateMetadataTrackingLogAndFailTheStreamIfNeeded(changedMetadataOpt: Option[Metadata], changedProtocolOpt: Option[Protocol], version: Long, replace: Boolean = false): Unit

Write a new potentially changed metadata into the metadata tracking log. Then fail the stream to allow reanalysis if there are changes.

    changedMetadataOpt

    Potentially changed metadata action

    changedProtocolOpt

    Potentially changed protocol action

    version

    The version of change

    Attributes
    protected
  97. def updateMetadataTrackingLogAndFailTheStreamIfNeeded(end: Offset): Unit

Update the current stream schema in the schema tracking log and fail the stream. This is called during commit(). It is OK to fail during commit() because, under streaming semantics, the batch with the offset ending at end should already have been processed completely.
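
A simplified sketch of this commit-time behavior, assuming hypothetical helpers for looking up and persisting a pending change (illustrative only, not the real DeltaSource.commit):

{{{
// Illustrative commit-time flow: once the batch ending at `endVersion` has been
// fully processed, persist the pending schema change to the tracking log and
// fail the stream so it restarts with the evolved schema. The helper functions
// passed in are hypothetical stand-ins.
final case class PendingMetadataChange(version: Long, schemaJson: String)

class CommitSketch(
    pendingChangeAt: Long => Option[PendingMetadataChange],
    persistToSchemaLog: PendingMetadataChange => Unit) {

  def commit(endVersion: Long): Unit = {
    pendingChangeAt(endVersion).foreach { change =>
      persistToSchemaLog(change)
      // Failing here is safe: under streaming semantics, the batch ending at
      // `endVersion` has already been processed completely.
      throw new RuntimeException(
        s"Schema evolved at version ${change.version}; restart the stream to pick it up.")
    }
  }
}
}}}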

    Attributes
    protected
  98. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  99. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  100. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  101. def withStatusCode[T](statusCode: String, defaultMessage: String, data: Map[String, Any] = Map.empty)(body: ⇒ T): T

Report a log to indicate some command is running.

    Definition Classes
    DeltaProgressReporter
