object DeltaSourceMetadataEvolutionSupport
Linear Supertypes: AnyRef, Any
Value Members
- final val SQL_CONF_UNBLOCK_ALL: String("allowSourceColumnRenameAndDrop")
- final val SQL_CONF_UNBLOCK_DROP: String("allowSourceColumnDrop")
- final val SQL_CONF_UNBLOCK_RENAME: String("allowSourceColumnRename")
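These constants are the suffixes of the unblocking SQL confs. As a sketch (the `spark.databricks.delta.streaming.` prefix comes from the conf formats documented on validateIfSchemaChangeCanBeUnblockedWithSQLConf below; the checkpoint hash value is hypothetical), the effective conf keys compose as:

```scala
// Sketch: composing the unblocking SQL conf keys from the constants above.
val prefix = "spark.databricks.delta.streaming."

// Global key: unblocks all non-additive schema changes for every stream.
val globalKey = prefix + "allowSourceColumnRenameAndDrop"
// spark.databricks.delta.streaming.allowSourceColumnRenameAndDrop

// Per-stream key: scoped to one stream via its checkpoint hash
// (e.g. as returned by getCheckpointHash on the metadata path).
val checkpointHash = 1234567890 // hypothetical value, for illustration only
val perStreamKey = s"$globalKey.$checkpointHash"
// spark.databricks.delta.streaming.allowSourceColumnRenameAndDrop.1234567890
```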
- def getCheckpointHash(path: String): Int
Computes a hash of the given checkpoint path; this is the `$checkpointHash` value used to scope the unblocking SQL confs to a particular stream.
- def validateIfSchemaChangeCanBeUnblockedWithSQLConf(spark: SparkSession, metadataPath: String, currentSchema: PersistedMetadata, previousSchema: PersistedMetadata): Unit
Given a non-additive operation type from a previous schema evolution, check whether we can proceed with the new schema given any SQL conf the user has explicitly set to unblock. The SQL conf can take one of the following formats:
1. spark.databricks.delta.streaming.allowSourceColumnRenameAndDrop = true -> allows all non-additive schema changes to propagate.
2. spark.databricks.delta.streaming.allowSourceColumnRenameAndDrop.$checkpointHash = true -> allows all non-additive schema changes to propagate for this particular stream.
3. spark.databricks.delta.streaming.allowSourceColumnRenameAndDrop.$checkpointHash = $deltaVersion -> allows the non-additive schema change at this particular Delta version to propagate for this stream.

The allowSourceColumnRenameAndDrop key can be replaced with:
1. allowSourceColumnRename to allow only column renames
2. allowSourceColumnDrop to allow only column drops

We will check for any of these confs given the non-additive operation, and throw a proper error message instructing the user to set the SQL conf if they would like to unblock.
- metadataPath
The path to the source-unique metadata location under checkpoint
- currentSchema
The current persisted schema
- previousSchema
The previous persisted schema
- Attributes
- protected[sources]
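As a usage sketch of the conf formats documented above (the SparkSession wiring and the checkpoint hash value are hypothetical assumptions, not part of this API):

```scala
// Hypothetical sketch: unblocking a stream that was stopped by a
// non-additive schema change. Conf key formats follow the doc above.
import org.apache.spark.sql.SparkSession

val spark: SparkSession = ??? // assumed: an active session driving the stream

// Unblock only column drops, and only for this particular stream
// (1234567890 stands in for the stream's checkpoint hash):
spark.conf.set(
  "spark.databricks.delta.streaming.allowSourceColumnDrop.1234567890",
  "true")

// Or unblock just the schema change at a specific Delta version for
// this stream, per format 3 above:
spark.conf.set(
  "spark.databricks.delta.streaming.allowSourceColumnRenameAndDrop.1234567890",
  "41") // hypothetical Delta version of the schema change
```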