
class DataFiltersBuilder extends AnyRef

Builds the data filters for data skipping.

Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new DataFiltersBuilder(spark: SparkSession, dataSkippingType: DeltaDataSkippingType)

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def apply(dataFilter: Expression): Option[DataSkippingPredicate]
  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  7. val dataSkippingType: DeltaDataSkippingType
    Attributes
    protected
  8. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  15. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  16. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  17. def rewriteDataFiltersAsPartitionLike(clusteringColumns: Seq[String], expr: Expression): Option[DataSkippingPredicate]

    Rewrites the given expression as a partition-like expression, if possible:

    1. Rewrite the attribute references in the expression to reference the collected min stats of the attribute reference's column.
    2. Construct an expression that returns true if any of the referenced columns is not partition-like in a given file.

    The rewritten expression is the union of the two: a file is read if it is either not partition-like on any of the columns, or if the rewritten expression evaluates to true.

    clusteringColumns

    The columns that are used for clustering.

    expr

    The data filtering expression to rewrite.

    returns

    If the expression is safe to rewrite, return the rewritten expression. Otherwise, return None.
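    The union logic above can be sketched with a toy model. This is illustrative only: `FileStats`, `partitionLike`, and `keepFile` are hypothetical names, per-file stats are simplified to integer min/max maps, and predicates are plain Scala functions rather than Catalyst expressions; it is not the Delta Lake implementation.

    ```scala
    // Per-file column statistics, simplified to integer min/max per column.
    case class FileStats(min: Map[String, Int], max: Map[String, Int])

    // A column is "partition-like" in a file when every row shares one value,
    // i.e. the collected min equals the collected max for that column.
    def partitionLike(stats: FileStats, col: String): Boolean =
      stats.min.get(col).zip(stats.max.get(col)).exists { case (lo, hi) => lo == hi }

    // Union of the two rewritten branches: keep (read) the file if some
    // referenced column is not partition-like, or if the predicate rewritten
    // over the min stats evaluates to true. Skipping must never drop a file
    // that could contain matching rows, so any doubt keeps the file.
    def keepFile(stats: FileStats,
                 cols: Seq[String],
                 pred: Map[String, Int] => Boolean): Boolean = {
      val notPartitionLike = cols.exists(c => !partitionLike(stats, c))
      notPartitionLike || pred(stats.min)
    }
    ```

    When min == max for every referenced column, the file behaves like a partition: evaluating the predicate on the min value alone is exact, so skipping the file when the predicate is false is safe.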

  18. val spark: SparkSession
    Attributes
    protected
  19. val statsProvider: StatsProvider
    Attributes
    protected
  20. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  21. def toString(): String
    Definition Classes
    AnyRef → Any
  22. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  23. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  24. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  25. object SkippingEligibleExpression

    An extractor that matches expressions that are eligible for data skipping predicates.

    returns

    If the given expression is eligible, a tuple of 1) the column name referenced in the expression, 2) the data type of the expression, and 3) the DataSkippingPredicateBuilder that builds the data skipping predicate for the expression. Otherwise, None.
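    The extractor-object pattern behind SkippingEligibleExpression can be sketched with a toy expression tree. The `Expr` hierarchy, the eligibility rule, and the `SkippingEligible` object below are illustrative assumptions, not Delta Lake's Catalyst-based implementation:

    ```scala
    // Minimal expression tree standing in for Catalyst expressions.
    sealed trait Expr
    case class Col(name: String) extends Expr
    case class Lit(value: Int) extends Expr
    case class LessThan(left: Expr, right: Expr) extends Expr

    object SkippingEligible {
      // Matches only the shape that min/max stats can answer: a column
      // compared against a literal. Returns the column name and the bound;
      // any other shape yields None and falls through the pattern match.
      def unapply(e: Expr): Option[(String, Int)] = e match {
        case LessThan(Col(c), Lit(v)) => Some((c, v))
        case _                        => None
      }
    }

    def describe(e: Expr): String = e match {
      case SkippingEligible(col, bound) => s"skip files where min($col) >= $bound"
      case _                            => "not eligible for data skipping"
    }
    ```

    Returning an `Option` from `unapply` is what lets callers pattern-match on eligibility: an eligible expression destructures into its components, while everything else simply fails to match.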
