class StagingQuery extends TBase[StagingQuery, _Fields] with Serializable with Cloneable with Comparable[StagingQuery]
Staging Query encapsulates an arbitrary Spark computation. One key feature is that the computation follows a
"fill-what's-missing" pattern: instead of explicitly specifying dates, you specify two macros,
{{ start_date }} and {{ end_date }}. Chronon will pass the earliest missing partition as start_date and the
execution date (today) as end_date, so a single query run can compute multiple partitions at once.
- Annotations
- @SuppressWarnings() @Generated()
- By Inheritance
- StagingQuery
- Cloneable
- TBase
- Serializable
- TSerializable
- Comparable
- AnyRef
- Any
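The "fill-what's-missing" pattern described above can be sketched as follows. This is a minimal illustration, not Chronon's actual renderer: the render helper, the daily ISO-date partition layout, and the simplification that the earliest missing partition is the day after the latest existing one are all assumptions.

```python
from datetime import date, timedelta

def render(query: str, existing_partitions: list[str], today: date) -> str:
    """Illustrative macro substitution: start_date becomes the earliest
    missing partition, end_date becomes the execution date."""
    # Simplification: earliest missing partition = day after the latest existing one
    latest = max(existing_partitions)
    start = date.fromisoformat(latest) + timedelta(days=1)
    return (query
            .replace("{{ start_date }}", start.isoformat())
            .replace("{{ end_date }}", today.isoformat()))

query = "SELECT * FROM events WHERE ds BETWEEN '{{ start_date }}' AND '{{ end_date }}'"
print(render(query, ["2023-01-01", "2023-01-02"], date(2023, 1, 10)))
# One run backfills every partition from 2023-01-03 through 2023-01-10.
```

Because the rendered range always starts at the first missing partition, re-running the same StagingQuery is idempotent: already-filled partitions are never recomputed.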
Instance Constructors
- new StagingQuery(other: StagingQuery)
Performs a deep copy on other.
- new StagingQuery()
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- def addToSetups(elem: String): Unit
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clear(): Unit
- Definition Classes
- StagingQuery → TBase
- Annotations
- @Override()
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- def compareTo(other: StagingQuery): Int
- Definition Classes
- StagingQuery → Comparable
- Annotations
- @Override()
- def deepCopy(): StagingQuery
- Definition Classes
- StagingQuery → TBase
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(that: StagingQuery): Boolean
- def equals(that: AnyRef): Boolean
- Definition Classes
- StagingQuery → AnyRef → Any
- Annotations
- @Override()
- def fieldForId(fieldId: Int): _Fields
- Definition Classes
- StagingQuery → TBase
- Annotations
- @Nullable()
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def getFieldValue(field: _Fields): AnyRef
- Definition Classes
- StagingQuery → TBase
- Annotations
- @Nullable()
- def getMetaData(): MetaData
Contains name, team, output_namespace, execution parameters, etc. Things that don't change the semantics of the computation itself.
- Annotations
- @Nullable()
- def getQuery(): String
Arbitrary Spark query that should be written with the {{ start_date }}, {{ end_date }} and {{ latest_date }} templates.
- {{ start_date }} will be set to the user-provided start date on the first run; future incremental runs will set it to the latest existing partition + 1 day.
- {{ end_date }} is the end partition of the computing range.
- {{ latest_date }} is the end partition independent of the computing range (meant for cumulative sources).
- {{ max_date(table=namespace.my_table) }} is the max partition available for a given table.
- Annotations
- @Nullable()
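A hypothetical query body exercising all four macros listed above (the table, column names, and the namespace.my_table reference are placeholders, not part of any real schema):

```python
# Hypothetical StagingQuery body; Chronon substitutes the macros at run time.
query = """
SELECT id, amount, ds
FROM namespace.my_table
WHERE ds BETWEEN '{{ start_date }}' AND '{{ end_date }}'
  AND snapshot_ds = '{{ latest_date }}'                      -- cumulative source
  AND ref_ds = '{{ max_date(table=namespace.my_table) }}'    -- max partition of a table
"""
print(query)
```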
- def getSetups(): List[String]
Spark SQL setup statements. Used typically to register UDFs.
- Annotations
- @Nullable()
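Setup statements are run before the main query, typically to register UDFs. A sketch of what such a list might contain; the jar path and UDF class are placeholders, and executing each statement via spark.sql(...) is an assumption about how they are applied:

```python
# Hypothetical setups list; each statement would run before the main query.
setups = [
    "ADD JAR s3://bucket/udfs.jar",
    "CREATE TEMPORARY FUNCTION my_udf AS 'com.example.MyUdf'",
]
for stmt in setups:
    print(stmt)
```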
- def getSetupsIterator(): Iterator[String]
- Annotations
- @Nullable()
- def getSetupsSize(): Int
- def getStartPartition(): String
On the first run, {{ start_date }} will be set to this user-provided start date; future incremental runs will set it to the latest existing partition + 1 day.
- Annotations
- @Nullable()
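The incremental behaviour described above can be sketched as follows (a hypothetical helper, not the actual Chronon scheduler logic; daily ISO-date partitions are assumed):

```python
from datetime import date, timedelta

def resolve_start_date(start_partition: str, existing: list[str]) -> str:
    """First run: the user-provided startPartition.
    Later runs: latest existing partition + 1 day."""
    if not existing:
        return start_partition
    latest = date.fromisoformat(max(existing))
    return (latest + timedelta(days=1)).isoformat()

print(resolve_start_date("2023-01-01", []))              # first run -> 2023-01-01
print(resolve_start_date("2023-01-01", ["2023-01-05"]))  # incremental -> 2023-01-06
```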
- def hashCode(): Int
- Definition Classes
- StagingQuery → AnyRef → Any
- Annotations
- @Override()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def isSet(field: _Fields): Boolean
Returns true if the field corresponding to fieldID is set (has been assigned a value) and false otherwise.
- Definition Classes
- StagingQuery → TBase
- def isSetMetaData(): Boolean
Returns true if field metaData is set (has been assigned a value) and false otherwise
- def isSetQuery(): Boolean
Returns true if field query is set (has been assigned a value) and false otherwise
- def isSetSetups(): Boolean
Returns true if field setups is set (has been assigned a value) and false otherwise
- def isSetStartPartition(): Boolean
Returns true if field startPartition is set (has been assigned a value) and false otherwise
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def read(iprot: TProtocol): Unit
- Definition Classes
- StagingQuery → TSerializable
- def setFieldValue(field: _Fields, value: AnyRef): Unit
- Definition Classes
- StagingQuery → TBase
- def setMetaData(metaData: MetaData): StagingQuery
Contains name, team, output_namespace, execution parameters, etc. Things that don't change the semantics of the computation itself.
- def setMetaDataIsSet(value: Boolean): Unit
- def setQuery(query: String): StagingQuery
Arbitrary Spark query that should be written with the {{ start_date }}, {{ end_date }} and {{ latest_date }} templates.
- {{ start_date }} will be set to the user-provided start date on the first run; future incremental runs will set it to the latest existing partition + 1 day.
- {{ end_date }} is the end partition of the computing range.
- {{ latest_date }} is the end partition independent of the computing range (meant for cumulative sources).
- {{ max_date(table=namespace.my_table) }} is the max partition available for a given table.
- def setQueryIsSet(value: Boolean): Unit
- def setSetups(setups: List[String]): StagingQuery
Spark SQL setup statements. Used typically to register UDFs.
- def setSetupsIsSet(value: Boolean): Unit
- def setStartPartition(startPartition: String): StagingQuery
On the first run, {{ start_date }} will be set to this user-provided start date; future incremental runs will set it to the latest existing partition + 1 day.
- def setStartPartitionIsSet(value: Boolean): Unit
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- StagingQuery → AnyRef → Any
- Annotations
- @Override()
- def unsetMetaData(): Unit
- def unsetQuery(): Unit
- def unsetSetups(): Unit
- def unsetStartPartition(): Unit
- def validate(): Unit
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- def write(oprot: TProtocol): Unit
- Definition Classes
- StagingQuery → TSerializable