package flink
Type Members
- class AsyncKVStoreWriter extends RichAsyncFunction[PutRequest, WriteResponse]
Async Flink writer function to help us write to the KV store.
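An async function like this is typically attached to a stream with Flink's AsyncDataStream helper. A minimal sketch, assuming the writer instance is constructed elsewhere; the timeout and capacity values are illustrative, not taken from this codebase:

```scala
import java.util.concurrent.TimeUnit
import org.apache.flink.streaming.api.datastream.{AsyncDataStream, DataStream}

// Sketch: attach the async writer to a stream of PutRequests.
// unorderedWait lets responses complete out of order, which maximizes
// throughput when write ordering does not matter.
def attachWriter(putRequests: DataStream[PutRequest],
                 writer: AsyncKVStoreWriter): DataStream[WriteResponse] =
  AsyncDataStream.unorderedWait(
    putRequests,
    writer,
    1000L, TimeUnit.MILLISECONDS, // per-request timeout (illustrative)
    100                           // max in-flight requests (illustrative)
  )
```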
- case class AvroCodecFn[T](groupByServingInfoParsed: GroupByServingInfoParsed) extends BaseAvroCodecFn[Map[String, Any], PutRequest] with Product with Serializable
A Flink function responsible for converting the Spark expr eval output into a form that can be written out to the KV store (a PutRequest object).
- T: The input data type
- groupByServingInfoParsed: The GroupBy we are working with
- sealed abstract class BaseAvroCodecFn[IN, OUT] extends RichFlatMapFunction[IN, OUT]
Base class for the Avro conversion Flink operator.
Subclasses should override the RichFlatMapFunction methods (flatMap) and groupByServingInfoParsed.
- IN: The input data type containing the data to be Avro-converted to bytes.
- OUT: The output data type (generally a PutRequest).
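Because BaseAvroCodecFn is sealed, concrete codecs live alongside it in the same file. The flatMap contract it asks subclasses to implement looks like this stand-alone sketch against plain Avro; the class name and schema handling are invented for illustration:

```scala
import java.io.ByteArrayOutputStream
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericDatumWriter, GenericRecord}
import org.apache.avro.io.EncoderFactory
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.util.Collector

// Illustrative codec: Avro-encode a Map of field values to bytes.
class MapToAvroBytesFn(schemaJson: String)
    extends RichFlatMapFunction[Map[String, Any], Array[Byte]] {

  // Avro's Schema is not serializable, so build it in open() on the
  // task manager rather than on the client.
  @transient private var schema: Schema = _
  @transient private var writer: GenericDatumWriter[GenericRecord] = _

  override def open(parameters: Configuration): Unit = {
    schema = new Schema.Parser().parse(schemaJson)
    writer = new GenericDatumWriter[GenericRecord](schema)
  }

  override def flatMap(in: Map[String, Any], out: Collector[Array[Byte]]): Unit = {
    val record = new GenericData.Record(schema)
    in.foreach { case (k, v) => record.put(k, v) }
    val baos = new ByteArrayOutputStream()
    val encoder = EncoderFactory.get().binaryEncoder(baos, null)
    writer.write(record, encoder)
    encoder.flush()
    out.collect(baos.toByteArray)
  }
}
```

The real subclasses emit a PutRequest rather than raw bytes, and take their schema from groupByServingInfoParsed.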
- class FlinkJob[T] extends AnyRef
Flink job that processes a single streaming GroupBy and writes out the results to the KV store.
There are two versions of the job, tiled and untiled. The untiled version writes out raw events, while the tiled version writes out pre-aggregates. See the runGroupByJob and runTiledGroupByJob methods for more details.
- T: The input data type
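The two dataflow shapes described above can be summarized as follows; this is an assumption about the wiring inferred from this page, not code taken from the job:

```scala
// Untiled: each projected event is Avro-encoded and written immediately.
//   source -> SparkExpressionEvalFn -> AvroCodecFn -> AsyncKVStoreWriter
//
// Tiled: events are pre-aggregated per key and window before being written.
//   source -> SparkExpressionEvalFn -> keyed tiling window
//          -> TiledAvroCodecFn -> AsyncKVStoreWriter
```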
- abstract class FlinkSource[T] extends Serializable
- class LateEventCounter extends RichFlatMapFunction[Map[String, Any], Map[String, Any]]
Function to count late events.
This function should consume the Side Output of the main tiling window.
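Consuming a window's side output is standard Flink. A minimal sketch of how such a counter could be wired, assuming a no-arg constructor; the tag name, key, window size, and placeholder aggregation are all illustrative:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

// Events that arrive after the allowed lateness are routed to `lateTag`
// instead of being silently dropped.
val lateTag = OutputTag[Map[String, Any]]("late-events")

def wire(events: DataStream[Map[String, Any]]): Unit = {
  val windowed = events
    .keyBy(_("key").toString)
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .sideOutputLateData(lateTag)
    .reduce((a, b) => a ++ b) // placeholder aggregation

  // Feed the window's side output into the late-event counter.
  windowed.getSideOutput(lateTag).flatMap(new LateEventCounter)
}
```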
- class SparkExpressionEvalFn[T] extends RichFlatMapFunction[T, Map[String, Any]]
A Flink function that uses Chronon's CatalystUtil to evaluate the Spark SQL expression in a GroupBy. This function is instantiated for a given type T (specific case class object, Thrift / Proto object). Based on the selects and where clauses in the GroupBy, this function projects and filters the input data and emits a Map which contains the relevant fields & values that are needed to compute the aggregated values for the GroupBy.
- T: The type of the input data.
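Conceptually, the function applies the GroupBy's selects as projections and its where clauses as a filter over each record. A simplified stand-in that mirrors this behavior without CatalystUtil or Spark SQL; the class and parameter names are invented for illustration:

```scala
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.util.Collector

// Simplified stand-in: filter each event with a `where` predicate, then
// project it into a Map of the selected fields and values.
class SimpleEvalFn[T](
    selects: Map[String, T => Any], // output field name -> extractor
    where: T => Boolean             // filter predicate
) extends RichFlatMapFunction[T, Map[String, Any]] {
  override def flatMap(event: T, out: Collector[Map[String, Any]]): Unit =
    if (where(event))
      out.collect(selects.map { case (name, f) => name -> f(event) })
}
```

The real function evaluates the GroupBy's Spark SQL expressions via Catalyst instead of plain Scala closures, but the input/output contract is the same.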
- case class TiledAvroCodecFn[T](groupByServingInfoParsed: GroupByServingInfoParsed) extends BaseAvroCodecFn[TimestampedTile, PutRequest] with Product with Serializable
A Flink function that is responsible for converting an array of pre-aggregates (aka a tile) to a form that can be written out to the KV store (PutRequest object).
- T: The input data type
- groupByServingInfoParsed: The GroupBy we are working with
- case class WriteResponse(putRequest: PutRequest, status: Boolean) extends Product with Serializable
Value Members
- object AsyncKVStoreWriter extends Serializable