class FlinkJob[T] extends AnyRef
Flink job that processes a single streaming GroupBy and writes the results (raw events in the untiled case, pre-aggregates in the tiled case) to the KV store. At a high level, the operators are structured as follows:

Kafka source -> Spark expression eval -> Avro conversion -> KV store writer

- Kafka source - reads objects of type T (a specific case class, Thrift, or Proto) from a Kafka topic
- Spark expression eval - evaluates the Spark SQL expressions in the GroupBy, projecting and filtering the input data
- Avro conversion - converts the Spark expression eval output into a form that can be written to the KV store (a PutRequest object)
- KV store writer - writes the PutRequest objects to the KV store using the AsyncDataStream API

In the untiled version there are no shuffles, so the job ends up as a single node in the Flink DAG (with the above four operators chained, at the parallelism injected by the user).
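The four-stage flow above can be sketched with plain Scala collections standing in for Flink's DataStream API. Everything here (Event, PutRequest, the stage functions) is an illustrative stand-in, not the real Chronon/Flink types:

```scala
object PipelineSketch {
  // Stand-in for the deserialized Kafka record type T
  case class Event(userId: String, amount: Double)

  // Stand-in for the KV-store write payload
  case class PutRequest(key: String, value: String)

  // 1. Kafka source: here just a fixed list of events
  val source: List[Event] =
    List(Event("u1", 10.0), Event("u2", -5.0), Event("u1", 3.0))

  // 2. Spark expression eval: project/filter, like the GroupBy's SQL exprs
  def exprEval(events: List[Event]): List[Event] =
    events.filter(_.amount > 0)

  // 3. Avro conversion: turn each surviving event into a PutRequest
  def toPutRequest(e: Event): PutRequest =
    PutRequest(e.userId, e.amount.toString)

  // 4. KV store writer: here just collects the requests instead of writing
  def run(): List[PutRequest] =
    exprEval(source).map(toPutRequest)
}
```

Because each stage is a per-record map or filter, nothing forces a repartition, which is why the real untiled job chains into a single Flink node.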
- T
- The input data type
Instance Constructors
- new FlinkJob(eventSrc: FlinkSource[T], sinkFn: RichAsyncFunction[PutRequest, WriteResponse], groupByServingInfoParsed: GroupByServingInfoParsed, encoder: Encoder[T], parallelism: Int)
  - eventSrc - provider of a Flink DataStream[T] for the given topic and feature group
  - sinkFn - async Flink writer function used to write to the KV store
  - groupByServingInfoParsed - the GroupBy we are working with
  - encoder - Spark Encoder for the input data type
  - parallelism - parallelism to use for the Flink job
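The constructor is pure dependency injection: the source, sink, and conversion logic are handed in, and the job only wires them together. A hedged sketch of that wiring, with toy traits in place of the real FlinkSource, RichAsyncFunction, and Encoder types (none of these names come from the actual API):

```scala
object JobWiringSketch {
  // Plays the FlinkSource[T] role: produces the input records
  trait EventSource[T] { def poll(): List[T] }

  // Plays the async KV writer role: turns a request into a response
  trait KvSink[A, R] { def write(a: A): R }

  case class PutRequest(key: String, value: String)
  case class WriteResponse(ok: Boolean)

  // A toy "job" that, like FlinkJob, just holds its injected collaborators
  class Job[T](src: EventSource[T],
               sink: KvSink[PutRequest, WriteResponse],
               toPut: T => PutRequest,   // stands in for expr eval + Avro conversion
               parallelism: Int) {
    require(parallelism > 0, "parallelism must be positive")
    def run(): List[WriteResponse] = src.poll().map(toPut).map(sink.write)
  }
}
```

Usage of the sketch:

```scala
import JobWiringSketch._
val src  = new EventSource[String] { def poll() = List("a", "b") }
val sink = new KvSink[PutRequest, WriteResponse] { def write(p: PutRequest) = WriteResponse(true) }
val job  = new Job[String](src, sink, s => PutRequest(s, s.toUpperCase), 1)
job.run()
```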
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- val exprEval: SparkExpressionEvalFn[T]
  - Attributes: protected
- val featureGroupName: String
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- val kafkaTopic: String
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- def runGroupByJob(env: StreamExecutionEnvironment): DataStream[WriteResponse]
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toString(): String
  - Definition Classes: AnyRef → Any
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()