
class MetricsJob extends SparkJob

Linear Supertypes
SparkJob, JobBase, StrictLogging, AnyRef, Any

Instance Constructors

  1. new MetricsJob(domain: Domain, schema: Schema, stage: Stage, storageHandler: StorageHandler, schemaHandler: SchemaHandler)(implicit settings: Settings)

    domain: Domain name
    schema: Schema
    stage: Stage (unit / global)
    storageHandler: Storage Handler
    schemaHandler: Schema Handler

Type Members

  1. type JdbcConfigName = String
    Definition Classes
    JobBase

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def analyze(fullTableName: String): Any
    Attributes
    protected
    Definition Classes
    SparkJob
  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  7. def createSparkViews(views: Views, sqlParameters: Map[String, String]): Unit
    Attributes
    protected
    Definition Classes
    SparkJob
  8. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. def lockPath(path: String): Path
  15. val logger: Logger
    Attributes
    protected
    Definition Classes
    StrictLogging
  16. def metricsPath(path: String): Path

    Builds the metrics save path.

    path: path where metrics are stored
    returns: path where the metrics for the specified schema are stored
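The page does not show how the schema-specific path is derived. One plausible sketch, assuming the path argument is a template containing `{domain}` and `{schema}` placeholders (hypothetical placeholder names, not confirmed by this page), is:

```scala
// Hypothetical sketch of metricsPath: substitute domain/schema placeholders
// in a configured path template. The placeholder names are assumptions.
def metricsPathSketch(template: String, domainName: String, schemaName: String): String =
  template
    .replace("{domain}", domainName)
    .replace("{schema}", schemaName)
```

A template without placeholders would be returned unchanged under this sketch.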

  17. def name: String
    Definition Classes
    MetricsJob → JobBase
  18. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  19. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  20. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  21. def parseViewDefinition(valueWithEnv: String): (SinkType, Option[JdbcConfigName], String)

    valueWithEnv: in the form [SinkType:[configName:]]viewName
    returns: (SinkType, configName, viewName)

    Attributes
    protected
    Definition Classes
    JobBase
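The [SinkType:[configName:]]viewName rule can be sketched in plain Scala. Plain strings stand in for the SinkType enum, and the sink is an Option here, whereas the real method presumably falls back to a default SinkType when the prefix is absent:

```scala
// Sketch of the [SinkType:[configName:]]viewName parsing rule.
// Strings stand in for the SinkType enum (an assumption).
def parseView(valueWithEnv: String): (Option[String], Option[String], String) =
  valueWithEnv.split(':') match {
    case Array(viewName)                   => (None, None, viewName)
    case Array(sink, viewName)             => (Some(sink), None, viewName)
    case Array(sink, configName, viewName) => (Some(sink), Some(configName), viewName)
    case _ =>
      throw new IllegalArgumentException(s"Invalid view definition: $valueWithEnv")
  }
```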
  22. def partitionDataset(dataset: DataFrame, partition: List[String]): DataFrame
    Attributes
    protected
    Definition Classes
    SparkJob
  23. def partitionedDatasetWriter(dataset: DataFrame, partition: List[String]): DataFrameWriter[Row]

    Partition a dataset using dataset columns. To partition the dataset using the ingestion time, use the reserved column names:

    • comet_date
    • comet_year
    • comet_month
    • comet_day
    • comet_hour
    • comet_minute

    These columns are renamed to "date", "year", "month", "day", "hour" and "minute" in the dataset, and their values are set to the current date/time.

    dataset: Input dataset
    partition: list of columns to use for partitioning
    returns: a DataFrameWriter partitioned by the given columns

    Attributes
    protected
    Definition Classes
    SparkJob
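The reserved-column renaming described above can be sketched as a plain name mapping. This is only the renaming rule; the real method additionally fills the columns with the current date/time and returns a Spark DataFrameWriter:

```scala
// Sketch of the reserved ingestion-time column renaming described above.
val reservedToOutput: Map[String, String] = Map(
  "comet_date"   -> "date",
  "comet_year"   -> "year",
  "comet_month"  -> "month",
  "comet_day"    -> "day",
  "comet_hour"   -> "hour",
  "comet_minute" -> "minute"
)

// Non-reserved partition columns are kept as-is.
def outputColumn(name: String): String = reservedToOutput.getOrElse(name, name)
```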
  24. def registerUdf(udf: String): Unit
    Attributes
    protected
    Definition Classes
    SparkJob
  25. def run(dataUse: DataFrame, timestamp: Timestamp): Try[SparkJobResult]
  26. def run(): Try[JobResult]

    Forces every Spark job to implement its entry point within the "run" method.

    returns: the job result, wrapped in a Try

    Definition Classes
    MetricsJob → JobBase
  27. lazy val session: SparkSession
    Definition Classes
    SparkJob
  28. implicit val settings: Settings
    Definition Classes
    MetricsJob → JobBase
  29. lazy val sparkEnv: SparkEnv
    Definition Classes
    SparkJob
  30. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  31. def toString(): String
    Definition Classes
    AnyRef → Any
  32. def unionDisContMetric(discreteDataset: Option[DataFrame], continuousDataset: Option[DataFrame], domain: Domain, schema: Schema, count: Long, ingestionTime: Timestamp, stageState: Stage): MetricsDatasets

    Unifies the discrete and continuous metrics dataframes, then saves the result as Parquet.

    discreteDataset: dataframe that contains all the discrete metrics
    continuousDataset: dataframe that contains all the continuous metrics
    domain: name of the domain
    schema: schema of the initial data
    ingestionTime: ingestion time
    stageState: stage (unit / global)
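Either input dataframe may be absent. The Option-handling part of the union can be sketched with Lists standing in for Spark DataFrames (the real method unions DataFrames and writes the result out):

```scala
// Sketch of unioning two optional datasets; List stands in for DataFrame.
// Returns None only when both inputs are absent.
def unionOptional[A](a: Option[List[A]], b: Option[List[A]]): Option[List[A]] =
  List(a, b).flatten.reduceOption(_ ++ _)
```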

  33. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  34. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  35. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
