class BigQuerySparkJob extends SparkJob with BigQueryJobBase

Linear Supertypes
BigQueryJobBase, SparkJob, JobBase, StrictLogging, AnyRef, Any

Instance Constructors

  1. new BigQuerySparkJob(cliConfig: BigQueryLoadConfig, maybeSchema: Option[Schema] = None)(implicit settings: Settings)

Type Members

  1. type JdbcConfigName = String
    Definition Classes
    JobBase

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def analyze(fullTableName: String): Any
    Attributes
    protected
    Definition Classes
    SparkJob
  5. def applyTableIamPolicy(tableId: TableId, rls: RowLevelSecurity): Policy

    To set access control on a table or view, we can use an Identity and Access Management (IAM) policy. After you create a table or view, you can set its policy with a set-iam-policy call. For each call, we compare the existing policy with the one defined in the YAML file: if they are equal, we do nothing; otherwise we update the table policy.

    Definition Classes
    BigQueryJobBase
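
    A minimal sketch of the compare-then-update flow described above, using the google-cloud-bigquery client. The method name `syncTablePolicy` and the etag handling are illustrative assumptions, not the actual implementation:

    ```scala
    import com.google.cloud.Policy
    import com.google.cloud.bigquery.{BigQuery, TableId}

    // Hypothetical sketch: fetch the current policy, compare it with the
    // desired one (as derived from the YAML file), and only call
    // setIamPolicy when they differ.
    def syncTablePolicy(bigquery: BigQuery, tableId: TableId, wanted: Policy): Policy = {
      val existing = bigquery.getIamPolicy(tableId)
      if (existing.getBindings == wanted.getBindings)
        existing // policies match: do nothing
      else
        // carry over the etag so the update is applied against the version we read
        bigquery.setIamPolicy(tableId, wanted.toBuilder.setEtag(existing.getEtag).build())
    }
    ```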
  6. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  7. val bigquery: BigQuery
    Definition Classes
    BigQueryJobBase
  8. val bqTable: String
    Definition Classes
    BigQueryJobBase
  9. val bucket: String
  10. val cliConfig: BigQueryLoadConfig
    Definition Classes
    BigQuerySparkJob → BigQueryJobBase
  11. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  12. val conf: Configuration
  13. def createSparkViews(views: Views, sqlParameters: Map[String, String]): Unit
    Attributes
    protected
    Definition Classes
    SparkJob
  14. val datasetId: DatasetId
    Definition Classes
    BigQueryJobBase
  15. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  16. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  17. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  18. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  19. def getOrCreateDataset(): Dataset
    Definition Classes
    BigQueryJobBase
  20. def getOrCreateTable(dataFrame: Option[DataFrame], maybeSchema: Option[Schema]): (Table, StandardTableDefinition)
  21. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  22. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  23. val logger: Logger
    Attributes
    protected
    Definition Classes
    StrictLogging
  24. def name: String
    Definition Classes
    BigQuerySparkJob → JobBase
  25. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  26. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  27. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  28. def parseViewDefinition(valueWithEnv: String): (SinkType, Option[JdbcConfigName], String)

    valueWithEnv

    : the view reference, in the form [SinkType:[configName:]]viewName

    returns

    : (SinkType, configName, viewName)

    Attributes
    protected
    Definition Classes
    JobBase
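
    The `[SinkType:[configName:]]viewName` format above could be parsed roughly as follows. This is a hypothetical sketch, not the actual implementation; in particular, the fallback sink type `"SPARK"` for a bare view name is an assumption:

    ```scala
    // Illustrative parser for "[SinkType:[configName:]]viewName" references.
    def parseRef(value: String): (String, Option[String], String) =
      value.split(':').toList match {
        case sink :: config :: view :: Nil => (sink, Some(config), view) // fully qualified
        case sink :: view :: Nil           => (sink, None, view)         // no config name
        case view :: Nil                   => ("SPARK", None, view)      // assumed default sink
        case _ =>
          throw new IllegalArgumentException(s"Invalid view reference: $value")
      }
    ```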
  29. def partitionDataset(dataset: DataFrame, partition: List[String]): DataFrame
    Attributes
    protected
    Definition Classes
    SparkJob
  30. def partitionedDatasetWriter(dataset: DataFrame, partition: List[String]): DataFrameWriter[Row]

    Partition a dataset using dataset columns. To partition the dataset using the ingestion time, use the reserved column names:

    • comet_date
    • comet_year
    • comet_month
    • comet_day
    • comet_hour
    • comet_minute

    These columns are renamed to "date", "year", "month", "day", "hour" and "minute" in the dataset, and their values are set to the current date/time.
    dataset

    : Input dataset

    partition

    : list of columns to use for partitioning.

    returns

    : a DataFrameWriter[Row] configured with the partition columns

    Attributes
    protected
    Definition Classes
    SparkJob
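
    A hedged usage sketch of the ingestion-time partitioning described above. The input DataFrame `df` and the output path are assumptions for illustration; the method itself requires a concrete SparkJob instance since it is protected:

    ```scala
    import org.apache.spark.sql.{DataFrameWriter, Row}

    // Assuming `df` is an input DataFrame available in scope.
    // The reserved comet_* names request ingestion-time partitioning;
    // they are renamed to year/month/day and filled with the current date.
    val writer: DataFrameWriter[Row] =
      partitionedDatasetWriter(df, List("comet_year", "comet_month", "comet_day"))

    // The resulting layout would resemble .../year=YYYY/month=MM/day=DD/
    writer.parquet("/tmp/output")
    ```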
  31. def prepareConf(): Configuration
  32. def prepareRLS(): List[String]
    Definition Classes
    BigQueryJobBase
  33. val projectId: String
    Definition Classes
    BigQuerySparkJob → BigQueryJobBase
  34. def registerUdf(udf: String): Unit
    Attributes
    protected
    Definition Classes
    SparkJob
  35. def run(): Try[JobResult]

    Forces any Spark job to implement its entry point within the "run" method.

    returns

    : the result of the job, wrapped in a Try

    Definition Classes
    BigQuerySparkJob → JobBase
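
    A minimal sketch of invoking the entry point, assuming a `cliConfig` and implicit `settings` are already available from the caller:

    ```scala
    import scala.util.{Failure, Success}

    // cliConfig: BigQueryLoadConfig and an implicit Settings are assumed in scope.
    val job = new BigQuerySparkJob(cliConfig, maybeSchema = None)
    job.run() match {
      case Success(result) => logger.info(s"BigQuery load succeeded: $result")
      case Failure(e)      => throw new Exception("BigQuery load failed", e)
    }
    ```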
  36. def runJob(statement: String, location: String): Job
    Definition Classes
    BigQueryJobBase
  37. def runSparkConnector(): Try[SparkJobResult]
  38. lazy val session: SparkSession
    Definition Classes
    SparkJob
  39. implicit val settings: Settings
    Definition Classes
    BigQuerySparkJob → JobBase
  40. lazy val sparkEnv: SparkEnv
    Definition Classes
    SparkJob
  41. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  42. val tableId: TableId
    Definition Classes
    BigQueryJobBase
  43. def timePartitioning(partitionField: String, days: Option[Int] = None, requirePartitionFilter: Boolean): Builder
    Definition Classes
    BigQueryJobBase
  44. def toString(): String
    Definition Classes
    AnyRef → Any
  45. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  46. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  47. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
