class BigQuerySparkJob extends SparkJob with BigQueryJobBase
Linear Supertypes
- BigQueryJobBase, SparkJob, JobBase, StrictLogging, AnyRef, Any
Instance Constructors
- new BigQuerySparkJob(cliConfig: BigQueryLoadConfig, maybeSchema: Option[Schema] = None)(implicit settings: Settings)
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def analyze(fullTableName: String): Any
  - Attributes: protected
  - Definition Classes: SparkJob
- def applyTableIamPolicy(tableId: TableId, rls: RowLevelSecurity): Policy
  To set access control on a table or view, we use an Identity and Access Management (IAM) policy. After a table or view is created, its policy can be set with a set-iam-policy call. On each call, the existing policy is compared with the one defined in the YAML file: if they are equal, nothing is done; otherwise the table policy is updated.
  - Definition Classes: BigQueryJobBase
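The compare-before-update rule described above can be sketched generically (a standalone illustration; `P` stands in for `com.google.cloud.Policy`, and `set` would wrap the actual `setIamPolicy` call in the real code):

```scala
// Sketch: apply a desired policy only when it differs from the existing
// one, so unchanged policies trigger no write.
object IamPolicyUpdate {
  def updateIfChanged[P](existing: P, desired: P)(set: P => P): P =
    if (existing == desired) existing // policies match: do nothing
    else set(desired)                 // policies differ: push the YAML-defined one
}
```

With this shape, `set` is only invoked when the existing and desired policies actually differ.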
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- val bigquery: BigQuery
  - Definition Classes: BigQueryJobBase
- val bqTable: String
  - Definition Classes: BigQueryJobBase
- val bucket: String
- val cliConfig: BigQueryLoadConfig
  - Definition Classes: BigQuerySparkJob → BigQueryJobBase
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- val conf: Configuration
- def createSparkViews(views: Views, sqlParameters: Map[String, String]): Unit
  - Attributes: protected
  - Definition Classes: SparkJob
- val datasetId: DatasetId
  - Definition Classes: BigQueryJobBase
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def getOrCreateDataset(): Dataset
  - Definition Classes: BigQueryJobBase
- def getOrCreateTable(dataFrame: Option[DataFrame], maybeSchema: Option[Schema]): (Table, StandardTableDefinition)
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- val logger: Logger
  - Attributes: protected
  - Definition Classes: StrictLogging
- def name: String
  - Definition Classes: BigQuerySparkJob → JobBase
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- def parseViewDefinition(valueWithEnv: String): (SinkType, Option[JdbcConfigName], String)
  - valueWithEnv: a value in the form [SinkType:[configName:]]viewName
  - returns: (SinkType, configName, viewName)
  - Attributes: protected
  - Definition Classes: JobBase
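As a hedged illustration of the `[SinkType:[configName:]]viewName` convention, here is a minimal standalone sketch of such a parser (the object name and the string-based tuple shape are assumptions for illustration; the real method resolves proper `SinkType` and `JdbcConfigName` values):

```scala
// Minimal sketch of parsing "[SinkType:[configName:]]viewName".
// Sink and config names are returned as plain strings here.
object ViewDefinition {
  def parse(value: String): (Option[String], Option[String], String) =
    value.split(":", 3) match {
      case Array(view)               => (None, None, view)               // bare view name
      case Array(sink, view)         => (Some(sink), None, view)         // e.g. "BQ:myview"
      case Array(sink, config, view) => (Some(sink), Some(config), view) // e.g. "JDBC:h2:myview"
    }
}
```

For example, `ViewDefinition.parse("JDBC:h2:myview")` yields `(Some("JDBC"), Some("h2"), "myview")`.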
- def partitionDataset(dataset: DataFrame, partition: List[String]): DataFrame
  - Attributes: protected
  - Definition Classes: SparkJob
- def partitionedDatasetWriter(dataset: DataFrame, partition: List[String]): DataFrameWriter[Row]
  Partition a dataset using dataset columns. To partition the dataset by ingestion time, use the reserved column names:
  - comet_date
  - comet_year
  - comet_month
  - comet_day
  - comet_hour
  - comet_minute
  These columns are renamed to "date", "year", "month", "day", "hour" and "minute" in the dataset, and their values are set to the current date/time.
  - dataset: the input dataset
  - partition: the list of columns to use for partitioning
  - returns: a DataFrameWriter[Row] partitioned by the given columns
  - Attributes: protected
  - Definition Classes: SparkJob
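The reserved-column renaming described above can be sketched as a simple name mapping (a standalone illustration, not the actual Spark code, which additionally fills these columns with the current date/time before writing):

```scala
// Sketch: map the reserved ingestion-time column names to the names
// used in the output dataset; any other column keeps its name.
object PartitionColumns {
  val reserved: Map[String, String] = Map(
    "comet_date"   -> "date",
    "comet_year"   -> "year",
    "comet_month"  -> "month",
    "comet_day"    -> "day",
    "comet_hour"   -> "hour",
    "comet_minute" -> "minute"
  )

  def outputName(column: String): String = reserved.getOrElse(column, column)
}
```

So a partition list such as `List("comet_year", "comet_month")` would produce output columns `year` and `month`, while an ordinary column name passes through unchanged.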
- def prepareConf(): Configuration
- def prepareRLS(): List[String]
  - Definition Classes: BigQueryJobBase
- val projectId: String
  - Definition Classes: BigQuerySparkJob → BigQueryJobBase
- def registerUdf(udf: String): Unit
  - Attributes: protected
  - Definition Classes: SparkJob
- def run(): Try[JobResult]
  Forces any Spark job to implement its entry point within the "run" method.
  - returns: the job result, wrapped in a Try
  - Definition Classes: BigQuerySparkJob → JobBase
- def runJob(statement: String, location: String): Job
  - Definition Classes: BigQueryJobBase
- def runSparkConnector(): Try[SparkJobResult]
- lazy val session: SparkSession
  - Definition Classes: SparkJob
- implicit val settings: Settings
  - Definition Classes: BigQuerySparkJob → JobBase
- lazy val sparkEnv: SparkEnv
  - Definition Classes: SparkJob
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- val tableId: TableId
  - Definition Classes: BigQueryJobBase
- def timePartitioning(partitionField: String, days: Option[Int] = None, requirePartitionFilter: Boolean): Builder
  - Definition Classes: BigQueryJobBase
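A hedged sketch of the optional retention handled by `days`: a number of days is converted to a partition expiration in milliseconds, the unit expected by `TimePartitioning.Builder.setExpirationMs` in the google-cloud-bigquery client (the day-based granularity and this exact conversion are assumptions about the implementation):

```scala
// Sketch: convert an optional retention in days to the milliseconds
// value expected by TimePartitioning.Builder.setExpirationMs.
object Partitioning {
  def expirationMs(days: Option[Int]): Option[Long] =
    days.map(d => d.toLong * 24 * 60 * 60 * 1000)
}
```

The builder itself would then be assembled roughly as `TimePartitioning.newBuilder(Type.DAY).setField(partitionField).setRequirePartitionFilter(requirePartitionFilter)`, applying `setExpirationMs` only when a retention is given.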
- def toString(): String
  - Definition Classes: AnyRef → Any
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()