Used only to insert data into BigQuery. Table metadata (existence, number of records, size, etc.) is retrieved through API calls without reading the existing data.
Responsible for recursively deleting the intermediate path. Extends Thread so that it can be registered as a JVM shutdown hook.
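A minimal sketch of the shutdown-hook pattern described above. The class name `IntermediateCleaner` is hypothetical, and `java.nio.file` stands in for Hadoop's `FileSystem.delete(path, true)` so the example is self-contained:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Hypothetical sketch: a Thread that recursively deletes an intermediate
// path, suitable for Runtime.getRuntime().addShutdownHook(...).
public class IntermediateCleaner extends Thread {
    private final Path pathToDelete;

    public IntermediateCleaner(Path pathToDelete) {
        this.pathToDelete = pathToDelete;
    }

    @Override
    public void run() {
        try (Stream<Path> walk = Files.walk(pathToDelete)) {
            // Sort deepest-first so children are deleted before their parents.
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> p.toFile().delete());
        } catch (IOException e) {
            // Cleanup on shutdown is best-effort; swallow failures.
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("intermediate");
        Files.createFile(tmp.resolve("part-00000"));
        IntermediateCleaner cleaner = new IntermediateCleaner(tmp);
        // Normally the hook runs at JVM exit; for the demo, unregister
        // it and invoke the cleanup directly.
        Runtime.getRuntime().addShutdownHook(cleaner);
        Runtime.getRuntime().removeShutdownHook(cleaner);
        cleaner.run();
        System.out.println(Files.exists(tmp)); // false
    }
}
```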
the path to delete
the Hadoop configuration
Options for defining BigQueryRelations.
Converts a Hadoop HDFS RemoteIterator to a Scala Iterator.
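The conversion above is needed because Hadoop's `RemoteIterator` declares `hasNext()`/`next()` with checked `IOException`s, so it cannot be consumed as a standard iterator directly. The sketch below shows the same adapter pattern in Java against `java.util.Iterator` (the connector targets a Scala `Iterator`); the local `RemoteIterator` interface mirrors Hadoop's signatures so the example is self-contained, and `RemoteIteratorAdapter` is a hypothetical name:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Arrays;
import java.util.Iterator;

// Mirrors org.apache.hadoop.fs.RemoteIterator's method signatures.
interface RemoteIterator<E> {
    boolean hasNext() throws IOException;
    E next() throws IOException;
}

// Wraps a RemoteIterator as a plain Iterator, rethrowing the checked
// IOExceptions unchecked to satisfy Iterator's contract.
public class RemoteIteratorAdapter<E> implements Iterator<E> {
    private final RemoteIterator<E> remote;

    public RemoteIteratorAdapter(RemoteIterator<E> remote) {
        this.remote = remote;
    }

    @Override
    public boolean hasNext() {
        try {
            return remote.hasNext();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public E next() {
        try {
            return remote.next();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Iterator<String> backing = Arrays.asList("a", "b").iterator();
        RemoteIterator<String> remote = new RemoteIterator<String>() {
            public boolean hasNext() { return backing.hasNext(); }
            public String next() { return backing.next(); }
        };
        Iterator<String> it = new RemoteIteratorAdapter<>(remote);
        StringBuilder sb = new StringBuilder();
        while (it.hasNext()) sb.append(it.next());
        System.out.println(sb); // ab
    }
}
```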
Static helpers for working with BigQuery.
This object was generated by sbt-buildinfo.
Stateless converters for converting between Spark and BigQuery types.
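To illustrate what such a type converter does, here is a hedged sketch of a small subset of the BigQuery-to-Spark mapping as name strings; the real converters produce Spark `DataType` objects, and the class name `TypeNames` is hypothetical:

```java
import java.util.Map;

public class TypeNames {
    // Illustrative subset of the BigQuery -> Spark SQL type mapping.
    static final Map<String, String> BQ_TO_SPARK = Map.of(
        "STRING", "StringType",
        "INTEGER", "LongType",      // BigQuery INTEGER is 64-bit
        "FLOAT", "DoubleType",      // BigQuery FLOAT is 64-bit
        "BOOLEAN", "BooleanType",
        "TIMESTAMP", "TimestampType",
        "DATE", "DateType",
        "BYTES", "BinaryType");

    // Stateless lookup: same input always yields the same output.
    public static String toSparkType(String bigQueryType) {
        String spark = BQ_TO_SPARK.get(bigQueryType);
        if (spark == null) {
            throw new IllegalArgumentException("Unsupported type: " + bigQueryType);
        }
        return spark;
    }

    public static void main(String[] args) {
        System.out.println(toSparkType("INTEGER")); // LongType
    }
}
```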
Resolvers for SparkBigQueryOptions.