public class BigQueryDirectDataSourceWriterContext extends Object implements DataSourceWriterContext
| Constructor and Description |
|---|
| `BigQueryDirectDataSourceWriterContext(BigQueryClient bigQueryClient, BigQueryClientFactory bigQueryWriteClientFactory, com.google.cloud.bigquery.TableId destinationTableId, String writeUUID, org.apache.spark.sql.SaveMode saveMode, org.apache.spark.sql.types.StructType sparkSchema, com.google.api.gax.retrying.RetrySettings bigqueryDataWriterHelperRetrySettings, com.google.common.base.Optional<String> traceId, boolean enableModeCheckForSchemaFields, com.google.common.collect.ImmutableMap<String,String> tableLabels, SchemaConvertersConfiguration schemaConvertersConfiguration, Optional<String> destinationTableKmsKeyName, boolean writeAtLeastOnce, PartitionOverwriteMode overwriteMode, org.apache.spark.SparkContext sparkContext)` |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `abort(WriterCommitMessageContext[] messages)` If not in WritingMode IGNORE_INPUTS, the BigQuery Storage Write API WriteClient is shut down. |
| `void` | `commit(WriterCommitMessageContext[] messages)` Determines the work to do based on the WritingMode: in IGNORE_INPUTS mode, nothing is done; otherwise, all streams are batch-committed using the BigQuery Storage Write API. Then, in OVERWRITE mode, BigQueryClient's overwriteDestinationWithTemporary replaces the destination table with all the data from the temporary table; in OVERWRITE mode with dynamic partitions enabled, overwriteDestinationWithTemporaryDynamicPartitons replaces only the required partitions; in ALL_ELSE mode, no further work is needed. |
| `DataWriterContextFactory<org.apache.spark.sql.catalyst.InternalRow>` | `createWriterContextFactory()` |
| `void` | `onDataWriterCommit(WriterCommitMessageContext message)` |
| `void` | `setTableInfo(com.google.cloud.bigquery.TableInfo tableInfo)` |
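The branching that the `commit()` description above walks through can be sketched as follows. This is a minimal illustrative stand-in, not the connector's implementation: the `WritingMode` enum and the `dynamicPartitionOverwrite` flag are assumptions modeling state the real class keeps internally.

```java
// Hypothetical sketch of the commit() decision flow described above.
public class CommitFlowSketch {
    enum WritingMode { IGNORE_INPUTS, OVERWRITE, ALL_ELSE }

    // Returns a description of the actions commit() would take for a given mode.
    static String describeCommit(WritingMode mode, boolean dynamicPartitionOverwrite) {
        if (mode == WritingMode.IGNORE_INPUTS) {
            return "no-op"; // nothing was written, so there is nothing to commit
        }
        // All other modes first batch-commit the streams via the Storage Write API.
        StringBuilder actions = new StringBuilder("batchCommitStreams");
        if (mode == WritingMode.OVERWRITE) {
            actions.append(dynamicPartitionOverwrite
                // replace only the partitions touched by this write
                ? " -> overwriteDestinationWithTemporaryDynamicPartitons"
                // swap the whole destination table with the temporary table
                : " -> overwriteDestinationWithTemporary");
        }
        // ALL_ELSE: data already landed in the destination table; nothing more to do.
        return actions.toString();
    }

    public static void main(String[] args) {
        System.out.println(describeCommit(WritingMode.IGNORE_INPUTS, false));
        System.out.println(describeCommit(WritingMode.OVERWRITE, true));
        System.out.println(describeCommit(WritingMode.ALL_ELSE, false));
    }
}
```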
**Methods inherited from class java.lang.Object**

`clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait`

**Methods inherited from interface DataSourceWriterContext**

`create, useCommitCoordinator`

**Constructor Detail**

`public BigQueryDirectDataSourceWriterContext(BigQueryClient bigQueryClient, BigQueryClientFactory bigQueryWriteClientFactory, com.google.cloud.bigquery.TableId destinationTableId, String writeUUID, org.apache.spark.sql.SaveMode saveMode, org.apache.spark.sql.types.StructType sparkSchema, com.google.api.gax.retrying.RetrySettings bigqueryDataWriterHelperRetrySettings, com.google.common.base.Optional<String> traceId, boolean enableModeCheckForSchemaFields, com.google.common.collect.ImmutableMap<String,String> tableLabels, SchemaConvertersConfiguration schemaConvertersConfiguration, Optional<String> destinationTableKmsKeyName, boolean writeAtLeastOnce, PartitionOverwriteMode overwriteMode, org.apache.spark.SparkContext sparkContext) throws IllegalArgumentException`

Throws: `IllegalArgumentException`

**Method Detail**

`public DataWriterContextFactory<org.apache.spark.sql.catalyst.InternalRow> createWriterContextFactory()`

Specified by: `createWriterContextFactory` in interface `DataSourceWriterContext`

`public void onDataWriterCommit(WriterCommitMessageContext message)`

Specified by: `onDataWriterCommit` in interface `DataSourceWriterContext`

`public void commit(WriterCommitMessageContext[] messages)`

Specified by: `commit` in interface `DataSourceWriterContext`
Parameters: `messages` - the BigQueryWriterCommitMessage array returned by the BigQueryDataWriters
See Also: `WritingMode`, `BigQueryClient.overwriteDestinationWithTemporary(TableId temporaryTableId, TableId destinationTableId)`

`public void abort(WriterCommitMessageContext[] messages)`

Specified by: `abort` in interface `DataSourceWriterContext`
Parameters: `messages` - the BigQueryWriterCommitMessage array returned by the BigQueryDataWriters
See Also: `BigQueryWriteClient`

`public void setTableInfo(com.google.cloud.bigquery.TableInfo tableInfo)`

Specified by: `setTableInfo` in interface `DataSourceWriterContext`

Copyright © 2024. All rights reserved.
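Taken together, the methods above imply a driver-side call order: the factory is created first, each finished task reports a commit message via `onDataWriterCommit`, and the job ends with either `commit` or `abort`. The sketch below is a hypothetical model of that ordering, not the connector's or Spark's actual code; the method names simply mirror the ones documented above.

```java
import java.util.ArrayList;
import java.util.List;

public class WriterLifecycleSketch {
    // Records the hypothetical driver-side call sequence for a write job.
    static List<String> runJob(int numTasks, boolean tasksSucceeded) {
        List<String> calls = new ArrayList<>();
        calls.add("createWriterContextFactory"); // driver builds the per-task writer factory
        for (int task = 0; task < numTasks; task++) {
            // one commit message per finished task
            calls.add("onDataWriterCommit(task-" + task + ")");
        }
        // finalize the streams on success, or shut down the WriteClient on failure
        calls.add(tasksSucceeded ? "commit" : "abort");
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(runJob(2, true));
        System.out.println(runJob(2, false));
    }
}
```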