public class SchemaAwareStreamWriter<T> extends Object implements AutoCloseable
| Modifier and Type | Class and Description |
|---|---|
| static class | SchemaAwareStreamWriter.Builder<T> |
| Modifier and Type | Method and Description |
|---|---|
| com.google.api.core.ApiFuture<AppendRowsResponse> | append(Iterable<T> items) - Writes a collection that contains objects to the BigQuery table by first converting the data to Protobuf messages, then using StreamWriter's append() to write the data at the current end of the stream. |
| com.google.api.core.ApiFuture<AppendRowsResponse> | append(Iterable<T> items, long offset) - Writes a collection that contains objects to the BigQuery table by first converting the data to Protobuf messages, then using StreamWriter's append() to write the data at the specified offset. |
| void | close() - Closes the underlying StreamWriter. |
| com.google.protobuf.Descriptors.Descriptor | getDescriptor() - Gets the current descriptor. |
| long | getInflightWaitSeconds() - Returns how long a request waits on the client side before it is sent to the server. |
| String | getLocation() - Gets the location of the destination. |
| Map<String,AppendRowsRequest.MissingValueInterpretation> | getMissingValueInterpretationMap() |
| String | getStreamName() |
| String | getWriterId() |
| boolean | isClosed() |
| boolean | isUserClosed() |
| static <T> SchemaAwareStreamWriter.Builder<T> | newBuilder(String streamOrTableName, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter) - Constructs a SchemaAwareStreamWriter builder with the TableSchema initialized by StreamWriter by default. |
| static <T> SchemaAwareStreamWriter.Builder<T> | newBuilder(String streamOrTableName, TableSchema tableSchema, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter) - Constructs a SchemaAwareStreamWriter builder. |
| static <T> SchemaAwareStreamWriter.Builder<T> | newBuilder(String streamOrTableName, TableSchema tableSchema, ToProtoConverter<T> toProtoConverter) - Constructs a SchemaAwareStreamWriter builder with the BigQuery client initialized by StreamWriter by default. |
| void | setMissingValueInterpretationMap(Map<String,AppendRowsRequest.MissingValueInterpretation> missingValueInterpretationMap) - Sets the missing value interpretation map for the SchemaAwareStreamWriter. |
public com.google.api.core.ApiFuture<AppendRowsResponse> append(Iterable<T> items) throws IOException, com.google.protobuf.Descriptors.DescriptorValidationException

Parameters:
- items - the collection that contains objects to be written

Throws:
- IOException
- com.google.protobuf.Descriptors.DescriptorValidationException

public com.google.api.core.ApiFuture<AppendRowsResponse> append(Iterable<T> items, long offset) throws IOException, com.google.protobuf.Descriptors.DescriptorValidationException

Parameters:
- items - the collection that contains objects to be written
- offset - offset for deduplication

Throws:
- IOException
- com.google.protobuf.Descriptors.DescriptorValidationException

public String getStreamName()
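The offset overload of append() is typically used for deduplication: each batch claims the current write position and advances it by the batch size. A minimal sketch of that bookkeeping, assuming a hypothetical OffsetTracker helper that is not part of the library:

```java
// Illustrative bookkeeping for append(items, offset): each batch is written at
// a claimed offset, and the next offset advances by the number of rows in the
// batch. OffsetTracker is a hypothetical helper, not a library class.
public class OffsetTracker {
    private long nextOffset;

    public OffsetTracker(long startOffset) {
        this.nextOffset = startOffset;
    }

    // Returns the offset at which this batch should be appended, then
    // advances the tracker by the batch size.
    public long claim(int batchSize) {
        long offset = nextOffset;
        nextOffset += batchSize;
        return offset;
    }

    public long getNextOffset() {
        return nextOffset;
    }
}
```

In real code the claimed offset would be passed as the second argument to append(), and a mismatch with the server's end of stream would surface as an error on the returned ApiFuture.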
public String getWriterId()
public com.google.protobuf.Descriptors.Descriptor getDescriptor()
public String getLocation()
public long getInflightWaitSeconds()
public void setMissingValueInterpretationMap(Map<String,AppendRowsRequest.MissingValueInterpretation> missingValueInterpretationMap)
Parameters:
- missingValueInterpretationMap - the missing value interpretation map used by the SchemaAwareStreamWriter

public Map<String,AppendRowsRequest.MissingValueInterpretation> getMissingValueInterpretationMap()
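The map passed to setMissingValueInterpretationMap() keys column names to an interpretation value. A sketch of its shape, using a hypothetical StandInInterpretation enum in place of the real proto enum AppendRowsRequest.MissingValueInterpretation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the map accepted by setMissingValueInterpretationMap(): keys are
// column names, values say how a missing value for that column is written.
// StandInInterpretation is a hypothetical stand-in for the proto enum
// AppendRowsRequest.MissingValueInterpretation.
public class MissingValueMapDemo {
    public enum StandInInterpretation { NULL_VALUE, DEFAULT_VALUE }

    public static Map<String, StandInInterpretation> build() {
        Map<String, StandInInterpretation> m = new HashMap<>();
        // Missing "created_at" values take the column's default.
        m.put("created_at", StandInInterpretation.DEFAULT_VALUE);
        // Missing "note" values are written as NULL.
        m.put("note", StandInInterpretation.NULL_VALUE);
        return m;
    }
}
```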
public static <T> SchemaAwareStreamWriter.Builder<T> newBuilder(String streamOrTableName, TableSchema tableSchema, ToProtoConverter<T> toProtoConverter)

The table schema passed in will be updated automatically when there is a schema update event. When used for Writer creation, it should be the latest schema. So when you are trying to reuse a stream, use newBuilder(String streamOrTableName, BigQueryWriteClient client) instead, so the created Writer is based on a fresh schema.

Parameters:
- streamOrTableName - name of the stream, which must follow "projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+", or a table name following "projects/[^/]+/datasets/[^/]+/tables/[^/]+"
- tableSchema - the schema of the table when the stream was created, which is passed back through WriteStream

public static <T> SchemaAwareStreamWriter.Builder<T> newBuilder(String streamOrTableName, TableSchema tableSchema, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter)
The table schema passed in will be updated automatically when there is a schema update event. When used for Writer creation, it should be the latest schema. So when you are trying to reuse a stream, use newBuilder(String streamOrTableName, BigQueryWriteClient client) instead, so the created Writer is based on a fresh schema.

Parameters:
- streamOrTableName - name of the stream, which must follow "projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+"
- tableSchema - the schema of the table when the stream was created, which is passed back through WriteStream
- client - BigQueryWriteClient

public static <T> SchemaAwareStreamWriter.Builder<T> newBuilder(String streamOrTableName, BigQueryWriteClient client, ToProtoConverter<T> toProtoConverter)
Parameters:
- streamOrTableName - name of the stream, which must follow "projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+"
- client - BigQueryWriteClient

public void close()

Closes the underlying StreamWriter.
Specified by: close in interface AutoCloseable

public boolean isClosed()
public boolean isUserClosed()
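The stream and table name formats required by newBuilder can be checked with plain java.util.regex. The two patterns below are quoted from the parameter docs above; the StreamNameCheck helper itself is illustrative, not part of the library:

```java
import java.util.regex.Pattern;

// Name formats accepted by newBuilder, quoted from the parameter docs.
// StreamNameCheck is an illustrative helper, not a library class.
public class StreamNameCheck {
    static final Pattern STREAM_NAME =
            Pattern.compile("projects/[^/]+/datasets/[^/]+/tables/[^/]+/streams/[^/]+");
    static final Pattern TABLE_NAME =
            Pattern.compile("projects/[^/]+/datasets/[^/]+/tables/[^/]+");

    // True if the argument is a well-formed stream name or a bare table name.
    public static boolean isValid(String streamOrTableName) {
        return STREAM_NAME.matcher(streamOrTableName).matches()
                || TABLE_NAME.matcher(streamOrTableName).matches();
    }
}
```

Note that Matcher.matches() anchors the whole input, so a stream name never passes as a bare table name and vice versa.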
Copyright © 2023 Google LLC. All rights reserved.