Package org.apache.beam.sdk.io.kafka
Class KafkaReadSchemaTransformConfiguration
- java.lang.Object
-
- org.apache.beam.sdk.io.kafka.KafkaReadSchemaTransformConfiguration
-
@DefaultSchema(org.apache.beam.sdk.schemas.AutoValueSchema.class)
public abstract class KafkaReadSchemaTransformConfiguration
extends java.lang.Object

Configuration for reading from a Kafka topic.

Internal only: This class is actively being worked on, and it will likely change. We provide no backwards-compatibility guarantees, and it should not be implemented outside the Beam repository.
-
-
Nested Class Summary
Nested Classes:
- static class KafkaReadSchemaTransformConfiguration.Builder: Builder for the KafkaReadSchemaTransformConfiguration.
-
Field Summary
Fields:
- static java.util.Set&lt;java.lang.String> VALID_DATA_FORMATS
- static java.lang.String VALID_FORMATS_STR
- static java.util.Set&lt;java.lang.String> VALID_START_OFFSET_VALUES
-
Constructor Summary
Constructors:
- KafkaReadSchemaTransformConfiguration()
-
Method Summary
Methods:
- static KafkaReadSchemaTransformConfiguration.Builder builder(): Instantiates a KafkaReadSchemaTransformConfiguration.Builder instance.
- abstract java.lang.String getAutoOffsetResetConfig()
- abstract java.lang.String getBootstrapServers(): The bootstrap servers for the Kafka consumer.
- abstract java.lang.String getConfluentSchemaRegistrySubject()
- abstract java.lang.String getConfluentSchemaRegistryUrl()
- abstract java.util.Map&lt;java.lang.String,java.lang.String> getConsumerConfigUpdates()
- abstract org.apache.beam.sdk.schemas.transforms.providers.ErrorHandling getErrorHandling()
- abstract java.lang.String getFileDescriptorPath()
- abstract java.lang.String getFormat()
- abstract java.lang.Integer getMaxReadTimeSeconds()
- abstract java.lang.String getMessageName()
- abstract java.lang.String getSchema()
- abstract java.lang.String getTopic(): The topic from which to read.
- void validate()
-
-
-
Field Detail
-
VALID_START_OFFSET_VALUES
public static final java.util.Set<java.lang.String> VALID_START_OFFSET_VALUES
-
VALID_FORMATS_STR
public static final java.lang.String VALID_FORMATS_STR
- See Also:
- Constant Field Values
-
VALID_DATA_FORMATS
public static final java.util.Set<java.lang.String> VALID_DATA_FORMATS
-
-
Method Detail
-
validate
public void validate()
-
builder
public static KafkaReadSchemaTransformConfiguration.Builder builder()
Instantiates a KafkaReadSchemaTransformConfiguration.Builder instance.
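A hedged sketch of typical usage. It assumes the Builder exposes setters mirroring the getters documented below (setBootstrapServers, setTopic, setFormat, setAutoOffsetResetConfig), as is conventional for AutoValue builders; the setter names are not listed on this page, and the topic, broker, and format values are illustrative only.

```java
// Sketch only: setter names assumed from AutoValue convention, not
// confirmed by this page. Values are placeholders.
KafkaReadSchemaTransformConfiguration config =
    KafkaReadSchemaTransformConfiguration.builder()
        .setBootstrapServers("broker1:9092,broker2:9092") // host:port list
        .setTopic("my-topic")                             // example topic
        .setFormat("AVRO")                                // RAW, AVRO, JSON, or PROTO
        .setAutoOffsetResetConfig("earliest")             // earliest, latest, or none
        .build();
config.validate(); // documented above; checks the assembled configuration
```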
-
getBootstrapServers
@SchemaFieldDescription("A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form `host1:port1,host2:port2,...`")
public abstract java.lang.String getBootstrapServers()

The bootstrap servers for the Kafka consumer.
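The documented `host1:port1,host2:port2,...` form can be checked up front with a small helper. This class is hypothetical (not part of Beam or Kafka); it only illustrates the expected string shape.

```java
import java.util.regex.Pattern;

// Hypothetical helper, not part of Beam: checks that a bootstrap-servers
// string matches the documented `host1:port1,host2:port2,...` form.
public class BootstrapServersCheck {
    private static final Pattern HOST_PORT =
        Pattern.compile("[A-Za-z0-9.\\-]+:\\d{1,5}");

    public static boolean isValid(String bootstrapServers) {
        if (bootstrapServers == null || bootstrapServers.isEmpty()) {
            return false;
        }
        for (String pair : bootstrapServers.split(",")) {
            if (!HOST_PORT.matcher(pair).matches()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValid("broker1:9092,broker2:9092")); // true
        System.out.println(isValid("broker1"));                   // false
    }
}
```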
-
getConfluentSchemaRegistryUrl
@Nullable public abstract java.lang.String getConfluentSchemaRegistryUrl()
-
getFormat
@SchemaFieldDescription("The encoding format for the data stored in Kafka. Valid options are: RAW, AVRO, JSON, PROTO") @Nullable public abstract java.lang.String getFormat()
-
getConfluentSchemaRegistrySubject
@Nullable public abstract java.lang.String getConfluentSchemaRegistrySubject()
-
getSchema
@SchemaFieldDescription("The schema in which the data is encoded in the Kafka topic. For AVRO data, this is a schema defined with AVRO schema syntax (https://avro.apache.org/docs/1.10.2/spec.html#schemas). For JSON data, this is a schema defined with JSON-schema syntax (https://json-schema.org/). If a URL to Confluent Schema Registry is provided, then this field is ignored, and the schema is fetched from Confluent Schema Registry.") @Nullable public abstract java.lang.String getSchema()
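For the AVRO case, the string passed here is an ordinary Avro schema declaration. The record and field names below are purely illustrative:

```json
{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
```

For JSON data the equivalent would be a JSON-schema document, per the links in the description above.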
-
getFileDescriptorPath
@SchemaFieldDescription("The path to the Protocol Buffer File Descriptor Set file. This file is used for schema definition and message serialization.") @Nullable public abstract java.lang.String getFileDescriptorPath()
-
getMessageName
@SchemaFieldDescription("The name of the Protocol Buffer message to be used for schema extraction and data conversion.") @Nullable public abstract java.lang.String getMessageName()
-
getAutoOffsetResetConfig
@SchemaFieldDescription("What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. (1) earliest: automatically reset the offset to the earliest offset. (2) latest: automatically reset the offset to the latest offset. (3) none: throw an exception to the consumer if no previous offset is found for the consumer's group.") @Nullable public abstract java.lang.String getAutoOffsetResetConfig()
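The three documented values can be checked against a fixed set before building the configuration. This helper is hypothetical; the class's own VALID_START_OFFSET_VALUES field presumably holds the same set, but its contents are not listed on this page.

```java
import java.util.Set;

// Hypothetical helper mirroring the three values documented for
// auto.offset.reset: earliest, latest, none.
public class OffsetResetCheck {
    static final Set<String> VALID = Set.of("earliest", "latest", "none");

    public static boolean isValid(String value) {
        return value != null && VALID.contains(value);
    }

    public static void main(String[] args) {
        System.out.println(isValid("earliest")); // true
        System.out.println(isValid("middle"));   // false
    }
}
```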
-
getConsumerConfigUpdates
@SchemaFieldDescription("A list of key-value pairs that act as configuration parameters for Kafka consumers. Most of these configurations will not be needed, but if you need to customize your Kafka consumer, you may use this. See a detailed list: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html") @Nullable public abstract java.util.Map<java.lang.String,java.lang.String> getConsumerConfigUpdates()
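A sketch of what such an overrides map might contain. The keys are standard Kafka consumer configuration names from the Confluent reference linked above; the values are placeholders, and passing the map through the Builder (e.g. a setConsumerConfigUpdates setter) is an assumption not confirmed on this page.

```java
import java.util.Map;

// Example consumer-config overrides; keys are standard Kafka consumer
// configuration names, values are placeholders.
public class ConsumerConfigExample {
    public static Map<String, String> overrides() {
        return Map.of(
            "group.id", "my-beam-pipeline",  // consumer group (example value)
            "max.poll.records", "500",       // cap on records per poll
            "session.timeout.ms", "45000");  // group session timeout
    }

    public static void main(String[] args) {
        System.out.println(overrides().size()); // 3
    }
}
```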
-
getTopic
public abstract java.lang.String getTopic()
The topic from which to read.
-
getMaxReadTimeSeconds
@SchemaFieldDescription("Upper bound of how long to read from Kafka.") @Nullable public abstract java.lang.Integer getMaxReadTimeSeconds()
-
getErrorHandling
@SchemaFieldDescription("This option specifies whether and where to output unwritable rows.") @Nullable public abstract org.apache.beam.sdk.schemas.transforms.providers.ErrorHandling getErrorHandling()
-
-