Uses of Interface
org.apache.camel.builder.component.dsl.KafkaComponentBuilderFactory.KafkaComponentBuilder
Packages that use KafkaComponentBuilderFactory.KafkaComponentBuilder:
- org.apache.camel.builder.component
- org.apache.camel.builder.component.dsl
Uses of KafkaComponentBuilderFactory.KafkaComponentBuilder in org.apache.camel.builder.component
Methods in org.apache.camel.builder.component that return KafkaComponentBuilderFactory.KafkaComponentBuilder:
- ComponentsBuilderFactory.kafka(): Kafka (camel-kafka). Send and receive messages to/from an Apache Kafka broker.
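The kafka() factory method above returns the builder, which can then be registered with a CamelContext. A minimal sketch, assuming the camel-componentdsl and camel-kafka jars are on the classpath and a hypothetical broker at localhost:9092:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.component.ComponentsBuilderFactory;
import org.apache.camel.impl.DefaultCamelContext;

public class KafkaComponentSetup {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Build a pre-configured Kafka component and register it under the
        // "kafka" scheme so endpoints such as kafka:myTopic will use it.
        ComponentsBuilderFactory.kafka()
                .brokers("localhost:9092")   // assumed broker address
                .register(context, "kafka");
    }
}
```

Registering under the scheme name means every kafka: endpoint in the context reuses this component-level configuration.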
Uses of KafkaComponentBuilderFactory.KafkaComponentBuilder in org.apache.camel.builder.component.dsl
Classes in org.apache.camel.builder.component.dsl that implement KafkaComponentBuilderFactory.KafkaComponentBuilder:
- static class KafkaComponentBuilderFactory.KafkaComponentBuilderImpl

Methods in org.apache.camel.builder.component.dsl that return KafkaComponentBuilderFactory.KafkaComponentBuilder:
- additionalProperties(Map<String, Object> additionalProperties): Sets additional properties for either the Kafka consumer or the Kafka producer in case they cannot be set directly on the Camel configuration (e.g. new Kafka properties not yet reflected in Camel). The properties have to be prefixed with additionalProperties., e.g. additionalProperties.transactional.id=12345&additionalProperties.schema.registry.url=http://localhost:8811/avro.
- allowManualCommit(boolean allowManualCommit): Whether to allow doing manual commits via KafkaManualCommit.
- autoCommitEnable(boolean autoCommitEnable): If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer.
- autoCommitIntervalMs(Integer autoCommitIntervalMs): The frequency in ms that the consumer offsets are committed to ZooKeeper.
- autoOffsetReset(String autoOffsetReset): What to do when there is no initial offset in ZooKeeper or an offset is out of range. earliest: automatically reset to the earliest offset; latest: automatically reset to the latest offset; fail: throw an exception to the consumer.
- autowiredEnabled(boolean autowiredEnabled): Whether autowiring is enabled.
- batching(boolean batching): Whether to use batching or streaming for processing.
- batchWithIndividualHeaders(boolean batchWithIndividualHeaders): If this feature is enabled and a single element of a batch is an Exchange or Message, the producer generates individual Kafka header values for it by using the batch Message to determine the values.
- breakOnFirstError(boolean breakOnFirstError): Controls what happens when a consumer is processing an exchange and it fails.
- bridgeErrorHandler(boolean bridgeErrorHandler): Allows bridging the consumer to the Camel routing Error Handler, which means any exception that occurs while the Camel consumer is trying to pick up incoming messages (or the like) is processed as a message and handled by the routing Error Handler.
- brokers(String brokers): URL of the Kafka brokers to use.
- bufferMemorySize(Integer bufferMemorySize): The total bytes of memory the producer can use to buffer records waiting to be sent to the server.
- checkCrcs(boolean checkCrcs): Automatically check the CRC32 of the records consumed.
- clientId(String clientId): The client id is a user-specified string sent with each request to help trace calls.
- commitTimeoutMs(Long commitTimeoutMs): The maximum time, in milliseconds, to wait for a synchronous commit to complete.
- compressionCodec(String compressionCodec): Specifies the compression codec for all data generated by this producer.
- configuration(org.apache.camel.component.kafka.KafkaConfiguration configuration): Allows pre-configuring the Kafka component with common options that the endpoints will reuse.
- connectionMaxIdleMs(Integer connectionMaxIdleMs): Close idle connections after the number of milliseconds specified by this config.
- consumerRequestTimeoutMs(Integer consumerRequestTimeoutMs): Controls the maximum amount of time the client will wait for the response of a request.
- consumersCount(int consumersCount): The number of consumers that connect to the Kafka server.
- createConsumerBackoffInterval(long createConsumerBackoffInterval): The delay in milliseconds to wait before trying again to create the Kafka consumer (kafka-client).
- createConsumerBackoffMaxAttempts(int createConsumerBackoffMaxAttempts): Maximum attempts to create the Kafka consumer (kafka-client) before eventually giving up and failing.
- deliveryTimeoutMs(Integer deliveryTimeoutMs): An upper bound on the time to report success or failure after a call to send() returns.
- enableIdempotence(boolean enableIdempotence): When set to true, the producer ensures that exactly one copy of each message is written to the stream.
- fetchMaxBytes(Integer fetchMaxBytes): The maximum amount of data the server should return for a fetch request.
- fetchMinBytes(Integer fetchMinBytes): The minimum amount of data the server should return for a fetch request.
- fetchWaitMaxMs(Integer fetchWaitMaxMs): The maximum amount of time the server will block before answering the fetch request if there isn't enough data to immediately satisfy fetch.min.bytes.
- groupId(String groupId): A string that uniquely identifies the group of consumer processes to which this consumer belongs.
- groupInstanceId(String groupInstanceId): A unique identifier of the consumer instance provided by the end user.
- headerDeserializer(org.apache.camel.component.kafka.serde.KafkaHeaderDeserializer headerDeserializer): To use a custom KafkaHeaderDeserializer to deserialize Kafka header values.
- headerFilterStrategy(org.apache.camel.spi.HeaderFilterStrategy headerFilterStrategy): To use a custom HeaderFilterStrategy to filter headers to and from the Camel message.
- headerSerializer(org.apache.camel.component.kafka.serde.KafkaHeaderSerializer headerSerializer): To use a custom KafkaHeaderSerializer to serialize Kafka header values.
- healthCheckConsumerEnabled(boolean healthCheckConsumerEnabled): Used for enabling or disabling all consumer-based health checks from this component.
- healthCheckProducerEnabled(boolean healthCheckProducerEnabled): Used for enabling or disabling all producer-based health checks from this component.
- heartbeatIntervalMs(Integer heartbeatIntervalMs): The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities.
- interceptorClasses(String interceptorClasses): Sets interceptors for producers or consumers.
- isolationLevel(String isolationLevel): Controls how to read messages written transactionally.
- KafkaComponentBuilderFactory.kafka(): Kafka (camel-kafka). Send and receive messages to/from an Apache Kafka broker.
- kafkaClientFactory(org.apache.camel.component.kafka.KafkaClientFactory kafkaClientFactory): Factory to use for creating org.apache.kafka.clients.consumer.KafkaConsumer and org.apache.kafka.clients.producer.KafkaProducer instances.
- kafkaManualCommitFactory(org.apache.camel.component.kafka.consumer.KafkaManualCommitFactory kafkaManualCommitFactory): Factory to use for creating KafkaManualCommit instances.
- kerberosBeforeReloginMinTime(Integer kerberosBeforeReloginMinTime): Login thread sleep time between refresh attempts.
- kerberosConfigLocation(String kerberosConfigLocation): Location of the Kerberos config file.
- kerberosInitCmd(String kerberosInitCmd): Kerberos kinit command path.
- kerberosPrincipalToLocalRules(String kerberosPrincipalToLocalRules): A list of rules for mapping from principal names to short names (typically operating system usernames).
- kerberosRenewJitter(Double kerberosRenewJitter): Percentage of random jitter added to the renewal time.
- kerberosRenewWindowFactor(Double kerberosRenewWindowFactor): The login thread sleeps until the specified window factor of time from the last refresh to the ticket's expiry has been reached, at which time it tries to renew the ticket.
- key(String key): The record key (or null if no key is specified).
- keyDeserializer(String keyDeserializer): Deserializer class for the key that implements the Deserializer interface.
- keySerializer(String keySerializer): The serializer class for keys (defaults to the same as for messages if nothing is given).
- lazyStartProducer(boolean lazyStartProducer): Whether the producer should be started lazily (on the first message).
- lingerMs(Integer lingerMs): The producer groups together any records that arrive between request transmissions into a single batched request.
- maxBlockMs(Integer maxBlockMs): Controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), commitTransaction() and abortTransaction() methods will block.
- maxInFlightRequest(Integer maxInFlightRequest): The maximum number of unacknowledged requests the client will send on a single connection before blocking.
- maxPartitionFetchBytes(Integer maxPartitionFetchBytes): The maximum amount of data per partition the server will return.
- maxPollIntervalMs(Integer maxPollIntervalMs): The maximum delay between invocations of poll() when using consumer group management.
- maxPollRecords(Integer maxPollRecords): The maximum number of records returned in a single call to poll().
- maxRequestSize(Integer maxRequestSize): The maximum size of a request.
- metadataMaxAgeMs(Integer metadataMaxAgeMs): The period of time in milliseconds after which a refresh of metadata is forced, even without any partition leadership changes, to proactively discover new brokers or partitions.
- metricReporters(String metricReporters): A list of classes to use as metrics reporters.
- metricsSampleWindowMs(Integer metricsSampleWindowMs): The window of time a metrics sample is computed over.
- noOfMetricsSample(Integer noOfMetricsSample): The number of samples maintained to compute metrics.
- offsetRepository(org.apache.camel.spi.StateRepository<String, String> offsetRepository): The offset repository to use to locally store the offset of each partition of the topic.
- partitionAssignor(String partitionAssignor): The class name of the partition assignment strategy that the client uses to distribute partition ownership among consumer instances when group management is used.
- partitioner(String partitioner): The partitioner class for partitioning messages among sub-topics.
- partitionerIgnoreKeys(boolean partitionerIgnoreKeys): Whether message keys should be ignored when computing the partition.
- partitionKey(Integer partitionKey): The partition to which the record will be sent (or null if no partition was specified).
- pollExceptionStrategy(org.apache.camel.component.kafka.PollExceptionStrategy pollExceptionStrategy): To use a custom strategy with the consumer to control how to handle exceptions thrown from the Kafka broker while polling messages.
- pollOnError(org.apache.camel.component.kafka.PollOnError pollOnError): What to do if Kafka threw an exception while polling for new messages.
- pollTimeoutMs(Long pollTimeoutMs): The timeout used when polling the KafkaConsumer.
- preValidateHostAndPort(boolean preValidateHostAndPort): Whether to eagerly validate that the broker host:port is valid and can be DNS-resolved to a known host when starting this consumer.
- producerBatchSize(Integer producerBatchSize): The producer attempts to batch records together into fewer requests whenever multiple records are being sent to the same partition.
- queueBufferingMaxMessages(Integer queueBufferingMaxMessages): The maximum number of unsent messages that can be queued up by the producer when using async mode before either the producer must be blocked or data must be dropped.
- receiveBufferBytes(Integer receiveBufferBytes): The size of the TCP receive buffer (SO_RCVBUF) to use when reading data.
- reconnectBackoffMaxMs(Integer reconnectBackoffMaxMs): The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect.
- reconnectBackoffMs(Integer reconnectBackoffMs): The amount of time to wait before attempting to reconnect to a given host.
- recordMetadata(boolean recordMetadata): Whether the producer should store the RecordMetadata results from sending to Kafka.
- requestRequiredAcks(String requestRequiredAcks): The number of acknowledgments the producer requires the leader to have received before considering a request complete.
- requestTimeoutMs(Integer requestTimeoutMs): The amount of time the broker will wait trying to meet the request.required.acks requirement before sending back an error to the client.
- retries(Integer retries): Setting a value greater than zero causes the client to resend any record that has failed to be sent due to a potentially transient error.
- retryBackoffMaxMs(Integer retryBackoffMaxMs): The maximum amount of time in milliseconds to wait when retrying a request to a broker that has repeatedly failed.
- retryBackoffMs(Integer retryBackoffMs): The amount of time to wait before attempting to retry a failed request to a given topic partition.
- saslJaasConfig(String saslJaasConfig): Exposes the Kafka sasl.jaas.config parameter. Example: org.apache.kafka.common.security.plain.PlainLoginModule required username=USERNAME password=PASSWORD;.
- saslKerberosServiceName(String saslKerberosServiceName): The Kerberos principal name that Kafka runs as.
- saslMechanism(String saslMechanism): The Simple Authentication and Security Layer (SASL) mechanism used.
- schemaRegistryURL(String schemaRegistryURL): URL of the schema registry servers to use.
- securityProtocol(String securityProtocol): Protocol used to communicate with brokers.
- seekTo(org.apache.camel.component.kafka.SeekPolicy seekTo): Sets whether the KafkaConsumer should read from the beginning or the end on startup; SeekPolicy.BEGINNING reads from the beginning.
- sendBufferBytes(Integer sendBufferBytes): Socket write buffer size.
- sessionTimeoutMs(Integer sessionTimeoutMs): The timeout used to detect failures when using Kafka's group management facilities.
- shutdownTimeout(int shutdownTimeout): Timeout in milliseconds to wait gracefully for the consumer or producer to shut down and terminate its worker threads.
- specificAvroReader(boolean specificAvroReader): Enables the use of a specific Avro reader for use with the Avro Deserializer implementation (see the multiple schema registries documentation).
- sslCipherSuites(String sslCipherSuites): A list of cipher suites.
- sslContextParameters(org.apache.camel.support.jsse.SSLContextParameters sslContextParameters): SSL configuration using a Camel SSLContextParameters object.
- sslEnabledProtocols(String sslEnabledProtocols): The list of protocols enabled for SSL connections.
- sslEndpointAlgorithm(String sslEndpointAlgorithm): The endpoint identification algorithm to validate the server hostname using the server certificate.
- sslKeymanagerAlgorithm(String sslKeymanagerAlgorithm): The algorithm used by the key manager factory for SSL connections.
- sslKeyPassword(String sslKeyPassword): The password of the private key in the key store file or the PEM key specified in sslKeystoreKey.
- sslKeystoreLocation(String sslKeystoreLocation): The location of the key store file.
- sslKeystorePassword(String sslKeystorePassword): The store password for the key store file.
- sslKeystoreType(String sslKeystoreType): The file format of the key store file.
- sslProtocol(String sslProtocol): The SSL protocol used to generate the SSLContext.
- sslProvider(String sslProvider): The name of the security provider used for SSL connections.
- sslTrustmanagerAlgorithm(String sslTrustmanagerAlgorithm): The algorithm used by the trust manager factory for SSL connections.
- sslTruststoreLocation(String sslTruststoreLocation): The location of the trust store file.
- sslTruststorePassword(String sslTruststorePassword): The password for the trust store file.
- sslTruststoreType(String sslTruststoreType): The file format of the trust store file.
- subscribeConsumerBackoffInterval(long subscribeConsumerBackoffInterval): The delay in milliseconds to wait before trying again to subscribe to the Kafka broker.
- subscribeConsumerBackoffMaxAttempts(int subscribeConsumerBackoffMaxAttempts): The maximum number of attempts the Kafka consumer will make to subscribe to the Kafka broker before eventually giving up and failing.
- synchronous(boolean synchronous): Sets whether synchronous processing should be strictly used.
- topicIsPattern(boolean topicIsPattern): Whether the topic is a pattern (regular expression).
- useGlobalSslContextParameters(boolean useGlobalSslContextParameters): Enable usage of global SSL context parameters.
- useIterator(boolean useIterator): Sets whether sending to Kafka should send the message body as a single record, or use a java.util.Iterator to send multiple records (if the message body can be iterated).
- valueDeserializer(String valueDeserializer): Deserializer class for the value that implements the Deserializer interface.
- valueSerializer(String valueSerializer): The serializer class for messages.
- workerPool(ExecutorService workerPool): To use a custom worker pool to continue routing the Exchange after the Kafka server has acknowledged the message sent to it from the KafkaProducer, using asynchronous non-blocking processing.
- workerPoolCoreSize(Integer workerPoolCoreSize): Number of core threads for the worker pool used to continue routing the Exchange after the Kafka server has acknowledged the message sent to it from the KafkaProducer, using asynchronous non-blocking processing.
- workerPoolMaxSize(Integer workerPoolMaxSize): Maximum number of threads for the worker pool used to continue routing the Exchange after the Kafka server has acknowledged the message sent to it from the KafkaProducer, using asynchronous non-blocking processing.
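The additionalProperties(Map) option above is the escape hatch for raw Kafka client properties that Camel has no dedicated option for. A sketch of how it might be used, assuming a hypothetical local broker and schema registry, and assuming (as the Map-based signature suggests) that the builder takes plain property names while the additionalProperties. prefix is only needed in URI-style configuration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.camel.CamelContext;
import org.apache.camel.builder.component.ComponentsBuilderFactory;
import org.apache.camel.impl.DefaultCamelContext;

public class KafkaAdditionalProperties {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Raw Kafka client properties, keyed by their native names.
        // Values mirror the example given in the option description.
        Map<String, Object> extra = new LinkedHashMap<>();
        extra.put("transactional.id", "12345");
        extra.put("schema.registry.url", "http://localhost:8811/avro");

        ComponentsBuilderFactory.kafka()
                .brokers("localhost:9092")       // assumed broker address
                .additionalProperties(extra)
                .register(context, "kafka");
    }
}
```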