[cols="1,3"]
|===
|Class |Description

|KafkaCodecs
|A registry for Kafka Serializers and Deserializers, allowing serializers to be looked up by class.
|===
== Creating Kafka clients

The client provides two entry points: KafkaConsumer for receiving messages from Kafka topics and
KafkaProducer for sending messages to them.
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleCreateConsumer
----
In the above example, a KafkaConsumer instance is created using a map in order to specify the Kafka
nodes to connect to (just one in this case) and the deserializers to use for the key and the value of each
received message.
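For reference, the consumer creation boils down to something like the following minimal sketch (this is not the referenced examples module; the broker address, group id and property values are placeholders, and running it requires the vertx-kafka-client dependency):

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.consumer.KafkaConsumer;

import java.util.HashMap;
import java.util.Map;

public class CreateConsumerSketch {

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // plain Kafka consumer properties, keyed by the standard Kafka property names
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092"); // the single node to connect to
    config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    config.put("group.id", "my_group");
    config.put("auto.offset.reset", "earliest");
    config.put("enable.auto.commit", "false");

    // use the consumer for interacting with Apache Kafka
    KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);
  }
}
```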
Likewise, a producer can be created
[source,$lang]
----
examples.VertxKafkaClientExamples#createProducer
----
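A corresponding producer sketch, with the same caveats (placeholder broker address; serializer properties instead of deserializer ones, plus an acknowledgment mode):

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.producer.KafkaProducer;

import java.util.HashMap;
import java.util.Map;

public class CreateProducerSketch {

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092"); // the single node to connect to
    config.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.put("acks", "1"); // wait for the partition leader's acknowledgment only

    // use the producer for sending records to Apache Kafka
    KafkaProducer<String, String> producer = KafkaProducer.create(vertx, config);
  }
}
```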
ifdef::java,groovy,kotlin[]
Another way is to use a Properties instance instead of the map.
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleCreateConsumerJava
----
More advanced creation methods allow you to specify the class types of the keys and values used for sending
messages or carried by received messages; this sets the key and value serializers/deserializers directly,
instead of using the related properties
[source,$lang]
----
examples.VertxKafkaClientExamples#createProducerJava
----
Here the KafkaProducer instance is created using a Properties instance that specifies the Kafka nodes
to connect to (just one in this case) and the acknowledgment mode; the key and value serializers are
specified as parameters of KafkaProducer.create(io.vertx.core.Vertx, java.util.Properties, java.lang.Class, java.lang.Class).
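Concretely, the type-parameterized creation looks roughly like this sketch (the ProducerConfig constants come from the Apache Kafka client library; the broker address is a placeholder):

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

import java.util.Properties;

public class CreateProducerWithTypesSketch {

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    Properties config = new Properties();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    config.put(ProducerConfig.ACKS_CONFIG, "1");

    // the key and value serializers are derived from the class parameters,
    // so no serializer properties are needed in the config
    KafkaProducer<String, String> producer =
      KafkaProducer.create(vertx, config, String.class, String.class);
  }
}
```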
endif::[]
== Receiving messages from a topic joining a consumer group
To start receiving messages from Kafka topics, the consumer can use the
KafkaConsumer.subscribe(java.util.Set) method to
subscribe to a set of topics as part of a consumer group (specified by the properties on creation).

You also need to register a handler for incoming messages using
KafkaConsumer.handler(io.vertx.core.Handler)
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleSubscribe(io.vertx.kafka.client.consumer.KafkaConsumer)
----
A handler can also be passed during subscription, to be notified of the subscription result once the operation
has completed.
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleSubscribeWithResult(io.vertx.kafka.client.consumer.KafkaConsumer)
----
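A minimal sketch of both steps together, registering the record handler before subscribing so no message is missed (topic name is a placeholder; assumes a consumer created as shown earlier):

```java
import io.vertx.kafka.client.consumer.KafkaConsumer;

import java.util.Collections;

public class SubscribeSketch {

  static void subscribe(KafkaConsumer<String, String> consumer) {
    // register the handler for incoming messages first
    consumer.handler(record -> {
      System.out.println("key=" + record.key() +
        ",value=" + record.value() +
        ",partition=" + record.partition() +
        ",offset=" + record.offset());
    });

    // subscribe to a single topic, being notified when the operation completes
    consumer.subscribe(Collections.singleton("my_topic"), done -> {
      if (done.succeeded()) {
        System.out.println("subscribed");
      } else {
        System.err.println("Could not subscribe: " + done.cause().getMessage());
      }
    });
  }
}
```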
When using consumer groups, the Kafka cluster assigns partitions to the consumer taking into account the other
connected consumers in the same group, so that partitions can be spread across them.

The Kafka cluster re-balances partitions when a consumer leaves the group (its assigned partitions are freed up
for the remaining consumers) or when a new consumer joins the group (it receives partitions to read from).
You can register handlers on a KafkaConsumer to be notified
of the partitions revocations and assignments by the Kafka cluster using
KafkaConsumer.partitionsRevokedHandler(io.vertx.core.Handler) and
KafkaConsumer.partitionsAssignedHandler(io.vertx.core.Handler).
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleConsumerPartitionsNotifs
----
After joining a consumer group, a consumer can leave the group, and so stop receiving
messages, using KafkaConsumer.unsubscribe()
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleUnsubscribe
----
You can add a handler to be notified of the result
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleUnsubscribeWithCallback
----
== Receiving messages from a topic requesting specific partitions
Besides joining a consumer group for receiving messages from a topic, a consumer can ask for specific
topic partitions. When the consumer is not part of a consumer group, the overall application cannot
rely on the re-balancing feature.
You can use KafkaConsumer.assign(java.util.Set, io.vertx.core.Handler)
in order to ask for specific partitions.
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleConsumerAssignPartition
----
Calling KafkaConsumer.assignment(io.vertx.core.Handler) provides
the currently assigned partitions.
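A sketch of manual partition assignment (topic name and partition number are placeholders; the TopicPartition data object uses fluent setters):

```java
import io.vertx.kafka.client.common.TopicPartition;
import io.vertx.kafka.client.consumer.KafkaConsumer;

import java.util.Collections;

public class AssignSketch {

  static void assign(KafkaConsumer<String, String> consumer) {
    // ask explicitly for partition 0 of "my_topic", bypassing group re-balancing
    TopicPartition topicPartition = new TopicPartition()
      .setTopic("my_topic")
      .setPartition(0);

    consumer.assign(Collections.singleton(topicPartition), done -> {
      if (done.succeeded()) {
        // check which partitions are actually assigned to this consumer
        consumer.assignment(parts -> {
          if (parts.succeeded()) {
            parts.result().forEach(tp ->
              System.out.println(tp.getTopic() + " / " + tp.getPartition()));
          }
        });
      }
    });
  }
}
```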
== Getting topic partition information
You can call KafkaConsumer.partitionsFor(java.lang.String, io.vertx.core.Handler<io.vertx.core.AsyncResult<java.util.List<io.vertx.kafka.client.common.PartitionInfo>>>) to get information about
the partitions of a given topic
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleConsumerPartitionsFor
----
In addition, KafkaConsumer.listTopics(io.vertx.core.Handler<io.vertx.core.AsyncResult<java.util.Map<java.lang.String, java.util.List<io.vertx.kafka.client.common.PartitionInfo>>>>) provides all the available topics
with their related partitions
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleConsumerListTopics
----
== Manual offset commit
In Apache Kafka the consumer is in charge of handling the offset of the last read message.

The commit operation is executed automatically every time a bunch of messages is read from a topic partition,
provided the configuration parameter `enable.auto.commit` is set to `true` when the consumer is created.

To commit offsets manually instead, disable auto commit and use KafkaConsumer.commit(io.vertx.core.Handler).
Manual commit can be used to achieve _at least once_ delivery, making sure that read messages are processed
before their offset is committed.
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleConsumerManualOffsetCommit
----
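A sketch of the at-least-once pattern described above (assumes a consumer created with `enable.auto.commit` set to `false`):

```java
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class ManualCommitSketch {

  static void consumeWithManualCommit(KafkaConsumer<String, String> consumer) {
    consumer.handler(record -> {

      // process the record first...
      System.out.println("Processing value=" + record.value());

      // ...then commit the offset, so that a crash before this point
      // causes the record to be delivered again (at least once)
      consumer.commit(done -> {
        if (done.succeeded()) {
          System.out.println("Last read message offset committed");
        }
      });
    });
  }
}
```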
== Seeking in a topic partition
Apache Kafka can retain messages for a long period of time, and the consumer can seek inside a topic partition
to access messages at an arbitrary position.

You can use KafkaConsumer.seek(io.vertx.kafka.client.common.TopicPartition, long) to change the offset from
which to read
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleSeek
----
When the consumer needs to re-read the stream from the beginning, it can use KafkaConsumer.seekToBeginning(io.vertx.kafka.client.common.TopicPartition)
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleSeekToBeginning
----
Finally, KafkaConsumer.seekToEnd(io.vertx.kafka.client.common.TopicPartition) can be used to seek to the end of the partition
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleSeekToEnd
----
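The three seek operations side by side, as a sketch (topic, partition and offset values are placeholders):

```java
import io.vertx.kafka.client.common.TopicPartition;
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class SeekSketch {

  static void seek(KafkaConsumer<String, String> consumer) {
    TopicPartition topicPartition = new TopicPartition()
      .setTopic("my_topic")
      .setPartition(0);

    // jump to a specific offset in the partition
    consumer.seek(topicPartition, 10);

    // or re-read the partition from the very beginning
    consumer.seekToBeginning(topicPartition);

    // or skip everything retained so far and start from the end
    consumer.seekToEnd(topicPartition);
  }
}
```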
== Message flow control
A consumer can control the incoming message flow, pausing and resuming the read operation from a topic: for
example, it can pause the flow when it needs more time to process the current messages and then resume it
to continue message processing.

To achieve that, you can use KafkaConsumer.pause() and
KafkaConsumer.resume()
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleConsumerFlowControl
----
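A sketch of the pause/resume pattern, with the processing time simulated by a Vert.x timer (the 5-second delay is an arbitrary placeholder):

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class FlowControlSketch {

  static void flowControl(Vertx vertx, KafkaConsumer<String, String> consumer) {
    consumer.handler(record -> {
      System.out.println("value=" + record.value());

      // stop reading while the received message is being processed...
      consumer.pause();

      // ...and resume the flow later, here after a simulated 5 s of work
      vertx.setTimer(5000, timerId -> consumer.resume());
    });
  }
}
```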
== Closing a consumer
Call close to close the consumer. Closing the consumer closes any open connections and releases all consumer resources.
The close is actually asynchronous and might not complete until some time after the call has returned. If you want to be notified
when the actual close has completed then you can pass in a handler.
This handler will then be called when the close has fully completed.
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleConsumerClose(io.vertx.kafka.client.consumer.KafkaConsumer)
----
== Sending messages to a topic
You can use KafkaProducer.write(io.vertx.kafka.client.producer.KafkaProducerRecord<K, V>) to send messages (records) to a topic.

The simplest way to send a message is to specify only the destination topic and the value, omitting the key
and the partition; in this case messages are sent in a round-robin fashion across all the partitions of the topic.
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleProducerWrite
----
You can receive metadata about a sent message, such as its topic, its destination partition and its assigned offset.
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleProducerWriteWithAck
----
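A sketch of a write with an acknowledgment handler, reading back the record metadata (topic name and value are placeholders; the data-object getters on RecordMetadata are an assumption of the Handler-based API):

```java
import io.vertx.kafka.client.producer.KafkaProducer;
import io.vertx.kafka.client.producer.KafkaProducerRecord;
import io.vertx.kafka.client.producer.RecordMetadata;

public class ProducerWriteSketch {

  static void write(KafkaProducer<String, String> producer) {
    // only topic and value: the destination partition is chosen round robin
    KafkaProducerRecord<String, String> record =
      KafkaProducerRecord.create("my_topic", "hello");

    producer.write(record, done -> {
      if (done.succeeded()) {
        RecordMetadata metadata = done.result();
        System.out.println("topic=" + metadata.getTopic() +
          ",partition=" + metadata.getPartition() +
          ",offset=" + metadata.getOffset());
      }
    });
  }
}
```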
When you need to assign a partition to a message, you can specify its partition identifier
or its key
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleProducerWriteWithSpecificPartition
----
Since the producer identifies the destination partition by hashing the key, you can use this to guarantee that all
messages with the same key are sent to the same partition, retaining their order.
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleProducerWriteWithSpecificKey
----
== Sharing a producer

Sometimes you want to share the same producer from within several verticles or contexts.

Calling KafkaProducer.createShared(io.vertx.core.Vertx, java.lang.String, java.util.Map)
returns a producer that can be shared safely.

[source,$lang]
----
examples.VertxKafkaClientExamples#exampleSharedProducer
----

The same resources (thread, connection) will be shared between the producers returned by this method.

When you are done with the producer, just close it; when all shared producers are closed, the resources will
be released for you.

NOTE: the shared producer is created on the first `createShared` call and its configuration is fixed at that moment;
all subsequent usages of the shared producer must use the same configuration.
== Closing a producer
Call close to close the producer. Closing the producer closes any open connections and releases all producer resources.
The close is actually asynchronous and might not complete until some time after the call has returned. If you want to be notified
when the actual close has completed then you can pass in a handler.
This handler will then be called when the close has fully completed.
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleProducerClose(io.vertx.kafka.client.producer.KafkaProducer)
----
== Getting topic partition information
You can call KafkaProducer.partitionsFor(java.lang.String, io.vertx.core.Handler<io.vertx.core.AsyncResult<java.util.List<io.vertx.kafka.client.common.PartitionInfo>>>) to get information about
the partitions of a given topic:
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleProducerPartitionsFor
----
== Handling errors
Error handling (e.g. timeouts) between a Kafka client (consumer or producer) and the Kafka cluster is done using
KafkaConsumer.exceptionHandler(io.vertx.core.Handler<java.lang.Throwable>) or
KafkaProducer.exceptionHandler(io.vertx.core.Handler<java.lang.Throwable>)
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleErrorHandling
----
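A minimal sketch of registering such a handler on a consumer (the same shape applies to a producer):

```java
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class ErrorHandlingSketch {

  static void handleErrors(KafkaConsumer<String, String> consumer) {
    // called for failures between the client and the cluster, e.g. timeouts
    consumer.exceptionHandler(e ->
      System.err.println("Error = " + e.getMessage()));
  }
}
```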
== Automatic clean-up in verticles
If you’re creating consumers and producers from inside verticles, they will be automatically
closed when the verticle is undeployed.
== Using Vert.x serializers/deserializers

The Vert.x Kafka client comes out of the box with serializers and deserializers for buffers, JSON objects
and JSON arrays.
In a consumer you can use buffers
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleUsingVertxDeserializers()
----
Or in a producer
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleUsingVertxSerializers()
----
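Configuring the built-in codecs through properties boils down to pointing the usual (de)serializer keys at the classes in the `io.vertx.kafka.client.serialization` package; a sketch using the buffer codecs (the broker address is a placeholder, and the maps would be passed to `KafkaConsumer.create` / `KafkaProducer.create`):

```java
import java.util.HashMap;
import java.util.Map;

public class VertxCodecsConfigSketch {

  // consumer properties wiring in the Vert.x buffer deserializers
  static Map<String, String> consumerConfig() {
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");
    config.put("key.deserializer", "io.vertx.kafka.client.serialization.BufferDeserializer");
    config.put("value.deserializer", "io.vertx.kafka.client.serialization.BufferDeserializer");
    return config;
  }

  // producer properties wiring in the Vert.x buffer serializers
  static Map<String, String> producerConfig() {
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");
    config.put("key.serializer", "io.vertx.kafka.client.serialization.BufferSerializer");
    config.put("value.serializer", "io.vertx.kafka.client.serialization.BufferSerializer");
    return config;
  }
}
```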
ifdef::java,groovy,kotlin[]
You can also specify the serializers/deserializers at creation time:
In a consumer
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleUsingVertxDeserializers2(io.vertx.core.Vertx)
----
Or in a producer
[source,$lang]
----
examples.VertxKafkaClientExamples#exampleUsingVertxSerializers2(io.vertx.core.Vertx)
----
endif::[]
ifdef::java[]
== RxJava API
The Kafka client provides an Rxified version of the original API.
[source,$lang]
----
examples.RxExamples#consumer(io.vertx.rxjava.kafka.client.consumer.KafkaConsumer)
----
endif::[]
ifdef::java,groovy,kotlin[]
== Stream implementation and native Kafka objects
When you want to operate on native Kafka records you can use a stream oriented
implementation which handles native Kafka objects.
The KafkaReadStream is used for reading topic partitions; it is
a read stream of ConsumerRecord objects.

The KafkaWriteStream is used for writing to topics; it is a write
stream of ProducerRecord objects.

The API exposed by these interfaces is mostly the same as the polyglot version.
endif::[]

Copyright © 2017. All rights reserved.