Interface KafkaConsumer<K,V>
- All Superinterfaces:
ReadStream<KafkaConsumerRecord<K,V>>, StreamBase
You receive Kafka records by providing a handler(Handler). As messages arrive, the handler
will be called with the records.
The pause() and resume() methods provide global control over reading records from the consumer.
The pause(Set) and resume(Set) methods provide finer-grained control over reading records
from specific topic partitions; these are Kafka-specific operations.
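A typical setup looks like the following. This is a minimal sketch, assuming Vert.x 4.x with the io.vertx:vertx-kafka-client module on the classpath; the broker address, group id, and the topic name "demo" are placeholders:

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.consumer.KafkaConsumer;

import java.util.HashMap;
import java.util.Map;

public class ConsumerExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Plain Kafka consumer configuration; all values are placeholders
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");
    config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    config.put("group.id", "my_group");
    config.put("auto.offset.reset", "earliest");
    config.put("enable.auto.commit", "true");

    KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);

    // The record handler is called as messages arrive
    consumer.handler(record ->
      System.out.println("key=" + record.key() + " value=" + record.value() +
        " partition=" + record.partition() + " offset=" + record.offset()));

    // Subscribe to a topic; partitions are assigned dynamically
    consumer.subscribe("demo")
      .onSuccess(v -> System.out.println("subscribed"))
      .onFailure(Throwable::printStackTrace);
  }
}
```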
-
Method Summary
Most operations come in two flavors: one taking a completion Handler<AsyncResult<...>> and one returning a Future of the asynchronous result.

- assign(TopicPartition topicPartition): Manually assign a partition to this consumer.
- assign(Set<TopicPartition> topicPartitions): Manually assign a set of partitions to this consumer.
- assignment(): Get the set of partitions currently assigned to this consumer.
- asStream(): Get the underlying KafkaReadStream.
- batchHandler(Handler<KafkaConsumerRecords<K, V>> handler): Set the handler to be used when batches of messages are fetched from the Kafka server.
- beginningOffsets(TopicPartition topicPartition) / beginningOffsets(Set<TopicPartition> topicPartitions): Get the first offset for the given partition(s).
- close(): Close the consumer.
- commit(): Commit current offsets for all the subscribed topics and partitions.
- commit(Map<TopicPartition, OffsetAndMetadata> offsets): Commit the specified offsets for the specified topics and partitions to Kafka.
- committed(TopicPartition topicPartition): Get the last committed offset for the given partition (whether the commit happened by this process or another).
- static <K,V> create(...): Create a new KafkaConsumer instance; overloads accept a Map<String, String>, Properties or KafkaClientOptions configuration, optionally with key/value class types or org.apache.kafka.common.serialization.Deserializer instances, or wrap a native org.apache.kafka.clients.consumer.Consumer.
- demand(): Returns the current demand.
- endHandler(Handler<Void> endHandler): Set an end handler.
- endOffsets(TopicPartition topicPartition) / endOffsets(Set<TopicPartition> topicPartitions): Get the last offset for the given partition(s).
- exceptionHandler(Handler<Throwable> handler): Set an exception handler.
- fetch(long amount): Fetch the specified amount of elements.
- handler(Handler<KafkaConsumerRecord<K, V>> handler): Set the record handler.
- listTopics(): Get metadata about partitions for all topics that the user is authorized to view.
- offsetsForTimes(TopicPartition topicPartition, Long timestamp) / offsetsForTimes(Map<TopicPartition, Long> topicPartitionTimestamps): Look up the offset(s) for the given partition(s) by timestamp.
- partitionsAssignedHandler(Handler<Set<TopicPartition>> handler): Set the handler called when topic partitions are assigned to the consumer.
- partitionsFor(String topic): Get metadata about the partitions for a given topic.
- partitionsRevokedHandler(Handler<Set<TopicPartition>> handler): Set the handler called when topic partitions are revoked from the consumer.
- pause(): Pause the stream.
- pause(TopicPartition topicPartition) / pause(Set<TopicPartition> topicPartitions): Suspend fetching from the requested partition(s).
- paused(): Get the set of partitions that were previously paused by a call to pause(Set).
- poll(Duration timeout): Executes a poll for getting messages from Kafka.
- pollTimeout(Duration timeout): Sets the poll timeout for the underlying native Kafka Consumer.
- position(TopicPartition partition): Get the offset of the next record that will be fetched (if a record with that offset exists).
- resume(): Resume the stream.
- resume(TopicPartition topicPartition) / resume(Set<TopicPartition> topicPartitions): Resume the specified partition(s) which have been paused with pause.
- seek(TopicPartition topicPartition, long offset): Overrides the fetch offsets that the consumer will use on the next poll.
- seekToBeginning(TopicPartition topicPartition) / seekToBeginning(Set<TopicPartition> topicPartitions): Seek to the first offset for each of the given partition(s).
- seekToEnd(TopicPartition topicPartition) / seekToEnd(Set<TopicPartition> topicPartitions): Seek to the last offset for each of the given partition(s).
- subscribe(String topic) / subscribe(Set<String> topics) / subscribe(Pattern pattern): Subscribe to the given topic(s), or to all topics matching the specified pattern, to get dynamically assigned partitions.
- subscription(): Get the current subscription.
- unsubscribe(): Unsubscribe from topics currently subscribed with subscribe.
- unwrap(): Get the underlying native consumer.

Methods inherited from interface io.vertx.core.streams.ReadStream
collect, pipe, pipeTo, pipeTo
-
Method Details
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, org.apache.kafka.clients.consumer.Consumer<K, V> consumer)
Create a new KafkaConsumer instance from a native Consumer.
- Parameters:
vertx - Vert.x instance to use
consumer - the Kafka consumer to wrap
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, org.apache.kafka.clients.consumer.Consumer<K, V> consumer, KafkaClientOptions options)
Create a new KafkaConsumer instance from a native Consumer.
- Parameters:
vertx - Vert.x instance to use
consumer - the Kafka consumer to wrap
options - options used only for tracing settings
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String, String> config)
Create a new KafkaConsumer instance
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String, String> config, Class<K> keyType, Class<V> valueType)
Create a new KafkaConsumer instance
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
keyType - class type for the key deserialization
valueType - class type for the value deserialization
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String, String> config, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
Create a new KafkaConsumer instance
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
keyDeserializer - key deserializer
valueDeserializer - value deserializer
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options)
Create a new KafkaConsumer instance
- Parameters:
vertx - Vert.x instance to use
options - Kafka consumer options
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options, Class<K> keyType, Class<V> valueType)
Create a new KafkaConsumer instance
- Parameters:
vertx - Vert.x instance to use
options - Kafka consumer options
keyType - class type for the key deserialization
valueType - class type for the value deserialization
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
Create a new KafkaConsumer instance
- Parameters:
vertx - Vert.x instance to use
options - Kafka consumer options
keyDeserializer - key deserializer
valueDeserializer - value deserializer
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Properties config)
Create a new KafkaConsumer instance
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Properties config, Class<K> keyType, Class<V> valueType)
Create a new KafkaConsumer instance
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
keyType - class type for the key deserialization
valueType - class type for the value deserialization
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Properties config, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
Create a new KafkaConsumer instance
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
keyDeserializer - key deserializer
valueDeserializer - value deserializer
- Returns:
- an instance of the KafkaConsumer
-
exceptionHandler
- Specified by:
exceptionHandler in interface ReadStream<K>
- Specified by:
exceptionHandler in interface StreamBase
-
handler
- Specified by:
handler in interface ReadStream<K>
-
pause
KafkaConsumer<K,V> pause()
- Specified by:
pause in interface ReadStream<K>
-
resume
KafkaConsumer<K,V> resume()
- Specified by:
resume in interface ReadStream<K>
-
fetch
- Specified by:
fetch in interface ReadStream<K>
-
endHandler
- Specified by:
endHandler in interface ReadStream<K>
-
demand
long demand()
Returns the current demand.
- If the stream is in flowing mode, will return Long.MAX_VALUE.
- If the stream is in fetch mode, will return the current number of elements still to be delivered, or 0 if paused.
- Returns:
- current demand
-
subscribe
Subscribe to the given topic to get dynamically assigned partitions.
- Parameters:
topic - topic to subscribe to
- Returns:
- a Future completed with the operation result
-
subscribe
Subscribe to the given list of topics to get dynamically assigned partitions.
- Parameters:
topics - topics to subscribe to
- Returns:
- a Future completed with the operation result
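For instance, subscribing to several topics at once with the Future-returning variant can be sketched as follows (assuming an existing KafkaConsumer<String, String> named consumer; the topic names are placeholders):

```java
// Assumes: KafkaConsumer<String, String> consumer already created,
// and java.util.Set / java.util.HashSet imported
Set<String> topics = new HashSet<>();
topics.add("topic1");
topics.add("topic2");

consumer.subscribe(topics)
  .onSuccess(v -> System.out.println("subscribed to " + topics))
  .onFailure(err -> System.err.println("subscribe failed: " + err.getMessage()));
```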
-
subscribe
Subscribe to the given topic to get dynamically assigned partitions. Due to internal buffering of messages, when changing the subscribed topic the old topic may remain in effect (as observed by the record handler set via handler(Handler)) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new topic.
- Parameters:
topic - topic to subscribe to
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
subscribe
Subscribe to the given list of topics to get dynamically assigned partitions. Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the record handler set via handler(Handler)) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of topics.
- Parameters:
topics - topics to subscribe to
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
subscribe
Subscribe to all topics matching the specified pattern to get dynamically assigned partitions.
- Parameters:
pattern - pattern to subscribe to
- Returns:
- a Future completed with the operation result
-
subscribe
Subscribe to all topics matching the specified pattern to get dynamically assigned partitions. Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the record handler set via handler(Handler)) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of topics.
- Parameters:
pattern - pattern to subscribe to
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
assign
Manually assign a partition to this consumer.
- Parameters:
topicPartition - the partition to assign
- Returns:
- a Future completed with the operation result
-
assign
Manually assign a set of partitions to this consumer.
- Parameters:
topicPartitions - the partitions to assign
- Returns:
- a Future completed with the operation result
-
assign
KafkaConsumer<K,V> assign(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Manually assign a partition to this consumer. Due to internal buffering of messages, when reassigning, the old partition may remain in effect (as observed by the record handler set via handler(Handler)) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new partition.
- Parameters:
topicPartition - the partition to assign
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
assign
KafkaConsumer<K,V> assign(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Manually assign a set of partitions to this consumer. Due to internal buffering of messages, when reassigning, the old set of partitions may remain in effect (as observed by the record handler set via handler(Handler)) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of partitions.
- Parameters:
topicPartitions - the partitions to assign
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
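Manual assignment, as opposed to subscribing, can be sketched like this (assuming an existing consumer; the topic and partition are placeholders, and TopicPartition is the Vert.x data object io.vertx.kafka.client.common.TopicPartition):

```java
// Assumes: KafkaConsumer<String, String> consumer already created,
// java.util.Set / java.util.HashSet imported
Set<TopicPartition> partitions = new HashSet<>();
partitions.add(new TopicPartition().setTopic("demo").setPartition(0));

// No consumer group rebalancing is involved with manual assignment
consumer.assign(partitions)
  .onSuccess(v -> System.out.println("partitions assigned"));
```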
-
assignment
Get the set of partitions currently assigned to this consumer.
- Parameters:
handler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
assignment
Future<Set<TopicPartition>> assignment()
Like assignment(Handler) but returns a Future of the asynchronous result.
-
listTopics
Get metadata about partitions for all topics that the user is authorized to view.
- Parameters:
handler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
listTopics
Future<Map<String,List<PartitionInfo>>> listTopics()
Like listTopics(Handler) but returns a Future of the asynchronous result.
-
unsubscribe
Unsubscribe from topics currently subscribed with subscribe.
- Returns:
- a Future completed with the operation result
-
unsubscribe
Unsubscribe from topics currently subscribed with subscribe.
- Parameters:
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
subscription
Get the current subscription.
- Parameters:
handler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
subscription
Like subscription(Handler) but returns a Future of the asynchronous result.
-
pause
Suspend fetching from the requested partition.
- Parameters:
topicPartition - the topic partition from which to suspend fetching
- Returns:
- a Future completed with the operation result
-
pause
Suspend fetching from the requested partitions.
- Parameters:
topicPartitions - the topic partitions from which to suspend fetching
- Returns:
- a Future completed with the operation result
-
pause
KafkaConsumer<K,V> pause(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Suspend fetching from the requested partition. Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartition until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will not see messages from the given topicPartition.
- Parameters:
topicPartition - the topic partition from which to suspend fetching
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
pause
KafkaConsumer<K,V> pause(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Suspend fetching from the requested partitions. Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartitions until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will not see messages from the given topicPartitions.
- Parameters:
topicPartitions - the topic partitions from which to suspend fetching
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
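Pausing and later resuming a single partition can be sketched as follows (assuming an existing consumer; topic and partition values are placeholders):

```java
// Assumes: KafkaConsumer<String, String> consumer already created
TopicPartition tp = new TopicPartition().setTopic("demo").setPartition(0);

// Suspend fetching from this partition; already-buffered records
// may still reach the record handler for a while
consumer.pause(tp)
  .onSuccess(v -> {
    // ... later, resume fetching from the same partition
    consumer.resume(tp);
  });
```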
-
paused
Get the set of partitions that were previously paused by a call to pause(Set).
- Parameters:
handler - handler called on operation completed
-
paused
Future<Set<TopicPartition>> paused()
Like paused(Handler) but returns a Future of the asynchronous result.
-
resume
Resume a specified partition which has been paused with pause.
- Parameters:
topicPartition - the topic partition from which to resume fetching
- Returns:
- a Future completed with the operation result
-
resume
Resume specified partitions which have been paused with pause.
- Parameters:
topicPartitions - the topic partitions from which to resume fetching
- Returns:
- a Future completed with the operation result
-
resume
KafkaConsumer<K,V> resume(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Resume a specified partition which has been paused with pause.
- Parameters:
topicPartition - the topic partition from which to resume fetching
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
resume
KafkaConsumer<K,V> resume(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Resume specified partitions which have been paused with pause.
- Parameters:
topicPartitions - the topic partitions from which to resume fetching
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
partitionsRevokedHandler
Set the handler called when topic partitions are revoked from the consumer.
- Parameters:
handler - handler called on revoked topic partitions
- Returns:
- current KafkaConsumer instance
-
partitionsAssignedHandler
Set the handler called when topic partitions are assigned to the consumer.
- Parameters:
handler - handler called on assigned topic partitions
- Returns:
- current KafkaConsumer instance
-
seek
Overrides the fetch offsets that the consumer will use on the next poll.
- Parameters:
topicPartition - the topic partition on which to seek
offset - offset to seek to inside the topic partition
- Returns:
- a Future completed with the operation result
-
seek
KafkaConsumer<K,V> seek(TopicPartition topicPartition, long offset, Handler<AsyncResult<Void>> completionHandler)
Overrides the fetch offsets that the consumer will use on the next poll. Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartition - the topic partition on which to seek
offset - offset to seek to inside the topic partition
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
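Rewinding a partition to a specific offset can be sketched as follows (assuming an existing consumer; topic, partition, and offset 10 are placeholders):

```java
// Assumes: KafkaConsumer<String, String> consumer already created
TopicPartition tp = new TopicPartition().setTopic("demo").setPartition(0);

// Re-read this partition from offset 10; takes effect on the next poll
consumer.seek(tp, 10)
  .onSuccess(v -> System.out.println("seek done"));
```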
-
seekToBeginning
Seek to the first offset for the given partition.
- Parameters:
topicPartition - the topic partition on which to seek
- Returns:
- a Future completed with the operation result
-
seekToBeginning
Seek to the first offset for each of the given partitions.
- Parameters:
topicPartitions - the topic partitions on which to seek
- Returns:
- a Future completed with the operation result
-
seekToBeginning
KafkaConsumer<K,V> seekToBeginning(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Seek to the first offset for the given partition. Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartition - the topic partition on which to seek
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
seekToBeginning
KafkaConsumer<K,V> seekToBeginning(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Seek to the first offset for each of the given partitions. Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartitions - the topic partitions on which to seek
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
seekToEnd
Seek to the last offset for the given partition.
- Parameters:
topicPartition - the topic partition on which to seek
- Returns:
- a Future completed with the operation result
-
seekToEnd
Seek to the last offset for each of the given partitions.
- Parameters:
topicPartitions - the topic partitions on which to seek
- Returns:
- a Future completed with the operation result
-
seekToEnd
KafkaConsumer<K,V> seekToEnd(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Seek to the last offset for the given partition. Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartition - the topic partition on which to seek
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
seekToEnd
KafkaConsumer<K,V> seekToEnd(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Seek to the last offset for each of the given partitions. Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartitions - the topic partitions on which to seek
completionHandler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
commit
Commit current offsets for all the subscribed topics and partitions.
-
commit
Commit current offsets for all the subscribed topics and partitions.
- Parameters:
completionHandler - handler called on operation completed
-
commit
Commit the specified offsets for the specified list of topics and partitions to Kafka.
- Parameters:
offsets - offsets list to commit
-
commit
void commit(Map<TopicPartition, OffsetAndMetadata> offsets, Handler<AsyncResult<Map<TopicPartition, OffsetAndMetadata>>> completionHandler)
Commit the specified offsets for the specified list of topics and partitions to Kafka.
- Parameters:
offsets - offsets list to commit
completionHandler - handler called on operation completed
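Committing explicit offsets can be sketched as follows. This is a hedged sketch assuming enable.auto.commit=false and an existing consumer; all topic, partition, and offset values are placeholders. Note that the committed offset is the position consumption will restart from, so it is conventionally the last processed offset plus one:

```java
// Assumes: KafkaConsumer<String, String> consumer already created,
// java.util.Map / java.util.HashMap imported
long lastProcessedOffset = 42L; // placeholder

Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
offsets.put(
  new TopicPartition().setTopic("demo").setPartition(0),
  // commit lastProcessedOffset + 1 so consumption restarts after it
  new OffsetAndMetadata(lastProcessedOffset + 1, ""));

consumer.commit(offsets, ar -> {
  if (ar.succeeded()) {
    System.out.println("offsets committed");
  }
});
```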
-
committed
Get the last committed offset for the given partition (whether the commit happened by this process or another).
- Parameters:
topicPartition - topic partition for getting the last committed offset
handler - handler called on operation completed
-
committed
Like committed(TopicPartition, Handler) but returns a Future of the asynchronous result.
-
partitionsFor
Get metadata about the partitions for a given topic.
- Parameters:
topic - the topic for which to get partitions info
handler - handler called on operation completed
- Returns:
- current KafkaConsumer instance
-
partitionsFor
Like partitionsFor(String, Handler) but returns a Future of the asynchronous result.
-
batchHandler
Set the handler to be used when batches of messages are fetched from the Kafka server. Batch handlers need to take care not to block the event loop when dealing with large batches. It is better to process records individually using the record handler.
- Parameters:
handler - handler called when batches of messages are fetched
- Returns:
- current KafkaConsumer instance
-
close
Close the consumer.
-
close
Close the consumer.
- Parameters:
completionHandler - handler called on operation completed
-
position
Get the offset of the next record that will be fetched (if a record with that offset exists).
- Parameters:
partition - the partition to get the position for
handler - handler called on operation completed
-
position
Like position(TopicPartition, Handler) but returns a Future of the asynchronous result.
-
offsetsForTimes
void offsetsForTimes(Map<TopicPartition, Long> topicPartitionTimestamps, Handler<AsyncResult<Map<TopicPartition, OffsetAndTimestamp>>> handler)
Look up the offsets for the given partitions by timestamp. Note: the result might be empty if no offset can be found for the given timestamp, e.g. when the timestamp refers to the future.
- Parameters:
topicPartitionTimestamps - a map with pairs of (TopicPartition, Timestamp)
handler - handler called on operation completed
-
offsetsForTimes
Future<Map<TopicPartition,OffsetAndTimestamp>> offsetsForTimes(Map<TopicPartition, Long> topicPartitionTimestamps)
Like offsetsForTimes(Map, Handler) but returns a Future of the asynchronous result.
-
offsetsForTimes
void offsetsForTimes(TopicPartition topicPartition, Long timestamp, Handler<AsyncResult<OffsetAndTimestamp>> handler)
Look up the offset for the given partition by timestamp. Note: the result might be null if no offset can be found for the given timestamp, e.g. when the timestamp refers to the future.
- Parameters:
topicPartition - TopicPartition to query
timestamp - timestamp to be used in the query
handler - handler called on operation completed
-
offsetsForTimes
Like offsetsForTimes(TopicPartition, Long, Handler) but returns a Future of the asynchronous result.
-
beginningOffsets
void beginningOffsets(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Map<TopicPartition, Long>>> handler)
Get the first offset for the given partitions.
- Parameters:
topicPartitions - the partitions to get the earliest offsets for
handler - handler called on operation completed; returns the earliest available offsets for the given partitions
-
beginningOffsets
Like beginningOffsets(Set, Handler) but returns a Future of the asynchronous result.
-
beginningOffsets
Get the first offset for the given partition.
- Parameters:
topicPartition - the partition to get the earliest offset for
handler - handler called on operation completed; returns the earliest available offset for the given partition
-
beginningOffsets
Like beginningOffsets(TopicPartition, Handler) but returns a Future of the asynchronous result.
-
endOffsets
void endOffsets(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Map<TopicPartition, Long>>> handler)
Get the last offset for the given partitions. The last offset of a partition is the offset of the upcoming message, i.e. the offset of the last available message + 1.
- Parameters:
topicPartitions - the partitions to get the end offsets for
handler - handler called on operation completed; returns the end offsets for the given partitions
-
endOffsets
Like endOffsets(Set, Handler) but returns a Future of the asynchronous result.
-
endOffsets
Get the last offset for the given partition. The last offset of a partition is the offset of the upcoming message, i.e. the offset of the last available message + 1.
- Parameters:
topicPartition - the partition to get the end offset for
handler - handler called on operation completed; returns the end offset for the given partition
-
endOffsets
Like endOffsets(TopicPartition, Handler) but returns a Future of the asynchronous result.
-
asStream
KafkaReadStream<K,V> asStream()
- Returns:
- the underlying KafkaReadStream instance
-
unwrap
- Returns:
- the underlying consumer
-
pollTimeout
Sets the poll timeout for the underlying native Kafka Consumer. Defaults to 1000ms. Setting a lower timeout results in a more 'responsive' client, because it will block for a shorter period if no data is available in the assigned partition, and therefore allows subsequent actions to be executed with a shorter delay. At the same time, the client will poll more frequently and thus potentially create a higher load on the Kafka broker.
- Parameters:
timeout - the time to spend waiting in poll if data is not available in the buffer. If 0, the poll returns immediately with any records currently available in the native Kafka consumer's buffer, or empty otherwise. Must not be negative.
-
poll
Executes a poll for getting messages from Kafka.
- Parameters:
timeout - the maximum time to block (must not be greater than Long.MAX_VALUE milliseconds)
handler - handler called after the poll with the batch of records (can be empty)
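As an alternative to the record handler, batches can be fetched explicitly with poll(). A minimal sketch, assuming an existing consumer that is already subscribed or assigned (the 100ms timeout is a placeholder):

```java
// Assumes: KafkaConsumer<String, String> consumer already created,
// java.time.Duration imported
consumer.poll(Duration.ofMillis(100), ar -> {
  if (ar.succeeded()) {
    KafkaConsumerRecords<String, String> records = ar.result();
    // The batch can be empty if no records were available within the timeout
    for (int i = 0; i < records.size(); i++) {
      KafkaConsumerRecord<String, String> record = records.recordAt(i);
      System.out.println("value=" + record.value());
    }
  }
});
```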
-
poll
Like poll(Duration, Handler) but returns a Future of the asynchronous result.
-