Interface KafkaConsumer<K,V>

All Superinterfaces:
ReadStream<KafkaConsumerRecord<K,V>>, StreamBase

public interface KafkaConsumer<K,V> extends ReadStream<KafkaConsumerRecord<K,V>>
Vert.x Kafka consumer.

You receive Kafka records by providing a handler(Handler). As messages arrive, the handler is called with the records.

The pause() and resume() methods provide global control over reading records from the consumer.

The pause(Set) and resume(Set) methods provide finer-grained control over reading records from specific topic/partition sets; these correspond to Kafka's own partition-level pause and resume operations.
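A minimal end-to-end sketch of this flow, using the Future-returning variants (the broker address, group id, and topic name below are illustrative placeholders; running it requires the vertx-kafka-client dependency and a reachable broker):

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.consumer.KafkaConsumer;

import java.util.HashMap;
import java.util.Map;

public class BasicConsumer {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Plain Kafka consumer configuration passed as a Map<String, String>
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");   // placeholder broker
    config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    config.put("group.id", "example-group");             // placeholder group id
    config.put("auto.offset.reset", "earliest");

    KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);

    // Called once per record as messages arrive
    consumer.handler(record ->
      System.out.println("key=" + record.key() + ", value=" + record.value()));

    // Report stream-level failures (deserialization errors, broker issues, ...)
    consumer.exceptionHandler(Throwable::printStackTrace);

    consumer.subscribe("example-topic")                  // placeholder topic
      .onSuccess(v -> System.out.println("subscribed"))
      .onFailure(Throwable::printStackTrace);
  }
}
```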

  • Method Details

    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, org.apache.kafka.clients.consumer.Consumer<K,V> consumer)
      Create a new KafkaConsumer instance from a native Consumer.
      Parameters:
      vertx - Vert.x instance to use
      consumer - the Kafka consumer to wrap
      Returns:
      an instance of the KafkaConsumer
    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, org.apache.kafka.clients.consumer.Consumer<K,V> consumer, KafkaClientOptions options)
      Create a new KafkaConsumer instance from a native Consumer.
      Parameters:
      vertx - Vert.x instance to use
      consumer - the Kafka consumer to wrap
      options - options used only for tracing settings
      Returns:
      an instance of the KafkaConsumer
    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config)
      Create a new KafkaConsumer instance
      Parameters:
      vertx - Vert.x instance to use
      config - Kafka consumer configuration
      Returns:
      an instance of the KafkaConsumer
    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType)
      Create a new KafkaConsumer instance
      Parameters:
      vertx - Vert.x instance to use
      config - Kafka consumer configuration
      keyType - class type for the key deserialization
      valueType - class type for the value deserialization
      Returns:
      an instance of the KafkaConsumer
    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
      Create a new KafkaConsumer instance
      Parameters:
      vertx - Vert.x instance to use
      config - Kafka consumer configuration
      keyDeserializer - key deserializer
      valueDeserializer - value deserializer
      Returns:
      an instance of the KafkaConsumer
    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options)
      Create a new KafkaConsumer instance
      Parameters:
      vertx - Vert.x instance to use
      options - Kafka consumer options
      Returns:
      an instance of the KafkaConsumer
    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options, Class<K> keyType, Class<V> valueType)
      Create a new KafkaConsumer instance
      Parameters:
      vertx - Vert.x instance to use
      options - Kafka consumer options
      keyType - class type for the key deserialization
      valueType - class type for the value deserialization
      Returns:
      an instance of the KafkaConsumer
    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
      Create a new KafkaConsumer instance
      Parameters:
      vertx - Vert.x instance to use
      options - Kafka consumer options
      keyDeserializer - key deserializer
      valueDeserializer - value deserializer
      Returns:
      an instance of the KafkaConsumer
    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, Properties config)
      Create a new KafkaConsumer instance
      Parameters:
      vertx - Vert.x instance to use
      config - Kafka consumer configuration
      Returns:
      an instance of the KafkaConsumer
    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, Properties config, Class<K> keyType, Class<V> valueType)
      Create a new KafkaConsumer instance
      Parameters:
      vertx - Vert.x instance to use
      config - Kafka consumer configuration
      keyType - class type for the key deserialization
      valueType - class type for the value deserialization
      Returns:
      an instance of the KafkaConsumer
    • create

      static <K, V> KafkaConsumer<K,V> create(Vertx vertx, Properties config, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
      Create a new KafkaConsumer instance
      Parameters:
      vertx - Vert.x instance to use
      config - Kafka consumer configuration
      keyDeserializer - key deserializer
      valueDeserializer - value deserializer
      Returns:
      an instance of the KafkaConsumer
    • exceptionHandler

      KafkaConsumer<K,V> exceptionHandler(Handler<Throwable> handler)
      Specified by:
      exceptionHandler in interface ReadStream<K>
      Specified by:
      exceptionHandler in interface StreamBase
    • handler

KafkaConsumer<K,V> handler(Handler<KafkaConsumerRecord<K,V>> handler)
Specified by:
handler in interface ReadStream<K>
    • pause

      KafkaConsumer<K,V> pause()
      Specified by:
      pause in interface ReadStream<K>
    • resume

      KafkaConsumer<K,V> resume()
      Specified by:
      resume in interface ReadStream<K>
    • fetch

      KafkaConsumer<K,V> fetch(long amount)
      Specified by:
      fetch in interface ReadStream<K>
    • endHandler

      KafkaConsumer<K,V> endHandler(Handler<Void> endHandler)
      Specified by:
      endHandler in interface ReadStream<K>
    • demand

long demand()
Returns the current demand.
  • If the stream is in flowing mode, returns Long.MAX_VALUE.
  • If the stream is in fetch mode, returns the current number of elements still to be delivered, or 0 if paused.
      Returns:
      current demand
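Sketch of flow control in fetch mode, tying pause(), fetch(long), and demand() together (assumes `consumer` is an existing KafkaConsumer and `process` is a hypothetical per-record function):

```java
// Switch from flowing mode to fetch mode: demand drops to 0 and delivery stops
consumer.pause();

consumer.handler(record -> {
  process(record);      // hypothetical per-record processing
  consumer.fetch(1);    // request one more element once this one is handled
});

consumer.fetch(5);      // prime the stream with an initial demand of 5
// consumer.demand() now returns the number of elements still to be delivered;
// after consumer.resume() it would return Long.MAX_VALUE (flowing mode)
```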
    • subscribe

      Future<Void> subscribe(String topic)
      Subscribe to the given topic to get dynamically assigned partitions.
      Parameters:
      topic - topic to subscribe to
      Returns:
      a Future completed with the operation result
    • subscribe

      Future<Void> subscribe(Set<String> topics)
      Subscribe to the given list of topics to get dynamically assigned partitions.
      Parameters:
      topics - topics to subscribe to
      Returns:
      a Future completed with the operation result
    • subscribe

      KafkaConsumer<K,V> subscribe(String topic, Handler<AsyncResult<Void>> completionHandler)
      Subscribe to the given topic to get dynamically assigned partitions.

Due to internal buffering of messages, when changing the subscribed topic the old topic may remain in effect (as observed by the record handler set via handler(Handler)) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new topic.

      Parameters:
      topic - topic to subscribe to
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • subscribe

      KafkaConsumer<K,V> subscribe(Set<String> topics, Handler<AsyncResult<Void>> completionHandler)
      Subscribe to the given list of topics to get dynamically assigned partitions.

Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the record handler set via handler(Handler)) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of topics.

      Parameters:
      topics - topics to subscribe to
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • subscribe

      Future<Void> subscribe(Pattern pattern)
      Subscribe to all topics matching specified pattern to get dynamically assigned partitions.
      Parameters:
      pattern - Pattern to subscribe to
      Returns:
      a Future completed with the operation result
    • subscribe

      KafkaConsumer<K,V> subscribe(Pattern pattern, Handler<AsyncResult<Void>> completionHandler)
      Subscribe to all topics matching specified pattern to get dynamically assigned partitions.

Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the record handler set via handler(Handler)) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of topics.

      Parameters:
      pattern - Pattern to subscribe to
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • assign

      Future<Void> assign(TopicPartition topicPartition)
      Manually assign a partition to this consumer.
      Parameters:
topicPartition - the partition to assign
      Returns:
      a Future completed with the operation result
    • assign

      Future<Void> assign(Set<TopicPartition> topicPartitions)
Manually assign a set of partitions to this consumer.
      Parameters:
topicPartitions - the partitions to assign
      Returns:
      a Future completed with the operation result
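A manual-assignment sketch (topic name and partition numbers are placeholders; `consumer` is an existing KafkaConsumer):

```java
import io.vertx.kafka.client.common.TopicPartition;

import java.util.Set;

// Manual assignment bypasses the consumer group's dynamic partition balancing
Set<TopicPartition> partitions = Set.of(
  new TopicPartition("example-topic", 0),
  new TopicPartition("example-topic", 1));

consumer.assign(partitions)
  .onSuccess(v -> System.out.println("partitions assigned"))
  .onFailure(Throwable::printStackTrace);
```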
    • assign

      KafkaConsumer<K,V> assign(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
      Manually assign a partition to this consumer.

Due to internal buffering of messages, when reassigning, the old partition may remain in effect (as observed by the record handler set via handler(Handler)) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new partition.

      Parameters:
topicPartition - the partition to assign
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • assign

      KafkaConsumer<K,V> assign(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Manually assign a set of partitions to this consumer.

Due to internal buffering of messages, when reassigning, the old set of partitions may remain in effect (as observed by the record handler set via handler(Handler)) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of partitions.

      Parameters:
topicPartitions - the partitions to assign
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • assignment

KafkaConsumer<K,V> assignment(Handler<AsyncResult<Set<TopicPartition>>> handler)
Get the set of partitions currently assigned to this consumer.
      Parameters:
      handler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • assignment

      Future<Set<TopicPartition>> assignment()
      Like assignment(Handler) but returns a Future of the asynchronous result
    • listTopics

KafkaConsumer<K,V> listTopics(Handler<AsyncResult<Map<String,List<PartitionInfo>>>> handler)
Get metadata about partitions for all topics that the user is authorized to view.
      Parameters:
      handler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • listTopics

Future<Map<String,List<PartitionInfo>>> listTopics()
Like listTopics(Handler) but returns a Future of the asynchronous result
    • unsubscribe

      Future<Void> unsubscribe()
      Unsubscribe from topics currently subscribed with subscribe.
      Returns:
      a Future completed with the operation result
    • unsubscribe

      KafkaConsumer<K,V> unsubscribe(Handler<AsyncResult<Void>> completionHandler)
      Unsubscribe from topics currently subscribed with subscribe.
      Parameters:
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • subscription

      KafkaConsumer<K,V> subscription(Handler<AsyncResult<Set<String>>> handler)
      Get the current subscription.
      Parameters:
      handler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • subscription

      Future<Set<String>> subscription()
      Like subscription(Handler) but returns a Future of the asynchronous result
    • pause

      Future<Void> pause(TopicPartition topicPartition)
      Suspend fetching from the requested partition.
      Parameters:
topicPartition - topic partition from which to suspend fetching
      Returns:
      a Future completed with the operation result
    • pause

      Future<Void> pause(Set<TopicPartition> topicPartitions)
      Suspend fetching from the requested partitions.
      Parameters:
topicPartitions - topic partitions from which to suspend fetching
      Returns:
      a Future completed with the operation result
    • pause

      KafkaConsumer<K,V> pause(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
      Suspend fetching from the requested partition.

Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartition until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will not see messages from the given topicPartition.

      Parameters:
topicPartition - topic partition from which to suspend fetching
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • pause

      KafkaConsumer<K,V> pause(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
      Suspend fetching from the requested partitions.

Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartitions until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will not see messages from the given topicPartitions.

      Parameters:
topicPartitions - topic partitions from which to suspend fetching
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • paused

      void paused(Handler<AsyncResult<Set<TopicPartition>>> handler)
      Get the set of partitions that were previously paused by a call to pause(Set).
      Parameters:
      handler - handler called on operation completed
    • paused

Future<Set<TopicPartition>> paused()
Like paused(Handler) but returns a Future of the asynchronous result
    • resume

      Future<Void> resume(TopicPartition topicPartition)
Resume the specified partition, which has been paused with pause.
      Parameters:
topicPartition - topic partition from which to resume fetching
      Returns:
      a Future completed with the operation result
    • resume

      Future<Void> resume(Set<TopicPartition> topicPartitions)
Resume the specified partitions, which have been paused with pause.
      Parameters:
topicPartitions - topic partitions from which to resume fetching
      Returns:
      a Future completed with the operation result
    • resume

      KafkaConsumer<K,V> resume(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Resume the specified partition, which has been paused with pause.
      Parameters:
topicPartition - topic partition from which to resume fetching
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • resume

      KafkaConsumer<K,V> resume(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Resume the specified partitions, which have been paused with pause.
      Parameters:
topicPartitions - topic partitions from which to resume fetching
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
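A sketch of partition-level pause and resume around an asynchronous task (`doMaintenance` is a hypothetical method returning Future<Void>; topic name and partition are placeholders):

```java
import io.vertx.kafka.client.common.TopicPartition;

TopicPartition tp = new TopicPartition("example-topic", 0);  // placeholder

// Temporarily stop fetching from one partition while others keep flowing
consumer.pause(tp)
  .compose(v -> doMaintenance())       // hypothetical async task
  .compose(v -> consumer.resume(tp))
  .onSuccess(v -> System.out.println("partition resumed"))
  .onFailure(Throwable::printStackTrace);
```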
    • partitionsRevokedHandler

      KafkaConsumer<K,V> partitionsRevokedHandler(Handler<Set<TopicPartition>> handler)
Set the handler called when topic partitions are revoked from the consumer.
      Parameters:
      handler - handler called on revoked topic partitions
      Returns:
      current KafkaConsumer instance
    • partitionsAssignedHandler

      KafkaConsumer<K,V> partitionsAssignedHandler(Handler<Set<TopicPartition>> handler)
Set the handler called when topic partitions are assigned to the consumer.
      Parameters:
      handler - handler called on assigned topic partitions
      Returns:
      current KafkaConsumer instance
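A rebalance-tracking sketch: both handlers receive the set of affected topic partitions (`consumer` is an existing KafkaConsumer; the topic name is a placeholder):

```java
// Log each partition handed to this consumer on a rebalance
consumer.partitionsAssignedHandler(topicPartitions -> {
  for (TopicPartition tp : topicPartitions) {
    System.out.println("assigned: " + tp.getTopic() + "/" + tp.getPartition());
  }
});

// Log partitions taken away (e.g. when another group member joins)
consumer.partitionsRevokedHandler(topicPartitions ->
  System.out.println("revoked: " + topicPartitions.size() + " partition(s)"));

consumer.subscribe("example-topic");  // callbacks fire on (re)subscription
```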
    • seek

      Future<Void> seek(TopicPartition topicPartition, long offset)
      Overrides the fetch offsets that the consumer will use on the next poll.
      Parameters:
topicPartition - the topic partition on which to seek
      offset - offset to seek inside the topic partition
      Returns:
      a Future completed with the operation result
    • seek

      KafkaConsumer<K,V> seek(TopicPartition topicPartition, long offset, Handler<AsyncResult<Void>> completionHandler)
      Overrides the fetch offsets that the consumer will use on the next poll.

Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.

      Parameters:
topicPartition - the topic partition on which to seek
      offset - offset to seek inside the topic partition
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
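A seek sketch: assign a partition, then rewind to a known offset (topic, partition, and offset 42 are arbitrary placeholders):

```java
import io.vertx.kafka.client.common.TopicPartition;

TopicPartition tp = new TopicPartition("example-topic", 0);  // placeholder

// Reprocess from a known offset; the seek takes effect on the next poll
consumer.assign(tp)
  .compose(v -> consumer.seek(tp, 42L))   // 42 is an arbitrary example offset
  .onSuccess(v -> System.out.println("seek done"))
  .onFailure(Throwable::printStackTrace);
```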
    • seekToBeginning

      Future<Void> seekToBeginning(TopicPartition topicPartition)
Seek to the first offset for the given partition.
      Parameters:
topicPartition - the topic partition on which to seek
      Returns:
      a Future completed with the operation result
    • seekToBeginning

      Future<Void> seekToBeginning(Set<TopicPartition> topicPartitions)
      Seek to the first offset for each of the given partitions.
      Parameters:
topicPartitions - the topic partitions on which to seek
      Returns:
      a Future completed with the operation result
    • seekToBeginning

      KafkaConsumer<K,V> seekToBeginning(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Seek to the first offset for the given partition.

Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.

      Parameters:
topicPartition - the topic partition on which to seek
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • seekToBeginning

      KafkaConsumer<K,V> seekToBeginning(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
      Seek to the first offset for each of the given partitions.

Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.

      Parameters:
topicPartitions - the topic partitions on which to seek
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • seekToEnd

      Future<Void> seekToEnd(TopicPartition topicPartition)
Seek to the last offset for the given partition.
      Parameters:
topicPartition - the topic partition on which to seek
      Returns:
      a Future completed with the operation result
    • seekToEnd

      Future<Void> seekToEnd(Set<TopicPartition> topicPartitions)
      Seek to the last offset for each of the given partitions.
      Parameters:
topicPartitions - the topic partitions on which to seek
      Returns:
      a Future completed with the operation result
    • seekToEnd

      KafkaConsumer<K,V> seekToEnd(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Seek to the last offset for the given partition.

Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.

      Parameters:
topicPartition - the topic partition on which to seek
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • seekToEnd

      KafkaConsumer<K,V> seekToEnd(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
      Seek to the last offset for each of the given partitions.

Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.

      Parameters:
topicPartitions - the topic partitions on which to seek
      completionHandler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • commit

      Future<Void> commit()
Commit current offsets for all subscribed topics and partitions.
    • commit

      void commit(Handler<AsyncResult<Void>> completionHandler)
Commit current offsets for all subscribed topics and partitions.
      Parameters:
      completionHandler - handler called on operation completed
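Manual commit is typically paired with `enable.auto.commit` set to `false` in the consumer configuration. A sketch (`consumer` is an existing KafkaConsumer, `process` is a hypothetical function; committing after every single record is for illustration only, as batching commits reduces broker load):

```java
// Commit the consumer's current offsets only after processing succeeds,
// so an unprocessed record is redelivered after a restart
consumer.handler(record -> {
  process(record);                      // hypothetical processing
  consumer.commit()
    .onFailure(Throwable::printStackTrace);
});
```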
    • commit

      Commit the specified offsets for the specified list of topics and partitions to Kafka.
      Parameters:
      offsets - offsets list to commit
    • commit

      Commit the specified offsets for the specified list of topics and partitions to Kafka.
      Parameters:
      offsets - offsets list to commit
      completionHandler - handler called on operation completed
    • committed

      void committed(TopicPartition topicPartition, Handler<AsyncResult<OffsetAndMetadata>> handler)
      Get the last committed offset for the given partition (whether the commit happened by this process or another).
      Parameters:
      topicPartition - topic partition for getting last committed offset
      handler - handler called on operation completed
    • committed

      Future<OffsetAndMetadata> committed(TopicPartition topicPartition)
      Like committed(TopicPartition, Handler) but returns a Future of the asynchronous result
    • partitionsFor

      KafkaConsumer<K,V> partitionsFor(String topic, Handler<AsyncResult<List<PartitionInfo>>> handler)
      Get metadata about the partitions for a given topic.
      Parameters:
topic - the topic for which to get partition metadata
      handler - handler called on operation completed
      Returns:
      current KafkaConsumer instance
    • partitionsFor

      Future<List<PartitionInfo>> partitionsFor(String topic)
      Like partitionsFor(String, Handler) but returns a Future of the asynchronous result
    • batchHandler

      KafkaConsumer<K,V> batchHandler(Handler<KafkaConsumerRecords<K,V>> handler)
      Set the handler to be used when batches of messages are fetched from the Kafka server. Batch handlers need to take care not to block the event loop when dealing with large batches. It is better to process records individually using the record handler.
      Parameters:
      handler - handler called when batches of messages are fetched
      Returns:
      current KafkaConsumer instance
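A batch-handler sketch using the size() and recordAt(int) accessors of KafkaConsumerRecords (`consumer` is an existing KafkaConsumer<String, String>; KafkaConsumerRecord is from io.vertx.kafka.client.consumer):

```java
// Observe whole fetched batches alongside (or instead of) per-record handling;
// keep this work light to avoid blocking the event loop
consumer.batchHandler(records -> {
  System.out.println("fetched batch of " + records.size() + " record(s)");
  for (int i = 0; i < records.size(); i++) {
    KafkaConsumerRecord<String, String> record = records.recordAt(i);
    System.out.println("offset=" + record.offset());
  }
});
```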
    • close

      Future<Void> close()
      Close the consumer
    • close

      void close(Handler<AsyncResult<Void>> completionHandler)
      Close the consumer
      Parameters:
      completionHandler - handler called on operation completed
    • position

      void position(TopicPartition partition, Handler<AsyncResult<Long>> handler)
      Get the offset of the next record that will be fetched (if a record with that offset exists).
      Parameters:
      partition - The partition to get the position for
      handler - handler called on operation completed
    • position

      Future<Long> position(TopicPartition partition)
      Like position(TopicPartition, Handler) but returns a Future of the asynchronous result
    • offsetsForTimes

      void offsetsForTimes(Map<TopicPartition,Long> topicPartitionTimestamps, Handler<AsyncResult<Map<TopicPartition,OffsetAndTimestamp>>> handler)
Look up the offsets for the given partitions by timestamp. Note: the result might be empty if no offset can be found for a given timestamp, e.g. when the timestamp refers to the future.
      Parameters:
      topicPartitionTimestamps - A map with pairs of (TopicPartition, Timestamp).
      handler - handler called on operation completed
    • offsetsForTimes

      Future<Map<TopicPartition,OffsetAndTimestamp>> offsetsForTimes(Map<TopicPartition,Long> topicPartitionTimestamps)
      Like offsetsForTimes(Map, Handler) but returns a Future of the asynchronous result
    • offsetsForTimes

      void offsetsForTimes(TopicPartition topicPartition, Long timestamp, Handler<AsyncResult<OffsetAndTimestamp>> handler)
Look up the offset for the given partition by timestamp. Note: the result might be null if no offset can be found for the given timestamp, e.g. when the timestamp refers to the future.
      Parameters:
      topicPartition - TopicPartition to query.
      timestamp - Timestamp to be used in the query.
      handler - handler called on operation completed
    • offsetsForTimes

      Future<OffsetAndTimestamp> offsetsForTimes(TopicPartition topicPartition, Long timestamp)
      Like offsetsForTimes(TopicPartition, Long, Handler) but returns a Future of the asynchronous result
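A replay-from-timestamp sketch combining offsetsForTimes and seek (`consumer` is an existing KafkaConsumer; Future is io.vertx.core.Future; topic and partition are placeholders):

```java
import io.vertx.core.Future;
import io.vertx.kafka.client.common.TopicPartition;

TopicPartition tp = new TopicPartition("example-topic", 0);   // placeholder
long oneHourAgo = System.currentTimeMillis() - 3_600_000L;

// Find the first offset with a timestamp >= oneHourAgo, then rewind to it
consumer.offsetsForTimes(tp, oneHourAgo)
  .compose(offsetAndTimestamp -> {
    if (offsetAndTimestamp == null) {
      return Future.succeededFuture();  // no offset found (timestamp in the future)
    }
    return consumer.seek(tp, offsetAndTimestamp.getOffset());
  })
  .onFailure(Throwable::printStackTrace);
```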
    • beginningOffsets

      void beginningOffsets(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Map<TopicPartition,Long>>> handler)
      Get the first offset for the given partitions.
      Parameters:
topicPartitions - the partitions for which to get the earliest offsets
      handler - handler called on operation completed. Returns the earliest available offsets for the given partitions
    • beginningOffsets

      Future<Map<TopicPartition,Long>> beginningOffsets(Set<TopicPartition> topicPartitions)
      Like beginningOffsets(Set, Handler) but returns a Future of the asynchronous result
    • beginningOffsets

      void beginningOffsets(TopicPartition topicPartition, Handler<AsyncResult<Long>> handler)
      Get the first offset for the given partitions.
      Parameters:
topicPartition - the partition for which to get the earliest offset
      handler - handler called on operation completed. Returns the earliest available offset for the given partition
    • beginningOffsets

      Future<Long> beginningOffsets(TopicPartition topicPartition)
      Like beginningOffsets(TopicPartition, Handler) but returns a Future of the asynchronous result
    • endOffsets

      void endOffsets(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Map<TopicPartition,Long>>> handler)
      Get the last offset for the given partitions. The last offset of a partition is the offset of the upcoming message, i.e. the offset of the last available message + 1.
      Parameters:
topicPartitions - the partitions for which to get the end offsets
      handler - handler called on operation completed. The end offsets for the given partitions.
    • endOffsets

      Future<Map<TopicPartition,Long>> endOffsets(Set<TopicPartition> topicPartitions)
      Like endOffsets(Set, Handler) but returns a Future of the asynchronous result
    • endOffsets

      void endOffsets(TopicPartition topicPartition, Handler<AsyncResult<Long>> handler)
      Get the last offset for the given partition. The last offset of a partition is the offset of the upcoming message, i.e. the offset of the last available message + 1.
      Parameters:
topicPartition - the partition for which to get the end offset
      handler - handler called on operation completed. The end offset for the given partition.
    • endOffsets

      Future<Long> endOffsets(TopicPartition topicPartition)
      Like endOffsets(TopicPartition, Handler) but returns a Future of the asynchronous result
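Since the end offset is the offset of the next message to be written, combining it with position gives the consumer lag for a partition. A sketch (`consumer` is an existing KafkaConsumer; topic and partition are placeholders):

```java
import io.vertx.kafka.client.common.TopicPartition;

TopicPartition tp = new TopicPartition("example-topic", 0);   // placeholder

// Consumer lag = end offset (last available offset + 1) - current position
consumer.endOffsets(tp)
  .compose(endOffset ->
    consumer.position(tp).map(position -> endOffset - position))
  .onSuccess(lag -> System.out.println("lag: " + lag))
  .onFailure(Throwable::printStackTrace);
```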
    • asStream

      KafkaReadStream<K,V> asStream()
      Returns:
the underlying KafkaReadStream instance
    • unwrap

      org.apache.kafka.clients.consumer.Consumer<K,V> unwrap()
      Returns:
      the underlying consumer
    • pollTimeout

      KafkaConsumer<K,V> pollTimeout(Duration timeout)
      Sets the poll timeout for the underlying native Kafka Consumer. Defaults to 1000ms. Setting timeout to a lower value results in a more 'responsive' client, because it will block for a shorter period if no data is available in the assigned partition and therefore allows subsequent actions to be executed with a shorter delay. At the same time, the client will poll more frequently and thus will potentially create a higher load on the Kafka Broker.
      Parameters:
timeout - The time spent waiting in poll if data is not available in the buffer. If 0, the poll returns immediately with any records currently available in the native Kafka consumer's buffer, or an empty set otherwise. Must not be negative.
    • poll

      void poll(Duration timeout, Handler<AsyncResult<KafkaConsumerRecords<K,V>>> handler)
      Executes a poll for getting messages from Kafka.
      Parameters:
      timeout - The maximum time to block (must not be greater than Long.MAX_VALUE milliseconds)
      handler - handler called after the poll with batch of records (can be empty).
    • poll

Future<KafkaConsumerRecords<K,V>> poll(Duration timeout)
Like poll(Duration, Handler) but returns a Future of the asynchronous result
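A manual-polling sketch combining pollTimeout and poll (`consumer` is an existing KafkaConsumer<String, String> and `vertx` an existing Vertx instance; the periodic interval is an arbitrary choice for illustration):

```java
import java.time.Duration;

// Manual polling instead of the push-style record handler; do not mix
// this with handler(...)-driven consumption on the same consumer
consumer.pollTimeout(Duration.ofMillis(250));   // tune the native consumer's poll timeout

vertx.setPeriodic(500, id ->
  consumer.poll(Duration.ofMillis(100))
    .onSuccess(records -> {
      for (int i = 0; i < records.size(); i++) {
        System.out.println("value=" + records.recordAt(i).value());
      }
    })
    .onFailure(Throwable::printStackTrace));
```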