sealed abstract class KafkaConsumer[F[_], K, V] extends AnyRef
KafkaConsumer represents a consumer of Kafka records, with the
ability to subscribe to topics, start a single top-level stream,
and optionally control it via the provided fiber instance.
The following top-level streams are provided.
- stream provides a single stream of records, where the order
of records is guaranteed per topic-partition.
- partitionedStream provides a stream with elements as streams
that continually request records for a single partition. Order
is guaranteed per topic-partition, but all assigned partitions
will have to be processed in parallel.
For the streams, records are wrapped in CommittableConsumerRecords
which provide CommittableOffsets with the ability to commit
record offsets to Kafka. For performance reasons, offsets are
usually committed in batches using CommittableOffsetBatch.
Provided Pipes, like commitBatchWithin, are available for
batch committing offsets. If you are not committing offsets to
Kafka, you can simply discard the CommittableOffset and
only make use of the record.
While it's technically possible to start more than one stream from a
single KafkaConsumer, it is generally not recommended as there is
no guarantee which stream will receive which records, and there might
be an overlap, in terms of duplicate records, between the two streams.
If a first stream completes, possibly with error, there's no guarantee
the stream has processed all of the records it received, and a second
stream from the same KafkaConsumer might not be able to pick up where
the first one left off. Therefore, only create a single top-level stream
per KafkaConsumer, and if you want to start a new stream when the first
one finishes, let the KafkaConsumer shut down and create a new one.
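The recommended pattern above can be sketched as follows, assuming a recent fs2-kafka version where KafkaConsumer.resource is available; the bootstrap server, group id, and topic name are illustrative:

```scala
import scala.concurrent.duration._
import cats.effect.{IO, IOApp}
import cats.syntax.all._
import fs2.kafka._

object SingleStreamExample extends IOApp.Simple {
  val settings =
    ConsumerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092")
      .withGroupId("example-group")
      .withAutoOffsetReset(AutoOffsetReset.Earliest)

  val run: IO[Unit] =
    KafkaConsumer.resource(settings).use { consumer =>
      consumer.subscribeTo("example-topic") >>
        consumer.stream                                // the single top-level stream
          .evalTap(r => IO.println(r.record.value))    // process each record in order
          .map(_.offset)                               // keep only the CommittableOffset
          .through(commitBatchWithin(500, 15.seconds)) // commit offsets in batches
          .compile
          .drain
    }
}
```

Note that the whole lifecycle lives inside a single use of the consumer resource, so the consumer shuts down when the stream completes.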
Abstract Value Members
- abstract def assign(topic: String): F[Unit]
Manually assigns all partitions for the specified topic to the consumer.
- abstract def assign(topic: String, partitions: NonEmptySet[Int]): F[Unit]
Manually assigns the specified list of partitions for the specified topic to the consumer. This function does not allow for incremental assignment and will replace the previous assignment (if there is one).
Manual topic assignment through this method does not use the consumer's group management functionality. As such, there will be no rebalance operation triggered when group membership or cluster and topic metadata change. Note that it is not possible to use both manual partition assignment with assign and group assignment with subscribe. If auto-commit is enabled, an async commit (based on the old assignment) will be triggered before the new assignment replaces the old one.
To unassign all partitions, use KafkaConsumer#unsubscribe.
- See also
org.apache.kafka.clients.consumer.KafkaConsumer#assign
- abstract def assign(partitions: NonEmptySet[TopicPartition]): F[Unit]
Manually assigns the specified list of topic partitions to the consumer. This function does not allow for incremental assignment and will replace the previous assignment (if there is one).
Manual topic assignment through this method does not use the consumer's group management functionality. As such, there will be no rebalance operation triggered when group membership or cluster and topic metadata change. Note that it is not possible to use both manual partition assignment with assign and group assignment with subscribe. If auto-commit is enabled, an async commit (based on the old assignment) will be triggered before the new assignment replaces the old one.
To unassign all partitions, use KafkaConsumer#unsubscribe.
- See also
org.apache.kafka.clients.consumer.KafkaConsumer#assign
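A minimal sketch of manual assignment using the topic-and-partitions overload; the topic name and partition numbers are illustrative:

```scala
import cats.data.NonEmptySet
import cats.effect.IO
import fs2.kafka.KafkaConsumer

// Assign partitions 0 and 1 of "example-topic" directly, bypassing the
// consumer group's rebalance protocol. This replaces any prior assignment.
def assignExample(consumer: KafkaConsumer[IO, String, String]): IO[Unit] =
  consumer.assign("example-topic", NonEmptySet.of(0, 1))
```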
- abstract def assignment: F[SortedSet[TopicPartition]]
Returns the set of partitions currently assigned to this consumer.
- abstract def assignmentStream: Stream[F, SortedSet[TopicPartition]]
Stream where the elements are the set of TopicPartitions currently assigned to this consumer. The stream emits whenever a rebalance changes partition assignments.
- abstract def beginningOffsets(partitions: Set[TopicPartition], timeout: FiniteDuration): F[Map[TopicPartition, Long]]
Returns the first offset for the specified partitions.
- abstract def beginningOffsets(partitions: Set[TopicPartition]): F[Map[TopicPartition, Long]]
Returns the first offset for the specified partitions.
Timeout is determined by default.api.timeout.ms, which is set using ConsumerSettings#withDefaultApiTimeout.
- abstract def endOffsets(partitions: Set[TopicPartition], timeout: FiniteDuration): F[Map[TopicPartition, Long]]
Returns the last offset for the specified partitions.
- abstract def endOffsets(partitions: Set[TopicPartition]): F[Map[TopicPartition, Long]]
Returns the last offset for the specified partitions.
Timeout is determined by request.timeout.ms, which is set using ConsumerSettings#withRequestTimeout.
- abstract def fiber: Fiber[F, Unit]
A Fiber that can be used to cancel the underlying consumer, or wait for it to complete. If you're using stream, or any other provided stream in KafkaConsumer, these will be automatically interrupted when the underlying consumer has been cancelled or when it finishes with an exception.
Whenever cancel is invoked, an attempt will be made to stop the underlying consumer. The cancel operation will not wait for the consumer to shut down. If you also want to wait for the shutdown to complete, you can use join. Note that join is guaranteed to complete after consumer shutdown, even when the consumer is cancelled with cancel.
This Fiber instance is usually only required if the consumer needs to be cancelled due to some external condition, or when an external process needs to be cancelled whenever the consumer has shut down. Most of the time, when you're only using the streams provided by KafkaConsumer, there is no need to use this.
- abstract def metrics: F[Map[MetricName, Metric]]
Returns consumer metrics.
- See also
org.apache.kafka.clients.consumer.KafkaConsumer#metrics
- abstract def partitionedStream: Stream[F, Stream[F, CommittableConsumerRecord[F, K, V]]]
Stream where the elements themselves are Streams which continually request records for a single partition. These Streams will have to be processed in parallel, using parJoin or parJoinUnbounded. Note that when using parJoin(n) and n is smaller than the number of currently assigned partitions, there will be assigned partitions which won't be processed. For that reason, prefer parJoinUnbounded, where the actual limit will be the number of assigned partitions.
If you do not want to process all partitions in parallel, then you can use stream instead, where records for all partitions are in a single Stream.
- Note
you have to first use subscribe to subscribe the consumer before using this Stream. If you forgot to subscribe, there will be a NotSubscribedException raised in the Stream.
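The per-partition parallelism described above can be sketched as follows, assuming KafkaConsumer.resource is available and using illustrative connection details:

```scala
import cats.effect.{IO, IOApp}
import cats.syntax.all._
import fs2.kafka._

object PartitionedExample extends IOApp.Simple {
  val settings =
    ConsumerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092")
      .withGroupId("example-group")

  val run: IO[Unit] =
    KafkaConsumer.resource(settings).use { consumer =>
      consumer.subscribeTo("example-topic") >>
        consumer.partitionedStream
          // each inner stream serves exactly one assigned partition
          .map(_.evalMap(r => IO.println(r.record.value)))
          // parJoinUnbounded is effectively capped at the number of
          // assigned partitions, so none of them are left unprocessed
          .parJoinUnbounded
          .compile
          .drain
    }
}
```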
- abstract def partitionsFor(topic: String, timeout: FiniteDuration): F[List[PartitionInfo]]
Returns the partitions for the specified topic.
- abstract def partitionsFor(topic: String): F[List[PartitionInfo]]
Returns the partitions for the specified topic.
Timeout is determined by default.api.timeout.ms, which is set using ConsumerSettings#withDefaultApiTimeout.
- abstract def position(partition: TopicPartition, timeout: FiniteDuration): F[Long]
Returns the offset of the next record that will be fetched.
- abstract def position(partition: TopicPartition): F[Long]
Returns the offset of the next record that will be fetched.
Timeout is determined by default.api.timeout.ms, which is set using ConsumerSettings#withDefaultApiTimeout.
- abstract def seek(partition: TopicPartition, offset: Long): F[Unit]
Overrides the fetch offsets that the consumer will use when reading the next record. If this API is invoked for the same partition more than once, the latest offset will be used. Note that you may lose data if this API is arbitrarily used in the middle of consumption to reset the fetch offsets.
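A brief sketch of seeking before consumption starts; the partition and offset are illustrative:

```scala
import cats.effect.IO
import fs2.kafka.KafkaConsumer
import org.apache.kafka.common.TopicPartition

// Replay partition 0 of "example-topic" from offset 42. The seek is lazy:
// it only takes effect on the next poll or position call, so invoke it
// before the stream starts consuming.
def replayFrom(consumer: KafkaConsumer[IO, String, String]): IO[Unit] =
  consumer.seek(new TopicPartition("example-topic", 0), 42L)
```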
- abstract def seekToBeginning[G[_]](partitions: G[TopicPartition])(implicit G: Foldable[G]): F[Unit]
Seeks to the first offset for each of the specified partitions. If no partitions are provided, seeks to the first offset for all currently assigned partitions.
Note that this seek evaluates lazily, and only on the next call to poll or position.
- abstract def seekToBeginning: F[Unit]
Seeks to the first offset for each currently assigned partition. This is equivalent to using seekToBeginning with an empty set of partitions.
Note that this seek evaluates lazily, and only on the next call to poll or position.
- abstract def seekToEnd[G[_]](partitions: G[TopicPartition])(implicit G: Foldable[G]): F[Unit]
Seeks to the last offset for each of the specified partitions. If no partitions are provided, seeks to the last offset for all currently assigned partitions.
Note that this seek evaluates lazily, and only on the next call to poll or position.
- abstract def seekToEnd: F[Unit]
Seeks to the last offset for each currently assigned partition. This is equivalent to using seekToEnd with an empty set of partitions.
Note that this seek evaluates lazily, and only on the next call to poll or position.
- abstract def stream: Stream[F, CommittableConsumerRecord[F, K, V]]
Alias for partitionedStream.parJoinUnbounded. See partitionedStream for more information.
- Note
you have to first use subscribe to subscribe the consumer before using this Stream. If you forgot to subscribe, there will be a NotSubscribedException raised in the Stream.
- abstract def subscribe(regex: Regex): F[Unit]
Subscribes the consumer to the topics matching the specified Regex. Note that you have to use one of the subscribe functions before you can use any of the provided Streams, or a NotSubscribedException will be raised in the Streams.
- regex
the regex to which matching topics should be subscribed
- abstract def subscribe[G[_]](topics: G[String])(implicit G: Reducible[G]): F[Unit]
Subscribes the consumer to the specified topics. Note that you have to use one of the subscribe functions to subscribe to one or more topics before using any of the provided Streams, or a NotSubscribedException will be raised in the Streams.
- topics
the topics to which the consumer should subscribe
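The subscribe variants can be sketched as follows. Note that in the underlying Kafka consumer, each subscribe call replaces the previous subscription; the sequence below only illustrates the accepted argument shapes, and all topic names are illustrative:

```scala
import cats.data.NonEmptyList
import cats.effect.IO
import fs2.kafka.KafkaConsumer

def subscribeVariants(consumer: KafkaConsumer[IO, String, String]): IO[Unit] =
  for {
    _ <- consumer.subscribeTo("topic-a", "topic-b")     // varargs of topic names
    _ <- consumer.subscribe(NonEmptyList.of("topic-c")) // any Reducible collection
    _ <- consumer.subscribe("topic-.*".r)               // Regex of topic names
  } yield ()
```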
- abstract def subscribeTo(firstTopic: String, remainingTopics: String*): F[Unit]
Subscribes the consumer to the specified topics. Note that you have to use one of the subscribe functions to subscribe to one or more topics before using any of the provided Streams, or a NotSubscribedException will be raised in the Streams.
- abstract def unsubscribe: F[Unit]
Unsubscribes the consumer from all topics and partitions assigned by subscribe or assign.