object Consumer
Type Members
- sealed trait AutoOffsetStrategy extends AnyRef
- sealed trait OffsetRetrieval extends AnyRef
See ConsumerSettings.withOffsetRetrieval.
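These traits configure where the consumer starts reading. A sketch of wiring them through `ConsumerSettings.withOffsetRetrieval` (the broker address and group id are placeholder values):

```scala
import zio.kafka.consumer._
import zio.kafka.consumer.Consumer.{ AutoOffsetStrategy, OffsetRetrieval }

// Start from the earliest available offset when the group has no committed offset yet
val settings: ConsumerSettings =
  ConsumerSettings(List("localhost:9092")) // assumed broker address
    .withGroupId("example-group")
    .withOffsetRetrieval(OffsetRetrieval.Auto(AutoOffsetStrategy.Earliest))
```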
Value Members
- def assignment: RIO[Consumer, Set[TopicPartition]]
Accessor method
- def beginningOffsets(partitions: Set[TopicPartition], timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Map[TopicPartition, Long]]
Accessor method
- def committed(partitions: Set[TopicPartition], timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Map[TopicPartition, Option[OffsetAndMetadata]]]
Accessor method
- def consumeWith[R, R1, K, V](settings: ConsumerSettings, subscription: Subscription, keyDeserializer: Deserializer[R, K], valueDeserializer: Deserializer[R, V], commitRetryPolicy: Schedule[Any, Any, Any] = Schedule.exponential(1.second) && Schedule.recurs(3))(f: (ConsumerRecord[K, V]) => URIO[R1, Unit])(implicit arg0: zio.EnvironmentTag[R], arg1: zio.EnvironmentTag[R1]): RIO[R & R1, Unit]
Execute an effect for each record and commit the offset after processing.
This method is the easiest way of processing messages on a Kafka topic.
Messages on a single partition are processed sequentially, while multiple partitions are processed in parallel.
Offsets are committed after the effect completes. Commits are batched while a commit action is in progress, to avoid backpressuring the stream. When a commit fails with an org.apache.kafka.clients.consumer.RetriableCommitFailedException, it is retried according to commitRetryPolicy.
The effect should absorb any failures. Failures should be handled by retries or by ignoring the error, which will result in the Kafka message being skipped.
Messages are processed with 'at least once' consistency: it is not guaranteed that every message that is processed by the effect has a corresponding offset commit before stream termination.
Usage example:

```scala
val settings: ConsumerSettings = ???
val subscription = Subscription.Topics(Set("my-kafka-topic"))

val consumerIO = Consumer.consumeWith(settings, subscription, Serdes.string, Serdes.string) { record =>
  // Process the received record here
  putStrLn(s"Received record: ${record.key()}: ${record.value()}")
}
```
- R
Environment for the deserializers
- R1
Environment for the processing effect
- K
Type of keys (an implicit Deserializer should be in scope)
- V
Type of values (an implicit Deserializer should be in scope)
- settings
Settings for creating a Consumer
- subscription
Topic subscription parameters
- keyDeserializer
Deserializer for the key of the messages
- valueDeserializer
Deserializer for the value of the messages
- commitRetryPolicy
Retry commits that failed due to a RetriableCommitFailedException according to this schedule
- f
Function that returns the effect to execute for each message. It is passed the org.apache.kafka.clients.consumer.ConsumerRecord.
- returns
Effect that completes with a unit value only when interrupted. May fail when the Consumer fails.
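As a more complete sketch (assuming zio-kafka's `zio.kafka.serde.Serde` helpers and a broker at localhost:9092; both are placeholder assumptions), a runnable program using consumeWith with a custom commit retry schedule might look like:

```scala
import zio._
import zio.kafka.consumer._
import zio.kafka.serde.Serde

object ConsumeWithExample extends ZIOAppDefault {
  val settings: ConsumerSettings =
    ConsumerSettings(List("localhost:9092")) // assumed broker address
      .withGroupId("example-group")

  override def run: ZIO[Any, Throwable, Unit] =
    Consumer.consumeWith(
      settings,
      Subscription.topics("my-kafka-topic"),
      Serde.string,
      Serde.string,
      // Retry failed commits a little longer than the default policy
      commitRetryPolicy = Schedule.exponential(500.millis) && Schedule.recurs(5)
    ) { record =>
      // The effect must not fail: absorb errors (here via orDie for brevity)
      Console.printLine(s"${record.key}: ${record.value}").orDie
    }
}
```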
- def endOffsets(partitions: Set[TopicPartition], timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Map[TopicPartition, Long]]
Accessor method
- def fromJavaConsumerWithPermit(javaConsumer: org.apache.kafka.clients.consumer.Consumer[Array[Byte], Array[Byte]], settings: ConsumerSettings, access: Semaphore, diagnostics: Diagnostics = Diagnostics.NoOp): ZIO[Scope, Throwable, Consumer]
Create a zio-kafka Consumer from an org.apache.kafka KafkaConsumer.
You are responsible for all of the following:
- creating and closing the KafkaConsumer,
- making sure auto.commit is disabled,
- creating access as a fair semaphore with a single permit,
- acquiring a permit from access before using the consumer, and releasing it afterwards,
- not using the following consumer methods: subscribe, unsubscribe, assign, poll, commit*, seek, pause, resume, and enforceRebalance,
- keeping the consumer config given to the java consumer in sync with the properties in settings (for example by constructing settings with ConsumerSettings(bootstrapServers).withProperties(config)).
Any deviation from these rules is likely to cause hard-to-track errors.
Semaphore access is shared between you and the zio-kafka consumer. Hold it as briefly as possible; while you hold a permit the zio-kafka consumer is blocked.
- javaConsumer
The underlying Java consumer
- settings
Settings for this consumer
- access
A fair Semaphore with a single permit
- diagnostics
Optional diagnostics listener
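A minimal sketch of the rules above (broker address and config keys are illustrative; the semaphore is assumed to be fair):

```scala
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.ByteArrayDeserializer
import zio._
import zio.kafka.consumer._

// One shared config, so the java consumer and `settings` stay in sync
val config: Map[String, AnyRef] = Map(
  "bootstrap.servers"  -> "localhost:9092", // assumed broker address
  "enable.auto.commit" -> "false"           // auto.commit must be disabled
)

val consumer: ZIO[Scope, Throwable, Consumer] =
  for {
    // You create and close the KafkaConsumer yourself
    javaConsumer <- ZIO.acquireRelease(
                      ZIO.attempt(
                        new KafkaConsumer[Array[Byte], Array[Byte]](
                          config.asJava,
                          new ByteArrayDeserializer(),
                          new ByteArrayDeserializer()
                        )
                      )
                    )(c => ZIO.attempt(c.close()).orDie)
    access   <- Semaphore.make(permits = 1) // single permit; hold it as briefly as possible
    settings  = ConsumerSettings(List("localhost:9092")).withProperties(config)
    consumer <- Consumer.fromJavaConsumerWithPermit(javaConsumer, settings, access)
  } yield consumer
```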
- def listTopics(timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Map[String, List[PartitionInfo]]]
Accessor method
- def live: RLayer[ConsumerSettings & Diagnostics, Consumer]
- def make(settings: ConsumerSettings, diagnostics: Diagnostics = Diagnostics.NoOp): ZIO[Scope, Throwable, Consumer]
Creates a new consumer.
- diagnostics
An optional callback for key events in the consumer life-cycle. The callbacks will be executed in a separate fiber. Since the events are queued, failure to handle these events leads to out-of-memory errors.
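A sketch of constructing a scoped consumer with make (topic name and broker address are placeholders); the consumer is closed when the surrounding Scope ends:

```scala
import zio._
import zio.kafka.consumer._
import zio.kafka.serde.Serde

val program: ZIO[Scope, Throwable, Unit] =
  for {
    consumer <- Consumer.make(
                  ConsumerSettings(List("localhost:9092")) // assumed broker address
                    .withGroupId("example-group")
                )
    _        <- consumer
                  .plainStream(Subscription.topics("my-topic"), Serde.string, Serde.string)
                  .take(10) // stop after ten records, for demonstration
                  .runDrain
  } yield ()
```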
- def metrics: RIO[Consumer, Map[MetricName, Metric]]
Accessor method
- val offsetBatches: ZSink[Any, Nothing, Offset, Nothing, OffsetBatch]
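offsetBatches folds a stream of offsets into a single OffsetBatch. A sketch of the common batched-commit pattern, combining it with plainStream via aggregateAsync (topic name is a placeholder):

```scala
import zio.kafka.consumer._
import zio.kafka.serde.Serde
import zio.stream.ZStream

val committing: ZStream[Consumer, Throwable, Nothing] =
  Consumer
    .plainStream(Subscription.topics("my-topic"), Serde.string, Serde.string)
    .map(_.offset)                          // keep only the offsets
    .aggregateAsync(Consumer.offsetBatches) // batch offsets while a commit is in flight
    .mapZIO(_.commit)                       // commit each batch
    .drain
```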
- def offsetsForTimes(timestamps: Map[TopicPartition, Long], timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Map[TopicPartition, OffsetAndTimestamp]]
Accessor method
- def partitionedAssignmentStream[R, K, V](subscription: Subscription, keyDeserializer: Deserializer[R, K], valueDeserializer: Deserializer[R, V]): ZStream[Consumer, Throwable, Chunk[(TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]]
Accessor method
- def partitionedStream[R, K, V](subscription: Subscription, keyDeserializer: Deserializer[R, K], valueDeserializer: Deserializer[R, V]): ZStream[Consumer, Throwable, (TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]
Accessor method
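A sketch of per-partition processing with partitionedStream, where each partition's records are handled on their own fiber (topic name is a placeholder; per-record commits are for illustration only):

```scala
import zio._
import zio.kafka.consumer._
import zio.kafka.serde.Serde
import zio.stream.ZStream

val perPartition: ZStream[Consumer, Throwable, Unit] =
  Consumer
    .partitionedStream(Subscription.topics("my-topic"), Serde.string, Serde.string)
    .flatMapPar(Int.MaxValue) { case (tp, partitionStream) =>
      partitionStream.mapZIO { record =>
        // Illustrative handling: log the value, then commit this record's offset
        Console.printLine(s"$tp: ${record.value}").orDie *> record.offset.commit
      }
    }
```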
- def partitionsFor(topic: String, timeout: zio.Duration = Duration.Infinity): RIO[Consumer, List[PartitionInfo]]
Accessor method
- def plainStream[R, K, V](subscription: Subscription, keyDeserializer: Deserializer[R, K], valueDeserializer: Deserializer[R, V], bufferSize: Int = 4): ZStream[R & Consumer, Throwable, CommittableRecord[K, V]]
Accessor method
- def position(partition: TopicPartition, timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Long]
Accessor method
- def stopConsumption: RIO[Consumer, Unit]
Accessor method
- def subscription: RIO[Consumer, Set[String]]
Accessor method
- object AutoOffsetStrategy
See ConsumerSettings.withOffsetRetrieval.
- case object CommitTimeout extends RuntimeException with NoStackTrace with Product with Serializable
- object OffsetRetrieval
Deprecated Value Members
- def fromJavaConsumer(javaConsumer: org.apache.kafka.clients.consumer.Consumer[Array[Byte], Array[Byte]], settings: ConsumerSettings, diagnostics: Diagnostics = Diagnostics.NoOp): ZIO[Scope, Throwable, Consumer]
Create a zio-kafka Consumer from an org.apache.kafka KafkaConsumer.
You are responsible for creating and closing the KafkaConsumer. Make sure auto.commit is disabled.
- Annotations
- @deprecated
- Deprecated
(Since version 2.9.0) Use fromJavaConsumerWithPermit