object Consumer
Type Members
- sealed trait AutoOffsetStrategy extends AnyRef
- type ConsumerDiagnostics = Diagnostics[DiagnosticEvent]
A callback for consumer diagnostic events.
- sealed trait OffsetRetrieval extends AnyRef
See ConsumerSettings.withOffsetRetrieval.
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- val NoDiagnostics: ConsumerDiagnostics
A diagnostics implementation that does nothing.
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
- def consumeWith[R, R1, K, V](settings: ConsumerSettings, subscription: Subscription, keyDeserializer: Deserializer[R, K], valueDeserializer: Deserializer[R, V], commitRetryPolicy: Schedule[Any, Any, Any] = Schedule.exponential(1.second) && Schedule.recurs(3))(f: (ConsumerRecord[K, V]) => URIO[R1, Unit])(implicit arg0: zio.EnvironmentTag[R], arg1: zio.EnvironmentTag[R1]): RIO[zio.&[R, R1], Unit]
Execute an effect for each record and commit the offset after processing.
This method is the easiest way of processing messages on a Kafka topic.
Messages on a single partition are processed sequentially, while the processing of multiple partitions happens in parallel.
Offsets are committed after execution of the effect. They are batched when a commit action is in progress to avoid backpressuring the stream. When commits fail due to an org.apache.kafka.clients.consumer.RetriableCommitFailedException, they are retried according to commitRetryPolicy.
The effect should absorb any failures; handle failures by retrying, or by ignoring the error, in which case the Kafka message is skipped.
Messages are processed with 'at least once' consistency: it is not guaranteed that every message that is processed by the effect has a corresponding offset commit before stream termination.
Usage example:
```scala
val settings: ConsumerSettings = ???
val subscription = Subscription.Topics(Set("my-kafka-topic"))
val consumerIO = Consumer.consumeWith(settings, subscription, Serdes.string, Serdes.string) { record =>
  // Process the received record here
  putStrLn(s"Received record: ${record.key()}: ${record.value()}")
}
```
- R
Environment for the consuming effect
- R1
Environment for the deserializers
- K
Type of keys (an implicit Deserializer should be in scope)
- V
Type of values (an implicit Deserializer should be in scope)
- settings
Settings for creating a Consumer
- subscription
Topic subscription parameters
- keyDeserializer
Deserializer for the key of the messages
- valueDeserializer
Deserializer for the value of the messages
- commitRetryPolicy
Retry commits that failed due to a RetriableCommitFailedException according to this schedule
- f
Function that returns the effect to execute for each message. It is passed the org.apache.kafka.clients.consumer.ConsumerRecord.
- returns
Effect that completes with a unit value only when interrupted. May fail when the Consumer fails.
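For context, a fuller sketch of wiring consumeWith into an application entry point. The broker address, group id, and topic name are assumptions, and Serde.string here refers to zio.kafka.serde.Serde:

```scala
import zio._
import zio.kafka.consumer.{ Consumer, ConsumerSettings, Subscription }
import zio.kafka.serde.Serde

object ConsumeWithExample extends ZIOAppDefault {

  // Assumed broker address and group id; adjust for your environment.
  val settings: ConsumerSettings =
    ConsumerSettings(List("localhost:9092")).withGroupId("my-group")

  val subscription: Subscription =
    Subscription.topics("my-kafka-topic")

  // Runs until interrupted; the offset of each record is committed
  // (in batches) after the effect completes.
  override def run: ZIO[Any, Throwable, Unit] =
    Consumer.consumeWith(settings, subscription, Serde.string, Serde.string) { record =>
      Console.printLine(s"Received record: ${record.key}: ${record.value}").orDie
    }
}
```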
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def fromJavaConsumerWithPermit(javaConsumer: org.apache.kafka.clients.consumer.Consumer[Array[Byte], Array[Byte]], settings: ConsumerSettings, access: Semaphore): ZIO[Scope, Throwable, Consumer]
Create a zio-kafka Consumer from an org.apache.kafka KafkaConsumer.
You are responsible for all of the following:
- creating and closing the KafkaConsumer,
- making sure auto.commit is disabled,
- creating access as a fair semaphore with a single permit,
- acquiring a permit from access before using the consumer, and releasing it afterward,
- not using the following consumer methods: subscribe, unsubscribe, assign, poll, commit*, seek, pause, resume, and enforceRebalance,
- keeping the consumer config given to the java consumer in sync with the properties in settings (for example by constructing settings with ConsumerSettings(bootstrapServers).withProperties(config)).
Any deviation from these rules is likely to cause hard-to-track errors.
Semaphore access is shared between you and the zio-kafka consumer. Hold a permit for as short a time as possible; while you hold one, the zio-kafka consumer is blocked.
- javaConsumer
The Java consumer to wrap
- settings
Settings
- access
A Semaphore with 1 permit.
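A sketch of following these rules when wrapping your own KafkaConsumer; the broker address and group id are illustrative:

```scala
import org.apache.kafka.clients.consumer.{ ConsumerConfig, KafkaConsumer }
import org.apache.kafka.common.serialization.ByteArrayDeserializer
import zio._
import zio.kafka.consumer.{ Consumer, ConsumerSettings }
import scala.jdk.CollectionConverters._

val bootstrapServers = List("localhost:9092") // illustrative

val config: Map[String, AnyRef] = Map(
  ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG  -> bootstrapServers.mkString(","),
  ConsumerConfig.GROUP_ID_CONFIG           -> "my-group",
  ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG -> "false" // auto.commit must be disabled
)

val scopedConsumer: ZIO[Scope, Throwable, Consumer] =
  for {
    // A semaphore with exactly one permit, shared with zio-kafka.
    access <- Semaphore.make(permits = 1)
    // You own the java consumer's lifecycle: create it and close it.
    javaConsumer <- ZIO.acquireRelease(
                      ZIO.attempt(
                        new KafkaConsumer[Array[Byte], Array[Byte]](
                          config.asJava,
                          new ByteArrayDeserializer(),
                          new ByteArrayDeserializer()
                        )
                      )
                    )(c => ZIO.attempt(c.close()).orDie)
    // Keep settings in sync with the java consumer's config.
    settings = ConsumerSettings(bootstrapServers).withProperties(config)
    consumer <- Consumer.fromJavaConsumerWithPermit(javaConsumer, settings, access)
  } yield consumer
```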
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def live: RLayer[ConsumerSettings, Consumer]
- def make(settings: ConsumerSettings): ZIO[Scope, Throwable, Consumer]
A new consumer.
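live and make can be used as follows (a sketch; the settings value and broker address are assumptions):

```scala
import zio._
import zio.kafka.consumer.{ Consumer, ConsumerSettings }

val settings: ConsumerSettings =
  ConsumerSettings(List("localhost:9092")) // illustrative

// As a layer, for use with ZIO's dependency injection:
val consumerLayer: TaskLayer[Consumer] =
  ZLayer.succeed(settings) >>> Consumer.live

// Or scoped, tying the consumer's lifetime to the enclosing Scope:
val scopedConsumer: ZIO[Scope, Throwable, Consumer] =
  Consumer.make(settings)
```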
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- val offsetBatches: ZSink[Any, Nothing, Offset, Nothing, OffsetBatch]
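offsetBatches is typically used to aggregate the offsets of processed records into batches before committing, for example (a sketch; the topic name is illustrative):

```scala
import zio._
import zio.kafka.consumer.{ Consumer, Subscription }
import zio.kafka.serde.Serde

// Process records, then commit their offsets in batches: while a commit
// is in flight, incoming offsets are merged into the next batch.
def processAndCommit(consumer: Consumer): ZIO[Any, Throwable, Unit] =
  consumer
    .plainStream(Subscription.topics("my-kafka-topic"), Serde.string, Serde.string)
    .tap(record => ZIO.debug(record.value))
    .map(_.offset)
    .aggregateAsync(Consumer.offsetBatches)
    .mapZIO(_.commit)
    .runDrain
```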
- def runWithGracefulShutdown[R1, R2, E, A](control: StreamControl[R1, E, A], shutdownTimeout: zio.Duration)(withStream: (ZStream[R1, E, A]) => ZIO[zio.&[R2, Scope], E, Any]): ZIO[zio.&[R1, R2], E, Any]
Takes a StreamControl for some stream and runs the given ZIO workflow on that stream such that, when interrupted, stops fetching records and gracefully waits for the ZIO workflow to complete.
This is useful for running streams from within your application's Main class, such that streams are cleanly stopped when the application is shutdown (for example by your container runtime).
WARNING: this is an EXPERIMENTAL API and may disappear or change in an incompatible way without notice in any zio-kafka version.
- control
Result of one of the Consumer's methods returning a StreamControl
- shutdownTimeout
Timeout for the workflow to complete after initiating the graceful shutdown
- withStream
Takes the stream as input and returns a ZIO workflow that processes the stream. As in most programs the given workflow runs until an external interruption, the result value (Any type) is meaningless.
withStream is typically something like:
```scala
stream =>
  stream
    .tap(record => ZIO.debug(record))       // log each record
    .mapZIO(record => record.offset.commit) // then commit its offset
```
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- object AutoOffsetStrategy
See ConsumerSettings.withOffsetRetrieval.
- case object CommitTimeout extends RuntimeException with NoStackTrace with Product with Serializable
- object OffsetRetrieval