object Consumer

Linear Supertypes
  AnyRef, Any

Type Members

  1. sealed trait AutoOffsetStrategy extends AnyRef
  2. type ConsumerDiagnostics = Diagnostics[DiagnosticEvent]

    A callback for consumer diagnostic events.

  3. sealed trait OffsetRetrieval extends AnyRef

    See ConsumerSettings.withOffsetRetrieval.

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val NoDiagnostics: ConsumerDiagnostics

    A diagnostics implementation that does nothing.

  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  7. def consumeWith[R, R1, K, V](settings: ConsumerSettings, subscription: Subscription, keyDeserializer: Deserializer[R, K], valueDeserializer: Deserializer[R, V], commitRetryPolicy: Schedule[Any, Any, Any] = Schedule.exponential(1.second) && Schedule.recurs(3))(f: (ConsumerRecord[K, V]) => URIO[R1, Unit])(implicit arg0: zio.EnvironmentTag[R], arg1: zio.EnvironmentTag[R1]): RIO[R & R1, Unit]

    Execute an effect for each record and commit the offset after processing.

    This method is the easiest way of processing messages on a Kafka topic.

    Messages on a single partition are processed sequentially, while the processing of multiple partitions happens in parallel.

    Offsets are committed after execution of the effect. They are batched while a commit action is in progress, to avoid backpressuring the stream. When commits fail due to an org.apache.kafka.clients.consumer.RetriableCommitFailedException, they are retried according to commitRetryPolicy.

    The effect should absorb any failures. Failures should be handled by retries or ignoring the error, which will result in the Kafka message being skipped.

    Messages are processed with 'at least once' consistency: it is not guaranteed that every message that is processed by the effect has a corresponding offset commit before stream termination.

    Usage example:

    val settings: ConsumerSettings = ???
    val subscription = Subscription.Topics(Set("my-kafka-topic"))
    
    val consumerIO = Consumer.consumeWith(settings, subscription, Serdes.string, Serdes.string) { record =>
      // Process the received record here. `.orDie` turns printLine's possible
      // IOException into a defect, so the handler satisfies URIO[Any, Unit].
      Console.printLine(s"Received record: ${record.key()}: ${record.value()}").orDie
    }
    R

    Environment for the deserializers

    R1

    Environment for the consuming effect

    K

    Type of keys (an implicit Deserializer should be in scope)

    V

    Type of values (an implicit Deserializer should be in scope)

    settings

    Settings for creating a Consumer

    subscription

    Topic subscription parameters

    keyDeserializer

    Deserializer for the key of the messages

    valueDeserializer

    Deserializer for the value of the messages

    commitRetryPolicy

    Retry commits that failed due to a RetriableCommitFailedException according to this schedule

    f

    Function that returns the effect to execute for each message. It is passed the org.apache.kafka.clients.consumer.ConsumerRecord.

    returns

    Effect that completes with a unit value only when interrupted. May fail when the Consumer fails.
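    The default commitRetryPolicy combines exponential backoff with a bounded number of retries; a custom Schedule can be supplied the same way. A minimal sketch (the schedule values are illustrative, not recommendations, and settings and subscription are assumed to be defined as in the example above):

    import zio._
    import zio.kafka.consumer._
    import zio.kafka.serde.Serdes
    
    // Retry failed commits with exponential backoff starting at 500 ms,
    // for at most 5 retries (values are illustrative).
    val customRetryPolicy: Schedule[Any, Any, Any] =
      Schedule.exponential(500.millis) && Schedule.recurs(5)
    
    val consumed = Consumer.consumeWith(
      settings,
      subscription,
      Serdes.string,
      Serdes.string,
      commitRetryPolicy = customRetryPolicy
    )(record => ZIO.logInfo(s"Processed: ${record.value()}"))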

  8. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  9. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  10. def fromJavaConsumerWithPermit(javaConsumer: org.apache.kafka.clients.consumer.Consumer[Array[Byte], Array[Byte]], settings: ConsumerSettings, access: Semaphore): ZIO[Scope, Throwable, Consumer]

    Create a zio-kafka Consumer from an org.apache.kafka KafkaConsumer.

    You are responsible for all of the following:

    • creating and closing the KafkaConsumer,
    • making sure auto.commit is disabled,
    • creating access as a fair semaphore with a single permit,
    • acquiring a permit from access before using the consumer, and releasing it afterwards,
    • not using the following consumer methods: subscribe, unsubscribe, assign, poll, commit*, seek, pause, resume, and enforceRebalance,
    • keeping the consumer config given to the java consumer in sync with the properties in settings (for example by constructing settings with ConsumerSettings(bootstrapServers).withProperties(config)).

    Any deviation from these rules is likely to cause hard-to-track errors.

    The access semaphore is shared between your code and the zio-kafka consumer. Hold a permit for as short a time as possible; while you hold one, the zio-kafka consumer is blocked.

    javaConsumer

    The KafkaConsumer to wrap

    settings

    Settings matching the wrapped consumer's configuration

    access

    A fair Semaphore with a single permit
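
    A sketch of wiring these requirements together (the broker address and config keys are illustrative, and whether Semaphore.make satisfies the fairness requirement should be verified for your ZIO version):

    import java.util.Properties
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import zio._
    import zio.kafka.consumer._
    
    val config = new Properties()
    config.put("bootstrap.servers", "localhost:9092")
    config.put("group.id", "my-group")
    config.put("enable.auto.commit", "false") // auto.commit must be disabled
    
    val scopedConsumer: ZIO[Scope, Throwable, Consumer] =
      for {
        // We create and close the KafkaConsumer ourselves, tied to the scope.
        javaConsumer <- ZIO.acquireRelease(
                          ZIO.attempt(new KafkaConsumer[Array[Byte], Array[Byte]](config))
                        )(c => ZIO.attempt(c.close()).orDie)
        // A single-permit semaphore guards all access to the consumer.
        access   <- Semaphore.make(permits = 1)
        // Keep the zio-kafka settings in sync with the java consumer's config.
        settings  = ConsumerSettings(List("localhost:9092"))
        consumer <- Consumer.fromJavaConsumerWithPermit(javaConsumer, settings, access)
      } yield consumer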

  11. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. def live: RLayer[ConsumerSettings, Consumer]
  15. def make(settings: ConsumerSettings): ZIO[Scope, Throwable, Consumer]

    A new consumer.

  16. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  17. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  18. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  19. val offsetBatches: ZSink[Any, Nothing, Offset, Nothing, OffsetBatch]
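
    offsetBatches folds a stream of Offset values into a single OffsetBatch. One common pattern is to aggregate offsets asynchronously so that an in-flight commit never backpressures the stream; a sketch, assuming consumer is an instance of the Consumer trait and an illustrative topic name:

    val commitLoop =
      consumer
        .plainStream(Subscription.topics("my-topic"), Serdes.string, Serdes.string)
        .map(_.offset)
        // While a commit is in flight, incoming offsets are folded into the
        // next batch instead of blocking the stream.
        .aggregateAsync(Consumer.offsetBatches)
        .mapZIO(_.commit)
        .runDrain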
  20. def runWithGracefulShutdown[R1, R2, E, A](control: StreamControl[R1, E, A], shutdownTimeout: zio.Duration)(withStream: (ZStream[R1, E, A]) => ZIO[R2 & Scope, E, Any]): ZIO[R1 & R2, E, Any]

    Takes a StreamControl for some stream and runs the given ZIO workflow on that stream such that, when interrupted, stops fetching records and gracefully waits for the ZIO workflow to complete.

    This is useful for running streams from within your application's main entry point, so that streams are cleanly stopped when the application is shut down (for example by your container runtime).

    WARNING: this is an EXPERIMENTAL API and may disappear or change in an incompatible way without notice in any zio-kafka version.

    control

    Result of one of the Consumer's methods returning a StreamControl

    shutdownTimeout

    Timeout for the workflow to complete after initiating the graceful shutdown

    withStream

    Takes the stream as input and returns a ZIO workflow that processes the stream. Since in most programs the given workflow runs until it is externally interrupted, the result value (of type Any) is meaningless. withStream is typically something like:

    stream => stream
      .mapZIO(record => ZIO.debug(record))
      .runDrain
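    A sketch of the intended call pattern; how the StreamControl is obtained is left abstract here, since the Consumer methods returning one belong to this experimental API:

    import zio._
    import zio.kafka.consumer._
    
    // `control` is assumed to come from a Consumer method returning a StreamControl.
    def runUntilShutdown(control: StreamControl[Any, Throwable, String]): ZIO[Any, Throwable, Any] =
      Consumer.runWithGracefulShutdown(control, shutdownTimeout = 30.seconds) { stream =>
        stream
          .mapZIO(record => ZIO.debug(record))
          .runDrain
      }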
  21. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  22. def toString(): String
    Definition Classes
    AnyRef → Any
  23. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  24. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  25. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  26. object AutoOffsetStrategy

    See ConsumerSettings.withOffsetRetrieval.

  27. case object CommitTimeout extends RuntimeException with NoStackTrace with Product with Serializable
  28. object OffsetRetrieval

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
