object Consumer


Type Members

  1. sealed trait AutoOffsetStrategy extends AnyRef
  2. type ConsumerDiagnostics = Diagnostics[DiagnosticEvent]

    A callback for consumer diagnostic events.

  3. sealed trait OffsetRetrieval extends AnyRef

    See ConsumerSettings.withOffsetRetrieval.

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val NoDiagnostics: ConsumerDiagnostics

    A diagnostics implementation that does nothing.

  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  7. def consumeWith[R, R1, K, V](settings: ConsumerSettings, subscription: Subscription, keyDeserializer: Deserializer[R, K], valueDeserializer: Deserializer[R, V], commitRetryPolicy: Schedule[Any, Any, Any] = Schedule.exponential(1.second) && Schedule.recurs(3))(f: (ConsumerRecord[K, V]) => URIO[R1, Unit])(implicit arg0: zio.EnvironmentTag[R], arg1: zio.EnvironmentTag[R1]): RIO[R & R1, Unit]

    Execute an effect for each record and commit the offset after processing.

    This method is the easiest way of processing messages on a Kafka topic.

    Messages on a single partition are processed sequentially, while the processing of multiple partitions happens in parallel.

    Offsets are committed after execution of the effect. Commits are batched while a previous commit is in progress, to avoid backpressuring the stream. When a commit fails with an org.apache.kafka.clients.consumer.RetriableCommitFailedException, it is retried according to commitRetryPolicy.

    The effect should absorb any failures: handle them by retrying, or by ignoring the error, in which case the Kafka message is skipped.

    Messages are processed with 'at least once' consistency: it is not guaranteed that every message that is processed by the effect has a corresponding offset commit before stream termination.

    Usage example:

    val settings: ConsumerSettings = ???
    val subscription = Subscription.Topics(Set("my-kafka-topic"))

    val consumerIO = Consumer.consumeWith(settings, subscription, Serdes.string, Serdes.string) { record =>
      // Process the received record here
      Console.printLine(s"Received record: ${record.key()}: ${record.value()}").orDie
    }
    R: Environment for the deserializers
    R1: Environment for the consuming effect f
    K: Type of keys (an implicit Deserializer should be in scope)
    V: Type of values (an implicit Deserializer should be in scope)
    settings: Settings for creating a Consumer
    subscription: Topic subscription parameters
    keyDeserializer: Deserializer for the key of the messages
    valueDeserializer: Deserializer for the value of the messages
    commitRetryPolicy: Retry commits that failed due to a RetriableCommitFailedException according to this schedule
    f: Function that returns the effect to execute for each message. It is passed the org.apache.kafka.clients.consumer.ConsumerRecord.
    returns: Effect that completes with a unit value only when interrupted. May fail when the Consumer fails.

  8. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  9. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  10. def fromJavaConsumerWithPermit(javaConsumer: org.apache.kafka.clients.consumer.Consumer[Array[Byte], Array[Byte]], settings: ConsumerSettings, access: Semaphore, diagnostics: ConsumerDiagnostics = Diagnostics.NoOp): ZIO[Scope, Throwable, Consumer]

    Create a zio-kafka Consumer from an org.apache.kafka KafkaConsumer.

    You are responsible for all of the following:

    • creating and closing the KafkaConsumer,
    • making sure auto.commit is disabled,
    • creating access as a fair semaphore with a single permit,
    • acquiring a permit from access before using the consumer, and releasing it afterward,
    • not using the following consumer methods: subscribe, unsubscribe, assign, poll, commit*, seek, pause, resume, and enforceRebalance,
    • keeping the consumer config given to the java consumer in sync with the properties in settings (for example by constructing settings with ConsumerSettings(bootstrapServers).withProperties(config)).

    Any deviation from these rules is likely to cause hard-to-track errors.

    The access semaphore is shared between you and the zio-kafka consumer. Hold a permit for as short a time as possible; while you hold one, the zio-kafka consumer is blocked.

    javaConsumer: Consumer
    settings: Settings
    access: A Semaphore with 1 permit.
    diagnostics: an optional callback for key events in the consumer life-cycle. The callbacks are executed in a separate fiber. Since the events are queued, failure to handle them leads to out-of-memory errors.
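
    The rules above can be sketched as follows. This is a minimal, hypothetical setup: the broker address, group id, and the metrics() call at the end are illustrative assumptions, not part of the documented contract.

    import java.util.Properties
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.common.serialization.ByteArrayDeserializer
    import zio._

    // Shared config, used for both the java consumer and the zio-kafka settings
    // so that the two stay in sync (values are assumptions).
    val config: Map[String, AnyRef] = Map(
      "bootstrap.servers"  -> "localhost:9092",
      "group.id"           -> "my-group",
      "enable.auto.commit" -> "false" // rule: auto.commit must be disabled
    )

    val wrapped: ZIO[Scope, Throwable, Consumer] =
      for {
        // You create and close the KafkaConsumer yourself.
        javaConsumer <- ZIO.acquireRelease(
                          ZIO.attempt {
                            val props = new Properties()
                            config.foreach { case (k, v) => props.put(k, v) }
                            new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)
                          }
                        )(c => ZIO.attempt(c.close()).orDie)
        access       <- Semaphore.make(permits = 1) // single permit, shared with zio-kafka
        settings      = ConsumerSettings(List("localhost:9092")).withProperties(config)
        consumer     <- Consumer.fromJavaConsumerWithPermit(javaConsumer, settings, access)
        // Any direct use of javaConsumer must hold the permit, and only briefly:
        _            <- access.withPermit(ZIO.attempt(javaConsumer.metrics()))
      } yield consumer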

  11. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. def live: RLayer[ConsumerSettings & ConsumerDiagnostics, Consumer]
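
    A minimal sketch of wiring this layer; the broker address and group id are illustrative assumptions:

    import zio._

    // Consumer.live needs ConsumerSettings and a ConsumerDiagnostics in the environment.
    val settingsLayer    = ZLayer.succeed(ConsumerSettings(List("localhost:9092")).withGroupId("my-group"))
    val diagnosticsLayer = ZLayer.succeed(Consumer.NoDiagnostics)

    val consumerLayer: ZLayer[Any, Throwable, Consumer] =
      (settingsLayer ++ diagnosticsLayer) >>> Consumer.live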
  15. def make(settings: ConsumerSettings, diagnostics: ConsumerDiagnostics = Diagnostics.NoOp): ZIO[Scope, Throwable, Consumer]

    A new consumer.

    diagnostics: an optional callback for key events in the consumer life-cycle. The callbacks are executed in a separate fiber. Since the events are queued, failure to handle them leads to out-of-memory errors.
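
    A hedged usage sketch; the broker address, topic name, and serde choices are illustrative:

    import zio._
    import zio.stream._

    // The consumer is released automatically when the surrounding Scope closes.
    val program: ZIO[Scope, Throwable, Unit] =
      for {
        consumer <- Consumer.make(ConsumerSettings(List("localhost:9092")).withGroupId("my-group"))
        _        <- consumer
                      .plainStream(Subscription.topics("my-topic"), Serdes.string, Serdes.string)
                      .take(10)
                      .runDrain
      } yield ()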

  16. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  17. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  18. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  19. val offsetBatches: ZSink[Any, Nothing, Offset, Nothing, OffsetBatch]
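
    offsetBatches is typically used with aggregateAsync to batch offsets while a previous commit is in flight, then commit each batch in one call. A sketch, where processRecord and the topic name are hypothetical:

    import zio._
    import zio.stream._

    def processRecord(r: CommittableRecord[String, String]): Task[Unit] =
      ZIO.logInfo(s"handling ${r.value}") // hypothetical processing step

    val consumeAndCommit: ZIO[Consumer, Throwable, Unit] =
      ZStream
        .serviceWithStream[Consumer](_.plainStream(Subscription.topics("my-topic"), Serdes.string, Serdes.string))
        .mapZIO(record => processRecord(record).as(record.offset))
        .aggregateAsync(Consumer.offsetBatches) // batch offsets while the commit is busy
        .mapZIO(_.commit)                       // commit each OffsetBatch in one call
        .runDrain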
  20. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  21. def toString(): String
    Definition Classes
    AnyRef → Any
  22. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  23. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  24. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  25. object AutoOffsetStrategy

    See ConsumerSettings.withOffsetRetrieval.

  26. case object CommitTimeout extends RuntimeException with NoStackTrace with Product with Serializable
  27. object OffsetRetrieval

Deprecated Value Members

  1. def assignment: RIO[Consumer, Set[TopicPartition]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  2. def beginningOffsets(partitions: Set[TopicPartition], timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Map[TopicPartition, Long]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  3. def committed(partitions: Set[TopicPartition], timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Map[TopicPartition, Option[OffsetAndMetadata]]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  4. def endOffsets(partitions: Set[TopicPartition], timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Map[TopicPartition, Long]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  5. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)

  6. def fromJavaConsumer(javaConsumer: org.apache.kafka.clients.consumer.Consumer[Array[Byte], Array[Byte]], settings: ConsumerSettings, diagnostics: ConsumerDiagnostics = Diagnostics.NoOp): ZIO[Scope, Throwable, Consumer]

    Create a zio-kafka Consumer from an org.apache.kafka KafkaConsumer.

    You are responsible for creating and closing the KafkaConsumer. Make sure auto.commit is disabled.

    Annotations
    @deprecated
    Deprecated

    (Since version 2.9.0) Use fromJavaConsumerWithPermit

  7. def listTopics(timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Map[String, List[PartitionInfo]]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  8. def metrics: RIO[Consumer, Map[MetricName, Metric]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  9. def offsetsForTimes(timestamps: Map[TopicPartition, Long], timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Map[TopicPartition, OffsetAndTimestamp]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  10. def partitionedAssignmentStream[R, K, V](subscription: Subscription, keyDeserializer: Deserializer[R, K], valueDeserializer: Deserializer[R, V]): ZStream[Consumer, Throwable, Chunk[(TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  11. def partitionedStream[R, K, V](subscription: Subscription, keyDeserializer: Deserializer[R, K], valueDeserializer: Deserializer[R, V]): ZStream[Consumer, Throwable, (TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  12. def partitionsFor(topic: String, timeout: zio.Duration = Duration.Infinity): RIO[Consumer, List[PartitionInfo]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  13. def plainStream[R, K, V](subscription: Subscription, keyDeserializer: Deserializer[R, K], valueDeserializer: Deserializer[R, V], bufferSize: Int = 4): ZStream[R & Consumer, Throwable, CommittableRecord[K, V]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  14. def position(partition: TopicPartition, timeout: zio.Duration = Duration.Infinity): RIO[Consumer, Long]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  15. def stopConsumption: RIO[Consumer, Unit]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0

  16. def subscription: RIO[Consumer, Set[String]]

    Accessor method

    Annotations
    @deprecated
    Deprecated

    (Since version 2.11.0) Use zio service pattern instead (https://zio.dev/reference/service-pattern/), will be removed in zio-kafka 3.0.0
