Members list
Type members
Classlikes
See ConsumerSettings.withOffsetRetrieval.
Attributes
- Companion: trait
- Supertypes
- Self type: AutoOffsetStrategy.type
Attributes
- Companion: object
- Supertypes: class Object, trait Matchable, class Any
- Known subtypes
- Self type
Attributes
- Supertypes
- Self type: CommitTimeout.type
Attributes
- Companion: trait
- Supertypes
- Self type: OffsetRetrieval.type
Value members
Concrete methods
Accessor method
Accessor method
Accessor method
Execute an effect for each record and commit the offset after processing.
This method is the easiest way of processing messages on a Kafka topic.
Messages on a single partition are processed sequentially, while the processing of multiple partitions happens in parallel.
Offsets are committed after execution of the effect. They are batched while a commit action is in progress to avoid backpressuring the stream. When commits fail due to an org.apache.kafka.clients.consumer.RetriableCommitFailedException they are retried according to commitRetryPolicy.
The effect should absorb any failures. Failures should be handled by retries or by ignoring the error, which will result in the Kafka message being skipped.
Messages are processed with 'at least once' consistency: it is not guaranteed that every message that is processed by the effect has a corresponding offset commit before stream termination.
Usage example:

  val settings: ConsumerSettings = ???
  val subscription = Subscription.Topics(Set("my-kafka-topic"))
  val consumerIO = Consumer.consumeWith(settings, subscription, Serdes.string, Serdes.string) { record =>
    // Process the received record here
    putStrLn(s"Received record: ${record.key()}: ${record.value()}")
  }
Type parameters
- K: Type of keys (an implicit Deserializer should be in scope)
- R: Environment for the consuming effect
- R1: Environment for the deserializers
- V: Type of values (an implicit Deserializer should be in scope)
Value parameters
- commitRetryPolicy: Retry commits that failed due to a RetriableCommitFailedException according to this schedule
- f: Function that returns the effect to execute for each message. It is passed the org.apache.kafka.clients.consumer.ConsumerRecord.
- keyDeserializer: Deserializer for the key of the messages
- settings: Settings for creating a Consumer
- subscription: Topic subscription parameters
- valueDeserializer: Deserializer for the value of the messages
Attributes
- Returns: Effect that completes with a unit value only when interrupted. May fail when the Consumer fails.
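The commitRetryPolicy parameter is an ordinary ZIO Schedule. As an illustration, a capped exponential backoff could be defined as below; the concrete values (100 ms base delay, 5 retries) are illustrative assumptions, not documented defaults.

```scala
// Hedged sketch: a retry schedule for failed offset commits.
// Schedule.exponential and Schedule.recurs come from the zio core library.
import zio._

// Retry at most 5 times, with exponentially growing delays starting at 100 ms.
val commitRetryPolicy: Schedule[Any, Any, Any] =
  Schedule.exponential(100.millis) && Schedule.recurs(5)
```

Such a schedule would then be passed as the commitRetryPolicy argument of consumeWith.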
Accessor method
Create a zio-kafka Consumer from an org.apache.kafka KafkaConsumer.
You are responsible for all of the following:
- creating and closing the KafkaConsumer,
- making sure auto.commit is disabled,
- creating access as a fair semaphore with a single permit,
- acquiring a permit from access before using the consumer, and releasing it afterwards,
- not using the following consumer methods: subscribe, unsubscribe, assign, poll, commit*, seek, pause, resume, and enforceRebalance,
- keeping the consumer config given to the java consumer in sync with the properties in settings (for example by constructing settings with ConsumerSettings(bootstrapServers).withProperties(config)).
Any deviation from these rules is likely to cause hard-to-track errors.
The access semaphore is shared between you and the zio-kafka consumer. Hold a permit for as short a time as possible; while you hold a permit the zio-kafka consumer is blocked.
Value parameters
- access: A Semaphore with 1 permit.
- diagnostics: Optional diagnostics listener
- javaConsumer: Consumer
- settings: Settings
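Putting the rules above together, the wiring could look roughly as follows. This is a hedged sketch: the exact fromJavaConsumer overload (parameter order, defaults, and return type) is an assumption based on the parameter list above, and the bootstrap address is a placeholder.

```scala
// Hedged sketch of handing an externally managed KafkaConsumer to zio-kafka.
import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer
import zio._
import zio.kafka.consumer.{ Consumer, ConsumerSettings }

val bootstrapServers = List("localhost:9092") // placeholder address

// Config for the java consumer; auto.commit must be disabled.
val config = new Properties()
config.put("bootstrap.servers", bootstrapServers.mkString(","))
config.put("enable.auto.commit", "false")
config.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")
config.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")

val consumer: ZIO[Scope, Throwable, Consumer] =
  for {
    // A semaphore with a single permit, shared with the zio-kafka consumer.
    access <- Semaphore.make(permits = 1)
    // We create the java consumer ourselves; the Scope closes it afterwards.
    javaConsumer <- ZIO.fromAutoCloseable(
                      ZIO.attempt(new KafkaConsumer[Array[Byte], Array[Byte]](config))
                    )
    // Keep `settings` in sync with the config given to the java consumer.
    settings = ConsumerSettings(bootstrapServers)
    // Assumed parameter order; check the signature of your zio-kafka version.
    c <- Consumer.fromJavaConsumer(javaConsumer, settings, access)
  } yield c
```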
Accessor method
A new consumer.
Value parameters
- diagnostics: An optional callback for key events in the consumer life-cycle. The callbacks will be executed in a separate fiber. Since the events are queued, failure to handle these events leads to out-of-memory errors.
Accessor method
Accessor method
Accessor method
Accessor method
Accessor method
Accessor method
Accessor method
Accessor method
Deprecated methods
Create a zio-kafka Consumer from an org.apache.kafka KafkaConsumer.
You are responsible for creating and closing the KafkaConsumer. Make sure auto.commit is disabled.
Attributes
- Deprecated: [Since version 2.9.0]