final case class ProducerSettings(closeTimeout: zio.Duration = 30.seconds, sendBufferSize: Int = 4096, authErrorRetrySchedule: Schedule[Any, Throwable, Any] = Schedule.stop, properties: Map[String, AnyRef] = Map.empty, diagnostics: ProducerDiagnostics = Producer.NoDiagnostics, metricLabels: Set[MetricLabel] = Set.empty) extends Product with Serializable
Settings for the Producer.
To stay source compatible with future releases, it is recommended to construct the settings as follows:

```scala
ProducerSettings(bootstrapServers)
  .withLinger(500.millis)
  .withCompression(ProducerCompression.Zstd(3))
  // ... etc.
```
Instance Constructors
- new ProducerSettings(closeTimeout: zio.Duration = 30.seconds, sendBufferSize: Int = 4096, authErrorRetrySchedule: Schedule[Any, Throwable, Any] = Schedule.stop, properties: Map[String, AnyRef] = Map.empty, diagnostics: ProducerDiagnostics = Producer.NoDiagnostics, metricLabels: Set[MetricLabel] = Set.empty)
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- val authErrorRetrySchedule: Schedule[Any, Throwable, Any]
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
- val closeTimeout: zio.Duration
- val diagnostics: ProducerDiagnostics
- def driverSettings: Map[String, AnyRef]
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- val metricLabels: Set[MetricLabel]
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- def productElementNames: Iterator[String]
- Definition Classes
- Product
- val properties: Map[String, AnyRef]
- val sendBufferSize: Int
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- def withAuthErrorRetrySchedule(authErrorRetrySchedule: Schedule[Any, Throwable, Any]): ProducerSettings
Configure retries for authorization or authentication errors.
If you want to retry other (retriable) exceptions, please use the retries configuration property.
⚠️ Retrying may cause records to be produced in a different order than the order in which they were given to zio-kafka.
- authErrorRetrySchedule
The schedule at which the producer will retry producing, even when producing fails with an org.apache.kafka.common.errors.AuthorizationException or org.apache.kafka.common.errors.AuthenticationException. This setting helps with failed producing due to too slow authorization or authentication in the broker. For example, to retry 5 times, spaced by 500ms, you can set this to
Schedule.recurs(5) && Schedule.spaced(500.millis)
The default is Schedule.stop, which fails the producer on the first auth error.
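As a sketch, the retry schedule from the example above could be applied via the builder (assuming `bootstrapServers` is defined elsewhere):

```scala
import zio._
import zio.kafka.producer._

// Retry producing up to 5 times, spaced by 500 ms, when the broker rejects
// the request with an AuthorizationException or AuthenticationException.
val settings: ProducerSettings =
  ProducerSettings(bootstrapServers)
    .withAuthErrorRetrySchedule(Schedule.recurs(5) && Schedule.spaced(500.millis))
```

Note the warning above: retrying may reorder records relative to the order in which they were given to zio-kafka.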
- def withBootstrapServers(servers: List[String]): ProducerSettings
- def withClientId(clientId: String): ProducerSettings
- def withCloseTimeout(duration: zio.Duration): ProducerSettings
- def withCompression(compression: ProducerCompression): ProducerSettings
- compression
The compression codec to use when publishing records. Compression is applied to full batches of data, so the efficacy of batching also impacts the compression ratio (more batching means better compression). See also withLinger.
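For instance, a sketch combining compression with a longer linger, so batches are larger and compress better (`ProducerCompression.Zstd(3)` is taken from the constructor example above; the linger value is illustrative):

```scala
import zio._
import zio.kafka.producer._

// Larger linger improves batching, which in turn improves the compression ratio.
val compressedSettings: ProducerSettings =
  ProducerSettings(bootstrapServers)
    .withCompression(ProducerCompression.Zstd(3)) // zstd at compression level 3
    .withLinger(100.millis)
```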
- def withCredentials(credentialsStore: KafkaCredentialStore): ProducerSettings
- def withDiagnostics(diagnostics: ProducerDiagnostics): ProducerSettings
- diagnostics
an optional callback for key events in the producer life-cycle. The callbacks are executed on a separate fiber. Since the events are queued, failing to handle these events can lead to out-of-memory errors.
- def withLinger(lingerDuration: zio.Duration): ProducerSettings
- lingerDuration
The maximum amount of time a record is allowed to linger in the producer's internal buffer. Higher values allow for better batching (especially important when compression is used), lower values reduce latency and memory usage.
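To illustrate the trade-off, a sketch of two configurations (the durations are illustrative, not recommendations):

```scala
import zio._
import zio.kafka.producer._

// Latency-oriented: send records (almost) immediately, at the cost of smaller batches.
val lowLatency: ProducerSettings =
  ProducerSettings(bootstrapServers).withLinger(Duration.Zero)

// Throughput-oriented: allow up to 500 ms of buffering for bigger batches.
val highThroughput: ProducerSettings =
  ProducerSettings(bootstrapServers).withLinger(500.millis)
```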
- def withMetricsLabels(metricLabels: Set[MetricLabel]): ProducerSettings
- metricLabels
The labels given to all metrics collected by the zio-kafka producer. By default, no labels are set. For applications with multiple producers it is recommended to set some metric labels. For example, a producer-id can be used as a label:

producerSettings.withMetricsLabels(Set(MetricLabel("producer-id", producerId)))
- def withProperties(kvs: Map[String, AnyRef]): ProducerSettings
- def withProperties(kvs: (String, AnyRef)*): ProducerSettings
- def withProperty(key: String, value: AnyRef): ProducerSettings
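As a sketch, arbitrary Kafka client properties (such as the retries property mentioned above) can be passed through; the property values here are illustrative:

```scala
import zio.kafka.producer._

// Pass raw Kafka producer configuration properties through to the underlying client.
val tunedSettings: ProducerSettings =
  ProducerSettings(bootstrapServers)
    .withProperty("retries", "5") // retry retriable (non-auth) send errors
    .withProperties("acks" -> "all", "client.id" -> "my-app")
```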
- def withSendBufferSize(sendBufferSize: Int): ProducerSettings
- sendBufferSize
The maximum number of record chunks that can queue up while waiting for the underlying producer to become available. Performance-critical users that publish many records one by one (instead of in chunks) should consider increasing this value, for example to 10240.
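For example, a sketch raising the buffer for a workload that publishes single records at a high rate (10240 is the value suggested above):

```scala
import zio.kafka.producer._

// Allow up to 10240 record chunks to queue while the underlying producer is busy.
val bigBufferSettings: ProducerSettings =
  ProducerSettings(bootstrapServers).withSendBufferSize(10240)
```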