Compression

org.apache.pekko.stream.scaladsl.Compression

object Compression

Supertypes: class Object, trait Matchable, class Any

Source: Compression.scala
Members list

Value members

Concrete methods

def deflate: Flow[ByteString, ByteString, NotUsed]

Creates a flow that deflate-compresses a stream of ByteString. Note that the compressor will SYNC_FLUSH after every pekko.util.ByteString so that it is guaranteed that every pekko.util.ByteString coming out of the flow can be fully decompressed without waiting for additional data. This may come at a compression performance cost for very small chunks.

FIXME: should strategy / flush mode be configurable? See https://github.com/akka/akka/issues/21849
def deflate(level: Int, nowrap: Boolean): Flow[ByteString, ByteString, NotUsed]

Same as deflate with configurable level and nowrap.

Value parameters

level
  Compression level (0-9)

nowrap
  if true then use GZIP compatible compression
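As a sketch, a deflate/inflate round trip through these flows might look like the following (this assumes pekko-stream on the classpath; the actor-system name and sample input are illustrative):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Compression, Sink, Source }
import org.apache.pekko.util.ByteString

import scala.concurrent.Await
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("deflate-demo")

// Compress with deflate, then decompress with inflate. Because the
// compressor SYNC_FLUSHes after every ByteString, each emitted chunk
// can be decompressed without waiting for more data.
val decompressed = Await.result(
  Source(List(ByteString("hello "), ByteString("pekko")))
    .via(Compression.deflate)
    .via(Compression.inflate(maxBytesPerChunk = Compression.MaxBytesPerChunkDefault))
    .runWith(Sink.fold(ByteString.empty)(_ ++ _)),
  3.seconds)

system.terminate()
```

The fold at the end concatenates the output chunks so the result can be compared against the original input as a whole.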
def gunzip(maxBytesPerChunk: Int): Flow[ByteString, ByteString, NotUsed]

Creates a Flow that decompresses a gzip-compressed stream of data.

Value parameters

maxBytesPerChunk
  Maximum length of an output pekko.util.ByteString chunk.
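A small sketch of how maxBytesPerChunk bounds the decompressed output (assuming pekko-stream on the classpath; the system name, sample input, and the chunk limit of 64 are illustrative):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Compression, Sink, Source }
import org.apache.pekko.util.ByteString

import scala.concurrent.Await
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("gunzip-demo")

val original = "0123456789" * 100

// gzip-compress, then decompress with a small maxBytesPerChunk so that
// no emitted ByteString exceeds 64 bytes.
val chunks = Await.result(
  Source.single(ByteString(original))
    .via(Compression.gzip)
    .via(Compression.gunzip(maxBytesPerChunk = 64))
    .runWith(Sink.seq),
  3.seconds)

system.terminate()
```

Concatenating the chunks reproduces the original data; the limit only controls how the output is sliced, not what it contains.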

def gzip: Flow[ByteString, ByteString, NotUsed]

Creates a flow that gzip-compresses a stream of ByteStrings. Note that the compressor will SYNC_FLUSH after every pekko.util.ByteString so that it is guaranteed that every pekko.util.ByteString coming out of the flow can be fully decompressed without waiting for additional data. This may come at a compression performance cost for very small chunks.

FIXME: should strategy / flush mode be configurable? See https://github.com/akka/akka/issues/21849

def gzip(level: Int): Flow[ByteString, ByteString, NotUsed]

Same as gzip with a custom level.

Value parameters

level
  Compression level (0-9)
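For highly repetitive input the compressed output is much smaller than the original, whatever the level; a sketch (assuming pekko-stream on the classpath; the system name, sample input, and level 9 are illustrative):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Compression, Sink, Source }
import org.apache.pekko.util.ByteString

import scala.concurrent.Await
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("gzip-level-demo")

val input = ByteString("pekko " * 1000)

// Level 9 trades CPU time for the smallest output java.util.zip offers.
val compressed = Await.result(
  Source.single(input)
    .via(Compression.gzip(level = 9))
    .runWith(Sink.fold(ByteString.empty)(_ ++ _)),
  3.seconds)

system.terminate()
```

Levels follow java.util.zip.Deflater semantics: 1 is fastest, 9 is best compression.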
def inflate(maxBytesPerChunk: Int): Flow[ByteString, ByteString, NotUsed]

Creates a Flow that decompresses a deflate-compressed stream of data.

Value parameters

maxBytesPerChunk
  Maximum length of an output pekko.util.ByteString chunk.
def inflate(maxBytesPerChunk: Int, nowrap: Boolean): Flow[ByteString, ByteString, NotUsed]

Creates a Flow that decompresses a deflate-compressed stream of data.

Value parameters

maxBytesPerChunk
  Maximum length of an output pekko.util.ByteString chunk.

nowrap
  if true then use GZIP compatible decompression
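The nowrap flag mirrors the java.util.zip.Deflater/Inflater constructor flag, so the compressing and decompressing sides must agree on it. A sketch of a matched nowrap pair (assuming pekko-stream on the classpath; the system name, level 6, and sample input are illustrative):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Compression, Sink, Source }
import org.apache.pekko.util.ByteString

import scala.concurrent.Await
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("nowrap-demo")

// nowrap = true on the deflate side must be paired with nowrap = true
// on the inflate side, otherwise decompression fails.
val raw = Await.result(
  Source.single(ByteString("raw deflate"))
    .via(Compression.deflate(level = 6, nowrap = true))
    .via(Compression.inflate(maxBytesPerChunk = 65536, nowrap = true))
    .runWith(Sink.fold(ByteString.empty)(_ ++ _)),
  3.seconds)

system.terminate()
```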

Concrete fields

final val MaxBytesPerChunkDefault: 65536
