Compression

org.apache.pekko.stream.javadsl.Compression

object Compression

Source: Compression.scala

Supertypes: class Object, trait Matchable, class Any

Members list

Value members

Concrete methods

def deflate(): Flow[ByteString, ByteString, NotUsed]

Creates a flow that deflate-compresses a stream of ByteStrings. Note that the compressor will SYNC_FLUSH after every ByteString so that it is guaranteed that every ByteString coming out of the flow can be fully decompressed without waiting for additional data. This may come at a compression performance cost for very small chunks.
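The SYNC_FLUSH behaviour described above can be sketched with plain java.util.zip (not the Pekko operator itself; class and method names here are illustrative): after a SYNC_FLUSH, the bytes emitted so far decompress fully without waiting for further input.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Illustrative sketch (plain java.util.zip, not the Pekko flow): after a
// SYNC_FLUSH, the bytes emitted so far decompress without further input.
public class DeflateSyncFlushDemo {

    // Compress one chunk and SYNC_FLUSH so its output is self-contained.
    public static byte[] compressChunk(Deflater deflater, byte[] chunk) {
        deflater.setInput(chunk);
        byte[] buf = new byte[1024];
        int n = deflater.deflate(buf, 0, buf.length, Deflater.SYNC_FLUSH);
        byte[] out = new byte[n];
        System.arraycopy(buf, 0, out, 0, n);
        return out;
    }

    // Decompress one self-contained chunk from an ongoing deflate stream.
    public static String inflateChunk(Inflater inflater, byte[] compressed) throws Exception {
        inflater.setInput(compressed);
        byte[] buf = new byte[1024];
        int n = inflater.inflate(buf);
        return new String(buf, 0, n, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        Deflater deflater = new Deflater();
        Inflater inflater = new Inflater();
        // Each chunk decompresses on its own, before the stream has ended.
        System.out.println(inflateChunk(inflater,
                compressChunk(deflater, "first ".getBytes(StandardCharsets.UTF_8))));
        System.out.println(inflateChunk(inflater,
                compressChunk(deflater, "second".getBytes(StandardCharsets.UTF_8))));
    }
}
```

The performance note follows from this: every SYNC_FLUSH appends an empty stored block, which adds a few bytes of overhead per emitted ByteString.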
def deflate(level: Int, nowrap: Boolean): Flow[ByteString, ByteString, NotUsed]

Same as deflate, with configurable compression level and nowrap flag.

Value parameters

level
Compression level (0-9)

nowrap
if true then use GZIP-compatible compression
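What the two parameters mean can be sketched with plain java.util.zip (illustrative names, not the Pekko flow): `level` trades speed for compression ratio, and `nowrap` drops the ZLIB header and checksum, leaving the raw deflate stream that GZIP framing uses internally.

```java
import java.util.zip.Deflater;

// Illustrative sketch with plain java.util.zip: `level` trades speed for
// ratio, and `nowrap` drops the ZLIB header/checksum (raw deflate).
public class DeflateLevelDemo {

    // Size of `data` after full compression at the given level/nowrap setting.
    public static int compressedSize(int level, boolean nowrap, byte[] data) {
        Deflater deflater = new Deflater(level, nowrap);
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[8192];
        int total = 0;
        while (!deflater.finished()) {
            total += deflater.deflate(buf);
        }
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200; i++) sb.append("some repetitive payload ");
        byte[] data = sb.toString().getBytes();
        System.out.println("level 1 (nowrap): " + compressedSize(1, true, data) + " bytes");
        System.out.println("level 9 (nowrap): " + compressedSize(9, true, data) + " bytes");
    }
}
```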
def gunzip(maxBytesPerChunk: Int): Flow[ByteString, ByteString, NotUsed]

Creates a Flow that decompresses a gzip-compressed stream of data.

Value parameters

maxBytesPerChunk
Maximum length of the output ByteString chunk.
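The purpose of maxBytesPerChunk can be sketched with plain java.util.zip (illustrative names, not the Pekko flow): reading the decompressed stream through a bounded buffer means a tiny compressed input can never force one huge output allocation.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Illustrative sketch of the maxBytesPerChunk idea with plain java.util.zip:
// a bounded read buffer caps the size of each decompressed chunk.
public class GunzipChunkDemo {

    public static byte[] gzip(byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    // Count how many reads of at most maxBytesPerChunk the output needs.
    public static int countChunks(byte[] gzipped, int maxBytesPerChunk) throws Exception {
        GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipped));
        byte[] chunk = new byte[maxBytesPerChunk];
        int chunks = 0;
        while (in.read(chunk) != -1) {
            chunks++;
        }
        return chunks;
    }

    public static void main(String[] args) throws Exception {
        byte[] data = new byte[10_000]; // zero-filled: compresses to a few dozen bytes
        byte[] gzipped = gzip(data);
        System.out.println("compressed to " + gzipped.length + " bytes");
        System.out.println("emitted in " + countChunks(gzipped, 1024) + " chunks of <= 1024 bytes");
    }
}
```

A small bound protects against decompression bombs at the cost of emitting more chunks; a large bound does the opposite.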

def gzip(): Flow[ByteString, ByteString, NotUsed]

Creates a flow that gzip-compresses a stream of ByteStrings. Note that the compressor will SYNC_FLUSH after every ByteString so that it is guaranteed that every ByteString coming out of the flow can be fully decompressed without waiting for additional data. This may come at a compression performance cost for very small chunks.
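The framing this flow produces is standard gzip, which any gzip reader can decompress. A round trip with plain java.util.zip (illustrative names, not the Pekko flow):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Illustrative round trip with plain java.util.zip: standard gzip framing
// that any gzip reader can decompress.
public class GzipRoundTripDemo {

    public static byte[] gzip(byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    public static byte[] gunzip(byte[] gzipped) throws Exception {
        GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipped));
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            bos.write(buf, 0, n);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "hello gzip".getBytes(StandardCharsets.UTF_8);
        byte[] restored = gunzip(gzip(original));
        System.out.println(new String(restored, StandardCharsets.UTF_8)); // prints "hello gzip"
    }
}
```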

def gzip(level: Int): Flow[ByteString, ByteString, NotUsed]

Same as gzip, with a custom compression level.

Value parameters

level
Compression level (0-9)
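What "gzip with a custom level" means at the java.util.zip layer can be sketched as follows (illustrative names, not the Pekko flow). GZIPOutputStream itself takes no level argument, but a subclass can tune the protected `def` deflater it inherits from DeflaterOutputStream:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

// Illustrative sketch: tune the gzip compression level by adjusting the
// protected `def` deflater inherited from DeflaterOutputStream.
public class GzipLevelDemo {

    public static byte[] gzipAtLevel(byte[] data, int level) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(bos) {
            { def.setLevel(level); } // instance initializer adjusts the deflater
        };
        gz.write(data);
        gz.close();
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 500; i++) sb.append("abcabcabc ");
        byte[] data = sb.toString().getBytes();
        System.out.println("level 1: " + gzipAtLevel(data, 1).length + " bytes");
        System.out.println("level 9: " + gzipAtLevel(data, 9).length + " bytes");
    }
}
```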
def inflate(maxBytesPerChunk: Int): Flow[ByteString, ByteString, NotUsed]

Creates a Flow that decompresses a deflate-compressed stream of data.

Value parameters

maxBytesPerChunk
Maximum length of the output ByteString chunk.
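The bounded-output idea can be sketched with plain java.util.zip (illustrative names, not the Pekko flow): each call to Inflater.inflate fills a buffer of at most maxBytesPerChunk bytes, so the decompressed stream is emitted as bounded chunks.

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Illustrative sketch: a bounded inflate buffer caps the size of each
// decompressed chunk, however well the input compressed.
public class InflateChunkDemo {

    public static byte[] deflate(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        byte[] buf = new byte[8192];
        int n = d.deflate(buf);
        byte[] out = new byte[n];
        System.arraycopy(buf, 0, out, 0, n);
        d.end();
        return out;
    }

    // Count how many chunks of at most maxBytesPerChunk the output needs.
    public static int countChunks(byte[] compressed, int maxBytesPerChunk) throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(compressed);
        byte[] chunk = new byte[maxBytesPerChunk];
        int chunks = 0;
        while (inf.inflate(chunk) > 0) {
            chunks++;
        }
        inf.end();
        return chunks;
    }

    public static void main(String[] args) throws Exception {
        byte[] data = new byte[5_000]; // zeros: compresses far below the buffer size
        System.out.println("chunks of <= 1000 bytes: " + countChunks(deflate(data), 1000));
    }
}
```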
def inflate(maxBytesPerChunk: Int, nowrap: Boolean): Flow[ByteString, ByteString, NotUsed]

Same as inflate, with configurable maximum output length and nowrap flag.

Value parameters

maxBytesPerChunk
Maximum length of the output ByteString chunk.

nowrap
if true then use GZIP-compatible decompression
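The nowrap flag can be sketched as a round trip with plain java.util.zip (illustrative names, not the Pekko flow): the compressor emits a raw deflate stream without the ZLIB header and checksum, the format GZIP uses internally, and the decompressor must be told to expect that same raw format.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Illustrative sketch of nowrap: both sides must agree on the raw deflate
// format (no ZLIB header/checksum) or decompression fails.
public class NowrapRoundTripDemo {

    public static byte[] deflateRaw(byte[] data) {
        Deflater d = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // nowrap = true
        d.setInput(data);
        d.finish();
        byte[] buf = new byte[8192];
        int n = d.deflate(buf);
        byte[] out = new byte[n];
        System.arraycopy(buf, 0, out, 0, n);
        d.end();
        return out;
    }

    public static String inflateRaw(byte[] compressed) throws Exception {
        Inflater inf = new Inflater(true); // nowrap = true: expect raw deflate
        inf.setInput(compressed);
        byte[] buf = new byte[8192];
        int n = inf.inflate(buf);
        inf.end();
        return new String(buf, 0, n, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        byte[] raw = deflateRaw("raw deflate payload".getBytes(StandardCharsets.UTF_8));
        System.out.println(inflateRaw(raw)); // round-trips because both sides use nowrap
    }
}
```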