com.nvidia.spark.rapids

MultiFileCoalescingPartitionReaderBase

abstract class MultiFileCoalescingPartitionReaderBase extends FilePartitionReaderBase with MultiFileReaderFunctions

The abstract base class for multi-file coalescing reading, which tries to coalesce small ColumnarBatches into a bigger ColumnarBatch according to maxReadBatchSizeRows, maxReadBatchSizeBytes and checkIfNeedToSplitDataBlock.

Please note, this class applies to file formats with a layout similar to:

| HEADER | -> optional

| block | -> repeated

| FOOTER | -> optional

The data flow:

  1. next() calls populateCurrentBlockChunk, which tries its best to coalesce blocks into one ColumnarBatch
  2. allocate a HostMemoryBuffer big enough for HEADER + the populated block chunks + FOOTER
  3. write the header to the HostMemoryBuffer
  4. launch tasks to copy the blocks into the HostMemoryBuffer
  5. wait for all tasks to finish
  6. write the footer to the HostMemoryBuffer
  7. decode the HostMemoryBuffer on the GPU
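
Conceptually, the coalescing decision inside populateCurrentBlockChunk is governed by the two soft limits together with checkIfNeedToSplitDataBlock. Below is a minimal, self-contained sketch of that accumulation loop; BlockInfo is a hypothetical stand-in used only for illustration, not the real SingleDataBlockInfo type:

    import scala.collection.mutable.ArrayBuffer

    // Hypothetical stand-in for SingleDataBlockInfo, for illustration only.
    case class BlockInfo(rows: Long, bytes: Long, filePath: String)

    // Accumulate blocks into the current chunk until either soft limit would be
    // exceeded, or the format-specific split check says the next block cannot be
    // coalesced with the current one.
    def populateChunk(
        blocks: Iterator[BlockInfo],
        maxReadBatchSizeRows: Long,
        maxReadBatchSizeBytes: Long,
        needToSplit: (BlockInfo, BlockInfo) => Boolean): Seq[BlockInfo] = {
      val chunk = ArrayBuffer.empty[BlockInfo]
      var rows = 0L
      var bytes = 0L
      val buffered = blocks.buffered
      var done = false
      while (!done && buffered.hasNext) {
        val next = buffered.head
        val overLimit = chunk.nonEmpty &&
          (rows + next.rows > maxReadBatchSizeRows ||
            bytes + next.bytes > maxReadBatchSizeBytes)
        val mustSplit = chunk.nonEmpty && needToSplit(chunk.last, next)
        if (overLimit || mustSplit) {
          done = true                 // leave `next` for the following batch
        } else {
          chunk += buffered.next()
          rows += next.rows
          bytes += next.bytes
        }
      }
      chunk.toSeq
    }

Because the limits are soft, a single block that is already larger than either limit still forms a batch on its own; only the blocks after it are deferred.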

Linear Supertypes

MultiFileReaderFunctions, FilePartitionReaderBase, ScanWithMetrics, Logging, PartitionReader[ColumnarBatch], Closeable, AutoCloseable, AnyRef, Any

Instance Constructors

  1. new MultiFileCoalescingPartitionReaderBase(conf: Configuration, clippedBlocks: Seq[SingleDataBlockInfo], partitionSchema: StructType, maxReadBatchSizeRows: Integer, maxReadBatchSizeBytes: Long, maxGpuColumnSizeBytes: Long, numThreads: Int, execMetrics: Map[String, GpuMetric])

    conf

    Configuration

    clippedBlocks

    the block metadata from the original file that has been clipped to only contain the column chunks to be read

    partitionSchema

    schema of partitions

    maxReadBatchSizeRows

    soft limit on the maximum number of rows the reader reads per batch

    maxReadBatchSizeBytes

    soft limit on the maximum number of bytes the reader reads per batch

    maxGpuColumnSizeBytes

    maximum number of bytes for a GPU column

    numThreads

    the size of the threadpool

    execMetrics

    metrics

Abstract Value Members

  1. abstract def calculateEstimatedBlocksOutputSize(batchContext: BatchContext): Long

    Calculate the output size according to the block chunks and the schema; the estimated output size will be used as the initial size when allocating the HostMemoryBuffer.

    Please note, the estimated size should be at least equal to the size of HEADER + blocks + FOOTER. A sketch of this estimate is given after this member list.

    batchContext

    the batch building context

    returns

    Long, the estimated output size

  2. abstract def calculateFinalBlocksOutputSize(footerOffset: Long, blocks: Seq[DataBlockBase], batchContext: BatchContext): Long

    Calculate the final block output size, which will be used to decide whether to re-allocate the HostMemoryBuffer.

    There is no need to re-calculate the block size; just calculate the footer size and add footerOffset.

    If the size calculated by this function is bigger than the one calculated by calculateEstimatedBlocksOutputSize, the HostMemoryBuffer will be re-allocated, causing a performance issue. A sketch of how this relates to the estimated size is given after this member list.

    footerOffset

    footer offset

    blocks

    blocks to be evaluated

    batchContext

    the batch building context

    returns

    the output size

  3. abstract def checkIfNeedToSplitDataBlock(currentBlockInfo: SingleDataBlockInfo, nextBlockInfo: SingleDataBlockInfo): Boolean

    Check whether the next block should be split into another ColumnarBatch. An illustrative split policy is sketched after this member list.

    currentBlockInfo

    current SingleDataBlockInfo

    nextBlockInfo

    next SingleDataBlockInfo

    returns

    true if the next block should be split into another ColumnarBatch, false otherwise

  4. abstract def getBatchRunner(tc: TaskContext, file: Path, outhmb: HostMemoryBuffer, blocks: ArrayBuffer[DataBlockBase], offset: Long, batchContext: BatchContext): Callable[(Seq[DataBlockBase], Long)]

    The sub-class must implement the real file reading logic in a Callable, which will run in a thread pool. An illustrative runner is sketched after this member list.

    tc

    task context to use

    file

    file to be read

    outhmb

    the sliced HostMemoryBuffer to hold the blocks; the sub-class implementation is in charge of closing it

    blocks

    block meta info specifying which blocks to read

    offset

    used as the offset adjustment

    batchContext

    the batch building context

    returns

    Callable[(Seq[DataBlockBase], Long)], which will be submitted to a ThreadPoolExecutor; the Callable returns a tuple where result._1 is the block meta info with the offset adjusted and result._2 is the number of bytes read

  5. abstract def getFileFormatShortName: String

    File format short name used for logging and other things to uniquely identify which file format is being used.

    returns

    the file format short name

  6. abstract def readBufferToTablesAndClose(dataBuffer: HostMemoryBuffer, dataSize: Long, clippedSchema: SchemaBase, readSchema: StructType, extraInfo: ExtraInfo): GpuDataProducer[Table]

    Send the host memory to the GPU to decode.

    dataBuffer

    the data to be decoded on the GPU

    dataSize

    data size

    clippedSchema

    the clipped schema

    readSchema

    the expected schema

    extraInfo

    the extra information for the specific file format

    returns

    a GpuDataProducer of Table

  7. abstract def writeFileFooter(buffer: HostMemoryBuffer, bufferSize: Long, footerOffset: Long, blocks: Seq[DataBlockBase], batchContext: BatchContext): (HostMemoryBuffer, Long)

    Write a footer for the specific file format. If there is no footer for the file format, just return (hmb, offset).

    Please note, some file formats may re-allocate the HostMemoryBuffer because the estimated initial buffer size may be a little smaller than the actual size. In that case, the original hmb should be closed in the implementation.

    buffer

    The buffer holding (header + data blocks)

    bufferSize

    The total buffer size, which equals the size of (header + blocks + footer)

    footerOffset

    Where to begin writing the footer

    blocks

    The data block meta info

    batchContext

    The batch building context

    returns

    the buffer and the buffer size

  8. abstract def writeFileHeader(buffer: HostMemoryBuffer, batchContext: BatchContext): Long

    Write a header for the specific file format. If there is no header for the file format, just ignore it and return 0. An illustrative header write is sketched after this member list.

    buffer

    where the header will be written

    batchContext

    the batch building context

    returns

    how many bytes were written
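
The sketches below illustrate possible implementations of several of the abstract members above. All types and names they introduce (BlockInfo, BlockMeta, RawBlock, the magic bytes, and so on) are hypothetical stand-ins, not part of the spark-rapids API.

For calculateEstimatedBlocksOutputSize and calculateFinalBlocksOutputSize, consider a made-up format with a 4-byte magic header and a footer of one 16-byte entry per block plus a trailing 4-byte magic. The key property is that the estimate is never smaller than the final size, so the HostMemoryBuffer never has to be re-allocated:

    // Hypothetical layout used only for illustration.
    val headerSize = 4L
    def footerSize(numBlocks: Int): Long = 16L * numBlocks + 4L

    // calculateEstimatedBlocksOutputSize analogue: header + block bytes + a
    // footer estimate that is deliberately >= the real footer size.
    def estimatedOutputSize(blockSizes: Seq[Long]): Long =
      headerSize + blockSizes.sum + footerSize(blockSizes.size)

    // calculateFinalBlocksOutputSize analogue: footerOffset already accounts for
    // the header and the blocks actually written, so only the footer is added.
    def finalOutputSize(footerOffset: Long, numBlocks: Int): Long =
      footerOffset + footerSize(numBlocks)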
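
For checkIfNeedToSplitDataBlock, a common policy is to refuse to coalesce blocks that cannot be stitched into a single contiguous in-memory file. This sketch uses a hypothetical BlockMeta stand-in (the real accessors of SingleDataBlockInfo are not shown here) and splits whenever adjacent blocks carry different clipped schemas:

    // Hypothetical stand-in for SingleDataBlockInfo.
    case class BlockMeta(filePath: String, schemaFingerprint: String)

    // Returns true when the next block must start a new ColumnarBatch.
    def checkIfNeedToSplit(current: BlockMeta, next: BlockMeta): Boolean =
      current.schemaFingerprint != next.schemaFingerprint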
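
For getBatchRunner, a schematic runner might simply copy each block's byte range from the source file into the sliced buffer it was handed and report the number of bytes read. This sketch assumes the Hadoop FileSystem API and ai.rapids.cudf.HostMemoryBuffer, and uses a hypothetical RawBlock stand-in instead of DataBlockBase:

    import java.util.concurrent.Callable
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import ai.rapids.cudf.HostMemoryBuffer

    // Hypothetical stand-in for DataBlockBase.
    case class RawBlock(startOffsetInFile: Long, length: Long)

    def makeRunner(
        conf: Configuration,
        file: Path,
        outhmb: HostMemoryBuffer,
        blocks: Seq[RawBlock]): Callable[(Seq[RawBlock], Long)] =
      new Callable[(Seq[RawBlock], Long)] {
        override def call(): (Seq[RawBlock], Long) = {
          val in = file.getFileSystem(conf).open(file)
          try {
            var written = 0L
            blocks.foreach { b =>
              // Simplification: assumes each block fits in a single byte array.
              val bytes = new Array[Byte](b.length.toInt)
              in.readFully(b.startOffsetInFile, bytes)      // positional read
              outhmb.setBytes(written, bytes, 0, b.length)  // copy into the slice
              written += b.length
            }
            // A real implementation would also rewrite each block's offsets to
            // point at its new position inside the coalesced buffer.
            (blocks, written)
          } finally {
            in.close()
            outhmb.close() // the runner owns the sliced buffer and must close it
          }
        }
      }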
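
For writeFileHeader, a format whose header is a fixed magic sequence could write the magic at offset 0 and return its length; a format without a header would simply return 0. A sketch with a made-up 4-byte magic:

    import java.nio.charset.StandardCharsets
    import ai.rapids.cudf.HostMemoryBuffer

    def writeMagicHeader(buffer: HostMemoryBuffer): Long = {
      val magic = "MAG1".getBytes(StandardCharsets.US_ASCII) // hypothetical magic
      buffer.setBytes(0, magic, 0, magic.length)             // write at offset 0
      magic.length.toLong                                    // bytes written
    }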

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. var batchIter: Iterator[ColumnarBatch]
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  7. def close(): Unit
    Definition Classes
    FilePartitionReaderBase → Closeable → AutoCloseable
  8. def createBatchContext(chunkedBlocks: LinkedHashMap[Path, ArrayBuffer[DataBlockBase]], clippedSchema: SchemaBase): BatchContext

    Return a batch context which will be shared during the process of building a memory file, i.e. with the following APIs:

    • calculateEstimatedBlocksOutputSize
    • writeFileHeader
    • getBatchRunner
    • calculateFinalBlocksOutputSize
    • writeFileFooter

    It is useful when something is needed by some or all of the above APIs. Children can override this to return a customized batch context.

    chunkedBlocks

    mapping of file path to data blocks

    clippedSchema

    schema info

    Attributes
    protected
  9. def currentMetricsValues(): Array[CustomTaskMetric]
    Definition Classes
    PartitionReader
  10. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  11. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  12. def fileSystemBytesRead(): Long
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
    Annotations
    @nowarn()
  13. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  14. def finalizeOutputBatch(batch: ColumnarBatch, extraInfo: ExtraInfo): ColumnarBatch

    A callback to finalize the output batch. The batch returned will be the final output batch of the reader's "get" method.

    batch

    the batch after decoding and adding partitioned columns.

    extraInfo

    the corresponding extra information of the input batch.

    returns

    the finalized columnar batch.

    Attributes
    protected
  15. def get(): ColumnarBatch
    Definition Classes
    FilePartitionReaderBase → PartitionReader
  16. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  17. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  18. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  19. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  20. var isDone: Boolean
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  21. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  22. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  23. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  24. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  25. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  26. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  27. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  28. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  29. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  30. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  31. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  32. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  35. val metrics: Map[String, GpuMetric]
    Definition Classes
    ScanWithMetrics
  36. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  37. def next(): Boolean
    Definition Classes
    MultiFileCoalescingPartitionReaderBase → PartitionReader
  38. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  39. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  40. def startNewBufferRetry: Unit

    You can reset the target batch size if needed for splits...

  41. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  42. def toString(): String
    Definition Classes
    AnyRef → Any
  43. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  44. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  45. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
