Packages

com.nvidia.spark.rapids

MultiFileParquetPartitionReader

class MultiFileParquetPartitionReader extends MultiFileCoalescingPartitionReaderBase with ParquetPartitionReaderBase

A PartitionReader that can read multiple Parquet files up to a certain size. It coalesces small files together and copies the block data in a separate thread pool to speed up processing of the small files before sending them down to the GPU.

Efficiently reading a Parquet split on the GPU requires reconstructing, in memory, a Parquet file that contains just the column chunks that are needed. This avoids sending unnecessary data to the GPU and saves GPU memory.
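The coalescing idea can be pictured with a small sketch. This is illustrative only: the real reader groups clipped Parquet block metadata under several constraints (schema, rebase mode, batch limits), not raw file sizes, and all names here are hypothetical.

```scala
import scala.collection.mutable.ArrayBuffer

// Illustrative-only sketch of coalescing: accumulate small inputs until a
// target byte size would be exceeded, so each group becomes one batch.
def coalesce(fileSizes: Seq[Long], targetBytes: Long): Seq[Seq[Long]] = {
  val groups = ArrayBuffer(ArrayBuffer.empty[Long])
  var current = 0L
  for (size <- fileSizes) {
    if (current + size > targetBytes && groups.last.nonEmpty) {
      groups += ArrayBuffer.empty[Long] // start a new batch
      current = 0L
    }
    groups.last += size
    current += size
  }
  groups.map(_.toSeq).toSeq
}
```

With a target of 3 bytes, four 1-byte files coalesce into a group of three followed by a group of one.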

Inherited
  1. MultiFileParquetPartitionReader
  2. ParquetPartitionReaderBase
  3. MultiFileCoalescingPartitionReaderBase
  4. MultiFileReaderFunctions
  5. FilePartitionReaderBase
  6. ScanWithMetrics
  7. Logging
  8. PartitionReader
  9. Closeable
  10. AutoCloseable
  11. AnyRef
  12. Any

Instance Constructors

  1. new MultiFileParquetPartitionReader(conf: Configuration, splits: Array[PartitionedFile], clippedBlocks: Seq[ParquetSingleDataBlockMeta], isSchemaCaseSensitive: Boolean, debugDumpPrefix: Option[String], debugDumpAlways: Boolean, useChunkedReader: Boolean, maxReadBatchSizeRows: Integer, maxReadBatchSizeBytes: Long, targetBatchSizeBytes: Long, execMetrics: Map[String, GpuMetric], partitionSchema: StructType, numThreads: Int, ignoreMissingFiles: Boolean, ignoreCorruptFiles: Boolean, useFieldId: Boolean)

    conf

    the Hadoop configuration

    splits

    the partitioned files to read

    clippedBlocks

    the block metadata from the original Parquet file that has been clipped to only contain the column chunks to be read

    isSchemaCaseSensitive

    whether the schema is case-sensitive

    debugDumpPrefix

    a path prefix to use for dumping the fabricated Parquet data

    debugDumpAlways

    whether to always dump the debug data or only on errors

    maxReadBatchSizeRows

    soft limit on the maximum number of rows the reader reads per batch

    maxReadBatchSizeBytes

    soft limit on the maximum number of bytes the reader reads per batch

    execMetrics

    metrics

    partitionSchema

    the schema of the partition columns

    numThreads

    the size of the thread pool

    ignoreMissingFiles

    whether to ignore missing files

    ignoreCorruptFiles

    whether to ignore corrupt files

Type Members

  1. class ParquetCopyBlocksRunner extends Callable[(Seq[DataBlockBase], Long)]

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val PARQUET_META_SIZE: Long
    Definition Classes
    ParquetPartitionReaderBase
  5. implicit def ParquetSingleDataBlockMeta(in: ExtraInfo): ParquetExtraInfo
  6. def addPartitionValues(batch: ColumnarBatch, inPartitionValues: InternalRow, partitionSchema: StructType): ColumnarBatch
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
  7. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  8. var batchIter: Iterator[ColumnarBatch]
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  9. def calculateEstimatedBlocksOutputSize(batchContext: BatchContext): Long

    Calculate the output size according to the block chunks and the schema; the estimated output size will be used as the initial allocation size of the HostMemoryBuffer.

    Please note, the estimated size should be at least equal to the size of header + blocks + footer.

    batchContext

    the batch building context

    returns

    Long, the estimated output size

    Definition Classes
    MultiFileParquetPartitionReader → MultiFileCoalescingPartitionReaderBase
  10. def calculateExtraMemoryForParquetFooter(numCols: Int, numBlocks: Int): Int

    Calculate an amount of extra memory if we are combining multiple files together. We want extra memory because the ColumnChunks saved in the footer have 2 fields, file_offset and data_page_offset, that get much larger when we are combining files. We estimate that by taking the number of columns times the number of blocks, which should be the number of column chunks, and assuming both fields could grow to a worst-case size of 8 bytes each. We probably allocate too much here, but it should not be by a huge amount, and it is better than having to reallocate and copy.

    numCols

    the number of columns

    numBlocks

    the total number of blocks to be combined

    returns

    amount of extra memory to allocate

    Definition Classes
    ParquetPartitionReaderBase
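The estimate described above can be re-derived as a short worked example. This is an illustrative re-statement of the documented formula, not the actual implementation; the function name is hypothetical.

```scala
// Each column chunk in the combined footer has two offset fields
// (file_offset and data_page_offset) that can grow when files are
// combined; each is assumed to be at most 8 bytes in the worst case.
def estimateExtraFooterMemory(numCols: Int, numBlocks: Int): Int = {
  val numColumnChunks = numCols * numBlocks
  val offsetFieldsPerChunk = 2
  val worstCaseBytesPerField = 8
  numColumnChunks * offsetFieldsPerChunk * worstCaseBytesPerField
}
```

For example, 10 columns across 4 combined blocks gives 10 * 4 * 2 * 8 = 640 extra bytes.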
  11. def calculateFinalBlocksOutputSize(footerOffset: Long, blocks: Seq[DataBlockBase], bContext: BatchContext): Long

    Calculate the final block output size, which will be used to decide whether the HostMemoryBuffer needs to be reallocated.

    There is no need to recalculate the block size; just calculate the footer size and add footerOffset.

    If the size calculated by this function is bigger than the one calculated by calculateEstimatedBlocksOutputSize, the HostMemoryBuffer will be reallocated, causing a performance issue.

    footerOffset

    footer offset

    blocks

    blocks to be evaluated

    returns

    the output size

    Definition Classes
    MultiFileParquetPartitionReader → MultiFileCoalescingPartitionReaderBase
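The size relationship above can be sketched in two lines. Names here are hypothetical stand-ins for the real computation:

```scala
// The final size is simply the footer size added to the footer offset.
def finalOutputSize(footerOffset: Long, footerSize: Long): Long =
  footerOffset + footerSize

// Reallocation (and the associated performance cost) is only needed when
// the final size exceeds the earlier estimate.
def needsReallocation(estimatedSize: Long, finalSize: Long): Boolean =
  finalSize > estimatedSize
```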
  12. def calculateParquetFooterSize(currentChunkedBlocks: Seq[BlockMetaData], schema: MessageType): Long
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
    Annotations
    @nowarn()
  13. def calculateParquetOutputSize(currentChunkedBlocks: Seq[BlockMetaData], schema: MessageType, handleCoalesceFiles: Boolean): Long
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  14. def checkIfNeedToSplitBlocks(currentIsCorrectedRebaseMode: Boolean, nextIsCorrectedRebaseMode: Boolean, currentIsCorrectedInt96RebaseMode: Boolean, nextIsCorrectedInt96RebaseMode: Boolean, currentSchema: SchemaBase, nextSchema: SchemaBase, currentFilePath: String, nextFilePath: String): Boolean
    Definition Classes
    ParquetPartitionReaderBase
  15. def checkIfNeedToSplitDataBlock(currentBlockInfo: SingleDataBlockInfo, nextBlockInfo: SingleDataBlockInfo): Boolean

    Check whether the next block will be split into another ColumnarBatch.

    currentBlockInfo

    current SingleDataBlockInfo

    nextBlockInfo

    next SingleDataBlockInfo

    returns

    true if the next block will be split into another ColumnarBatch; false otherwise

    Definition Classes
    MultiFileParquetPartitionReader → MultiFileCoalescingPartitionReaderBase
  16. final def chunkedSplit(buffer: HostMemoryBuffer): Seq[HostMemoryBuffer]

    Set this to a splitter instance when chunked reading is supported

    Definition Classes
    MultiFileParquetPartitionReader → MultiFileCoalescingPartitionReaderBase
  17. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  18. def close(): Unit
    Definition Classes
    FilePartitionReaderBase → Closeable → AutoCloseable
  19. def computeBlockMetaData(blocks: Seq[BlockMetaData], realStartOffset: Long): Seq[BlockMetaData]

    Computes new block metadata to reflect where the blocks and columns will appear in the computed Parquet file.

    blocks

    block metadata from the original file(s) that will appear in the computed file

    realStartOffset

    starting file offset of the first block

    returns

    updated block metadata

    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
    Annotations
    @nowarn()
  20. val conf: Configuration
  21. def copyBlocksData(filePath: Path, out: HostMemoryOutputStream, blocks: Seq[BlockMetaData], realStartOffset: Long, metrics: Map[String, GpuMetric]): Seq[BlockMetaData]

    Copies the data corresponding to the clipped blocks in the original file and computes the block metadata for the output. The output blocks will contain the same column chunk metadata but with the file offsets updated to reflect the new position of the column data as written to the output.

    out

    the output stream to receive the data

    blocks

    block metadata from the original file that will appear in the computed file

    realStartOffset

    starting file offset of the first block

    returns

    updated block metadata corresponding to the output

    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  22. val copyBufferSize: Int
    Definition Classes
    ParquetPartitionReaderBase
  23. def copyDataRange(range: CopyRange, in: FSDataInputStream, out: HostMemoryOutputStream, copyBuffer: Array[Byte]): Long
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  24. def createBatchContext(chunkedBlocks: LinkedHashMap[Path, ArrayBuffer[DataBlockBase]], clippedSchema: SchemaBase): BatchContext

    Return a batch context which will be shared during the process of building a memory file, i.e. across the following APIs:

    • calculateEstimatedBlocksOutputSize
    • writeFileHeader
    • getBatchRunner
    • calculateFinalBlocksOutputSize
    • writeFileFooter

    It is useful when something is needed by some or all of the above APIs. Children can override this to return a customized batch context.
    chunkedBlocks

    mapping of file path to data blocks

    clippedSchema

    schema info

    Attributes
    protected
    Definition Classes
    MultiFileCoalescingPartitionReaderBase
  25. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  26. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  27. val execMetrics: Map[String, GpuMetric]
  28. def fileSystemBytesRead(): Long
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
    Annotations
    @nowarn()
  29. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  30. def finalizeOutputBatch(batch: ColumnarBatch, extraInfo: ExtraInfo): ColumnarBatch

    A callback to finalize the output batch. The batch returned will be the final output batch of the reader's "get" method.

    batch

    the batch after decoding, adding partitioned columns.

    extraInfo

    the corresponding extra information of the input batch.

    returns

    the finalized columnar batch.

    Attributes
    protected
    Definition Classes
    MultiFileCoalescingPartitionReaderBase
  31. def get(): ColumnarBatch
    Definition Classes
    FilePartitionReaderBase → PartitionReader
  32. def getBatchRunner(taskContext: TaskContext, file: Path, outhmb: HostMemoryBuffer, blocks: ArrayBuffer[DataBlockBase], offset: Long, batchContext: BatchContext): Callable[(Seq[DataBlockBase], Long)]

    The subclass must implement the real file reading logic in a Callable, which will run in a thread pool.

    file

    file to be read

    outhmb

    the sliced HostMemoryBuffer to hold the blocks; the subclass implementation is responsible for closing it

    blocks

    blocks meta info to specify which blocks to be read

    offset

    used as the offset adjustment

    batchContext

    the batch building context

    returns

    a Callable[(Seq[DataBlockBase], Long)] that will be submitted to a ThreadPoolExecutor; the Callable returns a tuple where result._1 is the block meta info with the offset adjusted and result._2 is the number of bytes read

    Definition Classes
    MultiFileParquetPartitionReader → MultiFileCoalescingPartitionReaderBase
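The contract of the batch runner can be illustrated with a minimal sketch. `FakeBlock` and `CopyBlocksRunner` are hypothetical stand-ins, not the real `DataBlockBase` or `ParquetCopyBlocksRunner`; they only show the shape of the returned tuple (adjusted block metadata, bytes read).

```scala
import java.util.concurrent.{Callable, Executors}

// Hypothetical block metadata: just an offset and a size.
final case class FakeBlock(offset: Long, size: Long)

// A Callable that "copies" its blocks, adjusting offsets and reporting
// the number of bytes read, mirroring the documented return contract.
class CopyBlocksRunner(blocks: Seq[FakeBlock], offsetAdjustment: Long)
    extends Callable[(Seq[FakeBlock], Long)] {
  override def call(): (Seq[FakeBlock], Long) = {
    val adjusted = blocks.map(b => b.copy(offset = b.offset + offsetAdjustment))
    val bytesRead = blocks.map(_.size).sum
    (adjusted, bytesRead)
  }
}

// Submitted to a thread pool the same way the real runner would be:
val pool = Executors.newFixedThreadPool(2)
val future = pool.submit(new CopyBlocksRunner(Seq(FakeBlock(0L, 100L)), 4L))
val (adjusted, bytesRead) = future.get()
pool.shutdown()
```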
  33. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  34. final def getFileFormatShortName: String

    File format short name used for logging and other purposes to uniquely identify which file format is being used.

    returns

    the file format short name

    Definition Classes
    MultiFileParquetPartitionReader → MultiFileCoalescingPartitionReaderBase
  35. def getParquetOptions(readDataSchema: StructType, clippedSchema: MessageType, useFieldId: Boolean): ParquetOptions
    Definition Classes
    ParquetPartitionReaderBase
  36. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  37. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  38. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  39. var isDone: Boolean
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  40. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  41. val isSchemaCaseSensitive: Boolean
  42. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  43. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  44. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  45. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  46. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  47. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  48. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  49. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  50. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  51. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  52. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  53. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  54. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  55. val metrics: Map[String, GpuMetric]
    Definition Classes
    ScanWithMetrics
  56. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  57. def next(): Boolean
    Definition Classes
    MultiFileCoalescingPartitionReaderBase → PartitionReader
  58. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  59. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  60. def populateCurrentBlockChunk(blockIter: BufferedIterator[BlockMetaData], maxReadBatchSizeRows: Int, maxReadBatchSizeBytes: Long, readDataSchema: StructType): Seq[BlockMetaData]
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  61. def readBufferToTablesAndClose(dataBuffer: HostMemoryBuffer, dataSize: Long, clippedSchema: SchemaBase, readDataSchema: StructType, extraInfo: ExtraInfo): GpuDataProducer[Table]

    Send host memory to the GPU to decode.

    dataBuffer

    the data to be decoded on the GPU

    dataSize

    data size

    clippedSchema

    the clipped schema

    extraInfo

    the extra information for specific file format

    returns

    a GpuDataProducer[Table] yielding the decoded tables

    Definition Classes
    MultiFileParquetPartitionReader → MultiFileCoalescingPartitionReaderBase
  62. def readPartFile(blocks: Seq[BlockMetaData], clippedSchema: MessageType, filePath: Path): (HostMemoryBuffer, Long, Seq[BlockMetaData])
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  63. final def startNewBufferRetry(): Unit

    You can reset the target batch size if needed for splits...

    Definition Classes
    MultiFileParquetPartitionReader → MultiFileCoalescingPartitionReaderBase
  64. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  65. implicit def toBlockMetaData(block: DataBlockBase): BlockMetaData

    Conversions used by the multithreaded reader and the coalescing reader.

    Definition Classes
    ParquetPartitionReaderBase
  66. implicit def toBlockMetaDataSeq(blocks: Seq[DataBlockBase]): Seq[BlockMetaData]
    Definition Classes
    ParquetPartitionReaderBase
  67. def toCudfColumnNames(readDataSchema: StructType, fileSchema: MessageType, isCaseSensitive: Boolean, useFieldId: Boolean): Seq[String]

    Takes case sensitivity into consideration when getting the column names to read before sending the parquet-formatted buffer to cudf. Also clips the column names if useFieldId is true.

    readDataSchema

    Spark schema to read

    fileSchema

    the schema of the dumped parquet-formatted buffer, with unmatched fields already removed

    isCaseSensitive

    if it is case sensitive

    useFieldId

    whether spark.sql.parquet.fieldId.read.enabled is enabled

    returns

    a sequence of column names following the order of readDataSchema

    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  68. implicit def toDataBlockBase(blocks: Seq[BlockMetaData]): Seq[DataBlockBase]
    Definition Classes
    ParquetPartitionReaderBase
  69. implicit def toMessageType(schema: SchemaBase): MessageType
  70. def toString(): String
    Definition Classes
    AnyRef → Any
  71. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  72. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  73. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  74. def writeFileFooter(buffer: HostMemoryBuffer, bufferSize: Long, footerOffset: Long, blocks: Seq[DataBlockBase], bContext: BatchContext): (HostMemoryBuffer, Long)

    Write a footer for a specific file format. If there is no footer for the file format, just return (hmb, offset).

    Please note, some file formats may reallocate the HostMemoryBuffer because the estimated initial buffer size may be a little smaller than the actual size. In that case, the hmb should be closed by the implementation.

    buffer

    The buffer holding (header + data blocks)

    bufferSize

    The total buffer size, which equals the size of (header + blocks + footer)

    footerOffset

    The offset at which to begin writing the footer

    blocks

    The data block meta info

    returns

    the buffer and the buffer size

    Definition Classes
    MultiFileParquetPartitionReader → MultiFileCoalescingPartitionReaderBase
  75. def writeFileHeader(buffer: HostMemoryBuffer, bContext: BatchContext): Long

    Write a header for a specific file format. If there is no header for the file format, just ignore it and return 0.

    buffer

    where the header will be written

    returns

    the number of bytes written

    Definition Classes
    MultiFileParquetPartitionReader → MultiFileCoalescingPartitionReaderBase
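For Parquet specifically, the file header is the 4-byte magic "PAR1", so a header writer only needs to emit it and report 4 bytes written. The sketch below is illustrative: the real method writes into a HostMemoryBuffer, and an OutputStream is used here as a stand-in.

```scala
import java.io.ByteArrayOutputStream

// Write the Parquet magic and return the number of bytes written, as the
// writeFileHeader contract requires.
def writeParquetHeader(out: ByteArrayOutputStream): Long = {
  val magic = "PAR1".getBytes("US-ASCII")
  out.write(magic)
  magic.length.toLong
}
```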
  76. def writeFooter(out: OutputStream, blocks: Seq[BlockMetaData], schema: MessageType): Unit
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase

Inherited from MultiFileReaderFunctions

Inherited from FilePartitionReaderBase

Inherited from ScanWithMetrics

Inherited from Logging

Inherited from PartitionReader[ColumnarBatch]

Inherited from Closeable

Inherited from AutoCloseable

Inherited from AnyRef

Inherited from Any
