com.nvidia.spark.rapids

ParquetPartitionReaderBase

trait ParquetPartitionReaderBase extends Logging with ScanWithMetrics with MultiFileReaderFunctions


Abstract Value Members

  1. abstract def conf: Configuration
  2. abstract def execMetrics: Map[String, GpuMetric]
  3. abstract def isSchemaCaseSensitive: Boolean
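
A minimal sketch of a concrete class wiring up the abstract members above, assuming the spark-rapids artifact is on the classpath. The class and parameter names are illustrative only, and whether this trait can be mixed in on its own is an assumption, not something verified here:

    import org.apache.hadoop.conf.Configuration
    import com.nvidia.spark.rapids.{GpuMetric, ParquetPartitionReaderBase}

    // Hypothetical reader: supplies the three abstract members via constructor
    // parameters; everything else comes from the trait's concrete members.
    class MyParquetReaderBase(
        override val conf: Configuration,
        override val execMetrics: Map[String, GpuMetric],
        override val isSchemaCaseSensitive: Boolean)
      extends ParquetPartitionReaderBase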

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val PARQUET_META_SIZE: Long
  5. def addPartitionValues(batch: ColumnarBatch, inPartitionValues: InternalRow, partitionSchema: StructType): ColumnarBatch
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
  6. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  7. def calculateExtraMemoryForParquetFooter(numCols: Int, numBlocks: Int): Int

    Calculate an amount of extra memory if we are combining multiple files together. We want to add extra memory because the ColumnChunks saved in the footer have two fields, file_offset and data_page_offset, that get much larger when we are combining files. We estimate that by taking the number of columns times the number of blocks, which should be the number of column chunks, and then assuming a worst case of 8 bytes for each of the two fields that could grow. So we probably allocate too much here, but it shouldn't be by a huge amount, and it's better than having to realloc and copy.

    numCols

    the number of columns

    numBlocks

    the total number of blocks to be combined

    returns

    amount of extra memory to allocate
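
    Example

    A standalone sketch of the estimate described above (only the arithmetic, not the actual implementation):

      def estimateExtraFooterBytes(numCols: Int, numBlocks: Int): Int = {
        val columnChunks = numCols * numBlocks // one chunk per column per block
        val fieldsPerChunk = 2                 // file_offset and data_page_offset
        val bytesPerField = 8                  // assumed worst-case field size
        columnChunks * fieldsPerChunk * bytesPerField
      }

      // e.g. combining 4 blocks of 10 columns: 10 * 4 * 2 * 8 = 640 extra bytes
      estimateExtraFooterBytes(10, 4)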

  8. def calculateParquetFooterSize(currentChunkedBlocks: Seq[BlockMetaData], schema: MessageType): Long
    Attributes
    protected
    Annotations
    @nowarn()
  9. def calculateParquetOutputSize(currentChunkedBlocks: Seq[BlockMetaData], schema: MessageType, handleCoalesceFiles: Boolean): Long
    Attributes
    protected
  10. def checkIfNeedToSplitBlocks(currentIsCorrectedRebaseMode: Boolean, nextIsCorrectedRebaseMode: Boolean, currentIsCorrectedInt96RebaseMode: Boolean, nextIsCorrectedInt96RebaseMode: Boolean, currentSchema: SchemaBase, nextSchema: SchemaBase, currentFilePath: String, nextFilePath: String): Boolean
  11. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  12. def computeBlockMetaData(blocks: Seq[BlockMetaData], realStartOffset: Long): Seq[BlockMetaData]

    Computes new block metadata to reflect where the blocks and columns will appear in the computed Parquet file.

    blocks

    block metadata from the original file(s) that will appear in the computed file

    realStartOffset

    starting file offset of the first block

    returns

    updated block metadata
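
    Example

    A toy model of the offset rebasing described above, using hypothetical simplified case classes rather than the real BlockMetaData API:

      final case class ToyChunk(startingPos: Long, totalSize: Long)
      final case class ToyBlock(startingPos: Long, chunks: Seq[ToyChunk]) {
        def totalSize: Long = chunks.map(_.totalSize).sum
      }

      def rebase(blocks: Seq[ToyBlock], realStartOffset: Long): Seq[ToyBlock] = {
        var outPos = realStartOffset
        blocks.map { block =>
          val delta = outPos - block.startingPos // how far this block moves
          val moved = block.copy(
            startingPos = outPos,
            chunks = block.chunks.map(c =>
              c.copy(startingPos = c.startingPos + delta)))
          outPos += block.totalSize // next block starts right after this one
          moved
        }
      }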

    Attributes
    protected
    Annotations
    @nowarn()
  13. def copyBlocksData(filePath: Path, out: HostMemoryOutputStream, blocks: Seq[BlockMetaData], realStartOffset: Long, metrics: Map[String, GpuMetric]): Seq[BlockMetaData]

    Copies the data corresponding to the clipped blocks in the original file and computes the block metadata for the output. The output blocks will contain the same column chunk metadata but with the file offsets updated to reflect the new position of the column data as written to the output.

    out

    the output stream to receive the data

    blocks

    block metadata from the original file that will appear in the computed file

    realStartOffset

    starting file offset of the first block

    returns

    updated block metadata corresponding to the output
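
    Example

    A toy model of the copy-and-rebase behavior described above, using plain java.io streams and a hypothetical flat block type instead of the Hadoop and HostMemory types:

      import java.io.ByteArrayOutputStream

      final case class FlatBlock(startingPos: Long, size: Int)

      def copyBlocks(file: Array[Byte], blocks: Seq[FlatBlock],
          out: ByteArrayOutputStream, realStartOffset: Long): Seq[FlatBlock] = {
        var outPos = realStartOffset
        blocks.map { b =>
          out.write(file, b.startingPos.toInt, b.size) // copy the block's bytes
          val rebased = b.copy(startingPos = outPos)   // point at the output position
          outPos += b.size
          rebased
        }
      }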

    Attributes
    protected
  14. val copyBufferSize: Int
  15. def copyDataRange(range: CopyRange, in: FSDataInputStream, out: HostMemoryOutputStream, copyBuffer: Array[Byte]): Long
    Attributes
    protected
  16. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  17. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  18. def fileSystemBytesRead(): Long
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
    Annotations
    @nowarn()
  19. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  20. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  21. def getParquetOptions(readDataSchema: StructType, clippedSchema: MessageType, useFieldId: Boolean): ParquetOptions
  22. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  23. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  24. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  25. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  26. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  27. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  28. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  29. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  30. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  31. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  32. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  35. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  36. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  38. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  39. val metrics: Map[String, GpuMetric]
    Definition Classes
    ScanWithMetrics
  40. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  41. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  42. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  43. def populateCurrentBlockChunk(blockIter: BufferedIterator[BlockMetaData], maxReadBatchSizeRows: Int, maxReadBatchSizeBytes: Long, readDataSchema: StructType): Seq[BlockMetaData]
    Attributes
    protected
  44. def readPartFile(blocks: Seq[BlockMetaData], clippedSchema: MessageType, filePath: Path): (HostMemoryBuffer, Long, Seq[BlockMetaData])
    Attributes
    protected
  45. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  46. implicit def toBlockMetaData(block: DataBlockBase): BlockMetaData

    Conversions used by the multithreaded reader and the coalescing reader.

  47. implicit def toBlockMetaDataSeq(blocks: Seq[DataBlockBase]): Seq[BlockMetaData]
  48. def toCudfColumnNames(readDataSchema: StructType, fileSchema: MessageType, isCaseSensitive: Boolean, useFieldId: Boolean): Seq[String]

    Takes case sensitivity into consideration when getting the data reading column names before sending the parquet-formatted buffer to cudf. Also clips the column names if useFieldId is true.

    readDataSchema

    Spark schema to read

    fileSchema

    the schema of the dumped parquet-formatted buffer, with unmatched columns already removed

    isCaseSensitive

    whether the name matching is case sensitive

    useFieldId

    whether spark.sql.parquet.fieldId.read.enabled is enabled

    returns

    a sequence of column names following the order of readDataSchema
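
    Example

    A simplified sketch of the case-(in)sensitive name matching described above, using plain string sequences instead of the Spark and Parquet schema types:

      def matchColumnNames(readNames: Seq[String], fileNames: Seq[String],
          isCaseSensitive: Boolean): Seq[String] = {
        if (isCaseSensitive) {
          readNames.filter(fileNames.toSet) // exact matches only
        } else {
          // map lower-cased file names back to their original spelling
          val byLower = fileNames.map(n => n.toLowerCase -> n).toMap
          readNames.flatMap(n => byLower.get(n.toLowerCase))
        }
      }

      matchColumnNames(Seq("ID", "Name"), Seq("id", "name"), isCaseSensitive = false)
      // => Seq("id", "name"), following the order of readNames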

    Attributes
    protected
  49. implicit def toDataBlockBase(blocks: Seq[BlockMetaData]): Seq[DataBlockBase]
  50. def toString(): String
    Definition Classes
    AnyRef → Any
  51. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  52. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  53. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  54. def writeFooter(out: OutputStream, blocks: Seq[BlockMetaData], schema: MessageType): Unit
    Attributes
    protected
