

org.apache.spark.sql.rapids

GpuMultiFileCloudAvroPartitionReader

class GpuMultiFileCloudAvroPartitionReader extends MultiFileCloudPartitionReaderBase with MultiFileReaderFunctions with GpuAvroReaderBase

A PartitionReader that can read multiple AVRO files in parallel. This is most efficient when running in a cloud environment where the I/O of reading is slow.

When reading a file, it

  • seeks to the start position of the first block located in this partition;
  • then, for each block up to the last one in the current partition, parses the metadata and sync marker, rewrites them, and copies the block data into a batch buffer;
  • finally, sends the batches to the GPU.
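The per-file loop above can be sketched in simplified form as follows. BlockInfo and the helper here are illustrative stand-ins (hypothetical names and shapes), not the real reader classes, and the metadata/sync rewriting step is elided.

```scala
import java.io.ByteArrayOutputStream

// Simplified stand-in for the real block metadata; illustrative only.
case class BlockInfo(offset: Long, length: Int)

object AvroReadSketch {
  // Simulates reading the blocks of one partition from raw file bytes into
  // a single batch buffer, mirroring the seek/copy-per-block loop: seek to
  // each block's start, copy its data, and return the buffer that would
  // then be sent to the GPU.
  def readPartitionBlocks(fileBytes: Array[Byte], blocks: Seq[BlockInfo]): Array[Byte] = {
    val out = new ByteArrayOutputStream()
    blocks.foreach { b =>
      // "seek" to the block start, then copy its data to the batch buffer
      out.write(fileBytes, b.offset.toInt, b.length)
    }
    out.toByteArray
  }
}
```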
Inherited
  1. GpuMultiFileCloudAvroPartitionReader
  2. GpuAvroReaderBase
  3. MultiFileReaderFunctions
  4. MultiFileCloudPartitionReaderBase
  5. FilePartitionReaderBase
  6. ScanWithMetrics
  7. Logging
  8. PartitionReader
  9. Closeable
  10. AutoCloseable
  11. AnyRef
  12. Any

Instance Constructors

  1. new GpuMultiFileCloudAvroPartitionReader(conf: Configuration, files: Array[PartitionedFile], numThreads: Int, maxNumFileProcessed: Int, filters: Array[Filter], execMetrics: Map[String, GpuMetric], ignoreCorruptFiles: Boolean, ignoreMissingFiles: Boolean, debugDumpPrefix: Option[String], debugDumpAlways: Boolean, readDataSchema: StructType, partitionSchema: StructType, maxReadBatchSizeRows: Integer, maxReadBatchSizeBytes: Long, maxGpuColumnSizeBytes: Long)

    conf

    the Hadoop configuration

    files

    the partitioned files to read

    numThreads

    the size of the threadpool

    maxNumFileProcessed

the maximum number of files to be submitted to the thread pool at once

    filters

    filters passed into the filterHandler

    execMetrics

    the metrics

    ignoreCorruptFiles

    Whether to ignore corrupt files

    ignoreMissingFiles

    Whether to ignore missing files

    debugDumpPrefix

a path prefix to use for dumping the fabricated AVRO data, or None

    debugDumpAlways

    whether to debug dump always or only on errors

    readDataSchema

    the Spark schema describing what will be read

    partitionSchema

the schema of the partitions

    maxReadBatchSizeRows

    soft limit on the maximum number of rows to be read per batch

    maxReadBatchSizeBytes

    soft limit on the maximum number of bytes to be read per batch

    maxGpuColumnSizeBytes

    maximum number of bytes for a GPU column

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. var batchIter: Iterator[ColumnarBatch]
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  6. val cacheBufferSize: Int
    Definition Classes
    GpuAvroReaderBase
  7. def canUseCombine: Boolean
  8. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  9. def close(): Unit
    Definition Classes
MultiFileCloudPartitionReaderBase → FilePartitionReaderBase → Closeable → AutoCloseable
  10. def combineHMBs(results: Array[HostMemoryBuffersWithMetaDataBase]): HostMemoryBuffersWithMetaDataBase
  11. var combineLeftOverFiles: Option[Array[HostMemoryBuffersWithMetaDataBase]]
    Attributes
    protected
    Definition Classes
    MultiFileCloudPartitionReaderBase
  12. val conf: Configuration
  13. final def copyBlocksData(blocks: Seq[BlockInfo], in: FSDataInputStream, out: OutputStream, sync: Option[Array[Byte]] = None): Seq[BlockInfo]

Copy the data specified by the blocks from in to out

    Attributes
    protected
    Definition Classes
    GpuAvroReaderBase
  14. var currentFileHostBuffers: Option[HostMemoryBuffersWithMetaDataBase]
    Attributes
    protected
    Definition Classes
    MultiFileCloudPartitionReaderBase
  15. def currentMetricsValues(): Array[CustomTaskMetric]
    Definition Classes
    PartitionReader
  16. val debugDumpAlways: Boolean
  17. val debugDumpPrefix: Option[String]
  18. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  19. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  20. final def estimateOutputSize(blocks: Seq[BlockInfo], headerSize: Long): Long

Estimate the total size from the given blocks and header

    Attributes
    protected
    Definition Classes
    GpuAvroReaderBase
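A minimal sketch of what such an estimate could look like, assuming it is simply the header size plus the sum of the block sizes; this is an illustration only, and the real method may also account for the rewritten metadata and sync markers.

```scala
// Hypothetical sketch: estimate the output buffer size as the header size
// plus the total size of all blocks to be copied.
def estimateOutputSizeSketch(blockSizes: Seq[Long], headerSize: Long): Long =
  headerSize + blockSizes.sum
```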
  21. def fileSystemBytesRead(): Long
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
    Annotations
    @nowarn()
  22. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  23. def get(): ColumnarBatch
    Definition Classes
    FilePartitionReaderBase → PartitionReader
  24. def getBatchRunner(tc: TaskContext, file: PartitionedFile, origFile: Option[PartitionedFile], config: Configuration, filters: Array[Filter]): Callable[HostMemoryBuffersWithMetaDataBase]

The subclass must implement the real file reading logic in a Callable that will run in a thread pool

    tc

    task context to use

    file

    file to be read

    origFile

    optional original unmodified file if replaced with Alluxio

    filters

    push down filters

    returns

    Callable[HostMemoryBuffersWithMetaDataBase]

    Definition Classes
GpuMultiFileCloudAvroPartitionReader → MultiFileCloudPartitionReaderBase
  25. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  26. final def getFileFormatShortName: String

File format short name used for logging and other purposes to uniquely identify which file format is being used.

    returns

    the file format short name

    Definition Classes
GpuMultiFileCloudAvroPartitionReader → MultiFileCloudPartitionReaderBase
  27. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  28. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  29. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  30. var isDone: Boolean
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  31. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  32. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  33. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  34. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  35. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  36. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  38. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  39. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  40. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  41. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  43. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  44. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  45. val metrics: Map[String, GpuMetric]
    Definition Classes
    ScanWithMetrics
  46. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  47. def next(): Boolean
    Definition Classes
    MultiFileCloudPartitionReaderBase → PartitionReader
  48. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  49. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  50. final def populateCurrentBlockChunk(blockIter: BufferedIterator[BlockInfo], maxReadBatchSizeRows: Int, maxReadBatchSizeBytes: Long): Seq[BlockInfo]

Get the block chunk according to the max batch size and max rows.

    blockIter

    blocks to be evaluated

    maxReadBatchSizeRows

    soft limit on the maximum number of rows the reader reads per batch

    maxReadBatchSizeBytes

    soft limit on the maximum number of bytes the reader reads per batch

    Attributes
    protected
    Definition Classes
    GpuAvroReaderBase
  51. def readBatches(fileBufsAndMeta: HostMemoryBuffersWithMetaDataBase): Iterator[ColumnarBatch]

Decode HostMemoryBuffers on the GPU.

    fileBufsAndMeta

    the file HostMemoryBuffer read from a PartitionedFile

    returns

    an iterator of batches that were decoded

    Definition Classes
GpuMultiFileCloudAvroPartitionReader → MultiFileCloudPartitionReaderBase
  52. val readDataSchema: StructType
  53. final def readPartFile(partFilePath: Path, blocks: Seq[BlockInfo], headerSize: Long, conf: Configuration): (HostMemoryBuffer, Long)

Read a split into a host buffer, preparing it for sending to the GPU.

    Attributes
    protected
    Definition Classes
    GpuAvroReaderBase
  54. final def sendToGpu(hostBuf: HostMemoryBuffer, bufSize: Long, splits: Array[PartitionedFile]): Option[ColumnarBatch]

Send a host buffer to the GPU for decoding, and return it as a ColumnarBatch. The input hostBuf will be closed after returning; do not use it afterwards. 'splits' is used only for debugging.

    Attributes
    protected
    Definition Classes
    GpuAvroReaderBase
  55. final def sendToGpuUnchecked(hostBuf: HostMemoryBuffer, bufSize: Long, splits: Array[PartitionedFile]): Table

Read the host data to the GPU for decoding, and return it as a cuDF Table. The input host buffer must contain valid data; otherwise the behavior is undefined. 'splits' is used only for debugging.

    Attributes
    protected
    Definition Classes
    GpuAvroReaderBase
  56. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  57. def toString(): String
    Definition Classes
    AnyRef → Any
  58. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  59. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  60. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()

Inherited from GpuAvroReaderBase

Inherited from MultiFileReaderFunctions

Inherited from FilePartitionReaderBase

Inherited from ScanWithMetrics

Inherited from Logging

Inherited from PartitionReader[ColumnarBatch]

Inherited from Closeable

Inherited from AutoCloseable

Inherited from AnyRef

Inherited from Any
