ParquetFileFormat

class ParquetFileFormat extends FileFormat with DataSourceRegister with Logging with Serializable

Linear Supertypes
Serializable, Serializable, Logging, DataSourceRegister, FileFormat, AnyRef, Any

Instance Constructors

  1. new ParquetFileFormat()

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def buildReader(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]

    Returns a function that can be used to read a single file in as an Iterator of InternalRow.

    dataSchema

    The global data schema. It can be either specified by the user, or reconciled/merged from all underlying data files. If any partition columns are contained in the files, they are preserved in this schema.

    partitionSchema

    The schema of the partition column row that will be present in each PartitionedFile. These columns should be appended to the rows that are produced by the iterator.

    requiredSchema

    The schema of the data that should be output for each row. This may be a subset of the columns that are present in the file if column pruning has occurred.

    filters

    A set of filters that can optionally be used to reduce the number of rows output.

    options

    A set of string -> string configuration options.

    Attributes
    protected
    Definition Classes
    FileFormat
  6. def buildReaderWithPartitionValues(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]

    Build the reader.

    Definition Classes
    ParquetFileFormat → FileFormat
    Note

    It is required to pass FileFormat.OPTION_RETURNING_BATCH in options, to indicate whether the reader should return row or columnar output. If the caller can handle both, pass FileFormat.OPTION_RETURNING_BATCH -> supportBatch(sparkSession, StructType(requiredSchema.fields ++ partitionSchema.fields)) as the option. It should be set to "true" only if this reader can support it.
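
    A minimal call-site sketch honoring this note (this is internal Spark API; the format instance, session, and schemas are assumed to be in scope):

      // Sketch only: buildReaderWithPartitionValues is internal Spark API.
      val fullSchema = StructType(requiredSchema.fields ++ partitionSchema.fields)
      val readFile = format.buildReaderWithPartitionValues(
        spark, dataSchema, partitionSchema, requiredSchema,
        filters = Seq.empty,
        options = Map(FileFormat.OPTION_RETURNING_BATCH ->
          format.supportBatch(spark, fullSchema).toString),
        hadoopConf = spark.sessionState.newHadoopConf())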

  7. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  8. def createFileMetadataCol(): AttributeReference

    Create a file metadata struct column containing fields supported by the given file format.

    Definition Classes
    FileFormat
  9. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  10. def equals(other: Any): Boolean
    Definition Classes
    ParquetFileFormat → AnyRef → Any
  11. def fileConstantMetadataExtractors: Map[String, (PartitionedFile) ⇒ Any]

    The extractors to use when deriving file-constant metadata columns for this file format.

    Implementations that define custom constant metadata columns can override this method to associate a custom extractor with a given metadata column name, when a simple name-based lookup in PartitionedFile.extraConstantMetadataColumnValues is not expressive enough; extractors have access to the entire PartitionedFile and can perform arbitrary computations.

    NOTE: Extractors are lazy, invoked only if the query actually selects their column at runtime.

    See also FileFormat.getFileConstantMetadataColumnValue.

    Definition Classes
    FileFormat
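
    A hypothetical sketch of such an override; the column name "file_length" and the subclass are illustrative, not part of Spark:

      import org.apache.spark.sql.execution.datasources.PartitionedFile
      import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat

      // Hypothetical: associate an assumed metadata column name with an
      // extractor that computes its value from the whole PartitionedFile.
      class MyFileFormat extends ParquetFileFormat {
        override def fileConstantMetadataExtractors: Map[String, PartitionedFile => Any] =
          super.fileConstantMetadataExtractors +
            ("file_length" -> ((pf: PartitionedFile) => pf.length))
      }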
  12. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  13. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  14. def hashCode(): Int
    Definition Classes
    ParquetFileFormat → AnyRef → Any
  15. def inferSchema(sparkSession: SparkSession, parameters: Map[String, String], files: Seq[FileStatus]): Option[StructType]

    When possible, this method should return the schema of the given files. When the format does not support inference, or no valid files are given, it should return None; in these cases Spark will require the user to specify the schema manually.

    Definition Classes
    ParquetFileFormat → FileFormat
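
    In user code, inference is triggered implicitly by reading without an explicit schema; a minimal sketch (placeholder path):

      // The schema is inferred from the Parquet footers (and merged across
      // files when spark.sql.parquet.mergeSchema is enabled).
      val df = spark.read.format("parquet").load("/tmp/events")
      df.printSchema()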
  16. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  17. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  18. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  19. def isSplitable(sparkSession: SparkSession, options: Map[String, String], path: Path): Boolean

    Returns whether a file with the given path can be split or not.

    Definition Classes
    ParquetFileFormat → FileFormat
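
    Parquet is a block-based, splittable format, so this returns true; a direct-call sketch against the internal API (placeholder path):

      import org.apache.hadoop.fs.Path
      import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat

      val splittable = new ParquetFileFormat()
        .isSplitable(spark, Map.empty, new Path("/tmp/events/part-00000.parquet"))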
  20. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  21. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  22. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  23. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  24. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  25. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  26. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  27. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  28. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  29. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  30. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  31. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  32. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def metadataSchemaFields: Seq[StructField]

    All fields the file format's _metadata struct defines.

    Each metadata struct field is either "constant" or "generated" (respectively defined/matched by FileSourceConstantMetadataStructField or FileSourceGeneratedMetadataAttribute).

    Constant metadata columns are derived from the PartitionedFile instances a scan's FileIndex provides. Thus, a custom FileFormat that defines constant metadata columns will generally pair with a custom FileIndex that populates PartitionedFile with appropriate metadata values. By default, constant attribute values are obtained by a simple name-based lookup in PartitionedFile.extraConstantMetadataColumnValues, but implementations can override fileConstantMetadataExtractors to define custom extractors that have access to the entire PartitionedFile when deriving the column's value.

    Generated metadata columns map to a hidden/internal column the underlying reader provides, and so will often pair with a custom reader that can populate those columns. For example, ParquetFileFormat defines a "_metadata.row_index" column that relies on VectorizedParquetRecordReader to extract the actual row index values from the parquet scan.

    Definition Classes
    ParquetFileFormat → FileFormat
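
    These fields surface to queries through the hidden _metadata struct column; a sketch (placeholder path; _metadata.row_index availability depends on the Spark version):

      // Select a constant field (file_path) and Parquet's generated row_index.
      spark.read.parquet("/tmp/events")
        .select("_metadata.file_path", "_metadata.row_index")
        .show(truncate = false)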
  34. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  35. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  36. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  37. def prepareWrite(sparkSession: SparkSession, job: Job, options: Map[String, String], dataSchema: StructType): OutputWriterFactory

    Prepares a write job and returns an OutputWriterFactory. Client-side job preparation can be put here. For example, a user-defined output committer can be configured here by setting the output committer class via the spark.sql.sources.outputCommitterClass configuration.

    Definition Classes
    ParquetFileFormat → FileFormat
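
    A sketch of the committer configuration described above (the committer class name and path are placeholders):

      // Configure a user-defined output committer before writing.
      spark.conf.set(
        "spark.sql.sources.outputCommitterClass",
        "com.example.MyOutputCommitter")  // placeholder FQCN
      spark.range(10).write.parquet("/tmp/events")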
  38. def shortName(): String

    The string that represents the format that this data source provider uses. This is overridden by children to provide a nice alias for the data source. For example:

    override def shortName(): String = "parquet"
    Definition Classes
    ParquetFileFormat → DataSourceRegister
    Since

    1.5.0
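
    The registered alias is what lets callers name the format without the fully qualified provider class (placeholder paths):

      // "parquet" resolves to ParquetFileFormat via DataSourceRegister.
      val df = spark.read.format("parquet").load("/tmp/events")
      df.write.format("parquet").save("/tmp/events_copy")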

  39. def supportBatch(sparkSession: SparkSession, schema: StructType): Boolean

    Returns whether the reader can return the rows as batch or not.

    Definition Classes
    ParquetFileFormat → FileFormat
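
    A direct-call sketch against the internal API; whether batches can be returned depends on the schema and on SQL configuration such as spark.sql.parquet.enableVectorizedReader:

      import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
      import org.apache.spark.sql.types._

      val schema = StructType(Seq(
        StructField("id", LongType),
        StructField("name", StringType)))
      val columnar = new ParquetFileFormat().supportBatch(spark, schema)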
  40. def supportDataType(dataType: DataType): Boolean

    Returns whether this format supports the given DataType in the read/write path. By default all data types are supported.

    Definition Classes
    ParquetFileFormat → FileFormat
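
    A direct-call sketch probing type support (internal API; the expected results are noted as comments, not guaranteed across versions):

      import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
      import org.apache.spark.sql.types.{CalendarIntervalType, StringType}

      val fmt = new ParquetFileFormat()
      fmt.supportDataType(StringType)            // expected: true
      fmt.supportDataType(CalendarIntervalType)  // expected: false for Parquet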
  41. def supportFieldName(name: String): Boolean

    Returns whether this format supports the given field name in the read/write path. By default all field names are supported.

    Definition Classes
    FileFormat
  42. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  43. def toString(): String
    Definition Classes
    ParquetFileFormat → AnyRef → Any
  44. def vectorTypes(requiredSchema: StructType, partitionSchema: StructType, sqlConf: SQLConf): Option[Seq[String]]

    Returns concrete column vector class names for each column to be used in a columnar batch if this format supports returning columnar batches.

    Definition Classes
    ParquetFileFormat → FileFormat
  45. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  46. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  47. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
