TextFileFormat

class TextFileFormat extends TextBasedFileFormat with DataSourceRegister

A data source for reading text files. The text files must be encoded as UTF-8.
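
For example, reading with this source yields a DataFrame with a single string column named value (the paths below are placeholders):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("text-format-example")
      .master("local[*]")
      .getOrCreate()

    // Each line of each input file becomes one row in the "value" column.
    val df = spark.read.format("text").load("/tmp/input")
    df.printSchema() // root
                     //  |-- value: string (nullable = true)

    // Writing requires a single string column, which "value" already is.
    df.write.format("text").save("/tmp/output")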

Linear Supertypes
DataSourceRegister, TextBasedFileFormat, FileFormat, AnyRef, Any

Instance Constructors

  1. new TextFileFormat()

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def buildReader(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]

    Returns a function that can be used to read a single file in as an Iterator of InternalRow.

    dataSchema

    The global data schema. It can be either specified by the user, or reconciled/merged from all underlying data files. If any partition columns are contained in the files, they are preserved in this schema.

    partitionSchema

    The schema of the partition column row that will be present in each PartitionedFile. These columns should be appended to the rows that are produced by the iterator.

    requiredSchema

    The schema of the data that should be output for each row. This may be a subset of the columns that are present in the file if column pruning has occurred.

    filters

    A set of filters that can optionally be used to reduce the number of rows output.

    options

    A set of string -> string configuration options.
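
    The sketch below is schematic, not Spark's actual implementation: it shows the usual shape of a buildReader override in a custom FileFormat. The reader function runs on executors, so the Hadoop configuration is broadcast first via org.apache.spark.util.SerializableConfiguration; the actual file reading is elided.

    override def buildReader(
        sparkSession: SparkSession,
        dataSchema: StructType,
        partitionSchema: StructType,
        requiredSchema: StructType,
        filters: Seq[Filter],
        options: Map[String, String],
        hadoopConf: Configuration): PartitionedFile => Iterator[InternalRow] = {
      // Broadcast the Hadoop configuration so the per-file reader function,
      // which is shipped to executors, can open files with it.
      val broadcastedConf = sparkSession.sparkContext.broadcast(
        new SerializableConfiguration(hadoopConf))
      (file: PartitionedFile) => {
        val conf = broadcastedConf.value.value
        // Open `file` with `conf`, apply `filters` where possible, and project
        // each row to `requiredSchema`; the reading itself is elided in this sketch.
        Iterator.empty
      }
    }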

    Definition Classes
    TextFileFormat → FileFormat
  6. def buildReaderWithPartitionValues(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]

    Exactly the same as buildReader except that the reader function returned by this method appends partition values to InternalRows produced by the reader function buildReader returns.

    Definition Classes
    FileFormat
  7. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  8. def createFileMetadataCol(): AttributeReference

    Create a file metadata struct column containing fields supported by the given file format.

    Definition Classes
    FileFormat
  9. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  11. def fileConstantMetadataExtractors: Map[String, (PartitionedFile) ⇒ Any]

    The extractors to use when deriving file-constant metadata columns for this file format.

    Implementations that define custom constant metadata columns can override this method to associate a custom extractor with a given metadata column name, when a simple name-based lookup in PartitionedFile.extraConstantMetadataColumnValues is not expressive enough; extractors have access to the entire PartitionedFile and can perform arbitrary computations.

    NOTE: Extractors are lazy, invoked only if the query actually selects their column at runtime.

    See also FileFormat.getFileConstantMetadataColumnValue.
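
    As a hypothetical sketch, a custom format could register an extractor for its own constant column; the column name "file_path_hash" below is illustrative, not a real Spark column:

    override def fileConstantMetadataExtractors: Map[String, PartitionedFile => Any] =
      super.fileConstantMetadataExtractors ++ Map(
        // Hypothetical custom column; the extractor can compute anything
        // from the PartitionedFile it receives.
        "file_path_hash" -> { (file: PartitionedFile) => file.filePath.toString.hashCode }
      )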

    Definition Classes
    FileFormat
  12. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  13. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  14. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  15. def inferSchema(sparkSession: SparkSession, options: Map[String, String], files: Seq[FileStatus]): Option[StructType]

    When possible, this method should return the schema of the given files. When the format does not support schema inference, or no valid files are given, it should return None; in these cases Spark will require the user to specify the schema manually.
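
    For instance, when inference is not available, the schema can be supplied explicitly; for the text source the data schema is a single StringType column:

    import org.apache.spark.sql.types.{StringType, StructField, StructType}

    val schema = StructType(Seq(StructField("value", StringType)))
    val df = spark.read.format("text").schema(schema).load("/tmp/input")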

    Definition Classes
    TextFileFormat → FileFormat
  16. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  17. def isSplitable(sparkSession: SparkSession, options: Map[String, String], path: Path): Boolean

    Returns whether a file with path could be split or not.

    Definition Classes
    TextFileFormat → TextBasedFileFormat → FileFormat
  18. def metadataSchemaFields: Seq[StructField]

    All fields the file format's _metadata struct defines.

    Each metadata struct field is either "constant" or "generated" (respectively defined/matched by FileSourceConstantMetadataStructField or FileSourceGeneratedMetadataAttribute).

    Constant metadata columns are derived from the PartitionedFile instances a scan's FileIndex provides. Thus, a custom FileFormat that defines constant metadata columns will generally pair with a custom FileIndex that populates PartitionedFile with appropriate metadata values. By default, constant attribute values are obtained by a simple name-based lookup in PartitionedFile.extraConstantMetadataColumnValues, but implementations can override fileConstantMetadataExtractors to define custom extractors that have access to the entire PartitionedFile when deriving the column's value.

    Generated metadata columns map to a hidden/internal column the underlying reader provides, and so will often pair with a custom reader that can populate those columns. For example, ParquetFileFormat defines a "_metadata.row_index" column that relies on VectorizedParquetRecordReader to extract the actual row index values from the parquet scan.
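
    For example, in recent Spark versions the hidden _metadata struct can be selected alongside the data columns (file_path and file_name are among its built-in constant fields):

    val withMeta = spark.read.format("text").load("/tmp/input")
      .select("value", "_metadata.file_path", "_metadata.file_name")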

    Definition Classes
    FileFormat
  19. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  20. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  21. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  22. def prepareWrite(sparkSession: SparkSession, job: Job, options: Map[String, String], dataSchema: StructType): OutputWriterFactory

    Prepares a write job and returns an OutputWriterFactory. Client-side job preparation can be put here. For example, a user-defined output committer can be configured here by setting the output committer class in the conf of spark.sql.sources.outputCommitterClass.
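
    A minimal sketch, assuming com.example.MyOutputCommitter is a user-provided committer class on the classpath (the class name is hypothetical):

    // Configure a custom output committer before writing.
    spark.conf.set(
      "spark.sql.sources.outputCommitterClass",
      "com.example.MyOutputCommitter")
    df.write.format("text").save("/tmp/output")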

    Definition Classes
    TextFileFormat → FileFormat
  23. def shortName(): String

    The string that represents the format that this data source provider uses. This is overridden by children to provide a nice alias for the data source. For example:

    override def shortName(): String = "parquet"
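
    For this class, shortName() returns "text", the alias used in spark.read.format("text") and df.write.format("text").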
    Definition Classes
    TextFileFormat → DataSourceRegister
    Since

    1.5.0

  24. def supportBatch(sparkSession: SparkSession, dataSchema: StructType): Boolean

    Returns whether this format supports returning columnar batch or not. If columnar batch output is requested, users shall supply FileFormat.OPTION_RETURNING_BATCH -> true in relation options when calling buildReaderWithPartitionValues. This should only be passed as true if it can actually be supported. For ParquetFileFormat and OrcFileFormat, passing this option is required.

    TODO: we should just have different traits for the different formats.

    Definition Classes
    FileFormat
  25. def supportDataType(dataType: DataType): Boolean

    Returns whether this format supports the given DataType in read/write path. By default all data types are supported.
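
    For TextFileFormat this is overridden to accept only StringType, since the text source exposes each file as a single string column.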

    Definition Classes
    TextFileFormat → FileFormat
  26. def supportFieldName(name: String): Boolean

    Returns whether this format supports the given field name in read/write path. By default all field names are supported.

    Definition Classes
    FileFormat
  27. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  28. def toString(): String
    Definition Classes
    TextFileFormat → AnyRef → Any
  29. def vectorTypes(requiredSchema: StructType, partitionSchema: StructType, sqlConf: SQLConf): Option[Seq[String]]

    Returns concrete column vector class names for each column to be used in a columnar batch if this format supports returning columnar batch.

    Definition Classes
    FileFormat
  30. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  31. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  32. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()

Inherited from DataSourceRegister

Inherited from TextBasedFileFormat

Inherited from FileFormat

Inherited from AnyRef

Inherited from Any
