case class CatalogTable(identifier: TableIdentifier, tableType: CatalogTableType, storage: CatalogStorageFormat, schema: StructType, provider: Option[String] = None, partitionColumnNames: Seq[String] = Seq.empty, bucketSpec: Option[BucketSpec] = None, owner: String = "", createTime: Long = System.currentTimeMillis, lastAccessTime: Long = -1, createVersion: String = "", properties: Map[String, String] = Map.empty, stats: Option[CatalogStatistics] = None, viewText: Option[String] = None, comment: Option[String] = None, unsupportedFeatures: Seq[String] = Seq.empty, tracksPartitionsInCatalog: Boolean = false, schemaPreservesCase: Boolean = true, ignoredProperties: Map[String, String] = Map.empty, viewOriginalText: Option[String] = None) extends Product with Serializable
A table defined in the catalog.
Note that Hive's metastore also tracks skewed columns. We should consider adding that in the future once we have a better understanding of how we want to handle skewed columns.
- provider
the name of the data source provider for this table, e.g. parquet, json, etc. Can be None if this table is a view; should be "hive" for Hive serde tables.
- createVersion
records the version of Spark that created this table's metadata. The default is an empty string; the value is expected to be read from the catalog or filled in by ExternalCatalog.createTable. For temporary views, the value is empty.
- unsupportedFeatures
is a list of string descriptions of features that are used by the underlying table but not yet supported by Spark SQL.
- tracksPartitionsInCatalog
whether this table's partition metadata is stored in the catalog. If false, it is inferred automatically based on file structure.
- schemaPreservesCase
Whether the schema resolved for this table is case-sensitive. When using a Hive metastore, this flag is set to false if a case-sensitive schema could not be read from the table properties. It is used to trigger case-sensitive schema inference at query time, when configured.
- ignoredProperties
is a set of table properties that are used by the underlying table but ignored by Spark SQL.
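The constructor above has only four required fields (identifier, tableType, storage, schema); everything else is defaulted. As a hedged sketch of that shape, here is a minimal self-contained stand-in with simplified field types (not the real Spark classes):

```scala
// Simplified stand-in for CatalogTable: four required fields, the rest
// defaulted, mirroring the shape of the real constructor.
case class MiniCatalogTable(
    identifier: String,
    tableType: String,
    storage: Map[String, String],
    schema: Seq[(String, String)],
    provider: Option[String] = None,
    partitionColumnNames: Seq[String] = Seq.empty,
    owner: String = "",
    createTime: Long = System.currentTimeMillis,
    lastAccessTime: Long = -1,
    properties: Map[String, String] = Map.empty,
    comment: Option[String] = None)

// Only the required fields must be supplied; named arguments override
// individual defaults.
val t = MiniCatalogTable(
  identifier = "analytics.events",
  tableType = "MANAGED",
  storage = Map.empty,
  schema = Seq("id" -> "int", "dt" -> "string"),
  provider = Some("parquet"),
  partitionColumnNames = Seq("dt"))

println(t.lastAccessTime) // -1
```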
- By Inheritance
- CatalogTable
- Serializable
- Product
- Equals
- AnyRef
- Any
Instance Constructors
- new CatalogTable(identifier: TableIdentifier, tableType: CatalogTableType, storage: CatalogStorageFormat, schema: StructType, provider: Option[String] = None, partitionColumnNames: Seq[String] = Seq.empty, bucketSpec: Option[BucketSpec] = None, owner: String = "", createTime: Long = System.currentTimeMillis, lastAccessTime: Long = -1, createVersion: String = "", properties: Map[String, String] = Map.empty, stats: Option[CatalogStatistics] = None, viewText: Option[String] = None, comment: Option[String] = None, unsupportedFeatures: Seq[String] = Seq.empty, tracksPartitionsInCatalog: Boolean = false, schemaPreservesCase: Boolean = true, ignoredProperties: Map[String, String] = Map.empty, viewOriginalText: Option[String] = None)
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- val bucketSpec: Option[BucketSpec]
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- val comment: Option[String]
- val createTime: Long
- val createVersion: String
- def dataSchema: StructType
schema of this table's data columns
- def database: String
Return the database this table was specified to belong to, assuming it exists.
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- val identifier: TableIdentifier
- val ignoredProperties: Map[String, String]
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- val lastAccessTime: Long
- def location: URI
Return the table location, assuming it is specified.
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- val owner: String
- val partitionColumnNames: Seq[String]
- def partitionSchema: StructType
schema of this table's partition columns
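dataSchema and partitionSchema are both derived from schema and partitionColumnNames. A sketch of that split, using a simplified field type rather than Spark's StructType (in the real class, the partition columns are additionally expected to come after the data columns):

```scala
// In this simplified model, a field is just (name, dataType).
case class Field(name: String, dataType: String)

case class SchemaView(schema: Seq[Field], partitionColumnNames: Seq[String]) {
  // partitionSchema keeps only the fields named as partition columns...
  def partitionSchema: Seq[Field] =
    schema.filter(f => partitionColumnNames.contains(f.name))
  // ...and dataSchema keeps everything else.
  def dataSchema: Seq[Field] =
    schema.filterNot(f => partitionColumnNames.contains(f.name))
}

val v = SchemaView(
  schema = Seq(Field("id", "int"), Field("name", "string"), Field("dt", "string")),
  partitionColumnNames = Seq("dt"))

println(v.dataSchema.map(_.name))      // List(id, name)
println(v.partitionSchema.map(_.name)) // List(dt)
```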
- def productElementNames: Iterator[String]
- Definition Classes
- Product
- val properties: Map[String, String]
- val provider: Option[String]
- def qualifiedName: String
Return the fully qualified name of this table, assuming the database was specified.
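Both database and qualifiedName assume the database part of the identifier is present, and fail otherwise. A simplified sketch with stand-in types (not the real TableIdentifier):

```scala
case class Ident(table: String, database: Option[String])

// Mirrors CatalogTable.database: fails if no database was specified.
def database(id: Ident): String =
  id.database.getOrElse(sys.error(s"table ${id.table} did not specify database"))

// Mirrors CatalogTable.qualifiedName: "db.table".
def qualifiedName(id: Ident): String = s"${database(id)}.${id.table}"

println(qualifiedName(Ident("events", Some("analytics")))) // analytics.events
```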
- val schema: StructType
- val schemaPreservesCase: Boolean
- def simpleString: String
Readable string representation for the CatalogTable.
- val stats: Option[CatalogStatistics]
- val storage: CatalogStorageFormat
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- val tableType: CatalogTableType
- def toLinkedHashMap: LinkedHashMap[String, String]
- def toString(): String
- Definition Classes
- CatalogTable → AnyRef → Any
- val tracksPartitionsInCatalog: Boolean
- val unsupportedFeatures: Seq[String]
- def viewCatalogAndNamespace: Seq[String]
Return the current catalog and namespace (concatenated as a Seq[String]) at the time the view was created.
- val viewOriginalText: Option[String]
- def viewQueryColumnNames: Seq[String]
Return the output column names of the query that creates a view. The column names are used to resolve the view; this should be empty if the CatalogTable is not a view or was created by older versions of Spark (before 2.2.0).
- def viewReferredTempFunctionNames: Seq[String]
Return the temporary function names the current view refers to. Should be empty if the CatalogTable is not a temporary view or was created by older versions of Spark (before 3.1.0).
- def viewReferredTempViewNames: Seq[Seq[String]]
Return the temporary view names the current view refers to. Should be empty if the CatalogTable is not a temporary view or was created by older versions of Spark (before 3.1.0).
- def viewSQLConfigs: Map[String, String]
Return the SQL configs that were in effect when the view was created. The configs are applied when parsing and analyzing the view; this should be empty if the CatalogTable is not a view or was created by older versions of Spark (before 3.1.0).
- val viewText: Option[String]
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- def withNewStorage(locationUri: Option[URI] = storage.locationUri, inputFormat: Option[String] = storage.inputFormat, outputFormat: Option[String] = storage.outputFormat, compressed: Boolean = false, serde: Option[String] = storage.serde, properties: Map[String, String] = storage.properties): CatalogTable
Syntactic sugar to update a field in storage.
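Each parameter of withNewStorage defaults to the corresponding current storage value, so callers name only the fields they want to change. A self-contained sketch of the same pattern with simplified types (not the real CatalogStorageFormat):

```scala
case class StorageFormat(
    locationUri: Option[String],
    serde: Option[String],
    properties: Map[String, String])

case class Table(name: String, storage: StorageFormat) {
  // Each default refers to the current storage value, so a call site
  // overrides only the fields it cares about; everything else carries over.
  def withNewStorage(
      locationUri: Option[String] = storage.locationUri,
      serde: Option[String] = storage.serde,
      properties: Map[String, String] = storage.properties): Table =
    copy(storage = StorageFormat(locationUri, serde, properties))
}

val t = Table("events",
  StorageFormat(Some("/warehouse/events"), Some("lazy-simple"), Map("k" -> "v")))
val moved = t.withNewStorage(locationUri = Some("/archive/events"))

println(moved.storage.locationUri) // Some(/archive/events)
println(moved.storage.serde)       // Some(lazy-simple)
```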