package state
Type Members
- case class AcquiredThreadInfo() extends Product with Serializable
- abstract class BaseStateStoreRDD[T, U] extends RDD[U]
- class ByteArrayPair extends AnyRef
Mutable and reusable pair of byte arrays
- trait HDFSBackedStateStoreMap extends AnyRef
- class InvalidUnsafeRowException extends RuntimeException
An exception thrown when an invalid UnsafeRow is detected in state store.
- class NoPrefixHDFSBackedStateStoreMap extends HDFSBackedStateStoreMap
- class NoPrefixKeyStateEncoder extends RocksDBStateEncoder
Encodes/decodes UnsafeRows to versioned byte arrays. It uses the first byte of the generated byte array to store the version that describes how the row is encoded in the rest of the byte array. Currently, the default version is 0.
VERSION 0: [ VERSION (1 byte) | ROW (N bytes) ] The bytes of a UnsafeRow are written unmodified starting from offset 1 (offset 0 is the version byte of value 0). That is, if the unsafe row has N bytes, the generated byte array will be N+1 bytes.
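The version-0 layout above can be sketched as a plain byte-array transform. This is an illustrative model of the layout only, not the actual RocksDBStateEncoder API; the object and method names are hypothetical.

```scala
// Sketch of the VERSION 0 layout: a 1-byte version marker (0) followed by the
// raw row bytes, so an N-byte row becomes an (N+1)-byte array.
object VersionZeroLayout {
  val Version: Byte = 0

  def encode(rowBytes: Array[Byte]): Array[Byte] = {
    val out = new Array[Byte](rowBytes.length + 1)
    out(0) = Version                                        // offset 0: version byte
    System.arraycopy(rowBytes, 0, out, 1, rowBytes.length)  // row starts at offset 1
    out
  }

  def decode(encoded: Array[Byte]): Array[Byte] = {
    require(encoded(0) == Version, s"unsupported version ${encoded(0)}")
    java.util.Arrays.copyOfRange(encoded, 1, encoded.length)
  }
}
```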
- class PrefixKeyScanStateEncoder extends RocksDBStateEncoder
- class PrefixScannableHDFSBackedStateStoreMap extends HDFSBackedStateStoreMap
- trait ReadStateStore extends AnyRef
Base trait for a versioned key-value store which provides read operations. Each instance of a ReadStateStore represents a specific version of state data, and such instances are created through a StateStoreProvider. The abort method will be called when the task is completed; clean up the resources in that method.
IMPLEMENTATION NOTES:
- The implementation can throw an exception on calling the prefixScan method if the functionality is not yet supported by the implementation. Note that some stateful operations will not work with prefixScan functionality disabled.
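The cleanup contract above can be sketched as a try/finally pattern around the task body. The trait and helper below are stand-ins for illustration, not the actual ReadStateStore interface.

```scala
// Minimal stand-in for a read-only store whose abort() releases resources
// when the task completes (hypothetical names, not the Spark API).
trait ReadOnlyStore {
  def get(key: String): Option[String]
  def abort(): Unit
}

class MapBackedReadStore(data: Map[String, String]) extends ReadOnlyStore {
  var cleanedUp = false
  def get(key: String): Option[String] = data.get(key)
  def abort(): Unit = cleanedUp = true  // release resources exactly here
}

// Run a task body against the store, always calling abort() on completion.
def runTask(store: ReadOnlyStore)(body: ReadOnlyStore => Unit): Unit =
  try body(store) finally store.abort()
```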
- class ReadStateStoreRDD[T, U] extends BaseStateStoreRDD[T, U]
An RDD that allows computations to be executed against ReadStateStores. It uses the StateStoreCoordinator to get the locations of loaded state stores and uses those as the preferred locations.
- class RocksDB extends Logging
Class representing a RocksDB instance that checkpoints versions of data to DFS. After a set of updates, a new version can be committed by calling commit(). Any past version can be loaded by calling load(version).
- Note
This class is not thread-safe, so use it only from one thread.
- See also
RocksDBFileManager to see how the files are laid out in local disk and DFS.
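The load/commit cycle described above can be modeled with a toy in-memory versioned store: load(version) restores a past snapshot, mutations build the next version, and commit() checkpoints it. This mimics the semantics only; it is not the Spark RocksDB wrapper, and all names besides commit()/load(version) are illustrative.

```scala
import scala.collection.mutable

// Toy model of a versioned store with commit()/load(version) semantics.
class ToyVersionedStore {
  private val checkpoints = mutable.Map[Long, Map[String, String]](0L -> Map.empty)
  private var working = mutable.Map[String, String]()
  private var loadedVersion = 0L

  def load(version: Long): Unit = {
    working = mutable.Map(checkpoints(version).toSeq: _*)  // restore snapshot
    loadedVersion = version
  }
  def put(k: String, v: String): Unit = working(k) = v
  def get(k: String): Option[String] = working.get(k)

  def commit(): Long = {
    val newVersion = loadedVersion + 1
    checkpoints(newVersion) = working.toMap  // checkpoint the new version
    loadedVersion = newVersion
    newVersion
  }
}
```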
- case class RocksDBCheckpointMetadata(sstFiles: Seq[RocksDBSstFile], logFiles: Seq[RocksDBLogFile], numKeys: Long) extends Product with Serializable
Classes to represent metadata of checkpoints saved to DFS. Since this is converted to JSON, any changes to this MUST be backward-compatible.
- case class RocksDBConf(minVersionsToRetain: Int, compactOnCommit: Boolean, blockSizeKB: Long, blockCacheSizeMB: Long, lockAcquireTimeoutMs: Long, resetStatsOnLoad: Boolean, formatVersion: Int) extends Product with Serializable
Configurations for optimizing RocksDB
- compactOnCommit
Whether to compact RocksDB data before commit / checkpointing
- class RocksDBFileManager extends Logging
Class responsible for syncing RocksDB checkpoint files from local disk to DFS. For each version, the checkpoint is saved in a specific directory structure that allows successive versions to reuse SST data files and archived log files. This makes each commit incremental: only new SST files and archived log files generated by RocksDB are uploaded. The directory structures on local disk and in DFS are as follows.
Local checkpoint dir structure
------------------------------
RocksDB generates a number of files in the local checkpoint directory. The most important among them are the SST files; they are the actual log-structured data files. The rest of the files contain the metadata necessary for RocksDB to read the SST files and start from the checkpoint. Note that the SST files are hard links to files in RocksDB's working directory, so successive checkpoints can share some of the SST files. These SST files therefore have to be copied to a shared directory in DFS so that different committed versions can reference them.
We consider both SST files and archived log files as immutable files which can be shared between different checkpoints.
localCheckpointDir
 |
 +-- OPTIONS-000005
 +-- MANIFEST-000008
 +-- CURRENT
 +-- 00007.sst
 +-- 00011.sst
 +-- archive
 |   +-- 00008.log
 |   +-- 00013.log
 ...
DFS directory structure after saving to DFS as version 10
---------------------------------------------------------
The SST and archived log files are given unique file names and copied to the shared subdirectory. Every version maintains a mapping of local immutable file name to the unique file name in DFS. This mapping is saved in a JSON file (named metadata), which is zipped along with other checkpoint files into a single file [version].zip.
dfsRootDir
 |
 +-- SSTs
 |   +-- 00007-[uuid1].sst
 |   +-- 00011-[uuid2].sst
 +-- logs
 |   +-- 00008-[uuid3].log
 |   +-- 00013-[uuid4].log
 +-- 10.zip
 |   +-- metadata  <--- contains the mapping between 00007.sst and [uuid1].sst, and between 00008.log and [uuid3].log
 |   +-- OPTIONS-000005
 |   +-- MANIFEST-000008
 |   +-- CURRENT
 |   ...
 +-- 9.zip
 +-- 8.zip
 ...
Note the following.
- Each [version].zip is a complete description of all the data and metadata needed to recover a RocksDB instance at the corresponding version. The SST files and log files are not included in the zip files; they can be shared across different versions. This is unlike the [version].delta files of HDFSBackedStateStore, where previous delta files need to be read to recover a version.
- This is safe w.r.t. speculatively executed tasks running concurrently in different executors, as each task would upload a different copy of the generated immutable files and atomically update the [version].zip.
- Immutable files are identified uniquely based on their file name and file size.
- Immutable files can be reused only across adjacent checkpoints/versions.
- This class is thread-safe. Specifically, it is safe to concurrently delete old files from a different thread than the task thread saving files.
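The reuse rule above (immutable files identified by file name and size) can be sketched as a filter over the previous version's name mapping. Names here are illustrative, not the RocksDBFileManager API.

```scala
// An immutable file is identified by (file name, size); a file already present
// in the previous version's metadata mapping need not be re-uploaded.
case class LocalFile(name: String, sizeBytes: Long)

def filesToUpload(
    current: Seq[LocalFile],
    previousMapping: Map[(String, Long), String]  // (name, size) -> DFS name
): Seq[LocalFile] =
  current.filterNot(f => previousMapping.contains((f.name, f.sizeBytes)))
```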
- case class RocksDBFileManagerMetrics(filesCopied: Long, bytesCopied: Long, filesReused: Long, zipFileBytesUncompressed: Option[Long] = None) extends Product with Serializable
Metrics regarding RocksDB file sync between local and DFS.
- sealed trait RocksDBImmutableFile extends AnyRef
A RocksDBImmutableFile maintains a mapping between a local RocksDB file name and the name of its copy on DFS. Since these files are immutable, their DFS copies can be reused.
- case class RocksDBMetrics(numCommittedKeys: Long, numUncommittedKeys: Long, memUsageBytes: Long, totalSSTFilesBytes: Long, nativeOpsHistograms: Map[String, RocksDBNativeHistogram], lastCommitLatencyMs: Map[String, Long], filesCopied: Long, bytesCopied: Long, filesReused: Long, zipFileBytesUncompressed: Option[Long], nativeOpsMetrics: Map[String, Long]) extends Product with Serializable
Class to represent stats from each commit.
- case class RocksDBNativeHistogram(sum: Long, avg: Double, stddev: Double, median: Double, p95: Double, p99: Double, count: Long) extends Product with Serializable
Class to wrap RocksDB's native histogram
- sealed trait RocksDBStateEncoder extends AnyRef
- class StateSchemaCompatibilityChecker extends Logging
- case class StateSchemaNotCompatible(message: String) extends Exception with Product with Serializable
- trait StateStore extends ReadStateStore
Base trait for a versioned key-value store which provides both read and write operations. Each instance of a StateStore represents a specific version of state data, and such instances are created through a StateStoreProvider.
Unlike ReadStateStore, the abort method may not be called if the commit method succeeds in committing the change (hasCommitted returns true). Otherwise, the abort method will be called. Implementations should handle resource cleanup in both methods, and also need to guard against double resource cleanup.
- class StateStoreConf extends Serializable
A class that contains configuration parameters for StateStores.
- class StateStoreCoordinatorRef extends AnyRef
Reference to a StateStoreCoordinator that can be used to coordinate instances of StateStores across all the executors, and get their locations for job scheduling.
- trait StateStoreCustomMetric extends AnyRef
Name and description of custom implementation-specific metrics that a state store may wish to expose. Also provides SQLMetric instance to show the metric in UI and accumulate it at the query level.
- case class StateStoreCustomSizeMetric(name: String, desc: String) extends StateStoreCustomMetric with Product with Serializable
- case class StateStoreCustomSumMetric(name: String, desc: String) extends StateStoreCustomMetric with Product with Serializable
- case class StateStoreCustomTimingMetric(name: String, desc: String) extends StateStoreCustomMetric with Product with Serializable
- case class StateStoreId(checkpointRootLocation: String, operatorId: Long, partitionId: Int, storeName: String = StateStoreId.DEFAULT_STORE_NAME) extends Product with Serializable
Unique identifier for a bunch of keyed state data.
- checkpointRootLocation
Root directory where all the state data of a query is stored
- operatorId
Unique id of a stateful operator
- partitionId
Index of the partition of an operator's state data
- storeName
Optional, name of the store. Each partition can optionally use multiple state stores, but they have to be identified by distinct names.
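One plausible way the four id components could compose into a per-store checkpoint location is sketched below. This layout is an assumption for illustration; the actual path scheme is defined by Spark.

```scala
// Hypothetical composition of StateStoreId components into a checkpoint path.
case class StoreId(
    checkpointRootLocation: String,
    operatorId: Long,
    partitionId: Int,
    storeName: String = "default") {

  // Assumption: the default store name adds no extra path segment.
  def checkpointLocation: String = {
    val base = s"$checkpointRootLocation/$operatorId/$partitionId"
    if (storeName == "default") base else s"$base/$storeName"
  }
}
```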
- case class StateStoreMetrics(numKeys: Long, memoryUsedBytes: Long, customMetrics: Map[StateStoreCustomMetric, Long]) extends Product with Serializable
Metrics reported by a state store
- numKeys
Number of keys in the state store
- memoryUsedBytes
Memory used by the state store
- customMetrics
Custom implementation-specific metrics. The metrics reported through this must have the same name as those reported by StateStoreProvider.customMetrics.
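The naming contract above can be checked with a simple set difference between the reported metric names and the names the provider declares. The function name is illustrative.

```scala
// Any reported custom metric whose name is not declared by the provider's
// customMetrics violates the contract; return the offending names.
def undeclaredMetricNames(
    reported: Map[String, Long],
    declared: Set[String]): Set[String] =
  reported.keySet.diff(declared)
```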
- implicit class StateStoreOps[T] extends AnyRef
- trait StateStoreProvider extends AnyRef
Trait representing a provider that provides StateStore instances representing versions of state data.
The life cycle of a provider and its provided stores is as follows.
- A StateStoreProvider is created in an executor for each unique StateStoreId when the first batch of a streaming query is executed on the executor. All subsequent batches reuse this provider instance until the query is stopped.
- Every batch of streaming data requests a specific version of the state data by invoking getStore(version), which returns an instance of StateStore through which the required version of the data can be accessed. It is the responsibility of the provider to populate this store with context information like the schema of keys and values, etc.
- After the streaming query is stopped, the created provider instances are lazily disposed of.
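The "created once per unique id, reused by subsequent batches" part of the life cycle can be sketched with a concurrent cache. All names here are illustrative stand-ins, not the Spark provider machinery.

```scala
import java.util.concurrent.ConcurrentHashMap

// Stand-in for a real StateStoreProvider; illustrative only.
final class Provider(val id: String)

object ProviderCache {
  private val providers = new ConcurrentHashMap[String, Provider]()

  // Created on the first request for an id, reused by all later batches.
  def getOrCreate(id: String): Provider =
    providers.computeIfAbsent(id, (k: String) => new Provider(k))
}
```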
- case class StateStoreProviderId(storeId: StateStoreId, queryRunId: UUID) extends Product with Serializable
Unique identifier for a provider, used to identify when providers can be reused. Note that queryRunId is used to uniquely identify a provider, so that the same provider instance is not reused across query restarts.
- class StateStoreRDD[T, U] extends BaseStateStoreRDD[T, U]
An RDD that allows computations to be executed against StateStores. It uses the StateStoreCoordinator to get the locations of loaded state stores and uses those as the preferred locations.
- sealed trait StreamingAggregationStateManager extends Serializable
Base trait for state manager purposed to be used from streaming aggregations.
- abstract class StreamingAggregationStateManagerBaseImpl extends StreamingAggregationStateManager
- class StreamingAggregationStateManagerImplV1 extends StreamingAggregationStateManagerBaseImpl
The implementation of StreamingAggregationStateManager for state version 1. In state version 1, the schema of key and value in state is as follows:
- key: Same as key expressions.
- value: Same as input row attributes. The schema of value contains key expressions as well.
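The version-1 layout can be sketched with field-name lists: the value schema is the full input row, so key columns are stored twice. Field names are illustrative.

```scala
// Version 1: key schema = key expressions; value schema = full input row,
// which means the key column(s) are duplicated inside the value.
object V1Schemas {
  val keyFields = Seq("userId")
  val inputFields = Seq("userId", "count", "sum")

  val keySchema: Seq[String] = keyFields
  val valueSchema: Seq[String] = inputFields  // includes "userId" again
}
```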
- class StreamingAggregationStateManagerImplV2 extends StreamingAggregationStateManagerBaseImpl
The implementation of StreamingAggregationStateManager for state version 2. In state version 2, the schema of key and value in state is as follows:
- key: Same as key expressions.
- value: The diff between input row attributes and key expressions.
The schema of value is changed to optimize the memory/space usage in state by removing columns duplicated in the key-value pair. Hence key columns are excluded from the schema of value.
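The version-2 diff can likewise be sketched with field-name lists: the value keeps only non-key columns, and the full row is restored by re-joining key and value. Field names are illustrative.

```scala
// Version 2: value schema = input attributes minus key expressions;
// the full row is recovered by concatenating key and value columns.
object V2Schemas {
  val keyFields = Seq("userId")
  val inputFields = Seq("userId", "count", "sum")

  val valueSchema: Seq[String] = inputFields.filterNot(keyFields.contains)

  def restore(key: Seq[String], value: Seq[String]): Seq[String] = key ++ value
}
```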
- class StreamingSessionWindowHelper extends AnyRef
- sealed trait StreamingSessionWindowStateManager extends Serializable
- class StreamingSessionWindowStateManagerImplV1 extends StreamingSessionWindowStateManager with Logging
- class SymmetricHashJoinStateManager extends Logging
Helper class to manage state required by a single side of org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec. The interface of this class is basically that of a multi-map:
- Get: Returns an iterator of multiple values for a given key
- Append: Appends a new value to the given key
- Remove data by predicate: Drops any state using a predicate condition on keys or values
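The multi-map interface above can be sketched with an in-memory map from keys to value sequences. This is illustrative only, not the SymmetricHashJoinStateManager implementation.

```scala
import scala.collection.mutable

// In-memory sketch of the get / append / remove-by-predicate multi-map.
class JoinStateMultiMap[K, V] {
  private val state = mutable.Map[K, Vector[V]]()

  // Get: iterator over all values stored for the key
  def get(key: K): Iterator[V] = state.getOrElse(key, Vector.empty).iterator

  // Append: add one more value under the key
  def append(key: K, value: V): Unit =
    state(key) = state.getOrElse(key, Vector.empty) :+ value

  // Remove data by predicate on values; drop keys left empty
  def removeByValuePredicate(p: V => Boolean): Unit = {
    val updated = state.toMap.map { case (k, vs) => k -> vs.filterNot(p) }
    state.clear()
    updated.foreach { case (k, vs) => if (vs.nonEmpty) state(k) = vs }
  }
}
```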
- class UnsafeRowPair extends AnyRef
Mutable and reusable class for representing a pair of UnsafeRows.
- class WrappedReadStateStore extends ReadStateStore
Wraps the instance of StateStore to make the instance read-only.
Value Members
- object FlatMapGroupsWithStateExecHelper
- object HDFSBackedStateStoreMap
- object RocksDBCheckpointMetadata extends Serializable
Helper class for RocksDBCheckpointMetadata
- object RocksDBConf extends Serializable
- object RocksDBFileManagerMetrics extends Serializable
Metrics to return when requested but no operation has been performed.
- object RocksDBImmutableFile
- object RocksDBLoader extends Logging
A wrapper for RocksDB library loading using an uninterruptible thread, as the native RocksDB code will throw an error when interrupted.
- object RocksDBMetrics extends Serializable
- object RocksDBNativeHistogram extends Serializable
- object RocksDBStateEncoder
- object RocksDBStateStoreProvider
- object SchemaHelper
Helper classes for reading/writing state schema.
- object StateSchemaCompatibilityChecker
- object StateStore extends Logging
Companion object to StateStore that provides helper methods to create and retrieve stores by their unique ids. In addition, when a SparkContext is active (i.e. SparkEnv.get is not null), it also runs a periodic background task to do maintenance on the loaded stores. For each store, it uses the StateStoreCoordinator to ensure whether the current loaded instance of the store is the active instance. Accordingly, it either keeps it loaded and performs maintenance, or unloads the store.
- object StateStoreConf extends Serializable
- object StateStoreCoordinatorRef extends Logging
Helper object used to create reference to StateStoreCoordinator.
- object StateStoreId extends Serializable
- object StateStoreMetrics extends Serializable
- object StateStoreProvider
- object StateStoreProviderId extends Serializable
- object StreamingAggregationStateManager extends Logging with Serializable
- object StreamingSessionWindowStateManager extends Serializable
- object SymmetricHashJoinStateManager