class DelegatingLogStore extends LogStore with DeltaLogging
A delegating LogStore used to dynamically resolve the LogStore implementation based on the scheme of the path.
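For intuition, here is a minimal, self-contained sketch of the scheme-based dispatch this class performs: pick a per-scheme delegate from the path's URI scheme and fall back to a default, in the spirit of getDelegate(path). The SchemeHandler/SchemeRegistry names are illustrative, not Delta's internals; only org.apache.hadoop.fs.Path is assumed on the classpath.

```scala
import java.util.concurrent.ConcurrentHashMap
import org.apache.hadoop.fs.Path

// Illustrative only: a handler resolved per URI scheme, with a default fallback.
trait SchemeHandler { def name: String }

class SchemeRegistry(default: SchemeHandler) {
  private val handlers = new ConcurrentHashMap[String, SchemeHandler]()

  def register(scheme: String, handler: SchemeHandler): Unit =
    handlers.put(scheme.toLowerCase, handler)

  // Mirrors the spirit of DelegatingLogStore.getDelegate(path).
  def resolve(path: Path): SchemeHandler = {
    val scheme = Option(path.toUri.getScheme).map(_.toLowerCase)
    scheme.flatMap(s => Option(handlers.get(s))).getOrElse(default)
  }
}

object SchemeRegistryDemo extends App {
  val registry = new SchemeRegistry(new SchemeHandler { val name = "hdfs-default" })
  registry.register("s3a", new SchemeHandler { val name = "s3a-store" })
  println(registry.resolve(new Path("s3a://bucket/_delta_log/0.json")).name) // s3a-store
  println(registry.resolve(new Path("/local/_delta_log/0.json")).name)       // hdfs-default
}
```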
Inheritance
- DelegatingLogStore
- DeltaLogging
- DatabricksLogging
- DeltaProgressReporter
- LoggingShims
- Logging
- LogStore
- AnyRef
- Any
Instance Constructors
- new DelegatingLogStore(hadoopConf: Configuration)
Type Members
- implicit class LogStringContext extends AnyRef
- Definition Classes
- LoggingShims
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- def deltaAssert(check: => Boolean, name: String, msg: String, deltaLog: DeltaLog = null, data: AnyRef = null, path: Option[Path] = None): Unit
Helper method to check invariants in Delta code. Fails when running in tests; otherwise, records a delta assertion event and logs a warning.
- Attributes
- protected
- Definition Classes
- DeltaLogging
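As a hedged illustration of the behavior described above (fail hard under test, log and record otherwise), a simplified standalone version might look like this. The spark.testing system property and the plain stderr warning are assumptions standing in for Delta's real test detection and usage-event recording.

```scala
// Simplified sketch of the deltaAssert contract: throw in tests, warn otherwise.
object SoftAssertSketch {
  // Assumption: Spark sets the "spark.testing" system property in test runs.
  private def isTesting: Boolean = sys.props.contains("spark.testing")

  def softAssert(check: => Boolean, name: String, msg: String): Unit =
    if (!check) {
      if (isTesting) throw new AssertionError(s"Delta assertion [$name] failed: $msg")
      else System.err.println(s"WARN: Delta assertion [$name] failed: $msg")
      // The real method additionally records a delta assertion usage event.
    }
}
```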
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def getCommonTags(deltaLog: DeltaLog, tahoeId: String): Map[TagDefinition, String]
- Definition Classes
- DeltaLogging
- def getDelegate(path: Path): LogStore
- def getErrorData(e: Throwable): Map[String, Any]
- Definition Classes
- DeltaLogging
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def invalidateCache(): Unit
Invalidate any caching that the implementation may be using.
- Definition Classes
- DelegatingLogStore → LogStore
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def isPartialWriteVisible(path: Path, hadoopConf: Configuration): Boolean
Whether a partial write is visible when writing to path.
As this depends on the underlying file system implementation, we require path as input here in order to identify the underlying file system, even though in most cases a log store only deals with one file system.
The default value is only provided here for legacy reasons and will be removed. Any LogStore implementation should override this instead of relying on the default.
Note: the default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.
- Definition Classes
- DelegatingLogStore → LogStore
- def isPartialWriteVisible(path: Path): Boolean
Whether a partial write is visible when writing to path.
As this depends on the underlying file system implementation, we require path as input here in order to identify the underlying file system, even though in most cases a log store only deals with one file system.
The default value is only provided here for legacy reasons and will be removed. Any LogStore implementation should override this instead of relying on the default.
- Definition Classes
- DelegatingLogStore → LogStore
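A short usage sketch of how a caller might act on this flag; the strategy names are illustrative, and the import assumes the org.apache.spark.sql.delta.storage package this page documents.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.delta.storage.LogStore

// Choose a write strategy based on whether readers can observe partial files.
def chooseWriteStrategy(store: LogStore, path: Path, conf: Configuration): String =
  if (store.isPartialWriteVisible(path, conf)) "stage-then-rename" // partial files may leak
  else "direct-write"                                              // writes appear atomically
```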
- def isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def listFrom(path: Path, hadoopConf: Configuration): Iterator[FileStatus]
List the paths in the same directory that are lexicographically greater than or equal to (UTF-8 sorting) the given path. The result should also be sorted by the file name.
Note: the default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.
- Definition Classes
- DelegatingLogStore → LogStore
- def listFrom(path: Path): Iterator[FileStatus]
List the paths in the same directory that are lexicographically greater than or equal to (UTF-8 sorting) the given path. The result should also be sorted by the file name.
- Definition Classes
- DelegatingLogStore → LogStore
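For example, a caller can enumerate Delta log entries at or after a given version by constructing the starting file name and relying on the sorted-result contract above. The zero-padded 20-digit commit-file naming is an assumption about the Delta log layout; the LogStore import path is taken from this package's docs.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileStatus, Path}
import org.apache.spark.sql.delta.storage.LogStore

// List commit files in _delta_log at or after `fromVersion`.
def commitsFrom(store: LogStore, logPath: Path, fromVersion: Long,
                conf: Configuration): Iterator[FileStatus] = {
  val start = new Path(logPath, f"$fromVersion%020d.json") // assumed naming scheme
  store.listFrom(start, conf).filter(_.getPath.getName.endsWith(".json"))
}
```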
- def log: Logger
- Attributes
- protected
- Definition Classes
- Logging
- def logConsole(line: String): Unit
- Definition Classes
- DatabricksLogging
- def logDebug(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logDebug(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logDebug(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logError(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logError(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logInfo(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logInfo(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logName: String
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logTrace(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logTrace(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(entry: LogEntry, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logWarning(entry: LogEntry): Unit
- Attributes
- protected
- Definition Classes
- LoggingShims
- def logWarning(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def read(path: Path, hadoopConf: Configuration): Seq[String]
Load the given file and return a Seq of lines. The line break will be removed from each line. This method loads the entire file into memory; call readAsIterator if possible, as its implementation may be more efficient.
Note: the default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.
- Definition Classes
- DelegatingLogStore → LogStore
- def read(path: Path): Seq[String]
Load the given file and return a Seq of lines. The line break will be removed from each line. This method loads the entire file into memory; call readAsIterator if possible, as its implementation may be more efficient.
- Definition Classes
- DelegatingLogStore → LogStore
- final def read(fileStatus: FileStatus, hadoopConf: Configuration): Seq[String]
Load the given file represented by fileStatus and return a Seq of lines. The line break will be removed from each line.
Note: using a stale FileStatus may produce an incorrect result.
- Definition Classes
- LogStore
- def readAsIterator(path: Path, hadoopConf: Configuration): ClosableIterator[String]
Load the given file and return an iterator of lines. The line break will be removed from each line. The default implementation calls read to load the entire file into memory; an implementation should provide a more efficient approach if possible, for example by loading the file content on demand.
Note: the returned ClosableIterator should be closed when it is no longer used, to avoid a resource leak.
Note: the default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.
- Definition Classes
- DelegatingLogStore → LogStore
- def readAsIterator(path: Path): ClosableIterator[String]
Load the given file and return an iterator of lines. The line break will be removed from each line. The default implementation calls read to load the entire file into memory; an implementation should provide a more efficient approach if possible, for example by loading the file content on demand.
Note: the returned ClosableIterator should be closed when it is no longer used, to avoid a resource leak.
- Definition Classes
- DelegatingLogStore → LogStore
- def readAsIterator(fileStatus: FileStatus, hadoopConf: Configuration): ClosableIterator[String]
Load the file represented by the given fileStatus and return an iterator of lines. The line break will be removed from each line.
Note 1: the returned ClosableIterator should be closed when it is no longer used, to avoid a resource leak.
Note 2: using a stale FileStatus may produce an incorrect result.
- Definition Classes
- LogStore
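Since the notes above stress closing the returned iterator, here is a hedged usage sketch with try/finally. It assumes ClosableIterator[String] extends Iterator[String] with java.io.Closeable, and the LogStore import path is taken from this package's docs.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.delta.storage.LogStore

// Consume the iterator and always close it, even if processing fails.
def countLines(store: LogStore, path: Path, conf: Configuration): Long = {
  val lines = store.readAsIterator(path, conf)
  try lines.foldLeft(0L)((n, _) => n + 1)
  finally lines.close()
}
```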
- def recordDeltaEvent(deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty, data: AnyRef = null, path: Option[Path] = None): Unit
Used to record the occurrence of a single event or to report detailed, operation-specific statistics.
- path
Used to log the path of the Delta table when deltaLog is null.
- Attributes
- protected
- Definition Classes
- DeltaLogging
- def recordDeltaOperation[A](deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: => A): A
Used to report the duration as well as the success or failure of an operation on a deltaLog.
- Attributes
- protected
- Definition Classes
- DeltaLogging
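A minimal standalone sketch of the report-duration-and-outcome pattern this method implements; the real method additionally attaches opType tags and Delta-specific context.

```scala
// Time a thunk and report duration plus success/failure, rethrowing on error.
def timedOperation[A](opType: String)(thunk: => A): A = {
  val start = System.nanoTime()
  try {
    val result = thunk
    println(f"$opType succeeded in ${(System.nanoTime() - start) / 1e6}%.1f ms")
    result
  } catch {
    case e: Throwable =>
      println(f"$opType failed in ${(System.nanoTime() - start) / 1e6}%.1f ms: ${e.getMessage}")
      throw e
  }
}
```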
- def recordDeltaOperationForTablePath[A](tablePath: String, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: => A): A
Used to report the duration as well as the success or failure of an operation on a tablePath.
- Attributes
- protected
- Definition Classes
- DeltaLogging
- def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
- Definition Classes
- DatabricksLogging
- def recordFrameProfile[T](group: String, name: String)(thunk: => T): T
- Attributes
- protected
- Definition Classes
- DeltaLogging
- def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = METRIC_OPERATION_DURATION, silent: Boolean = true)(thunk: => S): S
- Definition Classes
- DatabricksLogging
- def recordProductEvent(metric: MetricDefinition with CentralizableMetric, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
- Definition Classes
- DatabricksLogging
- def recordProductUsage(metric: MetricDefinition with CentralizableMetric, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
- Definition Classes
- DatabricksLogging
- def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
- Definition Classes
- DatabricksLogging
- def resolvePathOnPhysicalStorage(path: Path, hadoopConf: Configuration): Path
Resolve the fully qualified path for the given path.
Note: the default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.
- Definition Classes
- DelegatingLogStore → LogStore
- def resolvePathOnPhysicalStorage(path: Path): Path
Resolve the fully qualified path for the given path.
- Definition Classes
- DelegatingLogStore → LogStore
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- def withStatusCode[T](statusCode: String, defaultMessage: String, data: Map[String, Any] = Map.empty)(body: => T): T
Report a log to indicate some command is running.
- Definition Classes
- DeltaProgressReporter
- def write(path: Path, actions: Iterator[String], overwrite: Boolean, hadoopConf: Configuration): Unit
Write the given actions to the given path, with or without overwrite as indicated. The implementation must throw a java.nio.file.FileAlreadyExistsException if the file already exists and overwrite = false. Furthermore, the implementation must ensure that the entire file is made visible atomically; that is, it should not generate partial files.
Note: the default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.
- Definition Classes
- DelegatingLogStore → LogStore
- def write(path: Path, actions: Iterator[String], overwrite: Boolean): Unit
Write the given actions to the given path, with or without overwrite as indicated. The implementation must throw a java.nio.file.FileAlreadyExistsException if the file already exists and overwrite = false. Furthermore, the implementation must ensure that the entire file is made visible atomically; that is, it should not generate partial files.
- Definition Classes
- DelegatingLogStore → LogStore
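The overwrite = false contract above is what makes optimistic commits possible: concurrent writers race on the same commit file and exactly one wins. A hedged sketch of that put-if-absent pattern, assuming the LogStore import path shown by this page:

```scala
import java.nio.file.FileAlreadyExistsException
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.delta.storage.LogStore

// Put-if-absent commit attempt: returns false if another writer got there first.
def tryCommit(store: LogStore, commitFile: Path, actions: Seq[String],
              conf: Configuration): Boolean =
  try {
    store.write(commitFile, actions.iterator, overwrite = false, conf)
    true
  } catch {
    case _: FileAlreadyExistsException => false // lost the race for this version
  }
```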
Deprecated Value Members
- final def listFrom(path: String): Iterator[FileStatus]
List the paths in the same directory that are lexicographically greater than or equal to (UTF-8 sorting) the given path. The result should also be sorted by the file name.
- Definition Classes
- LogStore
- Annotations
- @deprecated
- Deprecated
call the method that asks for a Hadoop Configuration object instead
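A hedged migration sketch from the deprecated String overloads to the Configuration-taking ones, so Hadoop file system settings supplied through DataFrame options take effect; the helper name is hypothetical.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileStatus, Path}
import org.apache.spark.sql.delta.storage.LogStore

// Before (deprecated): store.listFrom(rawPath)
// After: pass a Path plus the Hadoop Configuration carrying per-query settings.
def migratedListFrom(store: LogStore, rawPath: String,
                     conf: Configuration): Iterator[FileStatus] =
  store.listFrom(new Path(rawPath), conf)
```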
- final def read(path: String): Seq[String]
Load the given file and return a Seq of lines. The line break will be removed from each line. This method loads the entire file into memory; call readAsIterator if possible, as its implementation may be more efficient.
- Definition Classes
- LogStore
- Annotations
- @deprecated
- Deprecated
call the method that asks for a Hadoop Configuration object instead
- final def readAsIterator(path: String): ClosableIterator[String]
Load the given file and return an iterator of lines. The line break will be removed from each line. The default implementation calls read to load the entire file into memory; an implementation should provide a more efficient approach if possible, for example by loading the file content on demand.
- Definition Classes
- LogStore
- Annotations
- @deprecated
- Deprecated
call the method that asks for a Hadoop Configuration object instead
- final def write(path: String, actions: Iterator[String]): Unit
Write the given actions to the given path without overwriting any existing file. The implementation must throw a java.nio.file.FileAlreadyExistsException if the file already exists. Furthermore, the implementation must ensure that the entire file is made visible atomically; that is, it should not generate partial files.
- Definition Classes
- LogStore
- Annotations
- @deprecated
- Deprecated
call the method that asks for a Hadoop Configuration object instead