class DelegatingLogStore extends LogStore with DeltaLogging

A delegating LogStore used to dynamically resolve the LogStore implementation based on the scheme of paths.
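
Below is a minimal usage sketch. It assumes DelegatingLogStore lives in the org.apache.spark.sql.delta.storage package and that any scheme-specific LogStore settings are already present in the Hadoop Configuration; the paths are hypothetical and purely illustrative.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.spark.sql.delta.storage.DelegatingLogStore

    // Build the store from a Hadoop Configuration; scheme-specific LogStore
    // settings (if any) travel in this configuration.
    val hadoopConf = new Configuration()
    val logStore = new DelegatingLogStore(hadoopConf)

    // The delegate is resolved from the path's URI scheme, so different schemes
    // may map to different LogStore implementations.
    val s3Delegate = logStore.getDelegate(new Path("s3a://bucket/table/_delta_log"))
    val localDelegate = logStore.getDelegate(new Path("file:///tmp/table/_delta_log"))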

Linear Supertypes

DeltaLogging, DatabricksLogging, DeltaProgressReporter, LoggingShims, Logging, LogStore, AnyRef, Any

Instance Constructors

  1. new DelegatingLogStore(hadoopConf: Configuration)

Type Members

  1. implicit class LogStringContext extends AnyRef
    Definition Classes
    LoggingShims

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  6. def deltaAssert(check: ⇒ Boolean, name: String, msg: String, deltaLog: DeltaLog = null, data: AnyRef = null, path: Option[Path] = None): Unit

    Helper method to check invariants in Delta code. Fails when running in tests; otherwise records a delta assertion event and logs a warning.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  7. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  8. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  9. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  10. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  11. def getCommonTags(deltaLog: DeltaLog, tahoeId: String): Map[TagDefinition, String]
    Definition Classes
    DeltaLogging
  12. def getDelegate(path: Path): LogStore
  13. def getErrorData(e: Throwable): Map[String, Any]
    Definition Classes
    DeltaLogging
  14. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  15. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  16. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  17. def invalidateCache(): Unit

    Invalidate any caching that the implementation may be using.

    Definition Classes
    DelegatingLogStore → LogStore
  18. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  19. def isPartialWriteVisible(path: Path, hadoopConf: Configuration): Boolean

    Whether a partial write is visible when writing to path.

    Because this depends on the underlying file system implementation, the path is required here to identify that file system, even though in most cases a log store deals with only one file system.

    The default value is provided here only for legacy reasons and will be removed; any LogStore implementation should override this method instead of relying on the default.

    Note: The default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.

    Definition Classes
    DelegatingLogStore → LogStore
  20. def isPartialWriteVisible(path: Path): Boolean

    Whether a partial write is visible when writing to path.

    Because this depends on the underlying file system implementation, the path is required here to identify that file system, even though in most cases a log store deals with only one file system.

    The default value is provided here only for legacy reasons and will be removed; any LogStore implementation should override this method instead of relying on the default.

    Definition Classes
    DelegatingLogStore → LogStore
  21. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  22. def listFrom(path: Path, hadoopConf: Configuration): Iterator[FileStatus]

    List the paths in the same directory that are lexicographically greater than or equal to (by UTF-8 sorting) the given path. The result should also be sorted by file name.

    Note: The default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.

    Definition Classes
    DelegatingLogStore → LogStore
  23. def listFrom(path: Path): Iterator[FileStatus]

    List the paths in the same directory that are lexicographically greater than or equal to (by UTF-8 sorting) the given path. The result should also be sorted by file name.

    Definition Classes
    DelegatingLogStore → LogStore
  24. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  25. def logConsole(line: String): Unit
    Definition Classes
    DatabricksLogging
  26. def logDebug(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  27. def logDebug(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  28. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  29. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  30. def logError(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  31. def logError(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  32. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. def logInfo(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  35. def logInfo(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  36. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  38. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  39. def logTrace(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  40. def logTrace(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  41. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  43. def logWarning(entry: LogEntry, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  44. def logWarning(entry: LogEntry): Unit
    Attributes
    protected
    Definition Classes
    LoggingShims
  45. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  46. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  47. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  48. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  49. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  50. def read(path: Path, hadoopConf: Configuration): Seq[String]

    Load the given file and return a Seq of lines. The line break is removed from each line. This method loads the entire file into memory; call readAsIterator if possible, as its implementation may be more efficient.

    Note: The default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.

    Definition Classes
    DelegatingLogStore → LogStore
  51. def read(path: Path): Seq[String]

    Load the given file and return a Seq of lines. The line break is removed from each line. This method loads the entire file into memory; call readAsIterator if possible, as its implementation may be more efficient.

    Definition Classes
    DelegatingLogStore → LogStore
  52. final def read(fileStatus: FileStatus, hadoopConf: Configuration): Seq[String]

    Load the file represented by fileStatus and return a Seq of lines. The line break is removed from each line.

    Note: Using a stale FileStatus may return an incorrect result.

    Definition Classes
    LogStore
  53. def readAsIterator(path: Path, hadoopConf: Configuration): ClosableIterator[String]

    Load the given file and return an iterator of lines. The line break is removed from each line. The default implementation calls read to load the entire file into memory; an implementation should provide a more efficient approach if possible, for example by loading the file content on demand. A usage sketch appears after this member list.

    Note: The returned ClosableIterator should be closed when it is no longer needed, to avoid a resource leak.

    Note: The default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.

    Definition Classes
    DelegatingLogStore → LogStore
  54. def readAsIterator(path: Path): ClosableIterator[String]

    Load the given file and return an iterator of lines. The line break is removed from each line. The default implementation calls read to load the entire file into memory; an implementation should provide a more efficient approach if possible, for example by loading the file content on demand.

    Note: The returned ClosableIterator should be closed when it is no longer needed, to avoid a resource leak.

    Definition Classes
    DelegatingLogStore → LogStore
  55. def readAsIterator(fileStatus: FileStatus, hadoopConf: Configuration): ClosableIterator[String]

    Load the file represented by the given fileStatus and return an iterator of lines. The line break is removed from each line.

    Note 1: The returned ClosableIterator should be closed when it is no longer needed, to avoid a resource leak.

    Note 2: Using a stale FileStatus may return an incorrect result.

    Definition Classes
    LogStore
  56. def recordDeltaEvent(deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty, data: AnyRef = null, path: Option[Path] = None): Unit

    Used to record the occurrence of a single event or to report detailed, operation-specific statistics.

    path

    Used to log the path of the delta table when deltaLog is null.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  57. def recordDeltaOperation[A](deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A

    Used to report the duration as well as the success or failure of an operation on a deltaLog.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  58. def recordDeltaOperationForTablePath[A](tablePath: String, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: ⇒ A): A

    Used to report the duration as well as the success or failure of an operation on a tahoePath.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  59. def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  60. def recordFrameProfile[T](group: String, name: String)(thunk: ⇒ T): T
    Attributes
    protected
    Definition Classes
    DeltaLogging
  61. def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = METRIC_OPERATION_DURATION, silent: Boolean = true)(thunk: ⇒ S): S
    Definition Classes
    DatabricksLogging
  62. def recordProductEvent(metric: MetricDefinition with CentralizableMetric, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  63. def recordProductUsage(metric: MetricDefinition with CentralizableMetric, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  64. def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  65. def resolvePathOnPhysicalStorage(path: Path, hadoopConf: Configuration): Path

    Resolve the fully qualified path for the given path.

    Note: The default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.

    Definition Classes
    DelegatingLogStore → LogStore
  66. def resolvePathOnPhysicalStorage(path: Path): Path

    Resolve the fully qualified path for the given path.

    Definition Classes
    DelegatingLogStore → LogStore
  67. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  68. def toString(): String
    Definition Classes
    AnyRef → Any
  69. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  70. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  71. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  72. def withStatusCode[T](statusCode: String, defaultMessage: String, data: Map[String, Any] = Map.empty)(body: ⇒ T): T

    Log a message to indicate that some command is running.

    Definition Classes
    DeltaProgressReporter
  73. def write(path: Path, actions: Iterator[String], overwrite: Boolean, hadoopConf: Configuration): Unit

    Write the given actions to the given path, with or without overwrite as indicated. The implementation must throw a java.nio.file.FileAlreadyExistsException if the file already exists and overwrite = false. Furthermore, the implementation must ensure that the entire file is made visible atomically; that is, it must not generate partial files. A usage sketch appears after this member list.

    Note: The default implementation ignores the hadoopConf parameter for backward compatibility. Subclasses should override this method and use hadoopConf properly to support passing Hadoop file system configurations through DataFrame options.

    Definition Classes
    DelegatingLogStore → LogStore
  74. def write(path: Path, actions: Iterator[String], overwrite: Boolean): Unit

    Write the given actions to the given path, with or without overwrite as indicated. The implementation must throw a java.nio.file.FileAlreadyExistsException if the file already exists and overwrite = false. Furthermore, the implementation must ensure that the entire file is made visible atomically; that is, it must not generate partial files.

    Definition Classes
    DelegatingLogStore → LogStore
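
The following sketch exercises the hadoopConf-taking overloads of write, listFrom, and readAsIterator together, as referenced from the member entries above. It is illustrative only: it assumes the org.apache.spark.sql.delta.storage package location, the path and JSON payload are hypothetical, and the full Delta transaction protocol (version numbering, conflict resolution) is out of scope here.

    import java.nio.file.FileAlreadyExistsException
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.spark.sql.delta.storage.DelegatingLogStore

    val hadoopConf = new Configuration()
    val logStore = new DelegatingLogStore(hadoopConf)
    val commitFile = new Path("file:///tmp/table/_delta_log/00000000000000000000.json")

    // write with overwrite = false must be atomic and must fail with
    // FileAlreadyExistsException if the file already exists.
    try {
      logStore.write(commitFile, Iterator("""{"commitInfo":{}}"""), overwrite = false, hadoopConf)
    } catch {
      case _: FileAlreadyExistsException =>
        // Another writer created this file first; Delta callers typically
        // retry at the next version number.
        ()
    }

    // List log files at or after the given path, sorted by file name.
    logStore.listFrom(commitFile, hadoopConf).foreach(status => println(status.getPath))

    // readAsIterator returns a ClosableIterator that must be closed to avoid
    // leaking the underlying stream.
    val lines = logStore.readAsIterator(commitFile, hadoopConf)
    try {
      while (lines.hasNext) println(lines.next())
    } finally {
      lines.close()
    }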

Deprecated Value Members

  1. final def listFrom(path: String): Iterator[FileStatus]

    List the paths in the same directory that are lexicographically greater than or equal to (by UTF-8 sorting) the given path. The result should also be sorted by file name.

    Definition Classes
    LogStore
    Annotations
    @deprecated
    Deprecated

    Call the method that takes a Hadoop Configuration object instead.

  2. final def read(path: String): Seq[String]

    Load the given file and return a Seq of lines. The line break is removed from each line. This method loads the entire file into memory; call readAsIterator if possible, as its implementation may be more efficient.

    Definition Classes
    LogStore
    Annotations
    @deprecated
    Deprecated

    Call the method that takes a Hadoop Configuration object instead.

  3. final def readAsIterator(path: String): ClosableIterator[String]

    Load the given file and return an iterator of lines. The line break is removed from each line. The default implementation calls read to load the entire file into memory; an implementation should provide a more efficient approach if possible, for example by loading the file content on demand.

    Definition Classes
    LogStore
    Annotations
    @deprecated
    Deprecated

    Call the method that takes a Hadoop Configuration object instead.

  4. final def write(path: String, actions: Iterator[String]): Unit

    Write the given actions to the given path without overwriting any existing file. The implementation must throw a java.nio.file.FileAlreadyExistsException if the file already exists. Furthermore, the implementation must ensure that the entire file is made visible atomically; that is, it must not generate partial files.

    Definition Classes
    LogStore
    Annotations
    @deprecated
    Deprecated

    Call the method that takes a Hadoop Configuration object instead.
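
The deprecation notes above all point the same way: replace the String-based overloads with the Path-based overloads that accept a Hadoop Configuration. A minimal before/after sketch, under the same assumptions as the earlier examples (hypothetical path and payload):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.spark.sql.delta.storage.DelegatingLogStore

    val hadoopConf = new Configuration()
    val logStore = new DelegatingLogStore(hadoopConf)
    val actions = Iterator("""{"commitInfo":{}}""")

    // Before (deprecated): the String overload cannot carry a Hadoop Configuration.
    // logStore.write("file:///tmp/table/_delta_log/00000000000000000001.json", actions)

    // After (preferred): the Path overload threads hadoopConf through, so Hadoop
    // file system configurations passed via DataFrame options are honored. The
    // deprecated overload never overwrites, so overwrite = false preserves its
    // semantics.
    logStore.write(
      new Path("file:///tmp/table/_delta_log/00000000000000000001.json"),
      actions,
      overwrite = false,
      hadoopConf)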
