| Package | Description |
|---|---|
| org.apache.hadoop.fs.s3a | S3A Filesystem. |
| org.apache.hadoop.fs.s3a.commit | Support for committing the output of analytics jobs directly to S3. |
| org.apache.hadoop.fs.s3a.impl | Implementation classes private to the S3A store. |
| org.apache.hadoop.fs.s3a.s3guard | Classes related to S3Guard: a feature of S3A to mask the eventual consistency behavior of S3 and optimize access patterns by coordinating with a strongly consistent external store for file system metadata. |
| org.apache.hadoop.fs.s3a.tools | S3A command line tools independent of S3Guard. |
| Modifier and Type | Method and Description |
|---|---|
| BulkOperationState | `WriteOperationHelper.initiateCommitOperation(org.apache.hadoop.fs.Path path)`<br>Initiate a commit operation through any metastore. |
| BulkOperationState | `WriteOperations.initiateCommitOperation(org.apache.hadoop.fs.Path path)`<br>Initiate a commit operation through any metastore. |
| BulkOperationState | `WriteOperationHelper.initiateOperation(org.apache.hadoop.fs.Path path, BulkOperationState.OperationType operationType)`<br>Initiate a commit operation through any metastore. |
| BulkOperationState | `WriteOperations.initiateOperation(org.apache.hadoop.fs.Path path, BulkOperationState.OperationType operationType)`<br>Initiate a commit operation through any metastore. |
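The `initiate*` methods above hand back a `BulkOperationState` that the caller threads through subsequent metastore updates so duplicate work within one logical operation can be skipped. A minimal sketch of that lifecycle, using simplified stand-in classes rather than the real S3A types (`CommitStateSketch`, `markWritten`, and `commitAll` are illustrative names, not Hadoop API):

```java
import java.io.Closeable;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CommitStateSketch {

    /** Simplified stand-in for BulkOperationState: it tracks paths already
     *  updated within one logical operation so repeats can be skipped. */
    static class BulkOperationState implements Closeable {
        private final Set<String> written = new HashSet<>();

        /** Record a path; returns false if it was already written in this operation. */
        boolean markWritten(String path) {
            return written.add(path);
        }

        @Override
        public void close() {
            written.clear();
        }
    }

    /** Commit a batch of paths, doing one (simulated) metastore write per distinct path. */
    static int commitAll(List<String> paths) {
        int writes = 0;
        // One state per operation, closed when the operation completes.
        try (BulkOperationState state = new BulkOperationState()) {
            for (String path : paths) {
                if (state.markWritten(path)) {
                    writes++; // a real metastore update would happen here
                }
            }
        }
        return writes;
    }

    public static void main(String[] args) {
        System.out.println(commitAll(List.of("/a/x", "/a/y", "/a/x")));
    }
}
```

The point of the pattern is that the state object, not the store, carries the per-operation memory, so any `MetadataStore` implementation can stay stateless between calls.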
| Modifier and Type | Method and Description |
|---|---|
| com.amazonaws.services.s3.model.CompleteMultipartUploadResult | `WriteOperationHelper.commitUpload(String destKey, String uploadId, List<com.amazonaws.services.s3.model.PartETag> partETags, long length, BulkOperationState operationState)`<br>This completes a multipart upload to the destination key via finalizeMultipartUpload(). |
| com.amazonaws.services.s3.model.CompleteMultipartUploadResult | `WriteOperations.commitUpload(String destKey, String uploadId, List<com.amazonaws.services.s3.model.PartETag> partETags, long length, BulkOperationState operationState)`<br>This completes a multipart upload to the destination key via finalizeMultipartUpload(). |
| void | `S3AFileSystem.removeKeys(List<com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion> keysToDelete, boolean deleteFakeDir, BulkOperationState operationState)`<br>Invoke S3AFileSystem.removeKeysS3(List, boolean, boolean) with handling of MultiObjectDeleteException. |
| void | `WriteOperationHelper.revertCommit(String destKey, BulkOperationState operationState)`<br>Revert a commit by deleting the file. |
| void | `WriteOperations.revertCommit(String destKey, BulkOperationState operationState)`<br>Revert a commit by deleting the file. |
| Modifier and Type | Method and Description |
|---|---|
| void | `CommitOperations.revertCommit(SinglePendingCommit commit, BulkOperationState operationState)`<br>Revert a pending commit by deleting the destination. |
| Modifier and Type | Method and Description |
|---|---|
| BulkOperationState | `ActiveOperationContext.getBulkOperationState()` |
| Modifier and Type | Method and Description |
|---|---|
| void | `OperationCallbacks.deleteObjectAtPath(org.apache.hadoop.fs.Path path, String key, boolean isFile, BulkOperationState operationState)`<br>Delete an object, also updating the metastore. |
| com.amazonaws.services.s3.model.DeleteObjectsResult | `OperationCallbacks.removeKeys(List<com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion> keysToDelete, boolean deleteFakeDir, List<org.apache.hadoop.fs.Path> undeletedObjectsOnFailure, BulkOperationState operationState, boolean quiet)`<br>Remove keys from the store, updating the metastore on a partial delete represented as a MultiObjectDeleteException failure by deleting all those entries successfully deleted and then rethrowing the MultiObjectDeleteException. |
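The `removeKeys` contract above is subtle: on a partial failure, the metastore entries for the keys that *were* deleted must still be purged before the failure is rethrown, so the metastore never claims objects exist that S3 has already lost. A sketch of that contract with plain sets standing in for S3 and the metastore (all class and method names here are illustrative, not the real S3A API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PartialDeleteSketch {

    /** Hypothetical stand-in for the AWS SDK's MultiObjectDeleteException. */
    static class MultiObjectDeleteException extends RuntimeException {
        final List<String> undeleted;
        MultiObjectDeleteException(List<String> undeleted) {
            this.undeleted = undeleted;
        }
    }

    /**
     * Delete each key from the (simulated) object store; keep the metastore
     * consistent for every successful delete, then rethrow the partial failure.
     */
    static void removeKeys(Set<String> store, Set<String> metastore, List<String> keys) {
        List<String> undeleted = new ArrayList<>();
        for (String key : keys) {
            if (store.remove(key)) {
                metastore.remove(key);   // successful delete: purge metastore entry too
            } else {
                undeleted.add(key);      // simulate a per-key delete failure
            }
        }
        if (!undeleted.isEmpty()) {
            throw new MultiObjectDeleteException(undeleted);
        }
    }

    /** Demo: one key deletes cleanly, one fails; return the rethrown failures. */
    static List<String> demo() {
        Set<String> store = new HashSet<>(Arrays.asList("a", "b"));
        Set<String> metastore = new HashSet<>(Arrays.asList("a", "b"));
        try {
            removeKeys(store, metastore, Arrays.asList("a", "c"));
            return Collections.emptyList();
        } catch (MultiObjectDeleteException e) {
            // "a" was removed from both store and metastore; "c" is rethrown.
            return e.undeleted;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```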
| Constructor and Description |
|---|
| `ActiveOperationContext(long operationId, S3AStatisticsContext statisticsContext, BulkOperationState bulkOperationState)` |
| `MultiObjectDeleteSupport(StoreContext context, BulkOperationState operationState)`<br>Initiate with a store context. |
| Modifier and Type | Method and Description |
|---|---|
| BulkOperationState | `RenameTracker.getOperationState()` |
| default BulkOperationState | `MetadataStore.initiateBulkWrite(BulkOperationState.OperationType operation, org.apache.hadoop.fs.Path dest)`<br>Initiate a bulk update and create an operation state for it. |
| static BulkOperationState | `S3Guard.initiateBulkWrite(MetadataStore metastore, BulkOperationState.OperationType operation, org.apache.hadoop.fs.Path path)`<br>Initiate a bulk write and create an operation state for it. |
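A bulk write state is what lets `addAncestors`-style calls avoid re-creating directory entries they already wrote earlier in the same rename or commit. A sketch of that dedup under simplified stand-ins (string paths instead of `org.apache.hadoop.fs.Path`, and a toy `MetadataStore`; none of these are the real S3Guard signatures):

```java
import java.util.HashSet;
import java.util.Set;

public class AncestorSketch {

    /** Stand-in bulk write state: remembers ancestors created in this operation. */
    static class BulkOperationState {
        final Set<String> ancestorsCreated = new HashSet<>();
    }

    /** Toy metastore that counts how many directory entries it actually writes. */
    static class MetadataStore {
        int writes = 0;

        /** Add all new ancestors of a path as directories, once per operation. */
        void addAncestors(String path, BulkOperationState state) {
            int slash;
            // Walk up from the parent toward the root.
            while ((slash = path.lastIndexOf('/')) > 0) {
                path = path.substring(0, slash);
                if (state.ancestorsCreated.add(path)) {
                    writes++; // one metastore write per ancestor not yet created
                }
            }
        }
    }

    /** Two files under the same directory: ancestors are written only once. */
    static int demo() {
        MetadataStore ms = new MetadataStore();
        BulkOperationState state = new BulkOperationState();
        ms.addAncestors("/a/b/file1", state); // creates entries for /a/b and /a
        ms.addAncestors("/a/b/file2", state); // both ancestors known: no new writes
        return ms.writes;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Without the shared state, the second call would redundantly rewrite both ancestor entries, which is exactly the duplicate work the real `BulkOperationState` exists to eliminate.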
| Modifier and Type | Method and Description |
|---|---|
| static void | `S3Guard.addAncestors(MetadataStore metadataStore, org.apache.hadoop.fs.Path qualifiedPath, ITtlTimeProvider timeProvider, BulkOperationState operationState)`<br>This adds all new ancestors of a path as directories. |
| void | `DynamoDBMetadataStore.addAncestors(org.apache.hadoop.fs.Path qualifiedPath, BulkOperationState operationState)`<br>This adds all new ancestors of a path as directories. |
| void | `LocalMetadataStore.addAncestors(org.apache.hadoop.fs.Path qualifiedPath, BulkOperationState operationState)` |
| void | `MetadataStore.addAncestors(org.apache.hadoop.fs.Path qualifiedPath, BulkOperationState operationState)`<br>This adds all new ancestors of a path as directories. |
| void | `NullMetadataStore.addAncestors(org.apache.hadoop.fs.Path qualifiedPath, BulkOperationState operationState)` |
| void | `DynamoDBMetadataStore.delete(org.apache.hadoop.fs.Path path, BulkOperationState operationState)` |
| void | `LocalMetadataStore.delete(org.apache.hadoop.fs.Path p, BulkOperationState operationState)` |
| void | `MetadataStore.delete(org.apache.hadoop.fs.Path path, BulkOperationState operationState)`<br>Deletes exactly one path, leaving a tombstone to prevent lingering, inconsistent copies of it from being listed. |
| void | `NullMetadataStore.delete(org.apache.hadoop.fs.Path path, BulkOperationState operationState)` |
| void | `DynamoDBMetadataStore.deletePaths(Collection<org.apache.hadoop.fs.Path> paths, BulkOperationState operationState)` |
| void | `LocalMetadataStore.deletePaths(Collection<org.apache.hadoop.fs.Path> paths, BulkOperationState operationState)` |
| void | `MetadataStore.deletePaths(Collection<org.apache.hadoop.fs.Path> paths, BulkOperationState operationState)`<br>Delete the paths. |
| void | `NullMetadataStore.deletePaths(Collection<org.apache.hadoop.fs.Path> paths, BulkOperationState operationState)` |
| void | `DynamoDBMetadataStore.deleteSubtree(org.apache.hadoop.fs.Path path, BulkOperationState operationState)` |
| void | `LocalMetadataStore.deleteSubtree(org.apache.hadoop.fs.Path path, BulkOperationState operationState)` |
| void | `MetadataStore.deleteSubtree(org.apache.hadoop.fs.Path path, BulkOperationState operationState)`<br>Deletes the entire sub-tree rooted at the given path, leaving tombstones to prevent lingering, inconsistent copies of it from being listed. |
| void | `NullMetadataStore.deleteSubtree(org.apache.hadoop.fs.Path path, BulkOperationState operationState)` |
| int | `DynamoDBMetadataStore.markAsAuthoritative(org.apache.hadoop.fs.Path dest, BulkOperationState operationState)`<br>Mark the directories instantiated under the destination path as authoritative. |
| default int | `MetadataStore.markAsAuthoritative(org.apache.hadoop.fs.Path dest, BulkOperationState operationState)`<br>Mark all directories created/touched in an operation as authoritative. |
| void | `DynamoDBMetadataStore.move(Collection<org.apache.hadoop.fs.Path> pathsToDelete, Collection<PathMetadata> pathsToCreate, BulkOperationState operationState)`<br>Record the effects of a FileSystem.rename(Path, Path) in the MetadataStore. |
| void | `LocalMetadataStore.move(Collection<org.apache.hadoop.fs.Path> pathsToDelete, Collection<PathMetadata> pathsToCreate, BulkOperationState operationState)` |
| void | `MetadataStore.move(Collection<org.apache.hadoop.fs.Path> pathsToDelete, Collection<PathMetadata> pathsToCreate, BulkOperationState operationState)`<br>Record the effects of a FileSystem.rename(Path, Path) in the MetadataStore. |
| void | `NullMetadataStore.move(Collection<org.apache.hadoop.fs.Path> pathsToDelete, Collection<PathMetadata> pathsToCreate, BulkOperationState operationState)` |
| void | `DynamoDBMetadataStore.put(Collection<? extends PathMetadata> metas, BulkOperationState operationState)` |
| void | `LocalMetadataStore.put(Collection<? extends PathMetadata> metas, BulkOperationState operationState)` |
| void | `MetadataStore.put(Collection<? extends PathMetadata> metas, BulkOperationState operationState)`<br>Saves metadata for any number of paths. |
| void | `NullMetadataStore.put(Collection<? extends PathMetadata> meta, BulkOperationState operationState)` |
| void | `DynamoDBMetadataStore.put(DirListingMetadata meta, List<org.apache.hadoop.fs.Path> unchangedEntries, BulkOperationState operationState)`<br>Save directory listing metadata. |
| void | `LocalMetadataStore.put(DirListingMetadata meta, List<org.apache.hadoop.fs.Path> unchangedEntries, BulkOperationState operationState)` |
| void | `MetadataStore.put(DirListingMetadata meta, List<org.apache.hadoop.fs.Path> unchangedEntries, BulkOperationState operationState)`<br>Save directory listing metadata. |
| void | `NullMetadataStore.put(DirListingMetadata meta, List<org.apache.hadoop.fs.Path> unchangedEntries, BulkOperationState operationState)` |
| void | `DynamoDBMetadataStore.put(PathMetadata meta, BulkOperationState operationState)` |
| void | `LocalMetadataStore.put(PathMetadata meta, BulkOperationState operationState)` |
| void | `MetadataStore.put(PathMetadata meta, BulkOperationState operationState)`<br>Saves metadata for exactly one path, potentially using any bulk operation state to eliminate duplicate work. |
| void | `NullMetadataStore.put(PathMetadata meta, BulkOperationState operationState)` |
| static S3AFileStatus | `S3Guard.putAndReturn(MetadataStore ms, S3AFileStatus status, ITtlTimeProvider timeProvider, BulkOperationState operationState)`<br>Helper function which puts a given S3AFileStatus into the MetadataStore and returns the same S3AFileStatus. |
| static void | `S3Guard.putAuthDirectoryMarker(MetadataStore ms, S3AFileStatus status, ITtlTimeProvider timeProvider, BulkOperationState operationState)`<br>Creates an authoritative directory marker for the store. |
| static void | `S3Guard.putWithTtl(MetadataStore ms, Collection<? extends PathMetadata> fileMetas, ITtlTimeProvider timeProvider, BulkOperationState operationState)`<br>Put entries, using the time provider to set their timestamp. |
| static void | `S3Guard.putWithTtl(MetadataStore ms, DirListingMetadata dirMeta, List<org.apache.hadoop.fs.Path> unchangedEntries, ITtlTimeProvider timeProvider, BulkOperationState operationState)`<br>Put a directory entry, setting the updated timestamp of the directory and its children. |
| static void | `S3Guard.putWithTtl(MetadataStore ms, PathMetadata fileMeta, ITtlTimeProvider timeProvider, BulkOperationState operationState)`<br>Put an entry, using the time provider to set its timestamp. |
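The `putWithTtl` overloads above all follow one pattern: ask the time provider for "now", stamp the entry or entries with it, then save them, so TTL-based expiry can later compare against that timestamp. A sketch under simplified stand-ins (a map as the metastore; `TtlTimeProvider`, `PathMetadata`, and the field names are illustrative, not the real S3Guard types):

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PutWithTtlSketch {

    /** Stand-in for ITtlTimeProvider: supplies the "now" used for timestamps. */
    interface TtlTimeProvider {
        long getNow();
    }

    /** Minimal stand-in metastore entry carrying a last-updated timestamp. */
    static class PathMetadata {
        final String path;
        long lastUpdated;
        PathMetadata(String path) {
            this.path = path;
        }
    }

    /** Stamp every entry with the provider's current time, then persist it. */
    static void putWithTtl(Map<String, PathMetadata> store,
                           Collection<PathMetadata> entries,
                           TtlTimeProvider timeProvider) {
        long now = timeProvider.getNow(); // one timestamp for the whole batch
        for (PathMetadata entry : entries) {
            entry.lastUpdated = now;      // timestamp set before the write
            store.put(entry.path, entry); // then the entry is saved
        }
    }

    public static void main(String[] args) {
        Map<String, PathMetadata> store = new HashMap<>();
        // A fixed-time provider makes the stamped value predictable.
        putWithTtl(store, List.of(new PathMetadata("/a")), () -> 42L);
        System.out.println(store.get("/a").lastUpdated);
    }
}
```

Injecting the time provider rather than calling the system clock directly is what makes TTL behavior testable with a fixed clock.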
| Constructor and Description |
|---|
| `DelayedUpdateRenameTracker(StoreContext storeContext, MetadataStore metadataStore, org.apache.hadoop.fs.Path sourceRoot, org.apache.hadoop.fs.Path dest, BulkOperationState operationState)` |
| `ProgressiveRenameTracker(StoreContext storeContext, MetadataStore metadataStore, org.apache.hadoop.fs.Path sourceRoot, org.apache.hadoop.fs.Path dest, BulkOperationState operationState)` |
| `RenameTracker(String name, StoreContext storeContext, MetadataStore metadataStore, org.apache.hadoop.fs.Path sourceRoot, org.apache.hadoop.fs.Path dest, BulkOperationState operationState)`<br>Constructor. |
| Modifier and Type | Method and Description |
|---|---|
| com.amazonaws.services.s3.model.DeleteObjectsResult | `MarkerToolOperations.removeKeys(List<com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion> keysToDelete, boolean deleteFakeDir, List<org.apache.hadoop.fs.Path> undeletedObjectsOnFailure, BulkOperationState operationState, boolean quiet)`<br>Remove keys from the store, updating the metastore on a partial delete represented as a MultiObjectDeleteException failure by deleting all those entries successfully deleted and then rethrowing the MultiObjectDeleteException. |
| com.amazonaws.services.s3.model.DeleteObjectsResult | `MarkerToolOperationsImpl.removeKeys(List<com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion> keysToDelete, boolean deleteFakeDir, List<org.apache.hadoop.fs.Path> undeletedObjectsOnFailure, BulkOperationState operationState, boolean quiet)` |
Copyright © 2008–2022 Apache Software Foundation. All rights reserved.