public interface OperationCallbacks

These are the callbacks which the RenameOperation
and DeleteOperation operations need,
derived from the appropriate S3AFileSystem methods.

| Modifier and Type | Method and Description |
|---|---|
| `boolean` | `allowAuthoritative(org.apache.hadoop.fs.Path p)`<br>Is the path for this instance considered authoritative on the client, that is: will listing/status operations only be handled by the metastore, with no fallback to S3. |
| `com.amazonaws.services.s3.transfer.model.CopyResult` | `copyFile(String srcKey, String destKey, S3ObjectAttributes srcAttributes, S3AReadOpContext readContext)`<br>Copy a single object in the bucket via a COPY operation. |
| `S3ObjectAttributes` | `createObjectAttributes(org.apache.hadoop.fs.Path path, String eTag, String versionId, long len)`<br>Create the attributes of an object for subsequent use. |
| `S3ObjectAttributes` | `createObjectAttributes(S3AFileStatus fileStatus)`<br>Create the attributes of an object for subsequent use. |
| `S3AReadOpContext` | `createReadContext(org.apache.hadoop.fs.FileStatus fileStatus)`<br>Create the read context for reading from the referenced file, using FS state as well as the status. |
| `void` | `deleteObjectAtPath(org.apache.hadoop.fs.Path path, String key, boolean isFile, BulkOperationState operationState)`<br>Delete an object, also updating the metastore. |
| `void` | `finishRename(org.apache.hadoop.fs.Path sourceRenamed, org.apache.hadoop.fs.Path destCreated)`<br>The rename has finished; perform any store cleanup operations such as creating/deleting directory markers. |
| `org.apache.hadoop.fs.RemoteIterator<S3ALocatedFileStatus>` | `listFilesAndDirectoryMarkers(org.apache.hadoop.fs.Path path, S3AFileStatus status, boolean collectTombstones, boolean includeSelf)`<br>Recursive list of files and directory markers. |
| `org.apache.hadoop.fs.RemoteIterator<S3AFileStatus>` | `listObjects(org.apache.hadoop.fs.Path path, String key)`<br>Create an iterator over objects in S3 only; S3Guard is not involved. |
| `com.amazonaws.services.s3.model.DeleteObjectsResult` | `removeKeys(List<com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion> keysToDelete, boolean deleteFakeDir, List<org.apache.hadoop.fs.Path> undeletedObjectsOnFailure, BulkOperationState operationState, boolean quiet)`<br>Remove keys from the store, updating the metastore on a partial delete represented as a MultiObjectDeleteException failure by deleting all those entries successfully deleted and then rethrowing the MultiObjectDeleteException. |
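The `removeKeys` contract above (record every successful deletion in the metastore, collect the failures, then rethrow on a partial delete) can be sketched in plain Java. The `PartialDeleteException` class, the `Map`-backed store, and the list-backed metastore here are simplified stand-ins invented for illustration, not the real AWS SDK or S3Guard types:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartialDeleteSketch {

    // Simplified stand-in for MultiObjectDeleteException.
    static class PartialDeleteException extends RuntimeException {
        final List<String> undeleted;
        PartialDeleteException(List<String> undeleted) {
            super("failed to delete " + undeleted.size() + " key(s)");
            this.undeleted = undeleted;
        }
    }

    // Models the removeKeys contract: delete what we can, record each
    // success in the "metastore", collect failures, then rethrow.
    static List<String> removeKeys(List<String> keysToDelete,
                                   Map<String, String> store,
                                   List<String> metastoreDeletions,
                                   List<String> undeletedOnFailure) {
        if (keysToDelete.isEmpty()) {
            return List.of();   // no request is made of the object store
        }
        List<String> failed = new ArrayList<>();
        for (String key : keysToDelete) {
            if (store.remove(key) != null) {
                metastoreDeletions.add(key);   // entry successfully deleted
            } else {
                failed.add(key);
                undeletedOnFailure.add(key);   // built up even on failure
            }
        }
        if (!failed.isEmpty()) {
            // successes stay recorded; the failure is then rethrown
            throw new PartialDeleteException(failed);
        }
        return metastoreDeletions;
    }
}
```

The point of the pattern is that a partial failure does not roll back the successful deletions: the metastore and the `undeletedObjectsOnFailure` list are both updated before the exception propagates, so a caller can retry only the keys that remain.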
`S3ObjectAttributes createObjectAttributes(org.apache.hadoop.fs.Path path, String eTag, String versionId, long len)`

Parameters:
- `path` - path of the request.
- `eTag` - the eTag of the S3 object.
- `versionId` - S3 object version ID.
- `len` - length of the file.

`S3ObjectAttributes createObjectAttributes(S3AFileStatus fileStatus)`

Parameters:
- `fileStatus` - file status to build from.

`S3AReadOpContext createReadContext(org.apache.hadoop.fs.FileStatus fileStatus)`

Parameters:
- `fileStatus` - file status.

`void finishRename(org.apache.hadoop.fs.Path sourceRenamed, org.apache.hadoop.fs.Path destCreated) throws IOException`

Parameters:
- `sourceRenamed` - renamed source.
- `destCreated` - destination file created.

Throws:
- `IOException` - failure.

`@Retries.RetryTranslated void deleteObjectAtPath(org.apache.hadoop.fs.Path path, String key, boolean isFile, BulkOperationState operationState) throws IOException`

Parameters:
- `path` - path to delete.
- `key` - key of entry.
- `isFile` - is the path a file (used for instrumentation only).
- `operationState` - (nullable) operational state for a bulk update.

Throws:
- `com.amazonaws.AmazonClientException` - problems working with S3.
- `IOException` - IO failure in the metastore.

`@Retries.RetryTranslated org.apache.hadoop.fs.RemoteIterator<S3ALocatedFileStatus> listFilesAndDirectoryMarkers(org.apache.hadoop.fs.Path path, S3AFileStatus status, boolean collectTombstones, boolean includeSelf) throws IOException`

Parameters:
- `path` - path to list from.
- `status` - optional status of path to list.
- `collectTombstones` - should tombstones be collected from S3Guard?
- `includeSelf` - should the listing include this path if present?

Throws:
- `IOException` - failure.

`@Retries.RetryTranslated com.amazonaws.services.s3.transfer.model.CopyResult copyFile(String srcKey, String destKey, S3ObjectAttributes srcAttributes, S3AReadOpContext readContext) throws IOException`

Parameters:
- `srcKey` - source object path.
- `destKey` - destination object path.
- `srcAttributes` - S3 attributes of the source object.
- `readContext` - the read context.

Throws:
- `InterruptedIOException` - the operation was interrupted.
- `IOException` - other IO problems.

`@Retries.RetryMixed com.amazonaws.services.s3.model.DeleteObjectsResult removeKeys(List<com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion> keysToDelete, boolean deleteFakeDir, List<org.apache.hadoop.fs.Path> undeletedObjectsOnFailure, BulkOperationState operationState, boolean quiet) throws com.amazonaws.services.s3.model.MultiObjectDeleteException, com.amazonaws.AmazonClientException, IOException`

Parameters:
- `keysToDelete` - collection of keys to delete on the s3-backend. If empty, no request is made of the object store.
- `deleteFakeDir` - indicates whether this is for deleting fake dirs.
- `undeletedObjectsOnFailure` - list which will be built up of all files that were not deleted. This happens even as an exception is raised.
- `operationState` - bulk operation state.
- `quiet` - should a bulk query be quiet, or should its result list all deleted keys?

Throws:
- `org.apache.hadoop.fs.InvalidRequestException` - if the request was rejected due to a mistaken attempt to delete the root directory.
- `com.amazonaws.services.s3.model.MultiObjectDeleteException` - one or more of the keys could not be deleted in a multiple object delete operation.
- `com.amazonaws.AmazonClientException` - amazon-layer failure.
- `IOException` - other IO Exception.

`boolean allowAuthoritative(org.apache.hadoop.fs.Path p)`

Parameters:
- `p` - path.

`@Retries.RetryTranslated org.apache.hadoop.fs.RemoteIterator<S3AFileStatus> listObjects(org.apache.hadoop.fs.Path path, String key) throws IOException`

Parameters:
- `path` - path of the listing.
- `key` - object key.

Throws:
- `IOException` - failure.

Copyright © 2008–2022 Apache Software Foundation. All rights reserved.