| Package | Description |
|---|---|
| org.apache.hadoop.fs.s3a | S3A Filesystem. |
| org.apache.hadoop.fs.s3a.auth | Authentication and permissions support. |
| org.apache.hadoop.fs.s3a.auth.delegation | Extensible delegation token support for the S3A connector. |
| org.apache.hadoop.fs.s3a.impl | Implementation classes private to the S3A store. |
| org.apache.hadoop.fs.s3a.tools | S3A command line tools independent of S3Guard. |
| Modifier and Type | Method and Description |
|---|---|
| void | WriteOperations.abortMultipartCommit(String destKey, String uploadId)<br>Abort a multipart commit operation. |
| void | WriteOperationHelper.abortMultipartCommit(String destKey, String uploadId)<br>Abort a multipart commit operation. |
| void | WriteOperations.abortMultipartUpload(software.amazon.awssdk.services.s3.model.MultipartUpload upload)<br>Abort a multipart commit operation. |
| void | WriteOperationHelper.abortMultipartUpload(software.amazon.awssdk.services.s3.model.MultipartUpload upload)<br>Abort a multipart commit operation. |
| void | WriteOperations.abortMultipartUpload(String destKey, String uploadId, boolean shouldRetry, Invoker.Retried retrying)<br>Abort a multipart upload operation. |
| void | WriteOperationHelper.abortMultipartUpload(String destKey, String uploadId, boolean shouldRetry, Invoker.Retried retrying)<br>Abort a multipart upload operation. |
| long | S3AInternals.abortMultipartUploads(org.apache.hadoop.fs.Path path)<br>Abort multipart uploads under a path. |
| int | WriteOperations.abortMultipartUploadsUnderPath(String prefix)<br>Abort multipart uploads under a path: limited to the first few hundred. |
| int | WriteOperationHelper.abortMultipartUploadsUnderPath(String prefix)<br>Abort multipart uploads under a path: limited to the first few hundred. |
| void | S3AFileSystem.abortOutstandingMultipartUploads(long seconds)<br>Abort all outstanding MPUs older than a given age. |
| software.amazon.awssdk.services.s3.model.CompleteMultipartUploadResponse | WriteOperations.commitUpload(String destKey, String uploadId, List<software.amazon.awssdk.services.s3.model.CompletedPart> partETags, long length)<br>This completes a multipart upload to the destination key via finalizeMultipartUpload(). |
| software.amazon.awssdk.services.s3.model.CompleteMultipartUploadResponse | WriteOperationHelper.commitUpload(String destKey, String uploadId, List<software.amazon.awssdk.services.s3.model.CompletedPart> partETags, long length)<br>This completes a multipart upload to the destination key via finalizeMultipartUpload(). |
| software.amazon.awssdk.services.s3.model.CompleteMultipartUploadResponse | WriteOperations.completeMPUwithRetries(String destKey, String uploadId, List<software.amazon.awssdk.services.s3.model.CompletedPart> partETags, long length, AtomicInteger errorCount, PutObjectOptions putOptions)<br>This completes a multipart upload to the destination key via finalizeMultipartUpload(). |
| software.amazon.awssdk.services.s3.model.CompleteMultipartUploadResponse | WriteOperationHelper.completeMPUwithRetries(String destKey, String uploadId, List<software.amazon.awssdk.services.s3.model.CompletedPart> partETags, long length, AtomicInteger errorCount, PutObjectOptions putOptions)<br>This completes a multipart upload to the destination key via finalizeMultipartUpload(). |
| boolean | S3AFileSystem.delete(org.apache.hadoop.fs.Path f, boolean recursive)<br>Delete a Path. |
| String | S3AInternals.getBucketLocation()<br>Get the region of a bucket. |
| String | S3AFileSystem.getBucketLocation()<br>S3AInternals method. |
| String | S3AInternals.getBucketLocation(String bucketName)<br>Get the region of a bucket; fixing up the region so it can be used in the builders of other AWS clients. |
| software.amazon.awssdk.services.s3.model.HeadBucketResponse | S3AInternals.getBucketMetadata()<br>Request bucket metadata. |
| protected software.amazon.awssdk.services.s3.model.HeadBucketResponse | S3AFileSystem.getBucketMetadata()<br>Request bucket metadata. |
| org.apache.hadoop.fs.ContentSummary | S3AFileSystem.getContentSummary(org.apache.hadoop.fs.Path f)<br>This is a very slow operation against object storage. |
| org.apache.hadoop.fs.store.EtagChecksum | S3AFileSystem.getFileChecksum(org.apache.hadoop.fs.Path f, long length)<br>When enabled, get the etag of an object at the path via a HEAD request and return it as a checksum object. |
| org.apache.hadoop.fs.FileStatus | S3AFileSystem.getFileStatus(org.apache.hadoop.fs.Path f)<br>Return a file status object that represents the path. |
| software.amazon.awssdk.services.s3.model.HeadObjectResponse | S3AInternals.getObjectMetadata(org.apache.hadoop.fs.Path path)<br>Low-level call to get at the object metadata. |
| software.amazon.awssdk.services.s3.model.HeadObjectResponse | S3AFileSystem.getObjectMetadata(org.apache.hadoop.fs.Path path)<br>Deprecated: use the S3AInternals API. |
| String | WriteOperations.initiateMultiPartUpload(String destKey, PutObjectOptions options)<br>Start the multipart upload process. |
| String | WriteOperationHelper.initiateMultiPartUpload(String destKey, PutObjectOptions options)<br>Start the multipart upload process. |
| org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> | S3AFileSystem.listFiles(org.apache.hadoop.fs.Path f, boolean recursive) |
| org.apache.hadoop.fs.RemoteIterator<S3ALocatedFileStatus> | S3AFileSystem.listFilesAndEmptyDirectories(org.apache.hadoop.fs.Path f, boolean recursive)<br>Recursive list of files and empty directories. |
| List<software.amazon.awssdk.services.s3.model.MultipartUpload> | WriteOperationHelper.listMultipartUploads(String prefix) |
| List<software.amazon.awssdk.services.s3.model.MultipartUpload> | S3AFileSystem.listMultipartUploads(String prefix)<br>Listing all multipart uploads; limited to the first few hundred. |
| org.apache.hadoop.fs.RemoteIterator<software.amazon.awssdk.services.s3.model.MultipartUpload> | S3AFileSystem.listUploads(String prefix)<br>List any pending multipart uploads whose keys begin with prefix, using an iterator that can handle an unlimited number of entries. |
| org.apache.hadoop.fs.RemoteIterator<software.amazon.awssdk.services.s3.model.MultipartUpload> | S3AFileSystem.listUploadsUnderPrefix(StoreContext storeContext, String prefix)<br>List any pending multipart uploads whose keys begin with prefix, using an iterator that can handle an unlimited number of entries. |
| protected void | S3AFileSystem.maybeCreateFakeParentDirectory(org.apache.hadoop.fs.Path path)<br>Create a fake parent directory if required. |
| void | Invoker.maybeRetry(boolean doRetry, String action, String path, boolean idempotent, org.apache.hadoop.util.functional.InvocationRaisingIOE operation)<br>Execute a void operation with the default retry callback invoked when doRetry=true, else just once. |
| <T> T | Invoker.maybeRetry(boolean doRetry, String action, String path, boolean idempotent, Invoker.Retried retrying, org.apache.hadoop.util.functional.CallableRaisingIOE<T> operation)<br>Execute a function with retry processing when doRetry=true, else just once. |
| void | Invoker.maybeRetry(boolean doRetry, String action, String path, boolean idempotent, Invoker.Retried retrying, org.apache.hadoop.util.functional.InvocationRaisingIOE operation)<br>Execute a void operation with retry processing when doRetry=true, else just once. |
| org.apache.hadoop.fs.FSDataInputStream | S3AFileSystem.open(org.apache.hadoop.fs.Path f, int bufferSize)<br>Opens an FSDataInputStream at the indicated Path. |
| CompletableFuture<org.apache.hadoop.fs.FSDataInputStream> | S3AFileSystem.openFileWithOptions(org.apache.hadoop.fs.Path rawPath, org.apache.hadoop.fs.impl.OpenFileParameters parameters)<br>Initiate the open() operation. |
| software.amazon.awssdk.services.s3.model.PutObjectResponse | WriteOperations.putObject(software.amazon.awssdk.services.s3.model.PutObjectRequest putObjectRequest, PutObjectOptions putOptions, S3ADataBlocks.BlockUploadData uploadData, boolean isFile, org.apache.hadoop.fs.statistics.DurationTrackerFactory durationTrackerFactory)<br>PUT an object directly (i.e. not via the transfer manager). |
| software.amazon.awssdk.services.s3.model.PutObjectResponse | WriteOperationHelper.putObject(software.amazon.awssdk.services.s3.model.PutObjectRequest putObjectRequest, PutObjectOptions putOptions, S3ADataBlocks.BlockUploadData uploadData, boolean isFile, org.apache.hadoop.fs.statistics.DurationTrackerFactory durationTrackerFactory)<br>PUT an object directly (i.e. not via the transfer manager). |
| int | S3AInputStream.read() |
| int | S3AInputStream.read(byte[] buf, int off, int len)<br>This updates the statistics on read operations started and whether or not the read operation "completed", that is: returned the exact number of bytes requested. |
| void | S3AInputStream.readFully(long position, byte[] buffer, int offset, int length)<br>Subclass readFully() operation which only seeks at the start of the series of operations; seeking back at the end. |
| boolean | S3AFileSystem.rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)<br>Renames Path src to Path dst. |
| <T> T | Invoker.retry(String action, String path, boolean idempotent, org.apache.hadoop.util.functional.CallableRaisingIOE<T> operation)<br>Execute a function with the default retry callback invoked. |
| void | Invoker.retry(String action, String path, boolean idempotent, org.apache.hadoop.util.functional.InvocationRaisingIOE operation)<br>Execute a void operation with the default retry callback invoked. |
| <T> T | Invoker.retry(String action, String path, boolean idempotent, Invoker.Retried retrying, org.apache.hadoop.util.functional.CallableRaisingIOE<T> operation)<br>Execute a function with retry processing. |
| void | Invoker.retry(String action, String path, boolean idempotent, Invoker.Retried retrying, org.apache.hadoop.util.functional.InvocationRaisingIOE operation)<br>Execute a void operation with retry processing. |
| software.amazon.awssdk.services.s3.model.UploadPartResponse | WriteOperations.uploadPart(software.amazon.awssdk.services.s3.model.UploadPartRequest request, software.amazon.awssdk.core.sync.RequestBody body, org.apache.hadoop.fs.statistics.DurationTrackerFactory durationTrackerFactory)<br>Upload part of a multi-partition file. |
| software.amazon.awssdk.services.s3.model.UploadPartResponse | WriteOperationHelper.uploadPart(software.amazon.awssdk.services.s3.model.UploadPartRequest request, software.amazon.awssdk.core.sync.RequestBody body, org.apache.hadoop.fs.statistics.DurationTrackerFactory durationTrackerFactory)<br>Upload part of a multi-partition file. |
| protected void | S3AFileSystem.verifyBucketExists()<br>Verify that the bucket exists. |
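The Invoker.retry/maybeRetry family above wraps an operation in retry processing, retrying only when it is safe to do so (the idempotent flag) and notifying an Invoker.Retried callback before each new attempt. As a rough, self-contained sketch of that pattern, using illustrative names and a fixed attempt count rather than the real S3ARetryPolicy:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Illustrative retry wrapper; the real Invoker consults a configurable
// retry policy rather than a simple attempt counter.
public class RetrySketch {

    /** Callback invoked before each retry, in the spirit of Invoker.Retried. */
    public interface Retried {
        void onFailure(String action, IOException e, int attempt);
    }

    public static <T> T retry(String action,
                              boolean idempotent,
                              int maxAttempts,
                              Retried retrying,
                              Callable<T> operation) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return operation.call();
            } catch (IOException e) {
                // Non-idempotent operations are not retried: the failed
                // attempt may already have taken effect on the store.
                if (!idempotent || attempt >= maxAttempts) {
                    throw e;
                }
                retrying.onFailure(action, e, attempt);
            }
        }
    }
}
```

A caller would wrap each store operation, e.g. `RetrySketch.retry("HEAD", true, 3, callback, () -> headObject(key))`, which mirrors how S3A routes its S3 calls through `Invoker.retry(action, path, idempotent, operation)`.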
| Constructor and Description |
|---|
| UploadIterator(StoreContext storeContext, software.amazon.awssdk.services.s3.S3Client s3, int maxKeys, String prefix)<br>Construct an iterator to list uploads under a path. |
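UploadIterator, like the listUploads() calls that return a RemoteIterator, pages through listing results one batch at a time, so a caller can traverse an unlimited number of pending uploads without holding them all in memory. A minimal, self-contained sketch of that paging pattern, where the page source is a hypothetical stand-in for the S3 ListMultipartUploads call:

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Function;

// Illustrative paged iterator: fetches one page of results at a time,
// the way UploadIterator pages through upload listings.
public class PagedIterator<T> implements Iterator<T> {

    /** Fetches the page starting at the given offset; an empty list ends iteration. */
    private final Function<Integer, List<T>> pageSource;
    private List<T> page;
    private int posInPage;
    private int fetched;

    public PagedIterator(Function<Integer, List<T>> pageSource) {
        this.pageSource = pageSource;
        this.page = pageSource.apply(0);
    }

    @Override
    public boolean hasNext() {
        if (posInPage < page.size()) {
            return true;
        }
        if (page.isEmpty()) {
            return false;
        }
        // Current page exhausted: ask the source for the next one.
        fetched += page.size();
        page = pageSource.apply(fetched);
        posInPage = 0;
        return !page.isEmpty();
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return page.get(posInPage++);
    }
}
```

The real iterator uses the S3 continuation markers rather than an integer offset, but the control flow is the same: fetch, drain, fetch again until a page comes back empty.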
| Modifier and Type | Method and Description |
|---|---|
| software.amazon.awssdk.services.sts.model.Credentials | STSClientFactory.STSClient.requestRole(String roleARN, String sessionName, String policy, long duration, TimeUnit timeUnit)<br>Request a set of role credentials. |
| static MarshalledCredentials | MarshalledCredentialBinding.requestSessionCredentials(software.amazon.awssdk.auth.credentials.AwsCredentialsProvider parentCredentials, org.apache.hadoop.conf.Configuration configuration, String stsEndpoint, String stsRegion, int duration, Invoker invoker, String bucket)<br>Request a set of credentials from an STS endpoint. |
| software.amazon.awssdk.services.sts.model.Credentials | STSClientFactory.STSClient.requestSessionCredentials(long duration, TimeUnit timeUnit)<br>Request a set of session credentials. |
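The requestRole and requestSessionCredentials calls above return time-limited credentials, so a caller has to track the expiry and request a fresh set before it passes. A self-contained sketch of that expiry bookkeeping, with illustrative field names (not the SDK's Credentials class):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.TimeUnit;

// Illustrative holder for time-limited session credentials, tracking the
// expiry the way callers of requestSessionCredentials(duration, timeUnit) must.
public class SessionCredentialsSketch {
    private final String accessKey;
    private final String secretKey;
    private final String sessionToken;
    private final Instant expiration;

    public SessionCredentialsSketch(String accessKey, String secretKey,
            String sessionToken, long duration, TimeUnit timeUnit) {
        this.accessKey = accessKey;
        this.secretKey = secretKey;
        this.sessionToken = sessionToken;
        // Expiry is computed from the requested duration at issue time.
        this.expiration = Instant.now()
            .plus(Duration.ofMillis(timeUnit.toMillis(duration)));
    }

    /** True once the credentials are within the safety margin of expiry. */
    public boolean shouldRenew(Duration safetyMargin) {
        return Instant.now().plus(safetyMargin).isAfter(expiration);
    }

    public Instant getExpiration() {
        return expiration;
    }
}
```

Renewing ahead of a safety margin, rather than exactly at expiry, avoids issuing S3 requests with credentials that lapse mid-operation.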
| Modifier and Type | Method and Description |
|---|---|
| SessionTokenIdentifier | SessionTokenBinding.createTokenIdentifier(Optional<RoleModel.Policy> policy, EncryptionSecrets encryptionSecrets, org.apache.hadoop.io.Text renewer) |
| RoleTokenIdentifier | RoleTokenBinding.createTokenIdentifier(Optional<RoleModel.Policy> policy, EncryptionSecrets encryptionSecrets, org.apache.hadoop.io.Text renewer)<br>Create the Token Identifier. |
| Modifier and Type | Method and Description |
|---|---|
| default long | OperationCallbacks.abortMultipartUploadsUnderPrefix(String prefix)<br>Abort multipart uploads under a path; paged. |
| software.amazon.awssdk.services.s3.model.CopyObjectResponse | OperationCallbacks.copyFile(String srcKey, String destKey, S3ObjectAttributes srcAttributes, S3AReadOpContext readContext)<br>Copy a single object in the bucket via a COPY operation. |
| void | MkdirOperation.MkdirCallbacks.createFakeDirectory(org.apache.hadoop.fs.Path dir, boolean keepMarkers)<br>Create a fake directory, always ending in "/". |
| protected void | DeleteOperation.deleteDirectoryTree(org.apache.hadoop.fs.Path path, String dirKey)<br>Delete a directory tree. |
| void | OperationCallbacks.deleteObjectAtPath(org.apache.hadoop.fs.Path path, String key, boolean isFile)<br>Delete an object. |
| Boolean | DeleteOperation.execute()<br>Delete a file or directory tree. |
| Void | CopyFromLocalOperation.execute()<br>Executes the CopyFromLocalOperation. |
| Boolean | MkdirOperation.execute()<br>Make the given path and all non-existent parents into directories. |
| org.apache.hadoop.fs.ContentSummary | GetContentSummaryOperation.execute()<br>Return the ContentSummary of a given path. |
| String | ContextAccessors.getBucketLocation()<br>Get the region of a bucket. |
| software.amazon.awssdk.services.s3.model.HeadBucketResponse | HeaderProcessing.HeaderProcessingCallbacks.getBucketMetadata()<br>Retrieve the bucket metadata. |
| software.amazon.awssdk.services.s3.model.HeadObjectResponse | HeaderProcessing.HeaderProcessingCallbacks.getObjectMetadata(String key)<br>Retrieve the object metadata. |
| org.apache.hadoop.fs.RemoteIterator<S3ALocatedFileStatus> | OperationCallbacks.listFilesAndDirectoryMarkers(org.apache.hadoop.fs.Path path, S3AFileStatus status, boolean includeSelf)<br>Recursive list of files and directory markers. |
| org.apache.hadoop.fs.RemoteIterator<S3AFileStatus> | OperationCallbacks.listObjects(org.apache.hadoop.fs.Path path, String key)<br>Create an iterator over objects in S3. |
| S3AFileStatus | MkdirOperation.MkdirCallbacks.probePathStatus(org.apache.hadoop.fs.Path path, Set<StatusProbeEnum> probes)<br>Get the status of a path. |
| S3AFileStatus | GetContentSummaryOperation.GetContentSummaryCallbacks.probePathStatus(org.apache.hadoop.fs.Path path, Set<StatusProbeEnum> probes)<br>Get the status of a path. |
| Modifier and Type | Method and Description |
|---|---|
| org.apache.hadoop.fs.RemoteIterator<S3AFileStatus> | MarkerToolOperations.listObjects(org.apache.hadoop.fs.Path path, String key)<br>Create an iterator over objects in S3. |
Copyright © 2008–2024 Apache Software Foundation. All rights reserved.