@InterfaceAudience.Private
@InterfaceStability.Evolving
public final class OBSFileSystem
extends org.apache.hadoop.fs.FileSystem
This subclass is marked as private because code should not create it
directly; use FileSystem.get(Configuration) and its variants to obtain
one.
If cast to OBSFileSystem, extra methods and features may be accessed;
consider those private and unstable.
Because it prints some of the state of the instrumentation, the output
of toString() must also be considered unstable.
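As the note above says, instances are obtained through FileSystem.get rather than direct construction. A minimal sketch, assuming an obs:// scheme and an illustrative bucket name:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ObtainObsFs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "obs://example-bucket/" is a placeholder URI; FileSystem.get selects
        // the implementation registered for the URI's scheme.
        FileSystem fs = FileSystem.get(URI.create("obs://example-bucket/"), conf);
        // Casting to OBSFileSystem would expose extra methods, but the class
        // doc marks those private and unstable, so prefer the FileSystem API.
        System.out.println(fs.getScheme());
        fs.close();
    }
}
```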
| Modifier and Type | Field and Description |
|---|---|
| static org.slf4j.Logger | LOG: Class logger. |
| Constructor and Description |
|---|
| OBSFileSystem() |
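Before FileSystem.get can hand back a working instance, the connector's options must be present in the Configuration passed to it (the class's initialize notes that bucket-specific options are patched over the base ones). A hypothetical sketch; the fs.obs.* key names and values below are assumptions about the connector's configuration, not taken from this page:

```java
import org.apache.hadoop.conf.Configuration;

public class ObsConfSketch {
    public static Configuration obsConf() {
        Configuration conf = new Configuration();
        // Key names are illustrative assumptions about the OBS connector;
        // consult the connector documentation for the authoritative list.
        conf.set("fs.obs.endpoint", "obs.example-region.example.com");
        conf.set("fs.obs.access.key", "YOUR_ACCESS_KEY");
        conf.set("fs.obs.secret.key", "YOUR_SECRET_KEY");
        return conf;
    }
}
```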
| Modifier and Type | Method and Description |
|---|---|
| org.apache.hadoop.fs.FSDataOutputStream | append(Path f, int bufferSize, Progressable progress): Append to an existing file (optional operation). |
| protected URI | canonicalizeUri(URI rawUri): Canonicalize the given URI. |
| void | checkPath(Path path): Check that a Path belongs to this FileSystem. |
| void | close(): Close the filesystem. |
| void | copyFromLocalFile(boolean delSrc, boolean overwrite, Path src, Path dst): Copy the src file on the local disk to the filesystem at the given dst name. |
| org.apache.hadoop.fs.FSDataOutputStream | create(Path f, FsPermission permission, boolean overwrite, int bufferSize, short replication, long blkSize, Progressable progress): Create an FSDataOutputStream at the indicated Path with write-progress reporting. |
| org.apache.hadoop.fs.FSDataOutputStream | create(Path f, FsPermission permission, EnumSet<CreateFlag> flags, int bufferSize, short replication, long blkSize, Progressable progress, Options.ChecksumOpt checksumOpt): Create an FSDataOutputStream at the indicated Path with write-progress reporting. |
| org.apache.hadoop.fs.FSDataOutputStream | createNonRecursive(Path path, FsPermission permission, EnumSet<CreateFlag> flags, int bufferSize, short replication, long blkSize, Progressable progress): Open an FSDataOutputStream at the indicated Path with write-progress reporting. |
| boolean | delete(Path f, boolean recursive): Delete a Path. |
| boolean | exists(Path f): Check if a path exists. |
| String | getCanonicalServiceName(): Override getCanonicalServiceName and return null, since delegation tokens are not supported. |
| org.apache.hadoop.fs.ContentSummary | getContentSummary(Path f): Return the ContentSummary of a given Path. |
| long | getDefaultBlockSize(): Deprecated. Use getDefaultBlockSize(Path) instead. |
| long | getDefaultBlockSize(Path f): Imitate HDFS to return the number of bytes that large input files should be optimally split into to minimize I/O time. |
| int | getDefaultPort(): Return the default port for this FileSystem. |
| org.apache.hadoop.fs.FileStatus | getFileStatus(Path f): Return a file status object that represents the path. |
| String | getScheme(): Return the protocol scheme for the FileSystem. |
| URI | getUri(): Return a URI whose scheme and authority identify this FileSystem. |
| org.apache.hadoop.fs.Path | getWorkingDirectory(): Return the current working directory for the given file system. |
| void | initialize(URI name, Configuration originalConf): Initialize a FileSystem. |
| org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> | listFiles(Path f, boolean recursive): List the statuses and block locations of the files in the given path. |
| org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> | listLocatedStatus(Path f): List the statuses of the files/directories in the given path if the path is a directory. |
| org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> | listLocatedStatus(Path f, PathFilter filter): List a directory. |
| org.apache.hadoop.fs.FileStatus[] | listStatus(Path f): List the statuses of the files/directories in the given path if the path is a directory. |
| org.apache.hadoop.fs.FileStatus[] | listStatus(Path f, boolean recursive): This public interface is provided specially for Huawei MRS. |
| boolean | mkdirs(Path path, FsPermission permission): Make the given path and all non-existent parents into directories. |
| org.apache.hadoop.fs.FSDataInputStream | open(Path f, int bufferSize): Open an FSDataInputStream at the indicated Path. |
| boolean | rename(Path src, Path dst): Rename Path src to Path dst. |
| void | setWorkingDirectory(Path newDir): Set the current working directory for the file system. |
| String | toString(): Return a string that describes this filesystem instance. |
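As a rough sketch of how the core methods in the table above combine into a write/read/delete lifecycle (the obs:// URI and paths are illustrative assumptions, and error handling is elided):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ObsRoundTrip {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("obs://example-bucket/"),
                                       new Configuration());
        Path file = new Path("/tmp/demo.txt");

        // create() returns an FSDataOutputStream; true overwrites any existing file.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeUTF("hello");
        }

        // open() returns an FSDataInputStream for reading the object back.
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(in.readUTF());
        }

        // exists()/delete() round out the lifecycle; per the class doc,
        // delete is not atomic on this filesystem.
        if (fs.exists(file)) {
            fs.delete(file, false);
        }
        fs.close();
    }
}
```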
Methods inherited from class org.apache.hadoop.fs.FileSystem:
access, append, append, append, append, appendFile, areSymlinksEnabled, cancelDeleteOnExit, clearStatistics, closeAll, closeAllForUGI, completeLocalOutput, concat, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyToLocalFile, copyToLocalFile, copyToLocalFile, create, create, create, create, create, create, create, create, create, create, create, createDataInputStreamBuilder, createDataInputStreamBuilder, createDataOutputStreamBuilder, createFile, createMultipartUploader, createNewFile, createNonRecursive, createNonRecursive, createPathHandle, createSnapshot, createSnapshot, createSymlink, delete, deleteOnExit, deleteSnapshot, enableSymlinks, fixRelativePart, get, get, get, getAclStatus, getAdditionalTokenIssuers, getAllStatistics, getAllStoragePolicies, getBlockSize, getCanonicalUri, getChildFileSystems, getDefaultReplication, getDefaultReplication, getDefaultUri, getDelegationToken, getEnclosingRoot, getFileBlockLocations, getFileBlockLocations, getFileChecksum, getFileChecksum, getFileLinkStatus, getFileSystemClass, getFSofPath, getGlobalStorageStatistics, getHomeDirectory, getInitialWorkingDirectory, getLength, getLinkTarget, getLocal, getName, getNamed, getPathHandle, getQuotaUsage, getReplication, getServerDefaults, getServerDefaults, getStatistics, getStatistics, getStatus, getStatus, getStoragePolicy, getStorageStatistics, getTrashRoot, getTrashRoots, getUsed, getUsed, getXAttr, getXAttrs, getXAttrs, globStatus, globStatus, hasPathCapability, isDirectory, isFile, listCorruptFileBlocks, listStatus, listStatus, listStatus, listStatusBatch, listStatusIterator, listXAttrs, makeQualified, mkdirs, mkdirs, modifyAclEntries, moveFromLocalFile, moveFromLocalFile, moveToLocalFile, msync, newInstance, newInstance, newInstance, newInstanceLocal, open, open, open, openFile, openFile, openFileWithOptions, openFileWithOptions, primitiveCreate, primitiveMkdir, primitiveMkdir, printStatistics, processDeleteOnExit, removeAcl, removeAclEntries, removeDefaultAcl, removeXAttr, rename, renameSnapshot, resolveLink, resolvePath, satisfyStoragePolicy, setAcl, setDefaultUri, setDefaultUri, setOwner, setPermission, setQuota, setQuotaByStorageType, setReplication, setStoragePolicy, setTimes, setVerifyChecksum, setWriteChecksum, setXAttr, setXAttr, startLocalOutput, supportsSymlinks, truncate, unsetStoragePolicy

public void initialize(URI name,
                       org.apache.hadoop.conf.Configuration originalConf)
                throws IOException

Initialize a FileSystem.

Overrides: initialize in class org.apache.hadoop.fs.FileSystem
Parameters:
- name - a URI whose authority section names the host, port, etc. for this FileSystem
- originalConf - the configuration to use for the FS. The bucket-specific options are patched over the base ones before any use is made of the config.
Throws:
- IOException

public String getScheme()

Return the protocol scheme for the FileSystem.

Overrides: getScheme in class org.apache.hadoop.fs.FileSystem

public URI getUri()

Return a URI whose scheme and authority identify this FileSystem.

Overrides: getUri in class org.apache.hadoop.fs.FileSystem

public int getDefaultPort()

Return the default port for this FileSystem.

Overrides: getDefaultPort in class org.apache.hadoop.fs.FileSystem
See Also: URI.getPort()

public void checkPath(org.apache.hadoop.fs.Path path)

Check that a Path belongs to this FileSystem.

Overrides: checkPath in class org.apache.hadoop.fs.FileSystem
Parameters:
- path - the path to check
Throws:
- IllegalArgumentException - if there is an FS mismatch

protected URI canonicalizeUri(URI rawUri)

Canonicalize the given URI.

Overrides: canonicalizeUri in class org.apache.hadoop.fs.FileSystem
Parameters:
- rawUri - the URI to be canonicalized

public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f,
                                                   int bufferSize)
                                            throws IOException

Open an FSDataInputStream at the indicated Path.

Overrides: open in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - the file path to open
- bufferSize - the size of the buffer to be used
Throws:
- IOException - on any failure to open the file

public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f,
                                                      org.apache.hadoop.fs.permission.FsPermission permission,
                                                      boolean overwrite,
                                                      int bufferSize,
                                                      short replication,
                                                      long blkSize,
                                                      org.apache.hadoop.util.Progressable progress)
                                               throws IOException
Create an FSDataOutputStream at the indicated Path with write-progress reporting.

Overrides: create in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - the file path to create
- permission - the permission to set
- overwrite - if a file with this name already exists: if true, the file will be overwritten; if false, an error will be thrown
- bufferSize - the size of the buffer to be used
- replication - required block replication for the file
- blkSize - the requested block size
- progress - the progress reporter
Throws:
- IOException - on any failure to create the file
See Also: FileSystem.setPermission(Path, FsPermission)

public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f,
                                                      org.apache.hadoop.fs.permission.FsPermission permission,
                                                      EnumSet<org.apache.hadoop.fs.CreateFlag> flags,
                                                      int bufferSize,
                                                      short replication,
                                                      long blkSize,
                                                      org.apache.hadoop.util.Progressable progress,
                                                      org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt)
                                               throws IOException

Create an FSDataOutputStream at the indicated Path with write-progress reporting.

Overrides: create in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - the file name to create
- permission - the permission to set
- flags - CreateFlags to use for this stream
- bufferSize - the size of the buffer to be used
- replication - required block replication for the file
- blkSize - block size
- progress - the progress reporter
- checksumOpt - checksum option
Throws:
- IOException - IO failure

public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path path,
                                                                  org.apache.hadoop.fs.permission.FsPermission permission,
                                                                  EnumSet<org.apache.hadoop.fs.CreateFlag> flags,
                                                                  int bufferSize,
                                                                  short replication,
                                                                  long blkSize,
                                                                  org.apache.hadoop.util.Progressable progress)
                                                           throws IOException

Open an FSDataOutputStream at the indicated Path with write-progress reporting.

Overrides: createNonRecursive in class org.apache.hadoop.fs.FileSystem
Parameters:
- path - the file path to create
- permission - file permission
- flags - CreateFlags to use for this stream
- bufferSize - the size of the buffer to be used
- replication - required block replication for the file
- blkSize - block size
- progress - the progress reporter
Throws:
- IOException - IO failure

public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f,
                                                      int bufferSize,
                                                      org.apache.hadoop.util.Progressable progress)
                                               throws IOException

Append to an existing file (optional operation).
Overrides: append in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - the existing file to be appended
- bufferSize - the size of the buffer to be used
- progress - for reporting progress if it is not null
Throws:
- IOException - indicating that append is not supported

public boolean exists(org.apache.hadoop.fs.Path f)
                throws IOException

Check if a path exists.

Overrides: exists in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - source path
Throws:
- IOException - IO failure

public boolean rename(org.apache.hadoop.fs.Path src,
                      org.apache.hadoop.fs.Path dst)
                throws IOException

Rename Path src to Path dst.

Overrides: rename in class org.apache.hadoop.fs.FileSystem
Parameters:
- src - path to be renamed
- dst - new path after rename
Throws:
- IOException - on IO failure

public boolean delete(org.apache.hadoop.fs.Path f,
                      boolean recursive)
                throws IOException

Delete a Path. The operation is O(files), with added overheads to enumerate the path. It is also not atomic.

Overrides: delete in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - the path to delete
- recursive - if the path is a directory and set to true, the directory is deleted; otherwise an exception is thrown. For a file, recursive can be set to either true or false.
Throws:
- IOException - due to inability to delete a directory or file

public org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path f)
                                             throws FileNotFoundException,
                                                    IOException

List the statuses of the files/directories in the given path if the path is a directory.

Overrides: listStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - given path
Throws:
- FileNotFoundException - when the path does not exist
- IOException - see specific implementation

public org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path f,
                                                    boolean recursive)
                                             throws FileNotFoundException,
                                                    IOException

This public interface is provided specially for Huawei MRS.
Parameters:
- f - given path
- recursive - whether to iterate over objects in subdirectories
Throws:
- FileNotFoundException - when the path does not exist
- IOException - see specific implementation

public org.apache.hadoop.fs.Path getWorkingDirectory()

Return the current working directory for the given file system.

Overrides: getWorkingDirectory in class org.apache.hadoop.fs.FileSystem

public void setWorkingDirectory(org.apache.hadoop.fs.Path newDir)

Set the current working directory for the file system.

Overrides: setWorkingDirectory in class org.apache.hadoop.fs.FileSystem
Parameters:
- newDir - the new working directory

public boolean mkdirs(org.apache.hadoop.fs.Path path,
                      org.apache.hadoop.fs.permission.FsPermission permission)
               throws IOException,
                      org.apache.hadoop.fs.FileAlreadyExistsException

Make the given path and all non-existent parents into directories, with the semantics of 'mkdir -p'. Existence of the directory hierarchy is not an error.

Overrides: mkdirs in class org.apache.hadoop.fs.FileSystem
Parameters:
- path - path to create
- permission - to apply to path
Throws:
- org.apache.hadoop.fs.FileAlreadyExistsException - there is a file at the path specified
- IOException - other IO problems

public org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f)
                                              throws FileNotFoundException,
                                                     IOException

Return a file status object that represents the path.

Overrides: getFileStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - the path we want information from
Throws:
- FileNotFoundException - when the path does not exist
- IOException - on other problems

public org.apache.hadoop.fs.ContentSummary getContentSummary(org.apache.hadoop.fs.Path f)
                                                      throws FileNotFoundException,
                                                             IOException

Return the ContentSummary of a given Path.

Overrides: getContentSummary in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - path to use
Returns: the ContentSummary of the path
Throws:
- FileNotFoundException - if the path does not resolve
- IOException - IO failure

public void copyFromLocalFile(boolean delSrc,
                              boolean overwrite,
                              org.apache.hadoop.fs.Path src,
                              org.apache.hadoop.fs.Path dst)
                       throws org.apache.hadoop.fs.FileAlreadyExistsException,
                              IOException

Copy the src file on the local disk to the filesystem at the given dst name.

Overrides: copyFromLocalFile in class org.apache.hadoop.fs.FileSystem
Parameters:
- delSrc - whether to delete the src
- overwrite - whether to overwrite an existing file
- src - path
- dst - path
Throws:
- org.apache.hadoop.fs.FileAlreadyExistsException - if the destination file exists and overwrite == false
- IOException - IO problem

public void close()
           throws IOException

Close the filesystem.

Specified by: close in interface Closeable; close in interface AutoCloseable
Overrides: close in class org.apache.hadoop.fs.FileSystem
Throws:
- IOException - IO problem

public String getCanonicalServiceName()

Overrides getCanonicalServiceName and returns null, since delegation tokens are not supported.

Specified by: getCanonicalServiceName in interface org.apache.hadoop.security.token.DelegationTokenIssuer
Overrides: getCanonicalServiceName in class org.apache.hadoop.fs.FileSystem

public long getDefaultBlockSize()

Deprecated. Use getDefaultBlockSize(Path) instead.

Overrides: getDefaultBlockSize in class org.apache.hadoop.fs.FileSystem

public long getDefaultBlockSize(org.apache.hadoop.fs.Path f)

Imitate HDFS to return the number of bytes that large input files should be optimally split into to minimize I/O time.

Overrides: getDefaultBlockSize in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - path of file

public String toString()

Return a string that describes this filesystem instance.

Overrides: toString in class Object
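The listing methods that return a RemoteIterator, such as listFiles, are consumed with hasNext/next. A minimal sketch; the helper name and the idea of summing file lengths are illustrative, not part of this API:

```java
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListObsFiles {
    // Visits every file under root; passing true requests recursion into
    // subdirectories, as described for listFiles below.
    static long totalBytes(FileSystem fs, Path root) throws IOException {
        long total = 0;
        RemoteIterator<LocatedFileStatus> it = fs.listFiles(root, true);
        while (it.hasNext()) {
            LocatedFileStatus status = it.next();
            total += status.getLen();  // file length in bytes
        }
        return total;
    }
}
```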
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listFiles(org.apache.hadoop.fs.Path f,
                                                                                             boolean recursive)
                                                                                      throws FileNotFoundException,
                                                                                             IOException

List the statuses and block locations of the files in the given path. If the path is a directory: when recursive is false, returns the files in the directory; when recursive is true, returns the files in the subtree rooted at the path. If the path is a file, returns the file's status and block locations.

Overrides: listFiles in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - a path
- recursive - if the subdirectories need to be traversed recursively
Throws:
- FileNotFoundException - if the path does not exist
- IOException - if any I/O error occurred

public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path f)
                                                                                              throws FileNotFoundException,
                                                                                                     IOException

List the statuses of the files/directories in the given path if the path is a directory. If a returned status is a file, it contains the file's block locations.

Overrides: listLocatedStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - is the path
Throws:
- FileNotFoundException - if f does not exist
- IOException - if an I/O error occurred

public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path f,
                                                                                                     org.apache.hadoop.fs.PathFilter filter)
                                                                                              throws FileNotFoundException,
                                                                                                     IOException

List a directory.

Overrides: listLocatedStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
- f - a path
- filter - a path filter
Throws:
- FileNotFoundException - if f does not exist
- IOException - if any I/O error occurred

Copyright © 2008–2024 Apache Software Foundation. All rights reserved.