Class FileSystem
- java.lang.Object
  - org.apache.hadoop.conf.Configured
    - org.apache.hadoop.fs.FileSystem
- All Implemented Interfaces:
  Closeable, AutoCloseable, org.apache.hadoop.conf.Configurable, org.apache.hadoop.security.token.DelegationTokenIssuer
@Public @Stable public abstract class FileSystem extends org.apache.hadoop.conf.Configured implements Closeable, org.apache.hadoop.security.token.DelegationTokenIssuer
An abstract base class for a fairly generic filesystem. It may be implemented as a distributed filesystem, or as a "local" one that reflects the locally-connected disk. The local version exists for small Hadoop instances and for testing.

All user code that may potentially use the Hadoop Distributed File System should be written to use a FileSystem object or its successor, FileContext.

The local implementation is LocalFileSystem and the distributed implementation is DistributedFileSystem. There are other implementations for object stores and (outside the Apache Hadoop codebase) third-party filesystems.

Notes
- The behaviour of the filesystem is specified in the Hadoop documentation. However, the normative specification of the behavior of this class is actually HDFS: if HDFS does not behave the way these Javadocs or the specification in the Hadoop documentation define, assume that the documentation is incorrect.
- The term FileSystem refers to an instance of this class.
- The acronym "FS" is used as an abbreviation of FileSystem.
- The term filesystem refers to the distributed/local filesystem itself, rather than the class used to interact with it.
- The term "file" refers to a file in the remote filesystem, rather than instances of java.io.File.
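A minimal usage sketch of the pattern described above: obtain a FileSystem from the configuration, then work through Path and FileStatus. Only methods documented on this page are used; what gets printed depends on whatever the default filesystem holds.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListHomeDirectory {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();      // picks up core-site.xml etc. if present
        FileSystem fs = FileSystem.get(conf);          // the configured default filesystem
        Path home = fs.getHomeDirectory();             // e.g. /user/<name> on HDFS
        if (fs.exists(home)) {
          for (FileStatus status : fs.listStatus(home)) {
            System.out.println(status.getPath() + "\t" + status.getLen());
          }
        }
      }
    }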
-
-
Nested Class Summary
Nested Classes (Modifier and Type / Class / Description)
- static class FileSystem.DirectoryEntries: Represents a batch of directory entries when iteratively listing a directory.
- protected class FileSystem.DirListingIterator<T extends org.apache.hadoop.fs.FileStatus>: Generic iterator for implementing listStatusIterator(Path).
- static class FileSystem.Statistics: Tracks statistics about how many reads, writes, and so forth have been done in a FileSystem.
-
Field Summary
Fields (Modifier and Type / Field / Description)
- static String DEFAULT_FS
- static String FS_DEFAULT_NAME_KEY
- static org.apache.commons.logging.Log LOG: This log is widely used in the org.apache.hadoop.fs code and tests, so must be considered something to only be changed with care.
- static int SHUTDOWN_HOOK_PRIORITY: Priority of the FileSystem shutdown hook: 10.
- protected FileSystem.Statistics statistics: The statistics for this file system.
- static String TRASH_PREFIX: Prefix for trash directory: ".Trash".
- static String USER_HOME_PREFIX
-
Constructor Summary
Constructors (Modifier / Constructor / Description)
- protected FileSystem()
-
Method Summary
All Methods Static Methods Instance Methods Abstract Methods Concrete Methods Deprecated Methods Modifier and Type Method Description voidaccess(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode)Checks if the user can access a path.org.apache.hadoop.fs.FSDataOutputStreamappend(org.apache.hadoop.fs.Path f)Append to an existing file (optional operation).org.apache.hadoop.fs.FSDataOutputStreamappend(org.apache.hadoop.fs.Path f, int bufferSize)Append to an existing file (optional operation).abstract org.apache.hadoop.fs.FSDataOutputStreamappend(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress)Append to an existing file (optional operation).org.apache.hadoop.fs.FSDataOutputStreamBuilderappendFile(org.apache.hadoop.fs.Path path)Create a Builder to append a file.static booleanareSymlinksEnabled()booleancancelDeleteOnExit(org.apache.hadoop.fs.Path f)Cancel the scheduled deletion of the path when the FileSystem is closed.protected URIcanonicalizeUri(URI uri)Canonicalize the given URI.protected voidcheckPath(org.apache.hadoop.fs.Path path)Check that a Path belongs to this FileSystem.static voidclearStatistics()Reset all statistics for all file systems.voidclose()Close this FileSystem instance.static voidcloseAll()Close all cached FileSystem instances.static voidcloseAllForUGI(org.apache.hadoop.security.UserGroupInformation ugi)Close all cached FileSystem instances for a given UGI.voidcompleteLocalOutput(org.apache.hadoop.fs.Path fsOutputFile, org.apache.hadoop.fs.Path tmpLocalFile)Called when we're all done writing to the target.voidconcat(org.apache.hadoop.fs.Path trg, org.apache.hadoop.fs.Path[] psrcs)Concat existing files together.voidcopyFromLocalFile(boolean delSrc, boolean overwrite, org.apache.hadoop.fs.Path[] srcs, org.apache.hadoop.fs.Path dst)The src files are on the local disk.voidcopyFromLocalFile(boolean delSrc, boolean overwrite, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)The src file is on the local disk.voidcopyFromLocalFile(boolean delSrc, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)The src file is on the local disk.voidcopyFromLocalFile(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)The src file is on the local disk.voidcopyToLocalFile(boolean delSrc, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)Copy it a file from a remote filesystem to the local one.voidcopyToLocalFile(boolean delSrc, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst, boolean useRawLocalFileSystem)The src file is under this filesystem, and the dst is on the local disk.voidcopyToLocalFile(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)Copy it a file from the remote filesystem to the local one.static org.apache.hadoop.fs.FSDataOutputStreamcreate(FileSystem fs, org.apache.hadoop.fs.Path file, org.apache.hadoop.fs.permission.FsPermission permission)Create a file with the provided permission.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f)Create an FSDataOutputStream at the indicated Path.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, boolean overwrite)Create an FSDataOutputStream at the indicated Path.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize)Create an FSDataOutputStream at the indicated Path.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize)Create 
an FSDataOutputStream at the indicated Path.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)Create an FSDataOutputStream at the indicated Path with write-progress reporting.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, org.apache.hadoop.util.Progressable progress)Create anFSDataOutputStreamat the indicated Path with write-progress reporting.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, short replication)Create an FSDataOutputStream at the indicated Path.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, short replication, org.apache.hadoop.util.Progressable progress)Create an FSDataOutputStream at the indicated Path with write-progress reporting.abstract org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)Create an FSDataOutputStream at the indicated Path with write-progress reporting.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)Create an FSDataOutputStream at the indicated Path with write-progress reporting.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt)Create an FSDataOutputStream at the indicated Path with a custom checksum option.org.apache.hadoop.fs.FSDataOutputStreamcreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.util.Progressable progress)Create an FSDataOutputStream at the indicated Path with write-progress reporting.org.apache.hadoop.fs.FSDataOutputStreamBuildercreateFile(org.apache.hadoop.fs.Path path)Create a new FSDataOutputStreamBuilder for the file with path.booleancreateNewFile(org.apache.hadoop.fs.Path f)Creates the given Path as a brand-new zero-length file.org.apache.hadoop.fs.FSDataOutputStreamcreateNonRecursive(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)Opens an FSDataOutputStream at the indicated Path with write-progress reporting.org.apache.hadoop.fs.FSDataOutputStreamcreateNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)Opens an FSDataOutputStream at the indicated Path with write-progress reporting.org.apache.hadoop.fs.FSDataOutputStreamcreateNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)Opens an FSDataOutputStream at the indicated Path with write-progress reporting.protected org.apache.hadoop.fs.PathHandlecreatePathHandle(org.apache.hadoop.fs.FileStatus 
stat, org.apache.hadoop.fs.Options.HandleOpt... opt)Hook to implement support forPathHandleoperations.org.apache.hadoop.fs.PathcreateSnapshot(org.apache.hadoop.fs.Path path)Create a snapshot with a default name.org.apache.hadoop.fs.PathcreateSnapshot(org.apache.hadoop.fs.Path path, String snapshotName)Create a snapshot.voidcreateSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent)SeeFileContext.createSymlink(Path, Path, boolean).booleandelete(org.apache.hadoop.fs.Path f)Deprecated.Usedelete(Path, boolean)instead.abstract booleandelete(org.apache.hadoop.fs.Path f, boolean recursive)Delete a file.booleandeleteOnExit(org.apache.hadoop.fs.Path f)Mark a path to be deleted when its FileSystem is closed.voiddeleteSnapshot(org.apache.hadoop.fs.Path path, String snapshotName)Delete a snapshot of a directory.static voidenableSymlinks()booleanexists(org.apache.hadoop.fs.Path f)Check if a path exists.protected org.apache.hadoop.fs.PathfixRelativePart(org.apache.hadoop.fs.Path p)SeeFileContext.fixRelativePart(org.apache.hadoop.fs.Path).static FileSystemget(URI uri, org.apache.hadoop.conf.Configuration conf)Get a FileSystem for this URI's scheme and authority.static FileSystemget(URI uri, org.apache.hadoop.conf.Configuration conf, String user)Get a FileSystem instance based on the uri, the passed in configuration and the user.static FileSystemget(org.apache.hadoop.conf.Configuration conf)Returns the configured FileSystem implementation.org.apache.hadoop.fs.permission.AclStatusgetAclStatus(org.apache.hadoop.fs.Path path)Gets the ACL of a file or directory.org.apache.hadoop.security.token.DelegationTokenIssuer[]getAdditionalTokenIssuers()static List<FileSystem.Statistics>getAllStatistics()Deprecated.Collection<? extends org.apache.hadoop.fs.BlockStoragePolicySpi>getAllStoragePolicies()Retrieve all the storage policies supported by this file system.longgetBlockSize(org.apache.hadoop.fs.Path f)Deprecated.UsegetFileStatus(Path)insteadStringgetCanonicalServiceName()Get a canonical service name for this FileSystem.protected URIgetCanonicalUri()Return a canonicalized form of this FileSystem's URI.FileSystem[]getChildFileSystems()Get all the immediate child FileSystems embedded in this FileSystem.org.apache.hadoop.fs.ContentSummarygetContentSummary(org.apache.hadoop.fs.Path f)Return theContentSummaryof a givenPath.longgetDefaultBlockSize()Deprecated.usegetDefaultBlockSize(Path)insteadlonggetDefaultBlockSize(org.apache.hadoop.fs.Path f)Return the number of bytes that large input files should be optimally be split into to minimize I/O time.protected intgetDefaultPort()Get the default port for this FileSystem.shortgetDefaultReplication()Deprecated.usegetDefaultReplication(Path)insteadshortgetDefaultReplication(org.apache.hadoop.fs.Path path)Get the default replication for a path.static URIgetDefaultUri(org.apache.hadoop.conf.Configuration conf)Get the default FileSystem URI from a configuration.org.apache.hadoop.security.token.Token<?>getDelegationToken(String renewer)Get a new delegation token for this FileSystem.org.apache.hadoop.fs.BlockLocation[]getFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len)Return an array containing hostnames, offset and size of portions of the given file.org.apache.hadoop.fs.BlockLocation[]getFileBlockLocations(org.apache.hadoop.fs.Path p, long start, long len)Return an array containing hostnames, offset and size of portions of the given file.org.apache.hadoop.fs.FileChecksumgetFileChecksum(org.apache.hadoop.fs.Path 
f)Get the checksum of a file, if the FS supports checksums.org.apache.hadoop.fs.FileChecksumgetFileChecksum(org.apache.hadoop.fs.Path f, long length)Get the checksum of a file, from the beginning of the file till the specific length.org.apache.hadoop.fs.FileStatusgetFileLinkStatus(org.apache.hadoop.fs.Path f)SeeFileContext.getFileLinkStatus(Path).abstract org.apache.hadoop.fs.FileStatusgetFileStatus(org.apache.hadoop.fs.Path f)Return a file status object that represents the path.static Class<? extends FileSystem>getFileSystemClass(String scheme, org.apache.hadoop.conf.Configuration conf)Get the FileSystem implementation class of a filesystem.protected static FileSystemgetFSofPath(org.apache.hadoop.fs.Path absOrFqPath, org.apache.hadoop.conf.Configuration conf)static org.apache.hadoop.fs.GlobalStorageStatisticsgetGlobalStorageStatistics()Get the global storage statistics.org.apache.hadoop.fs.PathgetHomeDirectory()Return the current user's home directory in this FileSystem.protected org.apache.hadoop.fs.PathgetInitialWorkingDirectory()Note: with the new FileContext class, getWorkingDirectory() will be removed.longgetLength(org.apache.hadoop.fs.Path f)Deprecated.UsegetFileStatus(Path)instead.org.apache.hadoop.fs.PathgetLinkTarget(org.apache.hadoop.fs.Path f)SeeFileContext.getLinkTarget(Path).static org.apache.hadoop.fs.LocalFileSystemgetLocal(org.apache.hadoop.conf.Configuration conf)Get the local FileSystem.StringgetName()Deprecated.callgetUri()instead.static FileSystemgetNamed(String name, org.apache.hadoop.conf.Configuration conf)Deprecated.callget(URI, Configuration)instead.org.apache.hadoop.fs.PathHandlegetPathHandle(org.apache.hadoop.fs.FileStatus stat, org.apache.hadoop.fs.Options.HandleOpt... opt)Create a durable, serializable handle to the referent of the given entity.org.apache.hadoop.fs.QuotaUsagegetQuotaUsage(org.apache.hadoop.fs.Path f)Return theQuotaUsageof a givenPath.shortgetReplication(org.apache.hadoop.fs.Path src)Deprecated.UsegetFileStatus(Path)insteadStringgetScheme()Return the protocol scheme for this FileSystem.org.apache.hadoop.fs.FsServerDefaultsgetServerDefaults()Deprecated.usegetServerDefaults(Path)insteadorg.apache.hadoop.fs.FsServerDefaultsgetServerDefaults(org.apache.hadoop.fs.Path p)Return a set of server default configuration values.static Map<String,FileSystem.Statistics>getStatistics()Deprecated.static FileSystem.StatisticsgetStatistics(String scheme, Class<? 
extends FileSystem> cls)Deprecated.org.apache.hadoop.fs.FsStatusgetStatus()Returns a status object describing the use and capacity of the filesystem.org.apache.hadoop.fs.FsStatusgetStatus(org.apache.hadoop.fs.Path p)Returns a status object describing the use and capacity of the filesystem.org.apache.hadoop.fs.BlockStoragePolicySpigetStoragePolicy(org.apache.hadoop.fs.Path src)Query the effective storage policy ID for the given file or directory.org.apache.hadoop.fs.StorageStatisticsgetStorageStatistics()Get the StorageStatistics for this FileSystem object.org.apache.hadoop.fs.PathgetTrashRoot(org.apache.hadoop.fs.Path path)Get the root directory of Trash for current user when the path specified is deleted.Collection<org.apache.hadoop.fs.FileStatus>getTrashRoots(boolean allUsers)Get all the trash roots for current user or all users.abstract URIgetUri()Returns a URI which identifies this FileSystem.longgetUsed()Return the total size of all files in the filesystem.longgetUsed(org.apache.hadoop.fs.Path path)Return the total size of all files from a specified path.abstract org.apache.hadoop.fs.PathgetWorkingDirectory()Get the current working directory for the given FileSystembyte[]getXAttr(org.apache.hadoop.fs.Path path, String name)Get an xattr name and value for a file or directory.Map<String,byte[]>getXAttrs(org.apache.hadoop.fs.Path path)Get all of the xattr name/value pairs for a file or directory.Map<String,byte[]>getXAttrs(org.apache.hadoop.fs.Path path, List<String> names)Get all of the xattrs name/value pairs for a file or directory.org.apache.hadoop.fs.FileStatus[]globStatus(org.apache.hadoop.fs.Path pathPattern)Return all the files that match filePattern and are not checksum files.org.apache.hadoop.fs.FileStatus[]globStatus(org.apache.hadoop.fs.Path pathPattern, org.apache.hadoop.fs.PathFilter filter)Return an array ofFileStatusobjects whose path names matchpathPatternand is accepted by the user-supplied path filter.voidinitialize(URI name, org.apache.hadoop.conf.Configuration conf)Initialize a FileSystem.booleanisDirectory(org.apache.hadoop.fs.Path f)Deprecated.UsegetFileStatus(Path)insteadbooleanisFile(org.apache.hadoop.fs.Path f)Deprecated.UsegetFileStatus(Path)insteadorg.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.Path>listCorruptFileBlocks(org.apache.hadoop.fs.Path path)List corrupted file blocks.org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus>listFiles(org.apache.hadoop.fs.Path f, boolean recursive)List the statuses and block locations of the files in the given path.org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus>listLocatedStatus(org.apache.hadoop.fs.Path f)List the statuses of the files/directories in the given path if the path is a directory.protected org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus>listLocatedStatus(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.PathFilter filter)List a directory.abstract org.apache.hadoop.fs.FileStatus[]listStatus(org.apache.hadoop.fs.Path f)List the statuses of the files/directories in the given path if the path is a directory.org.apache.hadoop.fs.FileStatus[]listStatus(org.apache.hadoop.fs.Path[] files)Filter files/directories in the given list of paths using default path filter.org.apache.hadoop.fs.FileStatus[]listStatus(org.apache.hadoop.fs.Path[] files, org.apache.hadoop.fs.PathFilter filter)Filter files/directories in the given list of paths using user-supplied path filter.org.apache.hadoop.fs.FileStatus[]listStatus(org.apache.hadoop.fs.Path 
f, org.apache.hadoop.fs.PathFilter filter)Filter files/directories in the given path using the user-supplied path filter.protected FileSystem.DirectoryEntrieslistStatusBatch(org.apache.hadoop.fs.Path f, byte[] token)Given an opaque iteration token, return the next batch of entries in a directory.org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.FileStatus>listStatusIterator(org.apache.hadoop.fs.Path p)Returns a remote iterator so that followup calls are made on demand while consuming the entries.List<String>listXAttrs(org.apache.hadoop.fs.Path path)Get all of the xattr names for a file or directory.org.apache.hadoop.fs.PathmakeQualified(org.apache.hadoop.fs.Path path)Qualify a path to one which uses this FileSystem and, if relative, made absolute.static booleanmkdirs(FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.permission.FsPermission permission)Create a directory with the provided permission.booleanmkdirs(org.apache.hadoop.fs.Path f)Callmkdirs(Path, FsPermission)with default permission.abstract booleanmkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission)Make the given file and all non-existent parents into directories.voidmodifyAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)Modifies ACL entries of files and directories.voidmoveFromLocalFile(org.apache.hadoop.fs.Path[] srcs, org.apache.hadoop.fs.Path dst)The src files is on the local disk.voidmoveFromLocalFile(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)The src file is on the local disk.voidmoveToLocalFile(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)Copy a file to the local filesystem, then delete it from the remote filesystem (if successfully copied).static FileSystemnewInstance(URI uri, org.apache.hadoop.conf.Configuration config)Returns the FileSystem for this URI's scheme and authority.static FileSystemnewInstance(URI uri, org.apache.hadoop.conf.Configuration conf, String user)Returns the FileSystem for this URI's scheme and authority and the given user.static FileSystemnewInstance(org.apache.hadoop.conf.Configuration conf)Returns a unique configured FileSystem implementation for the default filesystem of the supplied configuration.static org.apache.hadoop.fs.LocalFileSystemnewInstanceLocal(org.apache.hadoop.conf.Configuration conf)Get a unique local FileSystem object.org.apache.hadoop.fs.FSDataInputStreamopen(org.apache.hadoop.fs.Path f)Opens an FSDataInputStream at the indicated Path.org.apache.hadoop.fs.FSDataInputStreamopen(org.apache.hadoop.fs.PathHandle fd)Open an FSDataInputStream matching the PathHandle instance.org.apache.hadoop.fs.FSDataInputStreamopen(org.apache.hadoop.fs.PathHandle fd, int bufferSize)Open an FSDataInputStream matching the PathHandle instance.abstract org.apache.hadoop.fs.FSDataInputStreamopen(org.apache.hadoop.fs.Path f, int bufferSize)Opens an FSDataInputStream at the indicated Path.protected org.apache.hadoop.fs.FSDataOutputStreamprimitiveCreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt)Deprecated.protected booleanprimitiveMkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission)Deprecated.protected voidprimitiveMkdir(org.apache.hadoop.fs.Path f, 
org.apache.hadoop.fs.permission.FsPermission absolutePermission, boolean createParent)Deprecated.static voidprintStatistics()Print all statistics for all file systems toSystem.outprotected voidprocessDeleteOnExit()Delete all paths that were marked as delete-on-exit.voidremoveAcl(org.apache.hadoop.fs.Path path)Removes all but the base ACL entries of files and directories.voidremoveAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)Removes ACL entries from files and directories.voidremoveDefaultAcl(org.apache.hadoop.fs.Path path)Removes all default ACL entries from files and directories.voidremoveXAttr(org.apache.hadoop.fs.Path path, String name)Remove an xattr of a file or directory.abstract booleanrename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)Renames Path src to Path dst.protected voidrename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst, org.apache.hadoop.fs.Options.Rename... options)Deprecated.voidrenameSnapshot(org.apache.hadoop.fs.Path path, String snapshotOldName, String snapshotNewName)Rename a snapshot.protected org.apache.hadoop.fs.PathresolveLink(org.apache.hadoop.fs.Path f)SeeAbstractFileSystem.getLinkTarget(Path).org.apache.hadoop.fs.PathresolvePath(org.apache.hadoop.fs.Path p)Return the fully-qualified path of path, resolving the path through any symlinks or mount point.voidsetAcl(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)Fully replaces ACL of files and directories, discarding all existing entries.static voidsetDefaultUri(org.apache.hadoop.conf.Configuration conf, String uri)Set the default FileSystem URI in a configuration.static voidsetDefaultUri(org.apache.hadoop.conf.Configuration conf, URI uri)Set the default FileSystem URI in a configuration.voidsetOwner(org.apache.hadoop.fs.Path p, String username, String groupname)Set owner of a path (i.e.voidsetPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission)Set permission of a path.booleansetReplication(org.apache.hadoop.fs.Path src, short replication)Set the replication for an existing file.voidsetStoragePolicy(org.apache.hadoop.fs.Path src, String policyName)Set the storage policy for a given file or directory.voidsetTimes(org.apache.hadoop.fs.Path p, long mtime, long atime)Set access time of a file.voidsetVerifyChecksum(boolean verifyChecksum)Set the verify checksum flag.abstract voidsetWorkingDirectory(org.apache.hadoop.fs.Path new_dir)Set the current working directory for the given FileSystem.voidsetWriteChecksum(boolean writeChecksum)Set the write checksum flag.voidsetXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value)Set an xattr of a file or directory.voidsetXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag)Set an xattr of a file or directory.org.apache.hadoop.fs.PathstartLocalOutput(org.apache.hadoop.fs.Path fsOutputFile, org.apache.hadoop.fs.Path tmpLocalFile)Returns a local file that the user can write output to.booleansupportsSymlinks()SeeAbstractFileSystem.supportsSymlinks().booleantruncate(org.apache.hadoop.fs.Path f, long newLength)Truncate the file in the indicated path to the indicated size.voidunsetStoragePolicy(org.apache.hadoop.fs.Path src)Unset the storage policy set for a given file or directory.
-
-
-
Field Detail
-
FS_DEFAULT_NAME_KEY
public static final String FS_DEFAULT_NAME_KEY
- See Also:
- Constant Field Values
-
DEFAULT_FS
public static final String DEFAULT_FS
- See Also:
- Constant Field Values
-
LOG
@Private public static final org.apache.commons.logging.Log LOG
This log is widely used in the org.apache.hadoop.fs code and tests, so must be considered something to only be changed with care.
-
SHUTDOWN_HOOK_PRIORITY
public static final int SHUTDOWN_HOOK_PRIORITY
Priority of the FileSystem shutdown hook: 10.
- See Also:
- Constant Field Values
-
TRASH_PREFIX
public static final String TRASH_PREFIX
Prefix for trash directory: ".Trash".
- See Also:
- Constant Field Values
-
USER_HOME_PREFIX
public static final String USER_HOME_PREFIX
- See Also:
- Constant Field Values
-
statistics
protected FileSystem.Statistics statistics
The statistics for this file system.
-
-
Method Detail
-
get
public static FileSystem get(URI uri, org.apache.hadoop.conf.Configuration conf, String user) throws IOException, InterruptedException
Get a FileSystem instance based on the uri, the passed in configuration and the user.
- Parameters:
  uri - of the filesystem
  conf - the configuration to use
  user - to perform the get as
- Returns:
  - the filesystem instance
- Throws:
  IOException - failure to load
  InterruptedException - If the UGI.doAs() call was somehow interrupted.
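A hedged fragment of this user-scoped variant; the HDFS URI and the user name "alice" are assumptions for illustration, not defaults:

    Configuration conf = new Configuration();
    // Internally runs the lookup as "alice" via UserGroupInformation.doAs().
    FileSystem fs = FileSystem.get(java.net.URI.create("hdfs://namenode:8020/"), conf, "alice");
    boolean hasHome = fs.exists(new Path("/user/alice"));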
-
get
public static FileSystem get(org.apache.hadoop.conf.Configuration conf) throws IOException
Returns the configured FileSystem implementation.
- Parameters:
  conf - the configuration to use
- Throws:
  IOException
-
getDefaultUri
public static URI getDefaultUri(org.apache.hadoop.conf.Configuration conf)
Get the default FileSystem URI from a configuration.
- Parameters:
  conf - the configuration to use
- Returns:
  - the uri of the default filesystem
-
setDefaultUri
public static void setDefaultUri(org.apache.hadoop.conf.Configuration conf, URI uri)
Set the default FileSystem URI in a configuration.
- Parameters:
  conf - the configuration to alter
  uri - the new default filesystem uri
-
setDefaultUri
public static void setDefaultUri(org.apache.hadoop.conf.Configuration conf, String uri)
Set the default FileSystem URI in a configuration.
- Parameters:
  conf - the configuration to alter
  uri - the new default filesystem uri
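A small fragment showing the round trip between setDefaultUri and getDefaultUri; the hdfs URI is an assumption, and the call is the programmatic counterpart of setting the FS_DEFAULT_NAME_KEY property in the configuration:

    Configuration conf = new Configuration();
    FileSystem.setDefaultUri(conf, "hdfs://namenode:8020");   // store the default FS in conf
    java.net.URI defaultUri = FileSystem.getDefaultUri(conf); // -> hdfs://namenode:8020
    FileSystem fs = FileSystem.get(conf);                     // resolves against that URI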
-
initialize
public void initialize(URI name, org.apache.hadoop.conf.Configuration conf) throws IOException
Initialize a FileSystem. Called after the new FileSystem instance is constructed, and before it is ready for use. FileSystem implementations overriding this method MUST forward it to their superclass, though the order in which it is done, and whether to alter the configuration before the invocation are options of the subclass.
- Parameters:
  name - a URI whose authority section names the host, port, etc. for this FileSystem
  conf - the configuration
- Throws:
  IOException - on any failure to initialize this instance.
  IllegalArgumentException - if the URI is considered invalid.
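To illustrate the "MUST forward to the superclass" rule, a hypothetical subclass; it extends the concrete RawLocalFileSystem so the sketch compiles, and the scheme name and configuration key are invented for the example:

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.RawLocalFileSystem;

    public class AuditedLocalFileSystem extends RawLocalFileSystem {
      private boolean auditEnabled;

      @Override
      public void initialize(URI name, Configuration conf) throws IOException {
        super.initialize(name, conf);                               // forward to the superclass
        auditEnabled = conf.getBoolean("example.fs.audit", false);  // hypothetical key
      }

      @Override
      public String getScheme() {
        return "auditlocal";                                        // hypothetical scheme
      }
    }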
-
getScheme
public String getScheme()
Return the protocol scheme for this FileSystem. This implementation throws an UnsupportedOperationException.
- Returns:
  - the protocol scheme for this FileSystem.
- Throws:
  UnsupportedOperationException - if the operation is unsupported (default).
-
getUri
public abstract URI getUri()
Returns a URI which identifies this FileSystem.- Returns:
- the URI of this filesystem.
-
getCanonicalUri
protected URI getCanonicalUri()
Return a canonicalized form of this FileSystem's URI. The default implementation simply calls canonicalizeUri(URI) on the filesystem's own URI, so subclasses typically only need to implement that method.
- See Also:
  canonicalizeUri(URI)
-
canonicalizeUri
protected URI canonicalizeUri(URI uri)
Canonicalize the given URI. This is implementation-dependent, and may for example consist of canonicalizing the hostname using DNS and adding the default port if not specified. The default implementation simply fills in the default port if not specified and if getDefaultPort() returns a default port.
- Returns:
- URI
- See Also:
NetUtils.getCanonicalUri(URI, int)
-
getDefaultPort
protected int getDefaultPort()
Get the default port for this FileSystem.- Returns:
- the default port or 0 if there isn't one
-
getFSofPath
protected static FileSystem getFSofPath(org.apache.hadoop.fs.Path absOrFqPath, org.apache.hadoop.conf.Configuration conf) throws org.apache.hadoop.fs.UnsupportedFileSystemException, IOException
- Throws:
  org.apache.hadoop.fs.UnsupportedFileSystemException
  IOException
-
getCanonicalServiceName
@Public @Evolving public String getCanonicalServiceName()
Get a canonical service name for this FileSystem. The token cache is the only user of the canonical service name, and uses it to look up this FileSystem's service tokens. If the file system provides a token of its own then it must have a canonical name, otherwise the canonical name can be null. Default implementation: if the FileSystem has child file systems (such as an embedded file system) then it is assumed that the FS has no tokens of its own and hence returns a null name; otherwise a service name is built using the URI and port.
- Specified by:
  getCanonicalServiceName in interface org.apache.hadoop.security.token.DelegationTokenIssuer
- Returns:
- a service string that uniquely identifies this file system, null if the filesystem does not implement tokens
- See Also:
SecurityUtil.buildDTServiceName(URI, int)
-
getName
@Deprecated public String getName()
Deprecated. Call getUri() instead.
-
getNamed
@Deprecated public static FileSystem getNamed(String name, org.apache.hadoop.conf.Configuration conf) throws IOException
Deprecated. Call get(URI, Configuration) instead.
- Throws:
IOException
-
getLocal
public static org.apache.hadoop.fs.LocalFileSystem getLocal(org.apache.hadoop.conf.Configuration conf) throws IOException
Get the local FileSystem.
- Parameters:
  conf - the configuration to configure the FileSystem with if it is newly instantiated.
- Returns:
- a LocalFileSystem
- Throws:
IOException- if somehow the local FS cannot be instantiated.
-
get
public static FileSystem get(URI uri, org.apache.hadoop.conf.Configuration conf) throws IOException
Get a FileSystem for this URI's scheme and authority.
- If the configuration has the property "fs.$SCHEME.impl.disable.cache" set to true, a new instance will be created, initialized with the supplied URI and configuration, then returned without being cached (see the sketch below).
- If there is a cached FS instance matching the same URI, it will be returned.
- Otherwise: a new FS instance will be created, initialized with the configuration and URI, cached and returned to the caller.
- Throws:
  IOException - if the FileSystem cannot be instantiated.
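A fragment illustrating the caching rules above; the scheme "hdfs" and the URI are assumptions, and the property name follows the "fs.$SCHEME.impl.disable.cache" pattern quoted in this entry:

    Configuration conf = new Configuration();
    java.net.URI uri = java.net.URI.create("hdfs://namenode:8020/");

    FileSystem cachedA = FileSystem.get(uri, conf);
    FileSystem cachedB = FileSystem.get(uri, conf);    // same cached instance as cachedA

    conf.setBoolean("fs.hdfs.impl.disable.cache", true);
    FileSystem uncached = FileSystem.get(uri, conf);   // fresh instance, not cached
    uncached.close();                                  // the caller owns uncached instances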
-
newInstance
public static FileSystem newInstance(URI uri, org.apache.hadoop.conf.Configuration conf, String user) throws IOException, InterruptedException
Returns the FileSystem for this URI's scheme and authority and the given user. Internally invokes newInstance(URI, Configuration).
- Parameters:
  uri - of the filesystem
  conf - the configuration to use
  user - to perform the get as
- Returns:
  - filesystem instance
- Throws:
  IOException - if the FileSystem cannot be instantiated.
  InterruptedException - If the UGI.doAs() call was somehow interrupted.
-
newInstance
public static FileSystem newInstance(URI uri, org.apache.hadoop.conf.Configuration config) throws IOException
Returns the FileSystem for this URI's scheme and authority. The entire URI is passed to the FileSystem instance's initialize method. This always returns a new FileSystem object.
- Parameters:
  uri - FS URI
  config - configuration to use
- Returns:
- the new FS instance
- Throws:
IOException- FS creation or initialization failure.
-
newInstance
public static FileSystem newInstance(org.apache.hadoop.conf.Configuration conf) throws IOException
Returns a unique configured FileSystem implementation for the default filesystem of the supplied configuration. This always returns a new FileSystem object.
- Parameters:
  conf - the configuration to use
- Returns:
- the new FS instance
- Throws:
IOException- FS creation or initialization failure.
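A fragment contrasting the cached get() with newInstance(): since newInstance() always returns a new object, the caller is responsible for closing it (the scratch path is illustrative):

    Configuration conf = new Configuration();
    FileSystem shared = FileSystem.get(conf);                    // cached; shared with other callers
    try (FileSystem privateFs = FileSystem.newInstance(conf)) {  // brand-new, unshared instance
      privateFs.mkdirs(new Path("/tmp/scratch"));
    }                                                             // safe to close: nobody else holds it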
-
newInstanceLocal
public static org.apache.hadoop.fs.LocalFileSystem newInstanceLocal(org.apache.hadoop.conf.Configuration conf) throws IOException
Get a unique local FileSystem object.
- Parameters:
  conf - the configuration to configure the FileSystem with
- Returns:
- a new LocalFileSystem object.
- Throws:
IOException- FS creation or initialization failure.
-
closeAll
public static void closeAll() throws IOException
Close all cached FileSystem instances. After this operation, they may not be used in any operations.
- Throws:
  IOException - a problem arose closing one or more filesystems.
-
closeAllForUGI
public static void closeAllForUGI(org.apache.hadoop.security.UserGroupInformation ugi) throws IOException
Close all cached FileSystem instances for a given UGI. Be sure those filesystems are not used anymore.
- Parameters:
  ugi - user group info to close
- Throws:
  IOException - a problem arose closing one or more filesystems.
-
makeQualified
public org.apache.hadoop.fs.Path makeQualified(org.apache.hadoop.fs.Path path)
Qualify a path to one which uses this FileSystem and, if relative, make it absolute.
- Parameters:
  path - to qualify.
- Returns:
  - this path if it contains a scheme and authority and is absolute, or a new path that includes a path and authority and is fully qualified
- Throws:
  IllegalArgumentException - if the path has a scheme/URI different from this FileSystem.
- See Also:
Path.makeQualified(URI, Path)
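A fragment showing the effect on a relative path; the qualified result in the comment assumes an HDFS default filesystem and a working directory of /user/alice:

    FileSystem fs = FileSystem.get(new Configuration());
    Path relative = new Path("reports/2024.csv");        // no scheme, not absolute
    Path qualified = fs.makeQualified(relative);
    // e.g. hdfs://namenode:8020/user/alice/reports/2024.csv
    System.out.println(qualified);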
-
getDelegationToken
@Private public org.apache.hadoop.security.token.Token<?> getDelegationToken(String renewer) throws IOException
Get a new delegation token for this FileSystem. This is an internal method that should have been declared protected but wasn't historically. Callers should use DelegationTokenIssuer.addDelegationTokens(String, Credentials).
- Specified by:
  getDelegationToken in interface org.apache.hadoop.security.token.DelegationTokenIssuer
- Parameters:
  renewer - the account name that is allowed to renew the token.
- Returns:
- a new delegation token or null if the FS does not support tokens.
- Throws:
IOException- on any problem obtaining a token
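A hedged fragment of the recommended entry point named above; the renewer principal is an assumption, and Credentials/Token come from org.apache.hadoop.security and org.apache.hadoop.security.token:

    FileSystem fs = FileSystem.get(new Configuration());
    Credentials credentials = new Credentials();
    // Collects tokens from this FS and any child filesystems into `credentials`.
    Token<?>[] issued = fs.addDelegationTokens("yarn/renewer@EXAMPLE.COM", credentials);
    System.out.println("tokens issued: " + (issued == null ? 0 : issued.length));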
-
getChildFileSystems
@LimitedPrivate("HDFS") public FileSystem[] getChildFileSystems()Get all the immediate child FileSystems embedded in this FileSystem. It does not recurse and get grand children. If a FileSystem has multiple child FileSystems, then it must return a unique list of those FileSystems. Default is to return null to signify no children.- Returns:
- FileSystems that are direct children of this FileSystem, or null for "no children"
-
getAdditionalTokenIssuers
@Private public org.apache.hadoop.security.token.DelegationTokenIssuer[] getAdditionalTokenIssuers() throws IOException
- Specified by:
  getAdditionalTokenIssuers in interface org.apache.hadoop.security.token.DelegationTokenIssuer
- Throws:
IOException
-
create
public static org.apache.hadoop.fs.FSDataOutputStream create(FileSystem fs, org.apache.hadoop.fs.Path file, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
Create a file with the provided permission. The permission of the file is set to be the provided permission as in setPermission, not permission&~umask. The HDFS implementation is implemented using two RPCs. It is understood that this is inefficient, but the implementation is thread-safe. The other option is to change the value of umask in configuration to be 0, but it is not thread-safe.
- Parameters:
  fs - FileSystem
  file - the name of the file to be created
  permission - the permission of the file
- Returns:
- an output stream
- Throws:
IOException- IO failure
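A fragment using this helper with an explicit FsPermission; the path and mode are illustrative, FsPermission and FsAction are from org.apache.hadoop.fs.permission, and FSDataOutputStream from org.apache.hadoop.fs:

    FileSystem fs = FileSystem.get(new Configuration());
    FsPermission perm = new FsPermission(FsAction.ALL, FsAction.READ, FsAction.NONE);  // rwxr-----
    try (FSDataOutputStream out = FileSystem.create(fs, new Path("/data/report.txt"), perm)) {
      out.writeBytes("created with an explicit permission; the umask is not applied\n");
    }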
-
mkdirs
public static boolean mkdirs(FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
Create a directory with the provided permission. The permission of the directory is set to be the provided permission as in setPermission, not permission&~umask.
- Parameters:
  fs - FileSystem handle
  dir - the name of the directory to be created
  permission - the permission of the directory
- Returns:
- true if the directory creation succeeds; false otherwise
- Throws:
IOException- A problem creating the directories.- See Also:
create(FileSystem, Path, FsPermission)
-
checkPath
protected void checkPath(org.apache.hadoop.fs.Path path)
Check that a Path belongs to this FileSystem. The base implementation performs case-insensitive equality checks of the URIs' schemes and authorities. Subclasses may implement slightly different checks.
- Parameters:
  path - to check
- Throws:
IllegalArgumentException- if the path is not considered to be part of this FileSystem.
-
getFileBlockLocations
public org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len) throws IOException
Return an array containing hostnames, offset and size of portions of the given file. For a nonexistent file or regions, null is returned.
  if f == null : result = null
  elif f.getLen() <= start: result = []
  else result = [ locations(FS, b) for b in blocks(FS, p, s, s+l)]
This call is most helpful with a distributed filesystem where the hostnames of machines that contain blocks of the given file can be determined. The default implementation returns an array containing one element:
  BlockLocation( { "localhost:9866" }, { "localhost" }, 0, file.getLen())
In HDFS, if the file is three-replicated, the returned array contains elements like:
  BlockLocation(offset: 0, length: BLOCK_SIZE, hosts: {"host1:9866", "host2:9866", "host3:9866"})
  BlockLocation(offset: BLOCK_SIZE, length: BLOCK_SIZE, hosts: {"host2:9866", "host3:9866", "host4:9866"})
And if a file is erasure-coded, the returned BlockLocations are logical block groups. Suppose we have an RS_3_2 coded file (3 data units and 2 parity units).
1. If the file size is less than one stripe size, say 2 * CELL_SIZE, then there will be one BlockLocation returned, with 0 offset, actual file size and 4 hosts (2 data blocks and 2 parity blocks) hosting the actual blocks.
2. If the file size is less than one group size but greater than one stripe size, then there will be one BlockLocation returned, with 0 offset, actual file size with 5 hosts (3 data blocks and 2 parity blocks) hosting the actual blocks.
3. If the file size is greater than one group size, 3 * BLOCK_SIZE + 123 for example, then the result will be like:
  BlockLocation(offset: 0, length: 3 * BLOCK_SIZE, hosts: {"host1:9866", "host2:9866", "host3:9866", "host4:9866", "host5:9866"})
  BlockLocation(offset: 3 * BLOCK_SIZE, length: 123, hosts: {"host1:9866", "host4:9866", "host5:9866"})
- Parameters:
  file - FileStatus to get data from
  start - offset into the given file
  len - length for which to get locations for
- Throws:
IOException- IO failure
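A fragment that walks the returned locations for a whole file (the path is illustrative; BlockLocation is org.apache.hadoop.fs.BlockLocation):

    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus status = fs.getFileStatus(new Path("/data/events.log"));
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.println("offset=" + block.getOffset()
          + " length=" + block.getLength()
          + " hosts=" + String.join(",", block.getHosts()));
    }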
-
getFileBlockLocations
public org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.Path p, long start, long len) throws IOException
Return an array containing hostnames, offset and size of portions of the given file. For a nonexistent file or regions, null is returned. This call is most helpful with location-aware distributed filesystems, where it returns hostnames of machines that contain the given file. A FileSystem will normally return the equivalent result of passing the FileStatus of the path to getFileBlockLocations(FileStatus, long, long).
- Parameters:
  p - path is used to identify an FS since an FS could have another FS that it could be delegating the call to
  start - offset into the given file
  len - length for which to get locations for
- Throws:
  FileNotFoundException - when the path does not exist
  IOException - IO failure
-
getServerDefaults
@Deprecated public org.apache.hadoop.fs.FsServerDefaults getServerDefaults() throws IOException
Deprecated. Use getServerDefaults(Path) instead.
Return a set of server default configuration values.
- Returns:
- server default configuration values
- Throws:
IOException- IO failure
-
getServerDefaults
public org.apache.hadoop.fs.FsServerDefaults getServerDefaults(org.apache.hadoop.fs.Path p) throws IOException
Return a set of server default configuration values.
- Parameters:
  p - path is used to identify an FS since an FS could have another FS that it could be delegating the call to
- Returns:
- server default configuration values
- Throws:
IOException- IO failure
-
resolvePath
public org.apache.hadoop.fs.Path resolvePath(org.apache.hadoop.fs.Path p) throws IOException
Return the fully-qualified path of path, resolving the path through any symlinks or mount point.
- Parameters:
  p - path to be resolved
- Returns:
- fully qualified path
- Throws:
  FileNotFoundException - if the path is not present
  IOException - for any other error
-
open
public abstract org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f, int bufferSize) throws IOException
Opens an FSDataInputStream at the indicated Path.
- Parameters:
  f - the file name to open
  bufferSize - the size of the buffer to be used.
- Throws:
IOException- IO failure
-
open
public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f) throws IOException
Opens an FSDataInputStream at the indicated Path.
- Parameters:
  f - the file to open
- Throws:
IOException- IO failure
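A fragment reading a small file in full through the returned stream (the path is illustrative; FSDataInputStream is org.apache.hadoop.fs.FSDataInputStream):

    FileSystem fs = FileSystem.get(new Configuration());
    Path src = new Path("/data/config.json");
    try (FSDataInputStream in = fs.open(src)) {
      byte[] bytes = new byte[(int) fs.getFileStatus(src).getLen()];
      in.readFully(0, bytes);                          // positioned read of the whole file
      System.out.println(new String(bytes, java.nio.charset.StandardCharsets.UTF_8));
    }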
-
open
public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.PathHandle fd) throws IOException
Open an FSDataInputStream matching the PathHandle instance. The implementation may encode metadata in PathHandle to address the resource directly and verify that the resource referenced satisfies constraints specified at its construction.
- Parameters:
  fd - PathHandle object returned by the FS authority.
- Throws:
  org.apache.hadoop.fs.InvalidPathHandleException - If PathHandle constraints are not satisfied
  IOException - IO failure
  UnsupportedOperationException - If open(PathHandle, int) not overridden by subclass
-
open
public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.PathHandle fd, int bufferSize) throws IOException
Open an FSDataInputStream matching the PathHandle instance. The implementation may encode metadata in PathHandle to address the resource directly and verify that the resource referenced satisfies constraints specified at its construction.
- Parameters:
  fd - PathHandle object returned by the FS authority.
  bufferSize - the size of the buffer to use
- Throws:
  org.apache.hadoop.fs.InvalidPathHandleException - If PathHandle constraints are not satisfied
  IOException - IO failure
  UnsupportedOperationException - If not overridden by subclass
-
getPathHandle
public final org.apache.hadoop.fs.PathHandle getPathHandle(org.apache.hadoop.fs.FileStatus stat, org.apache.hadoop.fs.Options.HandleOpt... opt)
Create a durable, serializable handle to the referent of the given entity.
- Parameters:
  stat - Referent in the target FileSystem
  opt - If absent, assume Options.HandleOpt.path().
- Throws:
  IllegalArgumentException - If the FileStatus does not belong to this FileSystem
  UnsupportedOperationException - If createPathHandle(org.apache.hadoop.fs.FileStatus, org.apache.hadoop.fs.Options.HandleOpt...) not overridden by subclass.
  UnsupportedOperationException - If this FileSystem cannot enforce the specified constraints.
-
createPathHandle
protected org.apache.hadoop.fs.PathHandle createPathHandle(org.apache.hadoop.fs.FileStatus stat, org.apache.hadoop.fs.Options.HandleOpt... opt)
Hook to implement support for PathHandle operations.
- Parameters:
  stat - Referent in the target FileSystem
  opt - Constraints that determine the validity of the PathHandle reference.
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f) throws IOException
Create an FSDataOutputStream at the indicated Path. Files are overwritten by default.
- Parameters:
  f - the file to create
- Throws:
IOException- IO failure
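A fragment writing a small file (the path and content are illustrative; FSDataOutputStream is org.apache.hadoop.fs.FSDataOutputStream):

    FileSystem fs = FileSystem.get(new Configuration());
    Path dst = new Path("/data/greeting.txt");
    try (FSDataOutputStream out = fs.create(dst)) {    // overwrites an existing file by default
      out.write("hello, hadoop\n".getBytes(java.nio.charset.StandardCharsets.UTF_8));
    }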
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite) throws IOExceptionCreate an FSDataOutputStream at the indicated Path.- Parameters:
f- the file to createoverwrite- if a file with this name already exists, then if true, the file will be overwritten, and if false an exception will be thrown.- Throws:
IOException- IO failure
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.util.Progressable progress) throws IOExceptionCreate an FSDataOutputStream at the indicated Path with write-progress reporting. Files are overwritten by default.- Parameters:
f- the file to createprogress- to report progress- Throws:
IOException- IO failure
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, short replication) throws IOExceptionCreate an FSDataOutputStream at the indicated Path. Files are overwritten by default.- Parameters:
f- the file to createreplication- the replication factor- Throws:
IOException- IO failure
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, short replication, org.apache.hadoop.util.Progressable progress) throws IOExceptionCreate an FSDataOutputStream at the indicated Path with write-progress reporting. Files are overwritten by default.- Parameters:
f- the file to createreplication- the replication factorprogress- to report progress- Throws:
IOException- IO failure
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize) throws IOExceptionCreate an FSDataOutputStream at the indicated Path.- Parameters:
f- the file to createoverwrite- if a path with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.bufferSize- the size of the buffer to be used.- Throws:
IOException- IO failure
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, org.apache.hadoop.util.Progressable progress) throws IOExceptionCreate anFSDataOutputStreamat the indicated Path with write-progress reporting. The frequency of callbacks is implementation-specific; it may be "none".- Parameters:
f- the path of the file to openoverwrite- if a file with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.bufferSize- the size of the buffer to be used.- Throws:
IOException- IO failure
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize) throws IOExceptionCreate an FSDataOutputStream at the indicated Path.- Parameters:
f- the file name to openoverwrite- if a file with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.bufferSize- the size of the buffer to be used.replication- required block replication for the file.- Throws:
IOException- IO failure
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOExceptionCreate an FSDataOutputStream at the indicated Path with write-progress reporting.- Parameters:
f- the file name to openoverwrite- if a file with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.bufferSize- the size of the buffer to be used.replication- required block replication for the file.- Throws:
IOException- IO failure
-
create
public abstract org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOExceptionCreate an FSDataOutputStream at the indicated Path with write-progress reporting.- Parameters:
f- the file name to openpermission- file permissionoverwrite- if a file with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.bufferSize- the size of the buffer to be used.replication- required block replication for the file.blockSize- block sizeprogress- the progress reporter- Throws:
IOException- IO failure- See Also:
setPermission(Path, FsPermission)
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOExceptionCreate an FSDataOutputStream at the indicated Path with write-progress reporting.- Parameters:
f- the file name to openpermission- file permissionflags-CreateFlags to use for this stream.bufferSize- the size of the buffer to be used.replication- required block replication for the file.blockSize- block sizeprogress- the progress reporter- Throws:
IOException- IO failure- See Also:
setPermission(Path, FsPermission)
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOExceptionCreate an FSDataOutputStream at the indicated Path with a custom checksum option.- Parameters:
f- the file name to openpermission- file permissionflags-CreateFlags to use for this stream.bufferSize- the size of the buffer to be used.replication- required block replication for the file.blockSize- block sizeprogress- the progress reporterchecksumOpt- checksum parameter. If null, the values found in conf will be used.- Throws:
IOException- IO failure- See Also:
setPermission(Path, FsPermission)
-
primitiveCreate
@Deprecated protected org.apache.hadoop.fs.FSDataOutputStream primitiveCreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException
Deprecated. This create has been added to support the FileContext that processes the permission with umask before calling this method. This is a temporary method added to support the transition from FileSystem to FileContext for user applications.
- Throws:
IOException- IO failure
-
primitiveMkdir
@Deprecated protected boolean primitiveMkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission) throws IOException
Deprecated. This version of the mkdirs method assumes that the permission is absolute. It has been added to support the FileContext that processes the permission with umask before calling this method. This is a temporary method added to support the transition from FileSystem to FileContext for user applications.
- Parameters:
  f - path
  absolutePermission - permissions
- Returns:
- true if the directory was actually created.
- Throws:
  IOException - IO failure
- See Also:
mkdirs(Path, FsPermission)
-
primitiveMkdir
@Deprecated protected void primitiveMkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission, boolean createParent) throws IOException
Deprecated. This version of the mkdirs method assumes that the permission is absolute. It has been added to support the FileContext that processes the permission with umask before calling this method. This is a temporary method added to support the transition from FileSystem to FileContext for user applications.
- Throws:
IOException
-
createNonRecursive
public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOExceptionOpens an FSDataOutputStream at the indicated Path with write-progress reporting. Same as create(), except fails if parent directory doesn't already exist.- Parameters:
f- the file name to openoverwrite- if a file with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.bufferSize- the size of the buffer to be used.replication- required block replication for the file.blockSize- block sizeprogress- the progress reporter- Throws:
IOException- IO failure- See Also:
setPermission(Path, FsPermission)
-
createNonRecursive
public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOExceptionOpens an FSDataOutputStream at the indicated Path with write-progress reporting. Same as create(), except fails if parent directory doesn't already exist.- Parameters:
f- the file name to openpermission- file permissionoverwrite- if a file with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.bufferSize- the size of the buffer to be used.replication- required block replication for the file.blockSize- block sizeprogress- the progress reporter- Throws:
IOException- IO failure- See Also:
setPermission(Path, FsPermission)
-
createNonRecursive
public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOExceptionOpens an FSDataOutputStream at the indicated Path with write-progress reporting. Same as create(), except fails if parent directory doesn't already exist.- Parameters:
f- the file name to openpermission- file permissionflags-CreateFlags to use for this stream.bufferSize- the size of the buffer to be used.replication- required block replication for the file.blockSize- block sizeprogress- the progress reporter- Throws:
IOException- IO failure- See Also:
setPermission(Path, FsPermission)
-
createNewFile
public boolean createNewFile(org.apache.hadoop.fs.Path f) throws IOException
Creates the given Path as a brand-new zero-length file. If create fails, or if it already existed, return false. Important: the default implementation is not atomic.
- Parameters:
  f - path to use for create
- Throws:
IOException- IO failure
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f) throws IOExceptionAppend to an existing file (optional operation). Same asappend(f, getConf().getInt(IO_FILE_BUFFER_SIZE_KEY, IO_FILE_BUFFER_SIZE_DEFAULT), null)- Parameters:
f- the existing file to be appended.- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default).
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize) throws IOExceptionAppend to an existing file (optional operation). Same as append(f, bufferSize, null).- Parameters:
f- the existing file to be appended.bufferSize- the size of the buffer to be used.- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default).
-
append
public abstract org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress) throws IOExceptionAppend to an existing file (optional operation).- Parameters:
f- the existing file to be appended.bufferSize- the size of the buffer to be used.progress- for reporting progress if it is not null.- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default).
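A hedged sketch of appending to an existing file; not every FileSystem supports append, so the UnsupportedOperationException case is handled explicitly. The path is illustrative and a FileSystem instance fs is assumed to have been obtained via FileSystem.get(Configuration).
Path log = new Path("/logs/events.log");             // illustrative path
try (FSDataOutputStream out = fs.append(log)) {
  out.writeBytes("new record\n");                    // append a small record
} catch (UnsupportedOperationException e) {
  // this FileSystem does not implement append (the default behaviour)
}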
-
concat
public void concat(org.apache.hadoop.fs.Path trg, org.apache.hadoop.fs.Path[] psrcs) throws IOExceptionConcat existing files together.- Parameters:
trg- the path to the target destination.psrcs- the paths to the sources to use for the concatenation.- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default).
-
getReplication
@Deprecated public short getReplication(org.apache.hadoop.fs.Path src) throws IOException
Deprecated.UsegetFileStatus(Path)insteadGet the replication factor.- Parameters:
src- file name- Returns:
- file replication
- Throws:
FileNotFoundException- if the path does not resolve.IOException- an IO failure
-
setReplication
public boolean setReplication(org.apache.hadoop.fs.Path src, short replication) throws IOExceptionSet the replication for an existing file. If a filesystem does not support replication, it will always return true: the check for a file existing may be bypassed. This is the default behavior.- Parameters:
src- file namereplication- new replication- Returns:
- true if successful, or the feature is unsupported; false if replication is supported but the file does not exist, or is a directory
- Throws:
IOException
-
rename
public abstract boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOExceptionRenames Path src to Path dst.- Parameters:
src- path to be renameddst- new path after rename- Returns:
- true if rename is successful
- Throws:
IOException- on failure
-
rename
@Deprecated protected void rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
Deprecated.Renames Path src to Path dst- Fails if src is a file and dst is a directory.
- Fails if src is a directory and dst is a file.
- Fails if the parent of dst does not exist or is a file.
If OVERWRITE option is not passed as an argument, rename fails if the dst already exists.
If OVERWRITE option is passed as an argument, rename overwrites the dst if it is a file or an empty directory. Rename fails if dst is a non-empty directory.
Note that atomicity of rename is dependent on the file system implementation. Please refer to the file system documentation for details. This default implementation is non atomic.
This method is deprecated since it is a temporary method added to support the transition from FileSystem to FileContext for user applications.
- Parameters:
src- path to be renameddst- new path after rename- Throws:
FileNotFoundException- src path does not exist, or the parent path of dst does not exist.org.apache.hadoop.fs.FileAlreadyExistsException- dest path exists and is a fileorg.apache.hadoop.fs.ParentNotDirectoryException- if the parent path of dest is not a directoryIOException- on failure
-
truncate
public boolean truncate(org.apache.hadoop.fs.Path f, long newLength) throws IOExceptionTruncate the file in the indicated path to the indicated size.- Fails if path is a directory.
- Fails if path does not exist.
- Fails if path is not closed.
- Fails if new size is greater than current size.
- Parameters:
f- The path to the file to be truncatednewLength- The size the file is to be truncated to- Returns:
trueif the file has been truncated to the desirednewLengthand is immediately available to be reused for write operations such asappend, orfalseif a background process of adjusting the length of the last block has been started, and clients should wait for it to complete before proceeding with further file updates.- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default).
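A sketch of truncating a file to a smaller size, assuming the underlying filesystem supports truncate (HDFS does; many object stores do not). The path and length are illustrative; fs is an assumed FileSystem instance.
Path f = new Path("/data/records.bin");              // illustrative path
boolean immediatelyAvailable = fs.truncate(f, 1024L);
if (!immediatelyAvailable) {
  // a background adjustment of the last block is still in progress;
  // wait for it to complete before further writes such as append()
}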
-
delete
@Deprecated public boolean delete(org.apache.hadoop.fs.Path f) throws IOException
Deprecated.Usedelete(Path, boolean)instead.Delete a file/directory.- Throws:
IOException
-
delete
public abstract boolean delete(org.apache.hadoop.fs.Path f, boolean recursive) throws IOExceptionDelete a file or directory.- Parameters:
f- the path to delete.recursive- if the path is a directory and set to true, the directory is deleted; otherwise an exception is thrown. For a file, recursive may be set to either true or false.- Returns:
- true if the delete succeeded, otherwise false.
- Throws:
IOException- IO failure
-
deleteOnExit
public boolean deleteOnExit(org.apache.hadoop.fs.Path f) throws IOExceptionMark a path to be deleted when its FileSystem is closed. When the JVM shuts down cleanly, all cached FileSystem objects will be closed automatically. The marked paths will then be deleted as a result. If a FileSystem instance is not cached, i.e. has been created withcreateFileSystem(URI, Configuration), then the paths will be deleted whenclose()is called on that instance. The path must exist in the filesystem at the time of the method call; it does not have to exist at the time of JVM shutdown. Notes- Clean shutdown of the JVM cannot be guaranteed.
- The time to shut down a FileSystem depends on the number of files to delete. For filesystems where the cost of checking for the existence of a file/directory and the actual delete operation (for example: object stores) is high, the time to shut down the JVM can be significantly extended by over-use of this feature.
- Connectivity problems with a remote filesystem may delay shutdown further, and may cause the files to not be deleted.
- Parameters:
f- the path to delete.- Returns:
- true if deleteOnExit is successful, otherwise false.
- Throws:
IOException- IO failure
-
cancelDeleteOnExit
public boolean cancelDeleteOnExit(org.apache.hadoop.fs.Path f)
Cancel the scheduled deletion of the path when the FileSystem is closed.- Parameters:
f- the path to cancel deletion- Returns:
- true if the path was found in the delete-on-exit list.
-
processDeleteOnExit
protected void processDeleteOnExit()
Delete all paths that were marked as delete-on-exit. This recursively deletes all files and directories in the specified paths. The time to process this operation isO(paths), with the actual time dependent on the time for existence and deletion operations to complete, successfully or not.
-
exists
public boolean exists(org.apache.hadoop.fs.Path f) throws IOExceptionCheck if a path exists. It is highly discouraged to call this method back to back with othergetFileStatus(Path)calls, as this will involve multiple redundant RPC calls in HDFS.- Parameters:
f- source path- Returns:
- true if the path exists
- Throws:
IOException- IO failure
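Rather than calling exists() and then getFileStatus() back to back, which issues two RPCs against HDFS, a single getFileStatus() call with FileNotFoundException handling yields the same information. A sketch with an illustrative path, assuming fs is an existing FileSystem instance:
Path p = new Path("/data/input.csv");                // illustrative path
try {
  FileStatus st = fs.getFileStatus(p);
  long length = st.getLen();                         // reuse the status instead of re-querying
} catch (FileNotFoundException e) {
  // the path does not exist
}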
-
isDirectory
@Deprecated public boolean isDirectory(org.apache.hadoop.fs.Path f) throws IOException
Deprecated.UsegetFileStatus(Path)insteadTrue iff the named path is a directory. Note: Avoid using this method. Instead reuse the FileStatus returned by getFileStatus() or listStatus() methods.- Parameters:
f- path to check- Throws:
IOException- IO failure
-
isFile
@Deprecated public boolean isFile(org.apache.hadoop.fs.Path f) throws IOException
Deprecated.UsegetFileStatus(Path)insteadTrue iff the named path is a regular file. Note: Avoid using this method. Instead reuse the FileStatus returned bygetFileStatus(Path)or listStatus() methods.- Parameters:
f- path to check- Throws:
IOException- IO failure
-
getLength
@Deprecated public long getLength(org.apache.hadoop.fs.Path f) throws IOException
Deprecated.UsegetFileStatus(Path)instead.The number of bytes in a file.- Returns:
- the number of bytes; 0 for a directory
- Throws:
FileNotFoundException- if the path does not resolveIOException- IO failure
-
getContentSummary
public org.apache.hadoop.fs.ContentSummary getContentSummary(org.apache.hadoop.fs.Path f) throws IOExceptionReturn theContentSummaryof a givenPath.- Parameters:
f- path to use- Throws:
FileNotFoundException- if the path does not resolveIOException- IO failure
-
getQuotaUsage
public org.apache.hadoop.fs.QuotaUsage getQuotaUsage(org.apache.hadoop.fs.Path f) throws IOExceptionReturn theQuotaUsageof a givenPath.- Parameters:
f- path to use- Returns:
- the quota usage
- Throws:
IOException- IO failure
-
listStatus
public abstract org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path f) throws FileNotFoundException, IOExceptionList the statuses of the files/directories in the given path if the path is a directory.Does not guarantee to return the List of files/directories status in a sorted order.
Will not return null. Expect IOException upon access error.
- Parameters:
f- given path- Returns:
- the statuses of the files/directories in the given path
- Throws:
FileNotFoundException- when the path does not existIOException- see specific implementation
-
listStatusBatch
@Private protected FileSystem.DirectoryEntries listStatusBatch(org.apache.hadoop.fs.Path f, byte[] token) throws FileNotFoundException, IOException
Given an opaque iteration token, return the next batch of entries in a directory. This is a private API not meant for use by end users.This method should be overridden by FileSystem subclasses that want to use the generic
listStatusIterator(Path)implementation.- Parameters:
f- Path to listtoken- opaque iteration token returned by previous call, or null if this is the first call.- Returns:
- Throws:
FileNotFoundExceptionIOException
-
listCorruptFileBlocks
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.Path> listCorruptFileBlocks(org.apache.hadoop.fs.Path path) throws IOExceptionList corrupted file blocks.- Returns:
- an iterator over the corrupt files under the given path (may contain duplicates if a file has more than one corrupt block)
- Throws:
UnsupportedOperationException- if the operation is unsupported (default).IOException- IO failure
-
listStatus
public org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.PathFilter filter) throws FileNotFoundException, IOExceptionFilter files/directories in the given path using the user-supplied path filter.Does not guarantee to return the List of files/directories status in a sorted order.
- Parameters:
f- a path namefilter- the user-supplied path filter- Returns:
- an array of FileStatus objects for the files under the given path after applying the filter
- Throws:
FileNotFoundException- when the path does not existIOException- see specific implementation
-
listStatus
public org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path[] files) throws FileNotFoundException, IOExceptionFilter files/directories in the given list of paths using default path filter.Does not guarantee to return the List of files/directories status in a sorted order.
- Parameters:
files- a list of paths- Returns:
- a list of statuses for the files under the given paths after applying the filter default Path filter
- Throws:
FileNotFoundException- when the path does not existIOException- see specific implementation
-
listStatus
public org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path[] files, org.apache.hadoop.fs.PathFilter filter) throws FileNotFoundException, IOExceptionFilter files/directories in the given list of paths using user-supplied path filter.Does not guarantee to return the List of files/directories status in a sorted order.
- Parameters:
files- a list of pathsfilter- the user-supplied path filter- Returns:
- a list of statuses for the files under the given paths after applying the filter
- Throws:
FileNotFoundException- when the path does not existIOException- see specific implementation
-
globStatus
public org.apache.hadoop.fs.FileStatus[] globStatus(org.apache.hadoop.fs.Path pathPattern) throws IOExceptionReturn all the files that match pathPattern and are not checksum files. Results are sorted by their names.
A filename pattern is composed of regular characters and special pattern matching characters, which are:
- ? : Matches any single character.
- * : Matches zero or more characters.
- [abc] : Matches a single character from character set {a,b,c}.
- [a-b] : Matches a single character from the character range {a...b}. Note that character a must be lexicographically less than or equal to character b.
- [^a] : Matches a single character that is not from character set or range {a}. Note that the ^ character must occur immediately to the right of the opening bracket.
- \c : Removes (escapes) any special meaning of character c.
- {ab,cd} : Matches a string from the string set {ab, cd}
- {ab,c{de,fh}} : Matches a string from the string set {ab, cde, cfh}
- Parameters:
pathPattern- a glob specifying a path pattern- Returns:
- an array of paths that match the path pattern
- Throws:
IOException- IO failure
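A sketch of glob matching; the pattern below is illustrative and fs is an assumed FileSystem instance. The result may be null or empty when nothing matches, so both cases are guarded.
FileStatus[] matches = fs.globStatus(new Path("/logs/2024-*/part-[0-9]*"));  // illustrative pattern
if (matches != null) {
  for (FileStatus st : matches) {
    System.out.println(st.getPath());
  }
}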
-
-
globStatus
public org.apache.hadoop.fs.FileStatus[] globStatus(org.apache.hadoop.fs.Path pathPattern, org.apache.hadoop.fs.PathFilter filter) throws IOExceptionReturn an array ofFileStatusobjects whose path names matchpathPatternand are accepted by the user-supplied path filter. Results are sorted by their path names.- Parameters:
pathPattern- a glob specifying the path patternfilter- a user-supplied path filter- Returns:
- null if
pathPatternhas no glob and the path does not exist an empty array ifpathPatternhas a glob and no path matches it else an array ofFileStatusobjects matching the pattern - Throws:
IOException- if any I/O error occurs when fetching file status
-
listLocatedStatus
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path f) throws FileNotFoundException, IOExceptionList the statuses of the files/directories in the given path if the path is a directory. If the path is a file, return the file's status and block locations. If a returned status is a file, it contains the file's block locations.- Parameters:
f- is the path- Returns:
- an iterator that traverses statuses of the files/directories in the given path
- Throws:
FileNotFoundException- Iffdoes not existIOException- If an I/O error occurred
-
listLocatedStatus
protected org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.PathFilter filter) throws FileNotFoundException, IOExceptionList a directory. The returned results include the block locations if an entry is a file. The results are filtered by the given path filter.- Parameters:
f- a pathfilter- a path filter- Returns:
- an iterator that traverses statuses of the files/directories in the given path
- Throws:
FileNotFoundException- iffdoes not existIOException- if any I/O error occurred
-
listStatusIterator
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.FileStatus> listStatusIterator(org.apache.hadoop.fs.Path p) throws FileNotFoundException, IOExceptionReturns a remote iterator so that followup calls are made on demand while consuming the entries. Each FileSystem implementation should override this method and provide a more efficient implementation, if possible. Does not guarantee to return the iterator that traverses statuses of the files in a sorted order.- Parameters:
p- target path- Returns:
- remote iterator
- Throws:
FileNotFoundException- ifpdoes not existIOException- if any I/O error occurred
-
listFiles
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listFiles(org.apache.hadoop.fs.Path f, boolean recursive) throws FileNotFoundException, IOExceptionList the statuses and block locations of the files in the given path. Does not guarantee to return the iterator that traverses statuses of the files in a sorted order.If the path is a directory, if recursive is false, returns files in the directory; if recursive is true, return files in the subtree rooted at the path. If the path is a file, return the file's status and block locations.
- Parameters:
f- is the pathrecursive- if the subdirectories need to be traversed recursively- Returns:
- an iterator that traverses statuses of the files
- Throws:
FileNotFoundException- when the path does not exist;IOException- see specific implementation
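A sketch of recursive listing with the RemoteIterator returned by listFiles; the directory is illustrative and fs is an assumed FileSystem instance. Entries are fetched on demand, which keeps memory bounded for large directory trees.
RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path("/data"), true);  // illustrative directory
while (it.hasNext()) {
  LocatedFileStatus status = it.next();
  System.out.println(status.getPath() + " " + status.getLen());
}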
-
getHomeDirectory
public org.apache.hadoop.fs.Path getHomeDirectory()
Return the current user's home directory in this FileSystem. The default implementation returns"/user/$USER/".
-
setWorkingDirectory
public abstract void setWorkingDirectory(org.apache.hadoop.fs.Path new_dir)
Set the current working directory for the given FileSystem. All relative paths will be resolved relative to it.- Parameters:
new_dir- Path of new working directory
-
getWorkingDirectory
public abstract org.apache.hadoop.fs.Path getWorkingDirectory()
Get the current working directory for the given FileSystem- Returns:
- the directory pathname
-
getInitialWorkingDirectory
protected org.apache.hadoop.fs.Path getInitialWorkingDirectory()
Note: with the new FileContext class, getWorkingDirectory() will be removed. The working directory is implemented in FileContext. Some FileSystems like LocalFileSystem have an initial workingDir that we use as the starting workingDir. For other file systems like HDFS there is no built in notion of an initial workingDir.- Returns:
- if there is a built-in notion of workingDir then it is returned; otherwise null is returned.
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path f) throws IOExceptionCallmkdirs(Path, FsPermission)with default permission.- Parameters:
f- path- Returns:
- true if the directory was created
- Throws:
IOException- IO failure
-
mkdirs
public abstract boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission) throws IOExceptionMake the given file and all non-existent parents into directories. Has roughly the semantics of the Unix command mkdir -p. Existence of the directory hierarchy is not an error.- Parameters:
f- path to createpermission- to apply to f- Throws:
IOException- IO failure
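A sketch of creating a directory hierarchy with an explicit permission; the path and mode are illustrative, fs is an assumed FileSystem instance, and the effective permission may still be adjusted by the filesystem's umask handling.
Path dir = new Path("/apps/myapp/output");                    // illustrative path
boolean ok = fs.mkdirs(dir, new FsPermission((short) 0755));  // illustrative mode
// existing directories in the hierarchy are not an error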
-
copyFromLocalFile
public void copyFromLocalFile(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOExceptionThe src file is on the local disk. Add it to the filesystem at the given dst name; the source is kept intact afterwards.- Parameters:
src- pathdst- path- Throws:
IOException- IO failure
-
moveFromLocalFile
public void moveFromLocalFile(org.apache.hadoop.fs.Path[] srcs, org.apache.hadoop.fs.Path dst) throws IOExceptionThe src files are on the local disk. Add them to the filesystem at the given dst name, removing the sources afterwards.- Parameters:
srcs- source pathsdst- path- Throws:
IOException- IO failure
-
moveFromLocalFile
public void moveFromLocalFile(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOExceptionThe src file is on the local disk. Add it to the filesystem at the given dst name, removing the source afterwards.- Parameters:
src- local pathdst- path- Throws:
IOException- IO failure
-
copyFromLocalFile
public void copyFromLocalFile(boolean delSrc, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOExceptionThe src file is on the local disk. Add it to the filesystem at the given dst name. delSrc indicates if the source should be removed- Parameters:
delSrc- whether to delete the srcsrc- pathdst- path- Throws:
IOException
-
copyFromLocalFile
public void copyFromLocalFile(boolean delSrc, boolean overwrite, org.apache.hadoop.fs.Path[] srcs, org.apache.hadoop.fs.Path dst) throws IOExceptionThe src files are on the local disk. Add them to the filesystem at the given dst name. delSrc indicates if the source should be removed- Parameters:
delSrc- whether to delete the srcoverwrite- whether to overwrite an existing filesrcs- array of paths which are sourcedst- path- Throws:
IOException- IO failure
-
copyFromLocalFile
public void copyFromLocalFile(boolean delSrc, boolean overwrite, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOExceptionThe src file is on the local disk. Add it to the filesystem at the given dst name. delSrc indicates if the source should be removed- Parameters:
delSrc- whether to delete the srcoverwrite- whether to overwrite an existing filesrc- pathdst- path- Throws:
IOException- IO failure
-
copyToLocalFile
public void copyToLocalFile(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOExceptionCopy a file from the remote filesystem to the local one.- Parameters:
src- path src file in the remote filesystemdst- path local destination- Throws:
IOException- IO failure
-
moveToLocalFile
public void moveToLocalFile(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOExceptionCopy a file to the local filesystem, then delete it from the remote filesystem (if successfully copied).- Parameters:
src- path src file in the remote filesystemdst- path local destination- Throws:
IOException- IO failure
-
copyToLocalFile
public void copyToLocalFile(boolean delSrc, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOExceptionCopy a file from the remote filesystem to the local one. delSrc indicates if the src will be removed or not.- Parameters:
delSrc- whether to delete the srcsrc- path src file in the remote filesystemdst- path local destination- Throws:
IOException- IO failure
-
copyToLocalFile
public void copyToLocalFile(boolean delSrc, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst, boolean useRawLocalFileSystem) throws IOExceptionThe src file is under this filesystem, and the dst is on the local disk. Copy it from the remote filesystem to the local dst name. delSrc indicates if the src will be removed or not. useRawLocalFileSystem indicates whether to use RawLocalFileSystem as the local file system or not. RawLocalFileSystem is non-checksumming, so it will not create any crc files locally.- Parameters:
delSrc- whether to delete the srcsrc- pathdst- pathuseRawLocalFileSystem- whether to use RawLocalFileSystem as local file system or not.- Throws:
IOException- for any IO error
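A sketch of moving data between the local disk and this filesystem with the copy helpers; all paths are illustrative and fs is an assumed FileSystem instance.
// upload: keep the local copy, overwrite any existing remote file
fs.copyFromLocalFile(false, true, new Path("/tmp/local.csv"), new Path("/data/remote.csv"));
// download: keep the remote copy and use RawLocalFileSystem to avoid writing .crc files
fs.copyToLocalFile(false, new Path("/data/remote.csv"), new Path("/tmp/copy.csv"), true);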
-
startLocalOutput
public org.apache.hadoop.fs.Path startLocalOutput(org.apache.hadoop.fs.Path fsOutputFile, org.apache.hadoop.fs.Path tmpLocalFile) throws IOExceptionReturns a local file that the user can write output to. The caller provides both the eventual target name in this FileSystem and the local working file path. If this FileSystem is local, we write directly into the target. If the FileSystem is not local, we write into the tmp local area.- Parameters:
fsOutputFile- path of output filetmpLocalFile- path of local tmp file- Throws:
IOException- IO failure
-
completeLocalOutput
public void completeLocalOutput(org.apache.hadoop.fs.Path fsOutputFile, org.apache.hadoop.fs.Path tmpLocalFile) throws IOExceptionCalled when we're all done writing to the target. A local FS will do nothing, because we've written to exactly the right place. A remote FS will copy the contents of tmpLocalFile to the correct target at fsOutputFile.- Parameters:
fsOutputFile- path of output filetmpLocalFile- path to local tmp file- Throws:
IOException- IO failure
-
close
public void close() throws IOExceptionClose this FileSystem instance. Will release any held locks, delete all files queued for deletion through calls todeleteOnExit(Path), and remove this FS instance from the cache, if cached. After this operation, the outcome of any method call on this FileSystem instance, or any input/output stream created by it is undefined.- Specified by:
closein interfaceAutoCloseable- Specified by:
closein interfaceCloseable- Throws:
IOException- IO failure
-
getUsed
public long getUsed() throws IOExceptionReturn the total size of all files in the filesystem.- Throws:
IOException- IO failure
-
getUsed
public long getUsed(org.apache.hadoop.fs.Path path) throws IOExceptionReturn the total size of all files from a specified path.- Throws:
IOException- IO failure
-
getBlockSize
@Deprecated public long getBlockSize(org.apache.hadoop.fs.Path f) throws IOException
Deprecated.UsegetFileStatus(Path)insteadGet the block size for a particular file.- Parameters:
f- the filename- Returns:
- the number of bytes in a block
- Throws:
FileNotFoundException- if the path is not presentIOException- IO failure
-
getDefaultBlockSize
@Deprecated public long getDefaultBlockSize()
Deprecated.usegetDefaultBlockSize(Path)insteadReturn the number of bytes that large input files should optimally be split into to minimize I/O time.
-
getDefaultBlockSize
public long getDefaultBlockSize(org.apache.hadoop.fs.Path f)
Return the number of bytes that large input files should optimally be split into to minimize I/O time. The given path will be used to locate the actual filesystem. The full path does not have to exist.- Parameters:
f- path of file- Returns:
- the default block size for the path's filesystem
-
getDefaultReplication
@Deprecated public short getDefaultReplication()
Deprecated.usegetDefaultReplication(Path)insteadGet the default replication.- Returns:
- the replication; the default value is "1".
-
getDefaultReplication
public short getDefaultReplication(org.apache.hadoop.fs.Path path)
Get the default replication for a path. The given path will be used to locate the actual FileSystem to query. The full path does not have to exist.- Parameters:
path- of the file- Returns:
- default replication for the path's filesystem
-
getFileStatus
public abstract org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f) throws IOExceptionReturn a file status object that represents the path.- Parameters:
f- The path we want information from- Returns:
- a FileStatus object
- Throws:
FileNotFoundException- when the path does not existIOException- see specific implementation
-
access
@LimitedPrivate({"HDFS","Hive"}) public void access(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOExceptionChecks if the user can access a path. The mode specifies which access checks to perform. If the requested permissions are granted, then the method returns normally. If access is denied, then the method throws anAccessControlException.The default implementation calls
getFileStatus(Path)and checks the returned permissions against the requested permissions. Note that thegetFileStatus(Path)call will be subject to authorization checks. Typically, this requires search (execute) permissions on each directory in the path's prefix, but this is implementation-defined. Any file system that provides a richer authorization model (such as ACLs) may override the default implementation so that it checks against that model instead.In general, applications should avoid using this method, due to the risk of time-of-check/time-of-use race conditions. The permissions on a file may change immediately after the access call returns. Most applications should prefer running specific file system actions as the desired user represented by a
UserGroupInformation.- Parameters:
path- Path to checkmode- type of access to check- Throws:
org.apache.hadoop.security.AccessControlException- if access is deniedFileNotFoundException- if the path does not existIOException- see specific implementation
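A hedged sketch of a permission probe with access(); as noted above it is subject to time-of-check/time-of-use races, so it should only be treated as a best-effort hint. The path is illustrative and fs is an assumed FileSystem instance.
Path p = new Path("/data/protected");                 // illustrative path
try {
  fs.access(p, FsAction.READ_WRITE);
  // the caller currently has read and write access
} catch (AccessControlException e) {
  // access denied
} catch (FileNotFoundException e) {
  // the path does not exist
}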
-
fixRelativePart
protected org.apache.hadoop.fs.Path fixRelativePart(org.apache.hadoop.fs.Path p)
SeeFileContext.fixRelativePart(org.apache.hadoop.fs.Path).
-
createSymlink
public void createSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent) throws org.apache.hadoop.security.AccessControlException, org.apache.hadoop.fs.FileAlreadyExistsException, FileNotFoundException, org.apache.hadoop.fs.ParentNotDirectoryException, org.apache.hadoop.fs.UnsupportedFileSystemException, IOExceptionSeeFileContext.createSymlink(Path, Path, boolean).- Throws:
org.apache.hadoop.security.AccessControlExceptionorg.apache.hadoop.fs.FileAlreadyExistsExceptionFileNotFoundExceptionorg.apache.hadoop.fs.ParentNotDirectoryExceptionorg.apache.hadoop.fs.UnsupportedFileSystemExceptionIOException
-
getFileLinkStatus
public org.apache.hadoop.fs.FileStatus getFileLinkStatus(org.apache.hadoop.fs.Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnsupportedFileSystemException, IOExceptionSeeFileContext.getFileLinkStatus(Path).- Throws:
FileNotFoundException- when the path does not existIOException- see specific implementationorg.apache.hadoop.security.AccessControlExceptionorg.apache.hadoop.fs.UnsupportedFileSystemException
-
supportsSymlinks
public boolean supportsSymlinks()
SeeAbstractFileSystem.supportsSymlinks().
-
getLinkTarget
public org.apache.hadoop.fs.Path getLinkTarget(org.apache.hadoop.fs.Path f) throws IOExceptionSeeFileContext.getLinkTarget(Path).- Throws:
UnsupportedOperationException- if the operation is unsupported (default outcome).IOException
-
resolveLink
protected org.apache.hadoop.fs.Path resolveLink(org.apache.hadoop.fs.Path f) throws IOExceptionSeeAbstractFileSystem.getLinkTarget(Path).- Throws:
UnsupportedOperationException- if the operation is unsupported (default outcome).IOException
-
getFileChecksum
public org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f) throws IOExceptionGet the checksum of a file, if the FS supports checksums.- Parameters:
f- The file path- Returns:
- The file checksum. The default return value is null, which indicates that no checksum algorithm is implemented in the corresponding FileSystem.
- Throws:
IOException- IO failure
-
getFileChecksum
public org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f, long length) throws IOExceptionGet the checksum of a file, from the beginning of the file till the specific length.- Parameters:
f- The file pathlength- The length of the file range for checksum calculation- Returns:
- The file checksum or null if checksums are not supported.
- Throws:
IOException- IO failure
-
setVerifyChecksum
public void setVerifyChecksum(boolean verifyChecksum)
Set the verify checksum flag. This is only applicable if the corresponding filesystem supports checksums. By default doesn't do anything.- Parameters:
verifyChecksum- Verify checksum flag
-
setWriteChecksum
public void setWriteChecksum(boolean writeChecksum)
Set the write checksum flag. This is only applicable if the corresponding filesystem supports checksums. By default doesn't do anything.- Parameters:
writeChecksum- Write checksum flag
-
getStatus
public org.apache.hadoop.fs.FsStatus getStatus() throws IOExceptionReturns a status object describing the use and capacity of the filesystem. If the filesystem has multiple partitions, the use and capacity of the root partition is reflected.- Returns:
- a FsStatus object
- Throws:
IOException- see specific implementation
-
getStatus
public org.apache.hadoop.fs.FsStatus getStatus(org.apache.hadoop.fs.Path p) throws IOExceptionReturns a status object describing the use and capacity of the filesystem. If the filesystem has multiple partitions, the use and capacity of the partition pointed to by the specified path is reflected.- Parameters:
p- Path for which status should be obtained. null means the default partition.- Returns:
- a FsStatus object
- Throws:
IOException- see specific implementation
-
setPermission
public void setPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission) throws IOExceptionSet permission of a path.- Parameters:
p- The pathpermission- permission- Throws:
IOException- IO failure
-
setOwner
public void setOwner(org.apache.hadoop.fs.Path p, String username, String groupname) throws IOExceptionSet owner of a path (i.e. a file or a directory). The parameters username and groupname cannot both be null.- Parameters:
p- The pathusername- If it is null, the original username remains unchanged.groupname- If it is null, the original groupname remains unchanged.- Throws:
IOException- IO failure
-
setTimes
public void setTimes(org.apache.hadoop.fs.Path p, long mtime, long atime) throws IOExceptionSet the modification and access times of a file.- Parameters:
p- The pathmtime- Set the modification time of this file. The number of milliseconds since Jan 1, 1970. A value of -1 means that this call should not set modification time.atime- Set the access time of this file. The number of milliseconds since Jan 1, 1970. A value of -1 means that this call should not set access time.- Throws:
IOException- IO failure
-
createSnapshot
public final org.apache.hadoop.fs.Path createSnapshot(org.apache.hadoop.fs.Path path) throws IOExceptionCreate a snapshot with a default name.- Parameters:
path- The directory where snapshots will be taken.- Returns:
- the snapshot path.
- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported
-
createSnapshot
public org.apache.hadoop.fs.Path createSnapshot(org.apache.hadoop.fs.Path path, String snapshotName) throws IOExceptionCreate a snapshot.- Parameters:
path- The directory where snapshots will be taken.snapshotName- The name of the snapshot- Returns:
- the snapshot path.
- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported
-
renameSnapshot
public void renameSnapshot(org.apache.hadoop.fs.Path path, String snapshotOldName, String snapshotNewName) throws IOExceptionRename a snapshot.- Parameters:
path- The directory path where the snapshot was takensnapshotOldName- Old name of the snapshotsnapshotNewName- New name of the snapshot- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
deleteSnapshot
public void deleteSnapshot(org.apache.hadoop.fs.Path path, String snapshotName) throws IOExceptionDelete a snapshot of a directory.- Parameters:
path- The directory that the to-be-deleted snapshot belongs tosnapshotName- The name of the snapshot- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
modifyAclEntries
public void modifyAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOExceptionModifies ACL entries of files and directories. This method can add new ACL entries or modify the permissions on existing ACL entries. All existing ACL entries that are not specified in this call are retained without changes. (Modifications are merged into the current ACL.)- Parameters:
path- Path to modifyaclSpec- Listdescribing modifications - Throws:
IOException- if an ACL could not be modifiedUnsupportedOperationException- if the operation is unsupported (default outcome).
-
removeAclEntries
public void removeAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOExceptionRemoves ACL entries from files and directories. Other ACL entries are retained.- Parameters:
path- Path to modifyaclSpec- List describing entries to remove- Throws:
IOException- if an ACL could not be modifiedUnsupportedOperationException- if the operation is unsupported (default outcome).
-
removeDefaultAcl
public void removeDefaultAcl(org.apache.hadoop.fs.Path path) throws IOExceptionRemoves all default ACL entries from files and directories.- Parameters:
path- Path to modify- Throws:
IOException- if an ACL could not be modifiedUnsupportedOperationException- if the operation is unsupported (default outcome).
-
removeAcl
public void removeAcl(org.apache.hadoop.fs.Path path) throws IOExceptionRemoves all but the base ACL entries of files and directories. The entries for user, group, and others are retained for compatibility with permission bits.- Parameters:
path- Path to modify- Throws:
IOException- if an ACL could not be removedUnsupportedOperationException- if the operation is unsupported (default outcome).
-
setAcl
public void setAcl(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOExceptionFully replaces ACL of files and directories, discarding all existing entries.- Parameters:
path- Path to modifyaclSpec- List describing modifications, which must include entries for user, group, and others for compatibility with permission bits.- Throws:
IOException- if an ACL could not be modifiedUnsupportedOperationException- if the operation is unsupported (default outcome).
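A sketch of fully replacing an ACL with setAcl; per the contract above, the specification must include the base user, group, and other entries. The path, user name, and permissions are illustrative, and fs is an assumed FileSystem instance.
List<AclEntry> spec = Arrays.asList(
    new AclEntry.Builder().setScope(AclEntryScope.ACCESS).setType(AclEntryType.USER)
        .setPermission(FsAction.ALL).build(),                            // base user entry
    new AclEntry.Builder().setScope(AclEntryScope.ACCESS).setType(AclEntryType.USER)
        .setName("alice").setPermission(FsAction.READ_EXECUTE).build(),  // named user (illustrative)
    new AclEntry.Builder().setScope(AclEntryScope.ACCESS).setType(AclEntryType.GROUP)
        .setPermission(FsAction.READ_EXECUTE).build(),                   // base group entry
    new AclEntry.Builder().setScope(AclEntryScope.ACCESS).setType(AclEntryType.OTHER)
        .setPermission(FsAction.NONE).build());                          // base other entry
fs.setAcl(new Path("/data/shared"), spec);                               // illustrative path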
-
getAclStatus
public org.apache.hadoop.fs.permission.AclStatus getAclStatus(org.apache.hadoop.fs.Path path) throws IOExceptionGets the ACL of a file or directory.- Parameters:
path- Path to get- Returns:
- AclStatus describing the ACL of the file or directory
- Throws:
IOException- if an ACL could not be readUnsupportedOperationException- if the operation is unsupported (default outcome).
-
setXAttr
public void setXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value) throws IOExceptionSet an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path- Path to modifyname- xattr name.value- xattr value.- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
setXAttr
public void setXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOExceptionSet an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path- Path to modifyname- xattr name.value- xattr value.flag- xattr set flag- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
getXAttr
public byte[] getXAttr(org.apache.hadoop.fs.Path path, String name) throws IOExceptionGet an xattr name and value for a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path- Path to get extended attributename- xattr name.- Returns:
- byte[] xattr value.
- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
getXAttrs
public Map<String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path) throws IOException
Get all of the xattr name/value pairs for a file or directory. Only those xattrs which the logged-in user has permissions to view are returned.Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path- Path to get extended attributes- Returns:
- Map describing the XAttrs of the file or directory
- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
getXAttrs
public Map<String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path, List<String> names) throws IOException
Get all of the xattrs name/value pairs for a file or directory. Only those xattrs which the logged-in user has permissions to view are returned.Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path- Path to get extended attributesnames- XAttr names.- Returns:
- Map describing the XAttrs of the file or directory
- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
listXAttrs
public List<String> listXAttrs(org.apache.hadoop.fs.Path path) throws IOException
Get all of the xattr names for a file or directory. Only those xattr names which the logged-in user has permissions to view are returned.Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path- Path to get extended attributes- Returns:
- List
of the XAttr names of the file or directory - Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
removeXAttr
public void removeXAttr(org.apache.hadoop.fs.Path path, String name) throws IOExceptionRemove an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".Refer to the HDFS extended attributes user documentation for details.
- Parameters:
path- Path to remove extended attributename- xattr name- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
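A round-trip sketch of the extended attribute calls, assuming the filesystem supports xattrs (HDFS does; others may throw UnsupportedOperationException). The attribute name and value are illustrative; note the mandatory namespace prefix such as "user.".
Path p = new Path("/data/file.bin");                                   // illustrative path
fs.setXAttr(p, "user.origin", "ingest-job-42".getBytes(StandardCharsets.UTF_8));
byte[] value = fs.getXAttr(p, "user.origin");
List<String> names = fs.listXAttrs(p);                                 // includes "user.origin"
fs.removeXAttr(p, "user.origin");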
-
setStoragePolicy
public void setStoragePolicy(org.apache.hadoop.fs.Path src, String policyName) throws IOExceptionSet the storage policy for a given file or directory.- Parameters:
src- file or directory path.policyName- the name of the target storage policy. The list of supported Storage policies can be retrieved viagetAllStoragePolicies().- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
unsetStoragePolicy
public void unsetStoragePolicy(org.apache.hadoop.fs.Path src) throws IOExceptionUnset the storage policy set for a given file or directory.- Parameters:
src- file or directory path.- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
getStoragePolicy
public org.apache.hadoop.fs.BlockStoragePolicySpi getStoragePolicy(org.apache.hadoop.fs.Path src) throws IOExceptionQuery the effective storage policy ID for the given file or directory.- Parameters:
src- file or directory path.- Returns:
- storage policy for the given file.
- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
getAllStoragePolicies
public Collection<? extends org.apache.hadoop.fs.BlockStoragePolicySpi> getAllStoragePolicies() throws IOException
Retrieve all the storage policies supported by this file system.- Returns:
- all storage policies supported by this filesystem.
- Throws:
IOException- IO failureUnsupportedOperationException- if the operation is unsupported (default outcome).
-
getTrashRoot
public org.apache.hadoop.fs.Path getTrashRoot(org.apache.hadoop.fs.Path path)
Get the root directory of Trash for current user when the path specified is deleted.- Parameters:
path- the trash root of the path to be determined.- Returns:
- the default implementation returns
/user/$USER/.Trash
-
getTrashRoots
public Collection<org.apache.hadoop.fs.FileStatus> getTrashRoots(boolean allUsers)
Get all the trash roots for current user or all users.- Parameters:
allUsers- return trash roots for all users if true.- Returns:
- all the trash root directories.
Default FileSystem returns .Trash under users' home directories if
/user/$USER/.Trashexists.
-
getFileSystemClass
public static Class<? extends FileSystem> getFileSystemClass(String scheme, org.apache.hadoop.conf.Configuration conf) throws IOException
Get the FileSystem implementation class of a filesystem. This triggers a scan and load of all FileSystem implementations listed as services and discovered via theServiceLoader- Parameters:
scheme- URL scheme of FSconf- configuration: can be null, in which case the check for a filesystem binding declaration in the configuration is skipped.- Returns:
- the filesystem
- Throws:
org.apache.hadoop.fs.UnsupportedFileSystemException- if there was no known implementation for the scheme.IOException- if the filesystem could not be loaded
-
getStatistics
@Deprecated public static Map<String,FileSystem.Statistics> getStatistics()
Deprecated.Get the Map of Statistics object indexed by URI Scheme.- Returns:
- a Map having a key as URI scheme and value as Statistics object
-
getAllStatistics
@Deprecated public static List<FileSystem.Statistics> getAllStatistics()
Deprecated.Return the FileSystem classes that have Statistics.
-
getStatistics
@Deprecated public static FileSystem.Statistics getStatistics(String scheme, Class<? extends FileSystem> cls)
Deprecated.Get the statistics for a particular file system.- Parameters:
cls- the class to lookup- Returns:
- a statistics object
-
clearStatistics
public static void clearStatistics()
Reset all statistics for all file systems.
-
printStatistics
public static void printStatistics() throws IOExceptionPrint all statistics for all file systems toSystem.out- Throws:
IOException
-
areSymlinksEnabled
public static boolean areSymlinksEnabled()
-
enableSymlinks
public static void enableSymlinks()
-
getStorageStatistics
public org.apache.hadoop.fs.StorageStatistics getStorageStatistics()
Get the StorageStatistics for this FileSystem object. These statistics are per-instance. They are not shared with any other FileSystem object.This is a default method which is intended to be overridden by subclasses. The default implementation returns an empty storage statistics object.
- Returns:
- The StorageStatistics for this FileSystem instance. Will never be null.
-
getGlobalStorageStatistics
public static org.apache.hadoop.fs.GlobalStorageStatistics getGlobalStorageStatistics()
Get the global storage statistics.
-
createFile
public org.apache.hadoop.fs.FSDataOutputStreamBuilder createFile(org.apache.hadoop.fs.Path path)
Create a new FSDataOutputStreamBuilder for the file with path. Files are overwritten by default.- Parameters:
path- file path- Returns:
- a FSDataOutputStreamBuilder object to build the file. HADOOP-14384: temporarily reduce the visibility of this method before the builder interface becomes stable.
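A sketch of the builder style; the builder interface was still stabilizing under HADOOP-14384, so the replication and blockSize options shown here are assumptions chosen to illustrate the fluent usage rather than a guaranteed API surface. The path and payload are illustrative, and fs is an assumed FileSystem instance.
byte[] payload = "example".getBytes(StandardCharsets.UTF_8);           // illustrative data
try (FSDataOutputStream out = fs.createFile(new Path("/data/out.bin")) // illustrative path
        .overwrite(true)
        .replication((short) 3)          // assumed builder option
        .blockSize(128L * 1024 * 1024)   // assumed builder option
        .build()) {
  out.write(payload);
}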
-
appendFile
public org.apache.hadoop.fs.FSDataOutputStreamBuilder appendFile(org.apache.hadoop.fs.Path path)
Create a Builder to append a file.- Parameters:
path- file path.- Returns:
- a
FSDataOutputStreamBuilderto build file append request.
-
-