Uses of Class
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo

Packages that use DatanodeStorageInfo
org.apache.hadoop.hdfs.server.blockmanagement   
 

Uses of DatanodeStorageInfo in org.apache.hadoop.hdfs.server.blockmanagement
 

Fields in org.apache.hadoop.hdfs.server.blockmanagement declared as DatanodeStorageInfo
static DatanodeStorageInfo[] DatanodeStorageInfo.EMPTY_ARRAY
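A minimal sketch of the shared empty-array pattern that `EMPTY_ARRAY` follows. The classes below are simplified stand-ins, not the real Hadoop types: returning one shared immutable empty array avoids allocating a fresh zero-length array for every block that has no assigned storages.

```java
// Illustrative stand-in for DatanodeStorageInfo (assumed shape, not the real class).
class StorageInfo {
    static final StorageInfo[] EMPTY_ARRAY = new StorageInfo[0];
    final String storageId;
    StorageInfo(String storageId) { this.storageId = storageId; }
}

public class EmptyArrayDemo {
    // Returning the shared constant avoids a per-call allocation
    // when a block has no replica locations.
    static StorageInfo[] locationsFor(boolean hasReplicas) {
        return hasReplicas ? new StorageInfo[] { new StorageInfo("s1") }
                           : StorageInfo.EMPTY_ARRAY;
    }

    public static void main(String[] args) {
        // Every empty result is the same shared instance.
        System.out.println(locationsFor(false) == StorageInfo.EMPTY_ARRAY); // true
        System.out.println(locationsFor(true).length); // 1
    }
}
```

A zero-length array is immutable in practice (there are no elements to mutate), so sharing a single instance is safe across threads.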
           
 

Methods in org.apache.hadoop.hdfs.server.blockmanagement that return DatanodeStorageInfo
protected  DatanodeStorageInfo BlockPlacementPolicyWithNodeGroup.chooseLocalRack(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes)
           
protected  DatanodeStorageInfo BlockPlacementPolicyWithNodeGroup.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes, boolean fallbackToLocalRack)
          Choose the local node of localMachine as the target.
 DatanodeStorageInfo[] BlockInfoUnderConstruction.getExpectedStorageLocations()
          Create array of expected replica locations (as has been assigned by chooseTargets()).
 

Methods in org.apache.hadoop.hdfs.server.blockmanagement that return types with arguments of type DatanodeStorageInfo
 Collection<DatanodeStorageInfo> BlockPlacementPolicyWithNodeGroup.pickupReplicaSet(Collection<DatanodeStorageInfo> first, Collection<DatanodeStorageInfo> second)
          Pick the replica set from which a replica will be deleted when the block is over-replicated.
 

Methods in org.apache.hadoop.hdfs.server.blockmanagement with parameters of type DatanodeStorageInfo
static void DatanodeStorageInfo.incrementBlocksScheduled(DatanodeStorageInfo... storages)
          Increment the number of blocks scheduled for each given storage.
 void BlockInfoUnderConstruction.setExpectedLocations(DatanodeStorageInfo[] targets)
          Set the expected locations.
 BlockInfoUnderConstruction MutableBlockCollection.setLastBlock(org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo lastBlock, DatanodeStorageInfo[] storages)
          Convert the last block of the collection to an under-construction block and set the locations.
static org.apache.hadoop.hdfs.protocol.DatanodeInfo[] DatanodeStorageInfo.toDatanodeInfos(DatanodeStorageInfo[] storages)
           
static String[] DatanodeStorageInfo.toStorageIDs(DatanodeStorageInfo[] storages)
           
static StorageType[] DatanodeStorageInfo.toStorageTypes(DatanodeStorageInfo[] storages)
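The three static converters above all follow the same projection pattern: walk the `storages` array once and extract one field per element. A minimal sketch, using simplified stand-in classes rather than the real Hadoop types:

```java
import java.util.Arrays;

// Illustrative sketch of the array-projection pattern behind
// toDatanodeInfos / toStorageIDs / toStorageTypes.
// StorageInfo and StorageType are assumed, simplified shapes.
public class ConverterDemo {
    enum StorageType { DISK, SSD }

    static class StorageInfo {
        final String storageId;
        final StorageType type;
        StorageInfo(String id, StorageType t) { storageId = id; type = t; }
    }

    // Project the storage ID of each element into a parallel array.
    static String[] toStorageIDs(StorageInfo[] storages) {
        String[] ids = new String[storages.length];
        for (int i = 0; i < storages.length; i++) ids[i] = storages[i].storageId;
        return ids;
    }

    // Project the storage type of each element into a parallel array.
    static StorageType[] toStorageTypes(StorageInfo[] storages) {
        StorageType[] types = new StorageType[storages.length];
        for (int i = 0; i < storages.length; i++) types[i] = storages[i].type;
        return types;
    }

    public static void main(String[] args) {
        StorageInfo[] storages = {
            new StorageInfo("DS-1", StorageType.DISK),
            new StorageInfo("DS-2", StorageType.SSD)
        };
        System.out.println(Arrays.toString(toStorageIDs(storages)));   // [DS-1, DS-2]
        System.out.println(Arrays.toString(toStorageTypes(storages))); // [DISK, SSD]
    }
}
```

The resulting arrays are positionally aligned with the input, which lets callers pass the ID and type arrays side by side in protocol messages.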
           
 

Method parameters in org.apache.hadoop.hdfs.server.blockmanagement with type arguments of type DatanodeStorageInfo
protected  DatanodeStorageInfo BlockPlacementPolicyWithNodeGroup.chooseLocalRack(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes)
           
protected  DatanodeStorageInfo BlockPlacementPolicyWithNodeGroup.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes, boolean fallbackToLocalRack)
          Choose the local node of localMachine as the target.
protected  void BlockPlacementPolicyWithNodeGroup.chooseRemoteRack(int numOfReplicas, org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxReplicasPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes)
          Choose numOfReplicas nodes from the racks that localMachine is NOT on.
 Collection<DatanodeStorageInfo> BlockPlacementPolicyWithNodeGroup.pickupReplicaSet(Collection<DatanodeStorageInfo> first, Collection<DatanodeStorageInfo> second)
          Pick the replica set from which a replica will be deleted when the block is over-replicated.
 

Constructors in org.apache.hadoop.hdfs.server.blockmanagement with parameters of type DatanodeStorageInfo
BlockInfoUnderConstruction(org.apache.hadoop.hdfs.protocol.Block blk, int replication, HdfsServerConstants.BlockUCState state, DatanodeStorageInfo[] targets)
          Create a block that is currently being constructed.
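A sketch of the lifecycle the constructor and the setExpectedLocations / getExpectedStorageLocations methods imply: targets are assigned when a block enters the under-construction state and read back as a freshly created array. The classes below are simplified stand-ins for the real Hadoop types; the defensive copy mirrors the documented "Create array of expected replica locations" behavior, but the null handling is an assumption.

```java
import java.util.Arrays;

// Illustrative sketch of the BlockInfoUnderConstruction target lifecycle,
// using assumed, simplified stand-in classes.
public class UnderConstructionDemo {
    static class StorageInfo {
        final String storageId;
        StorageInfo(String id) { storageId = id; }
    }

    static class UnderConstructionBlock {
        private StorageInfo[] targets;

        // Mirrors the constructor: expected replica locations are fixed
        // when the block enters the under-construction state.
        UnderConstructionBlock(StorageInfo[] targets) {
            // Null handling is an assumption for this sketch.
            this.targets = (targets == null) ? new StorageInfo[0] : targets;
        }

        void setExpectedLocations(StorageInfo[] targets) {
            this.targets = (targets == null) ? new StorageInfo[0] : targets;
        }

        // Returns a new array each call, so callers cannot mutate
        // the block's internal target list.
        StorageInfo[] getExpectedStorageLocations() {
            return Arrays.copyOf(targets, targets.length);
        }
    }

    public static void main(String[] args) {
        UnderConstructionBlock blk = new UnderConstructionBlock(
            new StorageInfo[] { new StorageInfo("DS-1"), new StorageInfo("DS-2") });
        System.out.println(blk.getExpectedStorageLocations().length); // 2
        blk.setExpectedLocations(new StorageInfo[] { new StorageInfo("DS-3") });
        System.out.println(blk.getExpectedStorageLocations().length); // 1
    }
}
```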
 



Copyright © 2014 Apache Software Foundation. All Rights Reserved.