java.lang.Object
  org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
    org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
      org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeGroup
public class BlockPlacementPolicyWithNodeGroup extends org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
The class is responsible for choosing the desired number of targets for placing block replicas in an environment with a node-group layer. The replica placement strategy is adjusted as follows: if the writer is on a datanode, the first replica is placed on the local node (or local node-group); otherwise, on a random datanode. The second replica is placed on a datanode on a different rack from the first replica's node. The third replica is placed on a datanode in a different node-group from, but on the same rack as, the second replica's node.
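To activate this policy, HDFS can be pointed at the class and node-group-aware topology can be enabled in the cluster configuration. The following is a minimal sketch, assuming a Hadoop release that exposes the `dfs.block.replicator.classname`, `net.topology.impl`, and `net.topology.nodegroup.aware` keys; verify the exact property names against your version's `*-default.xml` files:

```xml
<!-- Sketch only: property names may differ across Hadoop versions. -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyWithNodeGroup</value>
</property>
<property>
  <name>net.topology.impl</name>
  <value>org.apache.hadoop.net.NetworkTopologyWithNodeGroup</value>
</property>
<property>
  <name>net.topology.nodegroup.aware</name>
  <value>true</value>
</property>
```

The topology mapping must then resolve each datanode to a two-layer location such as `/rack1/nodegroup1` so that the rack and node-group rules above can be applied.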
| Field Summary |
|---|
| Fields inherited from class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault |
|---|
| `clusterMap, considerLoad, heartbeatInterval, threadLocalBuilder, tolerateHeartbeatMultiplier` |
| Method Summary | |
|---|---|
| `protected int` | `addToExcludedNodes(DatanodeDescriptor localMachine, HashMap<Node,Node> excludedNodes)` Finds the other nodes in the same node-group as `localMachine` and adds them to `excludedNodes`, since a replica should not be duplicated on nodes within one node-group. |
| `protected void` | `adjustExcludedNodes(HashMap<Node,Node> excludedNodes, Node chosenNode)` After choosing a node for a replica, adjusts the excluded nodes accordingly. |
| `protected DatanodeDescriptor` | `chooseLocalNode(DatanodeDescriptor localMachine, HashMap<Node,Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeDescriptor> results, boolean avoidStaleNodes)` Chooses the local node of `localMachine` as the target. |
| `protected DatanodeDescriptor` | `chooseLocalRack(DatanodeDescriptor localMachine, HashMap<Node,Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeDescriptor> results, boolean avoidStaleNodes)` |
| `protected void` | `chooseRemoteRack(int numOfReplicas, DatanodeDescriptor localMachine, HashMap<Node,Node> excludedNodes, long blocksize, int maxReplicasPerRack, List<DatanodeDescriptor> results, boolean avoidStaleNodes)` |
| `protected String` | `getRack(DatanodeInfo cur)` Gets the rack string of a datanode. |
| `void` | `initialize(Configuration conf, FSClusterStats stats, NetworkTopology clusterMap)` Used to set up a BlockPlacementPolicy object. |
| `Iterator<DatanodeDescriptor>` | `pickupReplicaSet(Collection<DatanodeDescriptor> first, Collection<DatanodeDescriptor> second)` Picks the replica set from which a replica of an over-replicated block should be deleted. |

Parameter and return types are shown with simple class names; the fully qualified names appear under Method Detail below.
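The three-replica strategy described above can be modeled with a small standalone sketch. This is plain Java, not the Hadoop API: the class, node names, and `/rack/nodegroup` location strings are hypothetical, and it chooses deterministically (sorted order) where the real policy would choose randomly.

```java
import java.util.*;

// Standalone illustration of the node-group-aware placement rules.
// Not the actual BlockPlacementPolicyWithNodeGroup implementation.
public class NodeGroupPlacementSketch {

    // Rack portion of a "/rackX/ngY" location, e.g. "/rackA".
    static String rack(String location) {
        return location.substring(0, location.indexOf('/', 1));
    }

    // First replica: the writer's node if it is a datanode, else some node.
    // Second replica: a node on a different rack from the first.
    // Third replica: a node on the second replica's rack but in a
    // different node-group.
    static List<String> chooseTargets(String writer, Map<String, String> topology) {
        List<String> nodes = new ArrayList<>(new TreeSet<>(topology.keySet()));
        String first = topology.containsKey(writer) ? writer : nodes.get(0);
        String firstRack = rack(topology.get(first));

        String second = null;
        for (String n : nodes) {
            if (!rack(topology.get(n)).equals(firstRack)) { second = n; break; }
        }

        String third = null;
        String secondGroup = topology.get(second);
        for (String n : nodes) {
            String loc = topology.get(n);
            if (!n.equals(second) && rack(loc).equals(rack(secondGroup))
                    && !loc.equals(secondGroup)) {
                third = n;
                break;
            }
        }
        return Arrays.asList(first, second, third);
    }

    public static void main(String[] args) {
        Map<String, String> topology = new HashMap<>();
        topology.put("dn1", "/rackA/ng1");
        topology.put("dn2", "/rackA/ng2");
        topology.put("dn3", "/rackB/ng3");
        topology.put("dn4", "/rackB/ng4");
        // prints [dn1, dn3, dn4]: local node, other rack, then same rack
        // as the second replica but a different node-group.
        System.out.println(chooseTargets("dn1", topology));
    }
}
```

Note how `dn2`, which shares `dn1`'s rack, is never chosen: the second replica must leave the rack entirely, and the third must stay on the second replica's rack while changing node-group.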
| Methods inherited from class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault |
|---|
| `chooseRandom, chooseRandom, chooseReplicaToDelete, chooseTarget, chooseTarget, isGoodTarget, verifyBlockPlacement` |

| Methods inherited from class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy |
|---|
| `adjustSetsWithChosenReplica, getInstance, splitNodesWithRack` |

| Methods inherited from class java.lang.Object |
|---|
| `clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait` |
| Method Detail |
|---|
public void initialize(org.apache.hadoop.conf.Configuration conf,
                       org.apache.hadoop.hdfs.server.namenode.FSClusterStats stats,
                       org.apache.hadoop.net.NetworkTopology clusterMap)

Description copied from class: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy. Used to set up a BlockPlacementPolicy object.

Overrides: initialize in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault

Parameters:
- conf - the configuration object
- stats - retrieve cluster status from here
- clusterMap - cluster topology
protected org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor chooseLocalNode(org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor localMachine,
HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node> excludedNodes,
long blocksize,
int maxNodesPerRack,
List<org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor> results,
boolean avoidStaleNodes)
throws org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException
Overrides: chooseLocalNode in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault

Throws: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException
protected void adjustExcludedNodes(HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node> excludedNodes,
org.apache.hadoop.net.Node chosenNode)
Overrides: adjustExcludedNodes in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
protected org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor chooseLocalRack(org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor localMachine,
HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node> excludedNodes,
long blocksize,
int maxNodesPerRack,
List<org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor> results,
boolean avoidStaleNodes)
throws org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException
Overrides: chooseLocalRack in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault

Throws: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException
protected void chooseRemoteRack(int numOfReplicas,
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor localMachine,
HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node> excludedNodes,
long blocksize,
int maxReplicasPerRack,
List<org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor> results,
boolean avoidStaleNodes)
throws org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException
Overrides: chooseRemoteRack in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault

Throws: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.NotEnoughReplicasException

protected String getRack(org.apache.hadoop.hdfs.protocol.DatanodeInfo cur)
Description copied from class: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy. Gets the rack string of a datanode.

Overrides: getRack in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
protected int addToExcludedNodes(org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor localMachine,
HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node> excludedNodes)
Overrides: addToExcludedNodes in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
public Iterator<org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor> pickupReplicaSet(Collection<org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor> first,
Collection<org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor> second)
Overrides: pickupReplicaSet in class org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault