java.lang.Object
  org.apache.hadoop.hdfs.server.namenode.JournalSet
public class JournalSet
Manages a collection of Journals. None of the methods are synchronized; it is assumed that the FSEditLog methods that use this class apply proper synchronization.
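The core idea of a journal set is fanning every edit-log operation out to all of its journals. As a simplified sketch (not the Hadoop implementation; `Journal`, `applyToAll`, and the `failsOnWrite` flag are hypothetical stand-ins), a failed write to a redundant journal disables that journal, while a failed write to a required journal aborts the operation:

```java
import java.util.ArrayList;
import java.util.List;

public class JournalFanout {
    /** Hypothetical stand-in for one journal in the set. */
    public static class Journal {
        public final String name;
        public final boolean required;
        public final boolean failsOnWrite;   // simulate an I/O failure
        public boolean disabled = false;

        public Journal(String name, boolean required, boolean failsOnWrite) {
            this.name = name;
            this.required = required;
            this.failsOnWrite = failsOnWrite;
        }

        /** Returns false to simulate a failed write. */
        public boolean write(long txId) {
            return !failsOnWrite;
        }
    }

    /** Apply a write to every enabled journal in the set. */
    public static void applyToAll(List<Journal> journals, long txId) {
        for (Journal j : journals) {
            if (j.disabled) continue;
            if (!j.write(txId)) {
                if (j.required) {
                    // A failed required journal aborts the whole operation.
                    throw new IllegalStateException(j.name + " (required) failed");
                }
                j.disabled = true;  // redundant journal: disable it and carry on
            }
        }
    }

    public static void main(String[] args) {
        List<Journal> js = new ArrayList<>();
        js.add(new Journal("local", true, false));
        js.add(new Journal("shared", false, true));
        applyToAll(js, 1L);
        System.out.println("shared disabled: " + js.get(1).disabled); // prints true
    }
}
```

This is also why `isEmpty()` below treats a set with disabled required journals as effectively empty: once a required journal is out, the set can no longer satisfy writes.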
| Nested Class Summary |
|---|
| Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.server.namenode.JournalManager |
| JournalManager.CorruptionException |
| Field Summary | |
|---|---|
| static Comparator<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> | EDIT_LOG_INPUT_STREAM_COMPARATOR |
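A comparator like this is what lets a set of edit log input streams be placed into a priority queue and consumed in transaction order. The sketch below shows one plausible ordering, by first transaction ID with last transaction ID as a tie-breaker; the `Stream` record is a hypothetical stand-in for `EditLogInputStream`, and the exact tie-breaking used by Hadoop's `EDIT_LOG_INPUT_STREAM_COMPARATOR` may differ:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class StreamOrdering {
    /** Hypothetical stand-in for an edit log input stream. */
    public static class Stream {
        public final long firstTxId;
        public final long lastTxId;
        public Stream(long firstTxId, long lastTxId) {
            this.firstTxId = firstTxId;
            this.lastTxId = lastTxId;
        }
    }

    /** Order streams by the range of transactions they cover. */
    public static final Comparator<Stream> BY_TX_RANGE =
        Comparator.<Stream>comparingLong(s -> s.firstTxId)
                  .thenComparingLong(s -> s.lastTxId);

    public static void main(String[] args) {
        List<Stream> streams = new ArrayList<>();
        streams.add(new Stream(101, 200));
        streams.add(new Stream(1, 100));
        streams.sort(BY_TX_RANGE);
        System.out.println(streams.get(0).firstTxId); // prints 1
    }
}
```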
| Method Summary | |
|---|---|
| static void | chainAndMakeRedundantStreams(Collection<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> outStreams, PriorityQueue<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> allStreams, long fromTxId) |
| void | close() Close the journal manager, freeing any resources it may hold. |
| void | finalizeLogSegment(long firstTxId, long lastTxId) Mark the log segment that spans from firstTxId to lastTxId as finalized and complete. |
| void | format(org.apache.hadoop.hdfs.server.protocol.NamespaceInfo nsInfo) Format the underlying storage, removing any previously stored data. |
| RemoteEditLogManifest | getEditLogManifest(long fromTxId, boolean forReading) Return a manifest of what finalized edit logs are available. |
| boolean | hasSomeData() |
| boolean | isEmpty() Returns true if there are no journals, all redundant journals are disabled, or any required journals are disabled. |
| void | purgeLogsOlderThan(long minTxIdToKeep) Remove all edit logs with transaction IDs lower than the given transaction ID. |
| void | recoverUnfinalizedSegments() Recover segments which have not been finalized. |
| void | selectInputStreams(Collection<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> streams, long fromTxId, boolean inProgressOk, boolean forReading) Gather input streams from all of the JournalManager objects in this set. |
| void | setOutputBufferCapacity(int size) Set the amount of memory that this stream should use to buffer edits. |
| org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream | startLogSegment(long txId) Begin writing to a new segment of the log stream, which starts at the given transaction ID. |
| Methods inherited from class java.lang.Object |
|---|
| clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |

| Methods inherited from interface org.apache.hadoop.hdfs.server.common.Storage.FormatConfirmable |
|---|
| toString |
| Field Detail |
|---|
public static final Comparator<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> EDIT_LOG_INPUT_STREAM_COMPARATOR
| Method Detail |
|---|
public void format(org.apache.hadoop.hdfs.server.protocol.NamespaceInfo nsInfo)
throws IOException
Specified by: format in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws: IOException
public boolean hasSomeData()
throws IOException
Specified by: hasSomeData in interface org.apache.hadoop.hdfs.server.common.Storage.FormatConfirmable
Throws: IOException - if the storage cannot be accessed at all.
public org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream startLogSegment(long txId)
throws IOException
Specified by: startLogSegment in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws: IOException
public void finalizeLogSegment(long firstTxId,
long lastTxId)
throws IOException
Specified by: finalizeLogSegment in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws: IOException
public void close()
throws IOException
Specified by: close in interface Closeable
Specified by: close in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws: IOException
public void selectInputStreams(Collection<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> streams,
long fromTxId,
boolean inProgressOk,
boolean forReading)
throws IOException
Parameters:
streams - The collection to add the streams to. It may or may not be sorted; this is up to the caller.
fromTxId - The transaction ID to start looking for streams at.
inProgressOk - Should we consider unfinalized streams?
forReading - Whether or not the caller intends to read from the returned streams.
Throws:
IOException - if the underlying storage has an error or is otherwise inaccessible.
public static void chainAndMakeRedundantStreams(Collection<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> outStreams,
PriorityQueue<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> allStreams,
long fromTxId)
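The chaining idea can be sketched in simplified form: streams that begin at the same transaction ID are redundant copies of one another, so they are grouped together, and the groups are chained in transaction-ID order by draining the priority queue. This is a sketch of the concept, not the Hadoop code; `Stream` and `chain` are hypothetical stand-ins, and filtering against `fromTxId` is not modeled here:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class RedundantChaining {
    /** Hypothetical stand-in for an edit log input stream. */
    public static class Stream {
        public final long firstTxId;
        public Stream(long firstTxId) { this.firstTxId = firstTxId; }
    }

    /** Drain the queue in txid order, grouping streams that start at the same txid. */
    public static List<List<Stream>> chain(PriorityQueue<Stream> allStreams) {
        List<List<Stream>> chained = new ArrayList<>();
        List<Stream> group = null;
        long groupStart = -1;
        Stream s;
        while ((s = allStreams.poll()) != null) {
            if (group == null || s.firstTxId != groupStart) {
                group = new ArrayList<>();   // start a new link in the chain
                groupStart = s.firstTxId;
                chained.add(group);
            }
            group.add(s);  // same start txid: treat as a redundant copy
        }
        return chained;
    }

    public static void main(String[] args) {
        PriorityQueue<Stream> pq = new PriorityQueue<>(
            Comparator.comparingLong((Stream st) -> st.firstTxId));
        pq.add(new Stream(1));
        pq.add(new Stream(101));
        pq.add(new Stream(1));
        System.out.println(chain(pq).size()); // prints 2
    }
}
```

Reading can then fail over between streams within a group while still advancing group by group through the transaction history.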
public boolean isEmpty()
public void setOutputBufferCapacity(int size)
Specified by: setOutputBufferCapacity in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
public void purgeLogsOlderThan(long minTxIdToKeep)
throws IOException
Parameters:
minTxIdToKeep - the lowest transaction ID that should be retained
Throws:
IOException - in the event of an error
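Since edit logs are stored in segments, the retention rule amounts to: a segment may be removed only when every transaction it contains is below the threshold. A minimal sketch of that rule, with `Segment` and `purgeOlderThan` as hypothetical stand-ins for the real storage-level types:

```java
import java.util.ArrayList;
import java.util.List;

public class LogPurge {
    /** Hypothetical stand-in for a finalized edit log segment. */
    public static class Segment {
        public final long firstTxId;
        public final long lastTxId;
        public Segment(long firstTxId, long lastTxId) {
            this.firstTxId = firstTxId;
            this.lastTxId = lastTxId;
        }
    }

    /** A segment is purged only if every txid in it is below the threshold. */
    public static void purgeOlderThan(List<Segment> segments, long minTxIdToKeep) {
        segments.removeIf(s -> s.lastTxId < minTxIdToKeep);
    }

    public static void main(String[] args) {
        List<Segment> segs = new ArrayList<>();
        segs.add(new Segment(1, 100));
        segs.add(new Segment(101, 200));
        purgeOlderThan(segs, 150);
        System.out.println(segs.size()); // prints 1: segment 101-200 survives
    }
}
```

A segment that straddles the threshold (here 101-200 with a threshold of 150) is kept whole, because it still contains transactions at or above minTxIdToKeep.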
public void recoverUnfinalizedSegments()
throws IOException
Specified by: recoverUnfinalizedSegments in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws: IOException
public RemoteEditLogManifest getEditLogManifest(long fromTxId,
boolean forReading)
Parameters:
fromTxId - Starting transaction ID to read the logs from.