public class TransactionProcessor extends Object implements org.apache.hadoop.hbase.coprocessor.RegionObserver, org.apache.hadoop.hbase.coprocessor.RegionCoprocessor
An org.apache.hadoop.hbase.coprocessor.RegionObserver coprocessor that handles server-side processing for transactions.
In order to use this coprocessor for transactions, configure the class on any table involved in transactions, or on all user tables, by adding the following to hbase-site.xml:

```xml
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.tephra.hbase.coprocessor.TransactionProcessor</value>
</property>
```
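Alternatively, the coprocessor can be attached to a single table through the table attribute form, for example from the HBase shell. This is a sketch only: the table name is illustrative, the attribute value format is `path|class|priority|args` with empty path/priority/args fields here, and the exact syntax varies across HBase versions.

```
alter 'mytable', METHOD => 'table_att',
  'coprocessor' => '|org.apache.tephra.hbase.coprocessor.TransactionProcessor||'
```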
HBase Get and Scan operations should have the current transaction serialized onto the operation as an attribute:

```java
Transaction t = ...;
Get get = new Get(...);
TransactionCodec codec = new TransactionCodec();
codec.addToOperation(get, t);
```
| Modifier and Type | Field and Description |
|---|---|
| protected boolean | allowEmptyValues |
| protected Boolean | pruneEnable |
| protected boolean | readNonTxnData |
| protected Map<byte[],Long> | ttlByFamily |
| protected Long | txMaxLifetimeMillis |
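The ttlByFamily field keys TTL values by column-family bytes. Java arrays use identity equality, so such a map cannot be a plain HashMap; a TreeMap ordered by a byte-wise comparator (HBase provides Bytes.BYTES_COMPARATOR for this) makes content-based lookups work. A minimal sketch in plain Java, with java.util.Arrays.compare standing in for the HBase comparator; the class and method names are illustrative:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

public class TtlByFamilyExample {
    // byte[] inherits identity-based equals/hashCode, so a HashMap<byte[],Long>
    // would never find an entry for a freshly allocated key array. A TreeMap
    // with a byte-wise comparator compares array contents instead.
    static final Comparator<byte[]> BYTES = Arrays::compare;

    static Map<byte[], Long> ttlByFamily(String family, long ttlMillis) {
        Map<byte[], Long> ttls = new TreeMap<>(BYTES);
        ttls.put(family.getBytes(), ttlMillis);
        return ttls;
    }

    public static void main(String[] args) {
        Map<byte[], Long> ttls = ttlByFamily("cf", 86_400_000L);
        // A distinct byte[] with the same contents still finds the entry.
        System.out.println(ttls.get("cf".getBytes())); // prints 86400000
    }
}
```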
| Constructor and Description |
|---|
| TransactionProcessor() |
| Modifier and Type | Method and Description |
|---|---|
| protected org.apache.hadoop.hbase.regionserver.InternalScanner | createStoreScanner(org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment env, String action, TransactionVisibilityState snapshot, org.apache.hadoop.hbase.regionserver.InternalScanner scanner, org.apache.hadoop.hbase.regionserver.ScanType type) |
| protected void | ensureValidTxLifetime(org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment env, org.apache.hadoop.hbase.client.OperationWithAttributes op, Transaction tx). Make sure that the transaction is within the max valid transaction lifetime. |
| protected org.apache.hadoop.conf.Configuration | getConfiguration(org.apache.hadoop.hbase.CoprocessorEnvironment env). Fetch the Configuration that contains the properties required by the coprocessor. |
| Optional | getRegionObserver() |
| protected org.apache.hadoop.hbase.filter.Filter | getTransactionFilter(Transaction tx, org.apache.hadoop.hbase.regionserver.ScanType type, org.apache.hadoop.hbase.filter.Filter filter). Derived classes can override this method to customize the filter used to return data visible for the current transaction. |
| protected CacheSupplier<TransactionStateCache> | getTransactionStateCacheSupplier(org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment env) |
| protected void | initializePruneState(org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment env). Refresh the properties related to transaction pruning. |
| void | postCompact(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c, org.apache.hadoop.hbase.regionserver.Store store, org.apache.hadoop.hbase.regionserver.StoreFile resultFile, org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker tracker, org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest request) |
| void | postFlush(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> e, org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker tracker) |
| org.apache.hadoop.hbase.regionserver.InternalScanner | preCompact(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c, org.apache.hadoop.hbase.regionserver.Store store, org.apache.hadoop.hbase.regionserver.InternalScanner scanner, org.apache.hadoop.hbase.regionserver.ScanType scanType, org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker tracker, org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest request) |
| void | preCompactScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c, org.apache.hadoop.hbase.regionserver.Store store, org.apache.hadoop.hbase.regionserver.ScanType scanType, org.apache.hadoop.hbase.regionserver.ScanOptions options, org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker tracker, org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest request) |
| void | preDelete(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> e, org.apache.hadoop.hbase.client.Delete delete, org.apache.hadoop.hbase.wal.WALEdit edit, org.apache.hadoop.hbase.client.Durability durability) |
| org.apache.hadoop.hbase.regionserver.InternalScanner | preFlush(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c, org.apache.hadoop.hbase.regionserver.Store store, org.apache.hadoop.hbase.regionserver.InternalScanner scanner, org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker tracker) |
| void | preFlushScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c, org.apache.hadoop.hbase.regionserver.Store store, org.apache.hadoop.hbase.regionserver.ScanOptions options, org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker tracker) |
| void | preGetOp(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> e, org.apache.hadoop.hbase.client.Get get, List<org.apache.hadoop.hbase.Cell> results) |
| void | prePut(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> e, org.apache.hadoop.hbase.client.Put put, org.apache.hadoop.hbase.wal.WALEdit edit, org.apache.hadoop.hbase.client.Durability durability) |
| void | preScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c, org.apache.hadoop.hbase.client.Scan scan) |
| protected void | resetPruneState(). Stop and clear state related to pruning. |
| void | start(org.apache.hadoop.hbase.CoprocessorEnvironment e) |
| void | stop(org.apache.hadoop.hbase.CoprocessorEnvironment e) |
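The getTransactionFilter method above returns the filter that decides which cell versions the current transaction may see. A minimal sketch of snapshot-style visibility in plain Java; the class, the method name, and the excluded-set representation are illustrative, not Tephra's actual API:

```java
import java.util.Set;

public class VisibilitySketch {
    // Illustrative: under snapshot isolation, a cell version is visible if it
    // was committed at or below the transaction's read pointer and was not
    // written by a transaction that is still in progress or was invalidated
    // (the "excluded" set).
    static boolean isVisible(long cellVersion, long readPointer, Set<Long> excluded) {
        return cellVersion <= readPointer && !excluded.contains(cellVersion);
    }

    public static void main(String[] args) {
        System.out.println(isVisible(5L, 10L, Set.of(7L))); // prints true
    }
}
```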
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.hadoop.hbase.coprocessor.RegionObserver:
postAppend, postBatchMutate, postBatchMutateIndispensably, postBulkLoadHFile, postCheckAndDelete, postCheckAndPut, postClose, postCloseRegionOperation, postCommitStoreFile, postCompactSelection, postDelete, postExists, postFlush, postGetOp, postIncrement, postInstantiateDeleteTracker, postMemStoreCompaction, postMutationBeforeWAL, postOpen, postPut, postReplayWALs, postScannerClose, postScannerFilterRow, postScannerNext, postScannerOpen, postStartRegionOperation, postStoreFileReaderOpen, postWALRestore, preAppend, preAppendAfterRowLock, preBatchMutate, preBulkLoadHFile, preCheckAndDelete, preCheckAndDeleteAfterRowLock, preCheckAndPut, preCheckAndPutAfterRowLock, preClose, preCommitStoreFile, preCompactSelection, preExists, preFlush, preIncrement, preIncrementAfterRowLock, preMemStoreCompaction, preMemStoreCompactionCompact, preMemStoreCompactionCompactScannerOpen, preOpen, prePrepareTimeStampForDeleteVersion, preReplayWALs, preScannerClose, preScannerNext, preStoreFileReaderOpen, preStoreScannerOpen, preWALRestore

Field Detail:

protected volatile Boolean pruneEnable

protected volatile Long txMaxLifetimeMillis

protected boolean allowEmptyValues

protected boolean readNonTxnData
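Note that pruneEnable is a volatile Boolean rather than a boolean: the object wrapper gives it a third state, null, meaning "not yet read from configuration", and volatile makes the initialized value safely visible to other threads without locking. A self-contained sketch of this pattern in plain Java; the class and method names are illustrative:

```java
public class LazyFlag {
    // Illustrative: null means "not yet determined"; once set, the volatile
    // write publishes the value to all threads reading this field.
    private volatile Boolean pruneEnable;

    boolean isPruneEnabled(boolean configuredValue) {
        Boolean local = pruneEnable;     // single volatile read
        if (local == null) {
            local = configuredValue;     // e.g. read from the Configuration
            pruneEnable = local;         // cache for subsequent calls
        }
        return local;
    }
}
```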
public Optional getRegionObserver()
Specified by: getRegionObserver in interface org.apache.hadoop.hbase.coprocessor.RegionCoprocessor

public void start(org.apache.hadoop.hbase.CoprocessorEnvironment e)
throws IOException
Specified by: start in interface org.apache.hadoop.hbase.Coprocessor
Throws: IOException

@Nullable
protected org.apache.hadoop.conf.Configuration getConfiguration(org.apache.hadoop.hbase.CoprocessorEnvironment env)
Fetch the Configuration that contains the properties required by the coprocessor. By default, the HBase configuration is returned. This method will never return null in Tephra, but derived classes might return null if the Configuration is temporarily unavailable (for example, if it is being fetched from an HBase Table).
Parameters: env - RegionCoprocessorEnvironment of the Region to which the coprocessor is associated
Returns: the Configuration, can be null if it is not available

protected CacheSupplier<TransactionStateCache> getTransactionStateCacheSupplier(org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment env)
public void stop(org.apache.hadoop.hbase.CoprocessorEnvironment e)
throws IOException
Specified by: stop in interface org.apache.hadoop.hbase.Coprocessor
Throws: IOException

public void preGetOp(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> e,
org.apache.hadoop.hbase.client.Get get,
List<org.apache.hadoop.hbase.Cell> results)
throws IOException
Specified by: preGetOp in interface org.apache.hadoop.hbase.coprocessor.RegionObserver
Throws: IOException

public void prePut(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> e,
org.apache.hadoop.hbase.client.Put put,
org.apache.hadoop.hbase.wal.WALEdit edit,
org.apache.hadoop.hbase.client.Durability durability)
throws IOException
Specified by: prePut in interface org.apache.hadoop.hbase.coprocessor.RegionObserver
Throws: IOException

public void preDelete(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> e,
org.apache.hadoop.hbase.client.Delete delete,
org.apache.hadoop.hbase.wal.WALEdit edit,
org.apache.hadoop.hbase.client.Durability durability)
throws IOException
Specified by: preDelete in interface org.apache.hadoop.hbase.coprocessor.RegionObserver
Throws: IOException

public void preScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c,
org.apache.hadoop.hbase.client.Scan scan)
throws IOException
Specified by: preScannerOpen in interface org.apache.hadoop.hbase.coprocessor.RegionObserver
Throws: IOException

public void preFlushScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c,
org.apache.hadoop.hbase.regionserver.Store store,
org.apache.hadoop.hbase.regionserver.ScanOptions options,
org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker tracker)
throws IOException
Specified by: preFlushScannerOpen in interface org.apache.hadoop.hbase.coprocessor.RegionObserver
Throws: IOException

public org.apache.hadoop.hbase.regionserver.InternalScanner preFlush(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c,
org.apache.hadoop.hbase.regionserver.Store store,
org.apache.hadoop.hbase.regionserver.InternalScanner scanner,
org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker tracker)
throws IOException
Specified by: preFlush in interface org.apache.hadoop.hbase.coprocessor.RegionObserver
Throws: IOException

public void postFlush(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> e,
org.apache.hadoop.hbase.regionserver.FlushLifeCycleTracker tracker)
throws IOException
Specified by: postFlush in interface org.apache.hadoop.hbase.coprocessor.RegionObserver
Throws: IOException

public void preCompactScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c,
org.apache.hadoop.hbase.regionserver.Store store,
org.apache.hadoop.hbase.regionserver.ScanType scanType,
org.apache.hadoop.hbase.regionserver.ScanOptions options,
org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker tracker,
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest request)
throws IOException
Specified by: preCompactScannerOpen in interface org.apache.hadoop.hbase.coprocessor.RegionObserver
Throws: IOException

public org.apache.hadoop.hbase.regionserver.InternalScanner preCompact(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c,
org.apache.hadoop.hbase.regionserver.Store store,
org.apache.hadoop.hbase.regionserver.InternalScanner scanner,
org.apache.hadoop.hbase.regionserver.ScanType scanType,
org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker tracker,
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest request)
throws IOException
Specified by: preCompact in interface org.apache.hadoop.hbase.coprocessor.RegionObserver
Throws: IOException

public void postCompact(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c,
org.apache.hadoop.hbase.regionserver.Store store,
org.apache.hadoop.hbase.regionserver.StoreFile resultFile,
org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker tracker,
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest request)
throws IOException
Specified by: postCompact in interface org.apache.hadoop.hbase.coprocessor.RegionObserver
Throws: IOException

protected org.apache.hadoop.hbase.regionserver.InternalScanner createStoreScanner(org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment env,
String action,
TransactionVisibilityState snapshot,
org.apache.hadoop.hbase.regionserver.InternalScanner scanner,
org.apache.hadoop.hbase.regionserver.ScanType type)
throws IOException
Throws: IOException

protected void ensureValidTxLifetime(org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment env,
org.apache.hadoop.hbase.client.OperationWithAttributes op,
@Nullable Transaction tx)
throws IOException
Make sure that the transaction is within the max valid transaction lifetime.
Parameters:
env - RegionCoprocessorEnvironment of the Region to which the coprocessor is associated
op - OperationWithAttributes HBase operation, to access its attributes if required
tx - the supplied Transaction
Throws:
org.apache.hadoop.hbase.DoNotRetryIOException - thrown if the transaction is older than the max lifetime of a transaction
IOException - thrown if the value of the max lifetime of a transaction is unavailable

protected org.apache.hadoop.hbase.filter.Filter getTransactionFilter(Transaction tx,
org.apache.hadoop.hbase.regionserver.ScanType type,
org.apache.hadoop.hbase.filter.Filter filter)
Derived classes can override this method to customize the filter used to return data visible for the current transaction.
Parameters:
tx - the current transaction to apply
type - the type of scan being performed

protected void initializePruneState(org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment env)
Refresh the properties related to transaction pruning. See also resetPruneState().
Parameters:
env - RegionCoprocessorEnvironment of this region

protected void resetPruneState()
Stop and clear state related to pruning.
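The ensureValidTxLifetime check above rejects operations whose transaction has outlived txMaxLifetimeMillis. A self-contained sketch of such a check in plain Java, assuming (as Tephra's transaction IDs do) that the ID encodes the start time in milliseconds multiplied by a fixed factor; the class name and the TX_ID_FACTOR constant are illustrative:

```java
public class TxLifetimeCheck {
    // Illustrative factor: the transaction's start time in milliseconds is
    // recovered by dividing the transaction ID by this constant.
    static final long TX_ID_FACTOR = 1_000_000L;

    static void ensureValidTxLifetime(long txId, long maxLifetimeMillis, long nowMillis) {
        long startMillis = txId / TX_ID_FACTOR;
        if (nowMillis - startMillis > maxLifetimeMillis) {
            // The real coprocessor throws DoNotRetryIOException here.
            throw new IllegalStateException("Transaction " + txId
                + " is older than the max lifetime of " + maxLifetimeMillis + " ms");
        }
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long txId = now * TX_ID_FACTOR;
        ensureValidTxLifetime(txId, 10_000L, now); // within lifetime: no exception
    }
}
```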
Copyright © 2020 The Apache Software Foundation. All rights reserved.