public class OHTable extends Object implements org.apache.hadoop.hbase.client.HTableInterface
| Constructor and Description |
|---|
OHTable(byte[] tableName,
com.alipay.oceanbase.rpc.ObTableClient obTableClient,
ExecutorService executePool)
Creates an object to access a HBase table.
|
OHTable(org.apache.hadoop.conf.Configuration configuration,
byte[] tableName)
Creates an object to access a HBase table.
|
OHTable(org.apache.hadoop.conf.Configuration configuration,
byte[] tableName,
ExecutorService executePool)
Creates an object to access a HBase table.
|
OHTable(org.apache.hadoop.conf.Configuration configuration,
String tableName)
Creates an object to access a HBase table.
|
| Modifier and Type | Method and Description |
|---|---|
org.apache.hadoop.hbase.client.Result |
append(org.apache.hadoop.hbase.client.Append append)
For example, with key = "key" and c1 = "a", appending to c1 can yield c1 = "aaa".
Atomic operation.
|
Object[] |
batch(List<? extends org.apache.hadoop.hbase.client.Row> actions) |
void |
batch(List<? extends org.apache.hadoop.hbase.client.Row> actions,
Object[] results) |
boolean |
checkAndDelete(byte[] row,
byte[] family,
byte[] qualifier,
byte[] value,
org.apache.hadoop.hbase.client.Delete delete)
For example, the delete is executed only when key = "key001", family = "family", and c1 = "a". The operation is atomic.
|
boolean |
checkAndPut(byte[] row,
byte[] family,
byte[] qualifier,
byte[] value,
org.apache.hadoop.hbase.client.Put put)
For example, the put is executed only when key = "key001", family = "family", and c1 = "a". The operation is atomic.
|
void |
close() |
<T extends org.apache.hadoop.hbase.ipc.CoprocessorProtocol,R> |
coprocessorExec(Class<T> protocol,
byte[] startKey,
byte[] endKey,
org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T,R> callable) |
<T extends org.apache.hadoop.hbase.ipc.CoprocessorProtocol,R> |
coprocessorExec(Class<T> protocol,
byte[] startKey,
byte[] endKey,
org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T,R> callable,
org.apache.hadoop.hbase.client.coprocessor.Batch.Callback<R> callback) |
<T extends org.apache.hadoop.hbase.ipc.CoprocessorProtocol> |
coprocessorProxy(Class<T> protocol,
byte[] row) |
static ThreadPoolExecutor |
createDefaultThreadPoolExecutor(int coreSize,
int maxThreads,
long keepAliveTime)
Creates the default thread pool.
Using the "direct handoff" approach, new threads are created only when
necessary, and the pool can grow without bound.
|
void |
delete(org.apache.hadoop.hbase.client.Delete delete) |
void |
delete(List<org.apache.hadoop.hbase.client.Delete> deletes) |
boolean |
exists(org.apache.hadoop.hbase.client.Get get)
Test for the existence of columns in the table, as specified in the Get.
|
void |
flushCommits() |
org.apache.hadoop.hbase.client.Result |
get(org.apache.hadoop.hbase.client.Get get) |
org.apache.hadoop.hbase.client.Result[] |
get(List<org.apache.hadoop.hbase.client.Get> gets) |
org.apache.hadoop.conf.Configuration |
getConfiguration() |
byte[][] |
getEndKeys() |
void |
getKeyValueFromResult(com.alipay.oceanbase.rpc.protocol.payload.impl.execute.query.AbstractQueryStreamResult clientQueryStreamResult,
List<org.apache.hadoop.hbase.KeyValue> keyValueList,
boolean isTableGroup,
byte[] family) |
org.apache.hadoop.hbase.client.Result |
getRowOrBefore(byte[] row,
byte[] family)
HBase semantics: fetch the row for this row key, or the row immediately before it if no exact match exists. Not supported.
|
org.apache.hadoop.hbase.client.ResultScanner |
getScanner(byte[] family) |
org.apache.hadoop.hbase.client.ResultScanner |
getScanner(byte[] family,
byte[] qualifier) |
org.apache.hadoop.hbase.client.ResultScanner |
getScanner(org.apache.hadoop.hbase.client.Scan scan) |
org.apache.hadoop.hbase.util.Pair<byte[][],byte[][]> |
getStartEndKeys() |
byte[][] |
getStartKeys() |
org.apache.hadoop.hbase.HTableDescriptor |
getTableDescriptor() |
byte[] |
getTableName() |
long |
getWriteBufferSize()
Returns the maximum size in bytes of the write buffer for this HTable.
|
org.apache.hadoop.hbase.client.Result |
increment(org.apache.hadoop.hbase.client.Increment increment)
For example, with key = "key" and c2 = 1, incrementing c2 by 2 yields c2 = 3.
Atomic operation.
|
long |
incrementColumnValue(byte[] row,
byte[] family,
byte[] qualifier,
long amount)
Performs the increment directly on the cell identified by column name.
|
long |
incrementColumnValue(byte[] row,
byte[] family,
byte[] qualifier,
long amount,
boolean writeToWAL) |
boolean |
isAutoFlush() |
org.apache.hadoop.hbase.client.RowLock |
lockRow(byte[] row) |
void |
mutateRow(org.apache.hadoop.hbase.client.RowMutations rm) |
void |
put(List<org.apache.hadoop.hbase.client.Put> puts) |
void |
put(org.apache.hadoop.hbase.client.Put put) |
void |
refreshTableEntry(String familyString,
boolean hasTestLoad) |
void |
setAutoFlush(boolean autoFlush)
|
void |
setAutoFlush(boolean autoFlush,
boolean clearBufferOnFail)
Turns 'auto-flush' on or off.
|
void |
setOperationTimeout(int operationTimeout) |
void |
setRuntimeBatchExecutor(ExecutorService runtimeBatchExecutor) |
void |
setWriteBufferSize(long writeBufferSize)
Sets the size of the buffer in bytes.
|
void |
unlockRow(org.apache.hadoop.hbase.client.RowLock rl) |
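The summary above says createDefaultThreadPoolExecutor uses the "direct handoff" approach, where new threads are created only when necessary. A minimal JDK-only sketch of what such a pool typically looks like (an assumption about the factory's behavior, not the actual OHTable implementation):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DirectHandoffPool {
    // "Direct handoff": a SynchronousQueue holds no tasks, so each submission
    // either hands the task to an idle worker or spawns a new thread, up to
    // maxThreads. Tasks are never queued behind other tasks.
    public static ThreadPoolExecutor create(int coreSize, int maxThreads, long keepAliveTimeMs) {
        return new ThreadPoolExecutor(coreSize, maxThreads, keepAliveTimeMs,
                TimeUnit.MILLISECONDS, new SynchronousQueue<Runnable>());
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = create(2, 8, 60_000L);
        pool.submit(() -> System.out.println("task ran"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Because the queue accepts no elements, submissions beyond maxThreads are rejected rather than buffered, which is why the description warns the pool "can grow without bound" when maxThreads is large.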
public OHTable(org.apache.hadoop.conf.Configuration configuration,
String tableName)
throws IOException
Creates an object to access a HBase table. Uses the already-populated region
cache if one is available, populated by any other OHTable instances sharing
this configuration instance. Recommended.
Parameters:
configuration - Configuration object to use.
tableName - Name of the table.
Throws:
IllegalArgumentException - if a parameter is invalid
IOException - if a remote or network exception occurs

public OHTable(org.apache.hadoop.conf.Configuration configuration,
byte[] tableName)
throws IOException
Creates an object to access a HBase table. Uses the already-populated region
cache if one is available, populated by any other OHTable instances sharing
this configuration instance. Recommended.
Parameters:
configuration - Configuration object to use.
tableName - Name of the table.
Throws:
IOException - if a remote or network exception occurs
IllegalArgumentException - if a parameter is invalid

public OHTable(org.apache.hadoop.conf.Configuration configuration,
byte[] tableName,
ExecutorService executePool)
throws IOException
Creates an object to access a HBase table. Uses the already-populated region
cache if one is available, populated by any other OHTable instances sharing
this configuration instance.
Use this constructor when the ExecutorService is externally managed.
Parameters:
configuration - Configuration object to use.
tableName - Name of the table.
executePool - ExecutorService to be used.
Throws:
IOException - if a remote or network exception occurs
IllegalArgumentException - if a parameter is invalid

public OHTable(byte[] tableName,
com.alipay.oceanbase.rpc.ObTableClient obTableClient,
ExecutorService executePool)
Creates an object to access a HBase table.
Use this constructor when the ExecutorService and HConnection instance are
externally managed.
Parameters:
tableName - Name of the table.
obTableClient - OceanBase obTableClient to be used.
executePool - ExecutorService to be used.
Throws:
IllegalArgumentException - if a parameter is invalid

public static ThreadPoolExecutor createDefaultThreadPoolExecutor(int coreSize,
int maxThreads,
long keepAliveTime)
Parameters:
coreSize - core size
maxThreads - max threads
keepAliveTime - keep alive time

public byte[] getTableName()
Specified by: getTableName in interface org.apache.hadoop.hbase.client.HTableInterface

public org.apache.hadoop.conf.Configuration getConfiguration()
Specified by: getConfiguration in interface org.apache.hadoop.hbase.client.HTableInterface

public org.apache.hadoop.hbase.HTableDescriptor getTableDescriptor()
Specified by: getTableDescriptor in interface org.apache.hadoop.hbase.client.HTableInterface

public boolean exists(org.apache.hadoop.hbase.client.Get get)
throws IOException
Test for the existence of columns in the table, as specified in the Get.
Specified by: exists in interface org.apache.hadoop.hbase.client.HTableInterface
Parameters:
get - the Get
Throws:
IOException

public void batch(List<? extends org.apache.hadoop.hbase.client.Row> actions,
Object[] results)
Specified by: batch in interface org.apache.hadoop.hbase.client.HTableInterface

public Object[] batch(List<? extends org.apache.hadoop.hbase.client.Row> actions)
Specified by: batch in interface org.apache.hadoop.hbase.client.HTableInterface

public void getKeyValueFromResult(com.alipay.oceanbase.rpc.protocol.payload.impl.execute.query.AbstractQueryStreamResult clientQueryStreamResult,
List<org.apache.hadoop.hbase.KeyValue> keyValueList,
boolean isTableGroup,
byte[] family)
throws Exception
Throws:
Exception

public org.apache.hadoop.hbase.client.Result get(org.apache.hadoop.hbase.client.Get get)
throws IOException
Specified by: get in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public org.apache.hadoop.hbase.client.Result[] get(List<org.apache.hadoop.hbase.client.Get> gets)
throws IOException
Specified by: get in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public org.apache.hadoop.hbase.client.Result getRowOrBefore(byte[] row,
byte[] family)
Specified by: getRowOrBefore in interface org.apache.hadoop.hbase.client.HTableInterface
Parameters:
row - row
family - family

public org.apache.hadoop.hbase.client.ResultScanner getScanner(org.apache.hadoop.hbase.client.Scan scan)
throws IOException
Specified by: getScanner in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public org.apache.hadoop.hbase.client.ResultScanner getScanner(byte[] family)
throws IOException
Specified by: getScanner in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public org.apache.hadoop.hbase.client.ResultScanner getScanner(byte[] family,
byte[] qualifier)
throws IOException
Specified by: getScanner in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public void put(org.apache.hadoop.hbase.client.Put put)
throws IOException
Specified by: put in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public void put(List<org.apache.hadoop.hbase.client.Put> puts)
throws IOException
Specified by: put in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public boolean checkAndPut(byte[] row,
byte[] family,
byte[] qualifier,
byte[] value,
org.apache.hadoop.hbase.client.Put put)
throws IOException
Specified by: checkAndPut in interface org.apache.hadoop.hbase.client.HTableInterface
Parameters:
row - row
family - family
qualifier - qualifier
value - value
put - put
Throws:
IOException - if failed

public void delete(org.apache.hadoop.hbase.client.Delete delete)
throws IOException
Specified by: delete in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public void delete(List<org.apache.hadoop.hbase.client.Delete> deletes)
throws IOException
Specified by: delete in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public boolean checkAndDelete(byte[] row,
byte[] family,
byte[] qualifier,
byte[] value,
org.apache.hadoop.hbase.client.Delete delete)
throws IOException
Specified by: checkAndDelete in interface org.apache.hadoop.hbase.client.HTableInterface
Parameters:
row - row
family - family
qualifier - qualifier
value - value
delete - delete
Throws:
IOException - if failed

public void mutateRow(org.apache.hadoop.hbase.client.RowMutations rm)
Specified by: mutateRow in interface org.apache.hadoop.hbase.client.HTableInterface

public org.apache.hadoop.hbase.client.Result append(org.apache.hadoop.hbase.client.Append append)
throws IOException
Specified by: append in interface org.apache.hadoop.hbase.client.HTableInterface
Parameters:
append - append
Throws:
IOException - if failed

public org.apache.hadoop.hbase.client.Result increment(org.apache.hadoop.hbase.client.Increment increment)
throws IOException
Specified by: increment in interface org.apache.hadoop.hbase.client.HTableInterface
Parameters:
increment - increment
Throws:
IOException - if failed

public long incrementColumnValue(byte[] row,
byte[] family,
byte[] qualifier,
long amount)
throws IOException
Specified by: incrementColumnValue in interface org.apache.hadoop.hbase.client.HTableInterface
Parameters:
row - row
family - family
qualifier - qualifier
amount - amount
Throws:
IOException - if failed

public long incrementColumnValue(byte[] row,
byte[] family,
byte[] qualifier,
long amount,
boolean writeToWAL)
throws IOException
Specified by: incrementColumnValue in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public boolean isAutoFlush()
Specified by: isAutoFlush in interface org.apache.hadoop.hbase.client.HTableInterface

public void flushCommits()
throws IOException
Specified by: flushCommits in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public void close()
throws IOException
Specified by: close in interface Closeable
Specified by: close in interface AutoCloseable
Specified by: close in interface org.apache.hadoop.hbase.client.HTableInterface
Throws:
IOException

public org.apache.hadoop.hbase.client.RowLock lockRow(byte[] row)
Specified by: lockRow in interface org.apache.hadoop.hbase.client.HTableInterface

public void unlockRow(org.apache.hadoop.hbase.client.RowLock rl)
Specified by: unlockRow in interface org.apache.hadoop.hbase.client.HTableInterface

public <T extends org.apache.hadoop.hbase.ipc.CoprocessorProtocol> T coprocessorProxy(Class<T> protocol,
byte[] row)
Specified by: coprocessorProxy in interface org.apache.hadoop.hbase.client.HTableInterface

public <T extends org.apache.hadoop.hbase.ipc.CoprocessorProtocol,R> Map<byte[],R> coprocessorExec(Class<T> protocol,
byte[] startKey,
byte[] endKey,
org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T,R> callable)
Specified by: coprocessorExec in interface org.apache.hadoop.hbase.client.HTableInterface

public <T extends org.apache.hadoop.hbase.ipc.CoprocessorProtocol,R> void coprocessorExec(Class<T> protocol,
byte[] startKey,
byte[] endKey,
org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T,R> callable,
org.apache.hadoop.hbase.client.coprocessor.Batch.Callback<R> callback)
Specified by: coprocessorExec in interface org.apache.hadoop.hbase.client.HTableInterface

public void setAutoFlush(boolean autoFlush)
Specified by: setAutoFlush in interface org.apache.hadoop.hbase.client.HTableInterface
Parameters:
autoFlush - Whether or not to enable 'auto-flush'.

public void setAutoFlush(boolean autoFlush,
boolean clearBufferOnFail)
Turns 'auto-flush' on or off.
When enabled (default), Put operations don't get buffered/delayed and are
immediately executed. Failed operations are not retried. This is slower but
safer.
Turning off autoFlush means that multiple Puts will be accepted before any
RPC is actually sent to do the write operations. If the application dies
before pending writes get flushed to HBase, data will be lost.
When you turn autoFlush off, you should also consider the clearBufferOnFail
option. By default, asynchronous Put requests will be retried on failure
until successful. However, this can pollute the writeBuffer and slow down
batching performance. Additionally, you may want to issue a number of Put
requests and call flushCommits() as a barrier. In both use cases, consider
setting clearBufferOnFail to true to erase the buffer after flushCommits()
has been called, regardless of success.
Specified by: setAutoFlush in interface org.apache.hadoop.hbase.client.HTableInterface
Parameters:
autoFlush - Whether or not to enable 'auto-flush'.
clearBufferOnFail - Whether to keep Put failures in the writeBuffer
See Also: flushCommits()

public long getWriteBufferSize()
Returns the maximum size in bytes of the write buffer for this HTable.
The default value comes from the configuration parameter
hbase.client.write.buffer.
Specified by: getWriteBufferSize in interface org.apache.hadoop.hbase.client.HTableInterface

public void setWriteBufferSize(long writeBufferSize)
throws IOException
Sets the size of the buffer in bytes. If the new size is less than the
current amount of data in the write buffer, the buffer gets flushed.
Specified by: setWriteBufferSize in interface org.apache.hadoop.hbase.client.HTableInterface
Parameters:
writeBufferSize - The new write buffer size, in bytes.
Throws:
IOException - if a remote or network exception occurs.

public void setOperationTimeout(int operationTimeout)

public void setRuntimeBatchExecutor(ExecutorService runtimeBatchExecutor)

public void refreshTableEntry(String familyString,
boolean hasTestLoad)
throws Exception
Throws:
Exception

public byte[][] getStartKeys()
throws IOException
Throws:
IOException

public byte[][] getEndKeys()
throws IOException
Throws:
IOException

public org.apache.hadoop.hbase.util.Pair<byte[][],byte[][]> getStartEndKeys()
throws IOException
Throws:
IOException

Copyright © 2024. All rights reserved.
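The checkAndPut and checkAndDelete methods documented above are atomic compare-and-set operations: the mutation applies only if the named cell currently holds the expected value. A minimal JDK-only analogy (a hypothetical class, not OHTable code) sketches the semantics using ConcurrentMap's atomic replace/remove:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CheckAndPutAnalogy {
    // One row's cells: qualifier -> value. This models checkAndPut/checkAndDelete
    // semantics only; the real OHTable operations act on HBase cells server-side.
    private final ConcurrentMap<String, String> cells = new ConcurrentHashMap<>();

    public void put(String qualifier, String value) { cells.put(qualifier, value); }
    public String get(String qualifier) { return cells.get(qualifier); }

    // Atomically set qualifier to newValue only if its current value equals expected.
    public boolean checkAndPut(String qualifier, String expected, String newValue) {
        return cells.replace(qualifier, expected, newValue);
    }

    // Atomically delete qualifier only if its current value equals expected.
    public boolean checkAndDelete(String qualifier, String expected) {
        return cells.remove(qualifier, expected);
    }

    public static void main(String[] args) {
        CheckAndPutAnalogy row = new CheckAndPutAnalogy();
        row.put("c1", "a");
        System.out.println(row.checkAndPut("c1", "a", "b")); // true: expected value matched
        System.out.println(row.checkAndPut("c1", "a", "c")); // false: c1 is now "b"
    }
}
```

As with the OHTable methods, the check and the mutation happen as one indivisible step, so no concurrent writer can slip in between them.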