public class ChunkInputFormat extends InputFormatBase<List<Map.Entry<Key,Value>>,InputStream>

An InputFormat that turns the file data ingested with FileDataIngest into an InputStream using ChunkInputStream. Mappers used with this InputFormat must close the InputStream.

Nested classes/interfaces inherited: InputFormatBase.RangeInputSplit, InputFormatBase.RecordReaderBase<K,V>, AbstractInputFormat.AbstractRecordReader<K,V>

Fields inherited: CLASS, log

| Constructor and Description |
|---|
| ChunkInputFormat() |
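Because the record reader hands each mapper the list of chunk entries for a file together with an open InputStream over that file's data, and the class contract above requires the mapper to close that stream, a typical mapper might look like the following sketch. The class name FileSizeMapper and the Text/LongWritable output types are illustrative assumptions, not part of this API; it assumes Accumulo and Hadoop client jars on the classpath.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.Map;

import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: counts the bytes of each ingested file. The key is the
// list of chunk entries for one file; the value is an InputStream over the
// reassembled file data.
public class FileSizeMapper
    extends Mapper<List<Map.Entry<Key,Value>>,InputStream,Text,LongWritable> {

  @Override
  protected void map(List<Map.Entry<Key,Value>> chunkEntries, InputStream in, Context context)
      throws IOException, InterruptedException {
    long bytes = 0;
    byte[] buf = new byte[4096];
    try {
      int n;
      while ((n = in.read(buf)) != -1) {
        bytes += n;
      }
    } finally {
      in.close(); // required: mappers used with this InputFormat must close the InputStream
    }
    // Use the row of the first chunk entry to identify the file in the output.
    context.write(new Text(chunkEntries.get(0).getKey().getRow().toString()),
        new LongWritable(bytes));
  }
}
```

Closing the stream in a `finally` block ensures the underlying scanner resources are released even if the map logic throws.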
| Modifier and Type | Method and Description |
|---|---|
| org.apache.hadoop.mapreduce.RecordReader<List<Map.Entry<Key,Value>>,InputStream> | createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) |
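The inherited configuration methods (setZooKeeperInstance, setConnectorInfo, setInputTableName, setScanAuthorizations) wire this format into a MapReduce job. A minimal, hypothetical setup sketch, assuming Accumulo 1.x-era client APIs; the instance, user, password, table name, and visibility strings are placeholders:

```java
import org.apache.accumulo.core.client.ClientConfiguration;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ChunkJobSetup {
  // Hypothetical helper: configures a job to read file chunks via ChunkInputFormat.
  public static Job configure(Configuration conf) throws Exception {
    Job job = Job.getInstance(conf, "read file chunks");
    job.setInputFormatClass(ChunkInputFormat.class);

    // Placeholder connection details -- substitute real values.
    ChunkInputFormat.setZooKeeperInstance(job,
        ClientConfiguration.loadDefault().withInstance("myInstance").withZkHosts("zkhost:2181"));
    ChunkInputFormat.setConnectorInfo(job, "user", new PasswordToken("passwd"));
    ChunkInputFormat.setInputTableName(job, "dataTable");
    ChunkInputFormat.setScanAuthorizations(job, new Authorizations("exampleVis"));
    return job;
  }
}
```

The scan authorizations must cover the column visibilities the chunks were ingested with, or the record reader will not see the data.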
Methods inherited from class InputFormatBase: addIterator, fetchColumns, getAutoAdjustRanges, getFetchedColumns, getInputTableName, getIterators, getRanges, getTabletLocator, isBatchScan, isIsolated, isOfflineScan, setAutoAdjustRanges, setBatchScan, setInputTableName, setLocalIterators, setOfflineTableScan, setRanges, setScanIsolation, usesLocalIterators

Methods inherited from class AbstractInputFormat: getAuthenticationToken, getClientConfiguration, getInputTableConfig, getInputTableConfigs, getInstance, getLogLevel, getPrincipal, getScanAuthorizations, getSplits, getTabletLocator, getToken, getTokenClass, isConnectorInfoSet, setConnectorInfo, setConnectorInfo, setLogLevel, setMockInstance, setScanAuthorizations, setZooKeeperInstance, setZooKeeperInstance, validateOptions

public org.apache.hadoop.mapreduce.RecordReader<List<Map.Entry<Key,Value>>,InputStream> createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) throws IOException, InterruptedException
Specified by: createRecordReader in class org.apache.hadoop.mapreduce.InputFormat<List<Map.Entry<Key,Value>>,InputStream>

Throws: IOException, InterruptedException

Copyright © 2015 Apache Accumulo Project. All rights reserved.