Package org.apache.druid.indexer.hadoop
Class DatasourceInputFormat
java.lang.Object
  org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,org.apache.druid.data.input.InputRow>
    org.apache.druid.indexer.hadoop.DatasourceInputFormat

public class DatasourceInputFormat
extends org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,org.apache.druid.data.input.InputRow>
Constructor Summary
  DatasourceInputFormat()
Method Summary
  static void addDataSource(org.apache.hadoop.conf.Configuration conf, DatasourceIngestionSpec spec, List<WindowedDataSegment> segments, long maxSplitSize)
  org.apache.hadoop.mapreduce.RecordReader<org.apache.hadoop.io.NullWritable,org.apache.druid.data.input.InputRow> createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)
  static List<String> getDataSources(org.apache.hadoop.conf.Configuration conf)
  static DatasourceIngestionSpec getIngestionSpec(org.apache.hadoop.conf.Configuration conf, String dataSource)
  static long getMaxSplitSize(org.apache.hadoop.conf.Configuration conf, String dataSource)
  static List<WindowedDataSegment> getSegments(org.apache.hadoop.conf.Configuration conf, String dataSource)
  List<org.apache.hadoop.mapreduce.InputSplit> getSplits(org.apache.hadoop.mapreduce.JobContext context)
Method Detail
-
getSplits
public List<org.apache.hadoop.mapreduce.InputSplit> getSplits(org.apache.hadoop.mapreduce.JobContext context) throws IOException
Specified by:
  getSplits in class org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,org.apache.druid.data.input.InputRow>
Throws:
  IOException
-
createRecordReader
public org.apache.hadoop.mapreduce.RecordReader<org.apache.hadoop.io.NullWritable,org.apache.druid.data.input.InputRow> createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context)
Specified by:
  createRecordReader in class org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,org.apache.druid.data.input.InputRow>
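The splits returned by getSplits are each handed to createRecordReader, which yields (NullWritable, InputRow) pairs to the mapper. A minimal sketch of wiring this format into a MapReduce job, assuming the Druid indexing-hadoop and Hadoop MapReduce jars are on the classpath (the job name here is illustrative):

```java
import java.io.IOException;

import org.apache.druid.indexer.hadoop.DatasourceInputFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class DatasourceJobSetup
{
  public static Job configureJob(Configuration conf) throws IOException
  {
    Job job = Job.getInstance(conf, "druid-datasource-read");
    // Each input record is a (NullWritable, InputRow) pair produced by the
    // RecordReader that createRecordReader(...) returns for every split.
    job.setInputFormatClass(DatasourceInputFormat.class);
    return job;
  }
}
```

The Configuration passed in would normally have been populated beforehand via addDataSource (see below), since the format reads its ingestion spec and segment list from the job configuration.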
-
getDataSources
public static List<String> getDataSources(org.apache.hadoop.conf.Configuration conf) throws IOException
Throws:
  IOException
-
getIngestionSpec
public static DatasourceIngestionSpec getIngestionSpec(org.apache.hadoop.conf.Configuration conf, String dataSource) throws IOException
Throws:
  IOException
-
getSegments
public static List<WindowedDataSegment> getSegments(org.apache.hadoop.conf.Configuration conf, String dataSource) throws IOException
Throws:
  IOException
-
getMaxSplitSize
public static long getMaxSplitSize(org.apache.hadoop.conf.Configuration conf, String dataSource)
-
addDataSource
public static void addDataSource(org.apache.hadoop.conf.Configuration conf, DatasourceIngestionSpec spec, List<WindowedDataSegment> segments, long maxSplitSize) throws IOException
Throws:
  IOException
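A sketch of registering a datasource on a Hadoop Configuration and reading the settings back with the static getters. The ingestion spec and segment list are assumed to come from the caller (their construction depends on the Druid cluster being read), and the maxSplitSize value of 0 is an assumption about the "use defaults" convention; check the Druid source for the exact semantics.

```java
import java.io.IOException;
import java.util.List;

import org.apache.druid.indexer.hadoop.DatasourceIngestionSpec;
import org.apache.druid.indexer.hadoop.DatasourceInputFormat;
import org.apache.druid.indexer.hadoop.WindowedDataSegment;
import org.apache.hadoop.conf.Configuration;

public class DatasourceConfigExample
{
  public static void register(
      Configuration conf,
      DatasourceIngestionSpec spec,            // supplied by the caller
      List<WindowedDataSegment> segments       // supplied by the caller
  ) throws IOException
  {
    // Serialize the spec and segment list into the job Configuration.
    // maxSplitSize = 0 is assumed here to defer to the format's defaults.
    DatasourceInputFormat.addDataSource(conf, spec, segments, 0);

    // The same Configuration can then be interrogated per datasource:
    for (String dataSource : DatasourceInputFormat.getDataSources(conf)) {
      DatasourceIngestionSpec storedSpec =
          DatasourceInputFormat.getIngestionSpec(conf, dataSource);
      List<WindowedDataSegment> storedSegments =
          DatasourceInputFormat.getSegments(conf, dataSource);
      long maxSplitSize =
          DatasourceInputFormat.getMaxSplitSize(conf, dataSource);
    }
  }
}
```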