public class HadoopDruidIndexerConfig extends Object
| Modifier and Type | Class and Description |
|---|---|
| static class | HadoopDruidIndexerConfig.IndexJobCounters |
| Modifier and Type | Field and Description |
|---|---|
| static IndexIO | INDEX_IO |
| static com.fasterxml.jackson.databind.ObjectMapper | JSON_MAPPER |
| static Properties | PROPERTIES — Hadoop tasks running in an Indexer process need a reference to the Properties instance created in PropertiesModule so that the task sees properties that were specified in Druid's config files. |
| Constructor and Description |
|---|
| HadoopDruidIndexerConfig(HadoopIngestionSpec spec) |
public static final com.fasterxml.jackson.databind.ObjectMapper JSON_MAPPER
public static final IndexIO INDEX_IO
public static final Properties PROPERTIES
This is not strictly necessary for Peon-based tasks which have all properties, including config file properties, specified on their command line by ForkingTaskRunner (so they could use System.getProperties() only), but we always use the injected Properties for consistency.
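The distinction between the injected Properties and System.getProperties() can be illustrated with a small, self-contained sketch (the key name and class below are hypothetical, used only to show the pattern):

```java
import java.util.Properties;

public class InjectedPropertiesDemo {
    // Hypothetical helper: a task reads settings from an injected Properties
    // instance rather than calling System.getProperties() directly.
    static String readSetting(Properties injected, String key, String fallback) {
        return injected.getProperty(key, fallback);
    }

    public static void main(String[] args) {
        // Simulate properties loaded from config files (as PropertiesModule would),
        // layered on top of the JVM system properties.
        Properties fromConfigFiles = new Properties(System.getProperties());
        fromConfigFiles.setProperty("druid.processing.tmpDir", "/tmp/druid");

        // The injected instance sees the file-based property...
        System.out.println(readSetting(fromConfigFiles, "druid.processing.tmpDir", "unset"));
        // ...while System.getProperties() alone does not.
        System.out.println(readSetting(System.getProperties(), "druid.processing.tmpDir", "unset"));
    }
}
```

This is why the injected instance is used consistently: it works both in Indexer processes (where config-file properties are not system properties) and in Peon-based tasks (where they are).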
public HadoopDruidIndexerConfig(HadoopIngestionSpec spec)
public static HadoopDruidIndexerConfig fromSpec(HadoopIngestionSpec spec)
public static HadoopDruidIndexerConfig fromFile(File file)
public static HadoopDruidIndexerConfig fromString(String str)
public static HadoopDruidIndexerConfig fromDistributedFileSystem(String path)
public static HadoopDruidIndexerConfig fromConfiguration(org.apache.hadoop.conf.Configuration conf)
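The static `from*` factories above each deserialize a config from a different source. A minimal sketch of the two most common entry points (a fragment, not a complete program; it assumes the druid-indexing-hadoop artifact is on the classpath):

```java
import java.io.File;

class ConfigFactories {
    // From a spec file on local disk (path is illustrative).
    static HadoopDruidIndexerConfig load(File specFile) {
        return HadoopDruidIndexerConfig.fromFile(specFile);
    }

    // Equivalent, if a HadoopIngestionSpec object is already in hand.
    static HadoopDruidIndexerConfig load(HadoopIngestionSpec spec) {
        return HadoopDruidIndexerConfig.fromSpec(spec);
    }
}
```

`fromString` and `fromDistributedFileSystem` follow the same pattern for raw JSON and HDFS paths, and `fromConfiguration` recovers a config previously stored in a Hadoop Configuration.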
public HadoopIngestionSpec getSchema()
public PathSpec getPathSpec()
public String getDataSource()
public GranularitySpec getGranularitySpec()
public void setGranularitySpec(GranularitySpec granularitySpec)
public DimensionBasedPartitionsSpec getPartitionsSpec()
public IndexSpec getIndexSpec()
public IndexSpec getIndexSpecForIntermediatePersists()
public void setShardSpecs(Map<Long,List<HadoopyShardSpec>> shardSpecs)
public com.google.common.base.Optional<List<org.joda.time.Interval>> getIntervals()
public int getTargetPartitionSize()
public boolean isUpdaterJobSpecSet()
public boolean isCombineText()
public InputRowParser getParser()
public HadoopyShardSpec getShardSpec(Bucket bucket)
public boolean isLogParseExceptions()
public int getMaxParseExceptions()
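A hedged sketch of reading a loaded config through these accessors (a fragment; `cfg` is assumed to come from one of the `from*` factories above):

```java
import java.util.List;
import org.joda.time.Interval;

static void describe(HadoopDruidIndexerConfig cfg) {
    System.out.println("dataSource  = " + cfg.getDataSource());
    System.out.println("workingPath = " + cfg.getWorkingPath());

    // getIntervals() returns a Guava Optional: the GranularitySpec
    // may not define explicit input intervals.
    com.google.common.base.Optional<List<Interval>> intervals = cfg.getIntervals();
    if (intervals.isPresent()) {
        System.out.println("intervals   = " + intervals.get());
    }
}
```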
public org.apache.hadoop.mapreduce.Job addInputPaths(org.apache.hadoop.mapreduce.Job job)
throws IOException
The Job instance should already have its Configuration populated (by calling addJobProperties(Job) or via injected system properties) before this method is called, because the PathSpec may create objects which depend on the values of these configurations.
Throws: IOException
public List<org.joda.time.Interval> getInputIntervals()
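The ordering constraint described above can be sketched as follows (a fragment under stated assumptions: `addJobProperties(Job)` is the method linked from the Javadoc but not listed in this excerpt):

```java
// Populate the Job's Configuration FIRST, then add input paths,
// since the PathSpec may read those configuration values.
static org.apache.hadoop.mapreduce.Job wireInputs(HadoopDruidIndexerConfig cfg)
        throws java.io.IOException {
    org.apache.hadoop.mapreduce.Job job = org.apache.hadoop.mapreduce.Job.getInstance();
    cfg.addJobProperties(job);  // or rely on injected system properties
    return cfg.addInputPaths(job);  // throws IOException on bad paths
}
```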
public String getWorkingPath()
public void intoConfiguration(org.apache.hadoop.mapreduce.Job job)
public void verify()
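Putting the pieces together, a typical setup might look like the following (a sketch, not a definitive recipe; the spec file path is illustrative):

```java
import java.io.File;

static org.apache.hadoop.mapreduce.Job prepare(File specFile) throws Exception {
    HadoopDruidIndexerConfig cfg = HadoopDruidIndexerConfig.fromFile(specFile);
    cfg.verify();  // fail fast on an inconsistent spec

    org.apache.hadoop.mapreduce.Job job = org.apache.hadoop.mapreduce.Job.getInstance();
    cfg.intoConfiguration(job);  // store the config in the Job's Configuration
    return job;
}
```

A mapper or reducer on the cluster side can then recover the same config with `fromConfiguration(job.getConfiguration())`.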
Copyright © 2011–2022 The Apache Software Foundation. All rights reserved.