Class HadoopDruidIndexerConfig


  • public class HadoopDruidIndexerConfig
    extends Object
    • Field Detail

      • JSON_MAPPER

        public static final com.fasterxml.jackson.databind.ObjectMapper JSON_MAPPER
      • INDEX_IO

        public static final org.apache.druid.segment.IndexIO INDEX_IO
      • PROPERTIES

        public static final Properties PROPERTIES
        Hadoop tasks running in an Indexer process need a reference to the Properties instance created in PropertiesModule so that the task sees properties that were specified in Druid's config files.

        This is not strictly necessary for Peon-based tasks, which have all properties (including config-file properties) specified on their command line by ForkingTaskRunner, so they could rely on System.getProperties() alone; but we always use the injected Properties for consistency.
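
        The distinction above can be illustrated in plain Java. A Properties instance layered over the system properties (as a module like PropertiesModule would build from config files) sees both sources, while System.getProperties() alone misses the file-based settings. The property key below is hypothetical, used only for illustration:

```java
import java.util.Properties;

public class InjectedPropertiesDemo {
    public static void main(String[] args) {
        // Simulate a Properties instance built from config files,
        // layered on top of the JVM system properties as defaults.
        Properties injected = new Properties(System.getProperties());
        injected.setProperty("druid.example.fromFile", "file-value"); // hypothetical key

        // A task reading only System.getProperties() does not see file-based settings.
        System.out.println(System.getProperty("druid.example.fromFile"));   // prints "null"

        // The injected Properties sees both file-based and command-line properties.
        System.out.println(injected.getProperty("druid.example.fromFile")); // prints "file-value"
    }
}
```

        This is why a task running in an Indexer process must use the injected instance: file-based properties never reach System.getProperties() in that process.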

    • Constructor Detail

    • Method Detail

      • fromConfiguration

        public static HadoopDruidIndexerConfig fromConfiguration(org.apache.hadoop.conf.Configuration conf)
      • getPathSpec

        public PathSpec getPathSpec()
      • getDataSource

        public String getDataSource()
      • getGranularitySpec

        public org.apache.druid.segment.indexing.granularity.GranularitySpec getGranularitySpec()
      • setGranularitySpec

        public void setGranularitySpec(org.apache.druid.segment.indexing.granularity.GranularitySpec granularitySpec)
      • getPartitionsSpec

        public org.apache.druid.indexer.partitions.DimensionBasedPartitionsSpec getPartitionsSpec()
      • getIndexSpec

        public org.apache.druid.segment.IndexSpec getIndexSpec()
      • getIndexSpecForIntermediatePersists

        public org.apache.druid.segment.IndexSpec getIndexSpecForIntermediatePersists()
      • getIntervals

        public com.google.common.base.Optional<List<org.joda.time.Interval>> getIntervals()
      • getTargetPartitionSize

        public int getTargetPartitionSize()
      • isUpdaterJobSpecSet

        public boolean isUpdaterJobSpecSet()
      • isCombineText

        public boolean isCombineText()
      • getParser

        public org.apache.druid.data.input.impl.InputRowParser getParser()
      • isLogParseExceptions

        public boolean isLogParseExceptions()
      • getMaxParseExceptions

        public int getMaxParseExceptions()
      • getAllowedProperties

        public Map<String,String> getAllowedProperties()
      • addInputPaths

        public org.apache.hadoop.mapreduce.Job addInputPaths(org.apache.hadoop.mapreduce.Job job)
                                                      throws IOException
        The Job instance should have its Configuration set (by calling addJobProperties(Job) or via injected system properties) before this method is called, because the PathSpec may create objects that depend on those configuration values.
        Throws:
        IOException
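
        The ordering requirement above can be sketched with a plain-Java stand-in (the configuration map, key, and resolveInputPath method below are simplified hypothetical stand-ins for the Hadoop Configuration and the PathSpec, not Druid's real API):

```java
import java.util.HashMap;
import java.util.Map;

public class OrderingDemo {
    // Stand-in for a Hadoop Job's Configuration.
    static Map<String, String> configuration = new HashMap<>();

    // Stand-in for a PathSpec that reads configuration values when resolving paths.
    static String resolveInputPath() {
        String base = configuration.get("input.base.dir"); // hypothetical key
        if (base == null) {
            throw new IllegalStateException("Configuration must be set before resolving paths");
        }
        return base + "/segments";
    }

    public static void main(String[] args) {
        // Correct order: populate the configuration first (as addJobProperties(Job) would)...
        configuration.put("input.base.dir", "/tmp/druid");
        // ...then resolve input paths (as addInputPaths does via the PathSpec).
        System.out.println(resolveInputPath()); // prints "/tmp/druid/segments"
    }
}
```

        Calling the path-resolution step before the configuration is populated fails, which mirrors why addInputPaths must run after addJobProperties(Job).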
      • getInputIntervals

        public List<org.joda.time.Interval> getInputIntervals()
      • getWorkingPath

        public String getWorkingPath()
      • intoConfiguration

        public void intoConfiguration(org.apache.hadoop.mapreduce.Job job)
      • verify

        public void verify()