Interface Plumber

All Known Implementing Classes:
AppenderatorPlumber, FlushingPlumber, RealtimePlumber

public interface Plumber
Field Summary
static org.apache.druid.segment.incremental.IncrementalIndexAddResult DUPLICATE
static org.apache.druid.segment.incremental.IncrementalIndexAddResult NOT_WRITABLE
static org.apache.druid.segment.incremental.IncrementalIndexAddResult THROWAWAY
-
Method Summary
org.apache.druid.segment.incremental.IncrementalIndexAddResult add(org.apache.druid.data.input.InputRow row, com.google.common.base.Supplier<org.apache.druid.data.input.Committer> committerSupplier)

void finishJob()
Perform any final processing and clean up after ourselves.

<T> org.apache.druid.query.QueryRunner<T> getQueryRunner(org.apache.druid.query.Query<T> query)

void persist(org.apache.druid.data.input.Committer committer)
Persist any in-memory indexed data to durable storage.

Object startJob()
Perform any initial setup.
-
Field Detail
-
THROWAWAY
static final org.apache.druid.segment.incremental.IncrementalIndexAddResult THROWAWAY
-
NOT_WRITABLE
static final org.apache.druid.segment.incremental.IncrementalIndexAddResult NOT_WRITABLE
-
DUPLICATE
static final org.apache.druid.segment.incremental.IncrementalIndexAddResult DUPLICATE
-
Method Detail
-
startJob
Object startJob()
Perform any initial setup. Should be called before using any other methods, and should be paired with a corresponding call to finishJob().
Returns:
the metadata of the "newest" segment that might have previously been persisted
-
add
org.apache.druid.segment.incremental.IncrementalIndexAddResult add(org.apache.druid.data.input.InputRow row, com.google.common.base.Supplier<org.apache.druid.data.input.Committer> committerSupplier) throws org.apache.druid.segment.incremental.IndexSizeExceededException
Parameters:
row - the row to insert
committerSupplier - supplier of a committer associated with all data that has been added, including this row
Returns:
IncrementalIndexAddResult whose rowCount indicates the outcome: a positive number is how many summarized rows exist in the index for that timestamp; -1 means the row was thrown away because it was too late; -2 means the row was thrown away because it is a duplicate
Throws:
org.apache.druid.segment.incremental.IndexSizeExceededException
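The rowCount semantics above can be sketched as a small helper. AddResultInterpreter and describe are hypothetical names introduced here for illustration; they are not part of the Druid API, but the sentinel values mirror the documented contract (-1 = too late, -2 = duplicate, non-negative = summarized row count).

```java
// Hypothetical helper showing how a caller might interpret the rowCount
// carried by the IncrementalIndexAddResult returned from Plumber.add().
public class AddResultInterpreter {
    public static String describe(int rowCount) {
        if (rowCount == -1) {
            return "thrown away: too late";        // row fell outside the ingestion window
        } else if (rowCount == -2) {
            return "thrown away: duplicate";       // row was already present
        } else if (rowCount >= 0) {
            return rowCount + " summarized rows";  // rows in the index for that timestamp
        }
        return "unknown result";
    }

    public static void main(String[] args) {
        System.out.println(describe(3));
        System.out.println(describe(-1));
    }
}
```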
-
getQueryRunner
<T> org.apache.druid.query.QueryRunner<T> getQueryRunner(org.apache.druid.query.Query<T> query)
-
persist
void persist(org.apache.druid.data.input.Committer committer)
Persist any in-memory indexed data to durable storage. This may be only somewhat durable, e.g. the machine's local disk.
Parameters:
committer - committer to use after persisting data
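The ordering implied by "committer to use after persisting data" can be sketched with simplified stand-in types. PersistSketch and its fields are hypothetical, not the Druid classes; the point is only that the committer runs after the in-memory data has been flushed, so the commit marks durable progress.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the persist(committer) contract: flush first,
// then hand control to the committer.
public class PersistSketch {
    final List<String> inMemoryRows = new ArrayList<>();
    final List<String> durableRows = new ArrayList<>();
    boolean committed = false;

    void add(String row) {
        inMemoryRows.add(row);
    }

    void persist(Runnable committer) {
        durableRows.addAll(inMemoryRows);  // write indexed data out to "disk"
        inMemoryRows.clear();
        committer.run();                   // commit only after the flush succeeds
    }
}
```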
-
finishJob
void finishJob()
Perform any final processing and clean up after ourselves. Should be called after all data has been fed into sinks and persisted.
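The lifecycle described across the method details (startJob() first, paired with finishJob() at the end, with add() and persist() in between) can be sketched as a driver loop. MiniPlumber, NoopPlumber, and runIngestion are hypothetical simplified stand-ins introduced here, not the real org.apache.druid.segment.realtime.plumber types.

```java
import java.util.ArrayList;
import java.util.List;

public class PlumberLifecycle {
    // Hypothetical, simplified analogue of the Plumber interface.
    public interface MiniPlumber {
        Object startJob();
        int add(String row);
        void persist(Runnable committer);
        void finishJob();
    }

    // Trivial implementation so the driver below can run.
    public static class NoopPlumber implements MiniPlumber {
        public Object startJob() { return null; }
        public int add(String row) { return 1; }
        public void persist(Runnable committer) { committer.run(); }
        public void finishJob() { }
    }

    public static List<String> runIngestion(MiniPlumber plumber, List<String> rows) {
        List<String> log = new ArrayList<>();
        plumber.startJob();                      // must come first; may return prior metadata
        log.add("startJob");
        for (String row : rows) {
            plumber.add(row);                    // insert each incoming row
            log.add("add:" + row);
        }
        plumber.persist(() -> log.add("committed"));  // flush, then commit
        plumber.finishJob();                     // paired with startJob
        log.add("finishJob");
        return log;
    }
}
```

The driver enforces the documented pairing: nothing touches the plumber before startJob(), and finishJob() runs only after all data has been fed in and persisted.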