Package org.apache.druid.segment.loading
Class NoopDataSegmentPusher
java.lang.Object
    org.apache.druid.segment.loading.NoopDataSegmentPusher

All Implemented Interfaces:
    DataSegmentPusher

public class NoopDataSegmentPusher
extends Object
implements DataSegmentPusher
Mostly used for test purposes.
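Because the class contains no real storage logic, the no-op pusher pattern is easy to illustrate. Below is a minimal, self-contained sketch of a pusher used as a test double; the `SegmentPusher`, `Segment`, and `NoopPusher` types here are simplified stand-ins invented for this example, not the actual Druid interfaces.

```java
import java.io.File;

// Simplified stand-in for Druid's DataSegmentPusher interface.
interface SegmentPusher {
    Segment push(File file, Segment segment, boolean replaceExisting);
}

// Simplified stand-in for Druid's DataSegment descriptor.
class Segment {
    final String id;
    Segment(String id) { this.id = id; }
}

// The no-op implementation: handy in tests where a real deep-storage
// backend (S3, HDFS, local disk) would be unnecessary overhead.
class NoopPusher implements SegmentPusher {
    @Override
    public Segment push(File file, Segment segment, boolean replaceExisting) {
        return segment; // nothing is written anywhere
    }
}

public class Main {
    public static void main(String[] args) {
        Segment s = new Segment("datasource_2024-01-01");
        Segment result = new NoopPusher().push(new File("/tmp/index"), s, true);
        System.out.println(result == s); // same descriptor instance; nothing was stored
    }
}
```

A test that wires a `NoopPusher` into an ingestion path can verify the surrounding bookkeeping without ever touching deep storage.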
Field Summary

Fields inherited from interface org.apache.druid.segment.loading.DataSegmentPusher
    JOINER
Constructor Summary

Constructor                  Description
NoopDataSegmentPusher()
Method Summary
All Methods | Instance Methods | Concrete Methods | Deprecated Methods

Modifier and Type    Method                                                               Description
String               getPathForHadoop()
String               getPathForHadoop(String dataSource)                                  Deprecated.
Map<String,Object>   makeLoadSpec(URI uri)
DataSegment          push(File file, DataSegment segment, boolean replaceExisting)        Pushes index files and segment descriptor to deep storage.
DataSegment          pushToPath(File file, DataSegment segment, String storageDirSuffix)
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.druid.segment.loading.DataSegmentPusher
getAllowedPropertyPrefixesForHadoop, getStorageDir, getStorageDir, makeIndexPathName
Method Detail
getPathForHadoop

public String getPathForHadoop()

Specified by:
    getPathForHadoop in interface DataSegmentPusher
getPathForHadoop

@Deprecated
public String getPathForHadoop(String dataSource)

Deprecated.

Specified by:
    getPathForHadoop in interface DataSegmentPusher
push

public DataSegment push(File file, DataSegment segment, boolean replaceExisting)

Description copied from interface: DataSegmentPusher

Pushes index files and segment descriptor to deep storage. Expected to perform its own retries, if appropriate.

Specified by:
    push in interface DataSegmentPusher
Parameters:
    file - directory containing index files
    segment - segment descriptor
    replaceExisting - if true, pushes to a unique file path. This prevents situations where task failures or replica tasks can either overwrite or fail to overwrite existing segments, leading to the possibility of different versions of the same segment ID containing different data. As an example, a Kafka indexing task starting at offset A and ending at offset B may push a segment to deep storage and then fail before writing the loadSpec to the metadata table, resulting in a replacement task being spawned. This replacement will also start at offset A but will read to offset C, then push a segment to deep storage and write the loadSpec metadata. Without unique file paths, this can only work correctly if new segments overwrite existing segments. Suppose that at this point the task fails, so that the supervisor retries again from offset A. This third attempt will overwrite the segments in deep storage before failing to write the loadSpec metadata, resulting in inconsistencies between the segment data now in deep storage and the copies of the segment already loaded by historicals. If unique paths are used, the caller is responsible for cleaning up segments that were pushed but not written to the metadata table (for example, when using replica tasks).
Returns:
    segment descriptor
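The unique-path behavior described for the push parameters can be sketched as follows. `UniquePaths.storagePath` is a hypothetical helper invented for illustration, not Druid's actual path logic: appending a random UUID to the storage directory means each push attempt lands in its own location, so a retried task can never clobber a copy a historical may have already loaded.

```java
import java.util.UUID;

public class UniquePaths {
    // Hypothetical helper: build a deep-storage path for a segment,
    // optionally suffixed with a per-attempt UUID.
    static String storagePath(String baseDir, String segmentId, boolean useUniquePath) {
        String dir = baseDir + "/" + segmentId;
        if (useUniquePath) {
            // Each attempt gets a fresh suffix; the attempt whose loadSpec
            // reaches the metadata store "wins", and the caller cleans up
            // any orphaned copies.
            dir = dir + "/" + UUID.randomUUID();
        }
        return dir;
    }

    public static void main(String[] args) {
        String a = storagePath("s3://bucket", "seg_v1", true);
        String b = storagePath("s3://bucket", "seg_v1", true);
        System.out.println(!a.equals(b)); // distinct paths, so retries never collide
    }
}
```

The trade-off noted in the javadoc follows directly: since retries no longer overwrite each other, orphaned copies accumulate and must be cleaned up by the caller.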
pushToPath

public DataSegment pushToPath(File file, DataSegment segment, String storageDirSuffix)

Specified by:
    pushToPath in interface DataSegmentPusher
makeLoadSpec

public Map<String,Object> makeLoadSpec(URI uri)

Specified by:
    makeLoadSpec in interface DataSegmentPusher