public class NoopDataSegmentPusher extends Object implements DataSegmentPusher
Fields inherited from interface DataSegmentPusher: JOINER

| Constructor and Description |
|---|
| NoopDataSegmentPusher() |
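To illustrate the no-op pattern this class implements, here is a minimal sketch. The `DataSegment` and `SegmentPusher` types below are simplified stand-ins for the real Druid classes (which live in `org.apache.druid.segment.loading` and carry more methods); the behavior shown, returning the segment descriptor untouched and an empty load spec, is an assumption about what a "noop" pusher does, not verified against Druid's source.

```java
import java.io.File;
import java.net.URI;
import java.util.Collections;
import java.util.Map;

// Simplified stand-ins for the Druid types (illustrative only).
class DataSegment {
    final String id;
    DataSegment(String id) { this.id = id; }
}

interface SegmentPusher {
    DataSegment push(File file, DataSegment segment, boolean replaceExisting);
    Map<String, Object> makeLoadSpec(URI uri);
}

// A no-op pusher: nothing is written to deep storage; the segment
// descriptor comes back unchanged and the load spec is empty.
class NoopPusher implements SegmentPusher {
    @Override
    public DataSegment push(File file, DataSegment segment, boolean replaceExisting) {
        return segment; // no I/O performed
    }

    @Override
    public Map<String, Object> makeLoadSpec(URI uri) {
        return Collections.emptyMap();
    }
}

public class NoopPusherDemo {
    public static void main(String[] args) {
        SegmentPusher pusher = new NoopPusher();
        DataSegment segment = new DataSegment("demo_2018-01-01_v1");
        DataSegment result = pusher.push(new File("/tmp/index"), segment, false);
        System.out.println(result == segment);  // true: same descriptor back, no side effects
        System.out.println(pusher.makeLoadSpec(URI.create("file:///tmp")).isEmpty());  // true
    }
}
```

A pusher like this is useful in tests or configurations where segment publishing should be disabled without changing calling code.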
| Modifier and Type | Method and Description |
|---|---|
| String | getPathForHadoop() |
| String | getPathForHadoop(String dataSource) Deprecated. |
| Map<String,Object> | makeLoadSpec(URI uri) |
| DataSegment | push(File file, DataSegment segment, boolean replaceExisting) Pushes index files and segment descriptor to deep storage. |
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface DataSegmentPusher: generateUniquePath, getAllowedPropertyPrefixesForHadoop, getDefaultStorageDir, getStorageDir, getStorageDir, makeIndexPathName

public String getPathForHadoop()
Specified by: getPathForHadoop in interface DataSegmentPusher

@Deprecated
public String getPathForHadoop(String dataSource)
Specified by: getPathForHadoop in interface DataSegmentPusher

public DataSegment push(File file, DataSegment segment, boolean replaceExisting)
Description copied from interface: DataSegmentPusher
Pushes index files and segment descriptor to deep storage.
Specified by: push in interface DataSegmentPusher
Parameters:
file - directory containing index files
segment - segment descriptor
replaceExisting - if true, pushes to a unique file path. This prevents situations where task failures or replica tasks can either overwrite or fail to overwrite existing segments, leading to the possibility of different versions of the same segment ID containing different data. For example, a Kafka indexing task starting at offset A and ending at offset B may push a segment to deep storage and then fail before writing the loadSpec to the metadata table, resulting in a replacement task being spawned. This replacement will also start at offset A but will read to offset C, then push a segment to deep storage and write the loadSpec metadata. Without unique file paths, this can only work correctly if new segments overwrite existing segments. Suppose that at this point the task fails, so that the supervisor retries again from offset A. This third attempt will overwrite the segments in deep storage before failing to write the loadSpec metadata, resulting in inconsistencies between the segment data now in deep storage and the copies of the segment already loaded by historicals.
If unique paths are used, the caller is responsible for cleaning up segments that were pushed but were not written to the metadata table (for example, when using replica tasks).

public Map<String,Object> makeLoadSpec(URI uri)
Specified by: makeLoadSpec in interface DataSegmentPusher

Copyright © 2011–2018 The Apache Software Foundation. All rights reserved.
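The unique-path tradeoff described for the replaceExisting parameter of push can be sketched in isolation. The path layout and naming below are illustrative assumptions, not Druid's actual storage directory scheme or its generateUniquePath implementation.

```java
import java.util.UUID;

// Sketch of why unique segment paths avoid the retry hazard: with unique
// paths, each push attempt writes to a fresh location, so a failed retry
// can never clobber data that an earlier attempt already published.
// The path scheme here is an assumption for illustration only.
public class UniquePathSketch {
    static String pathFor(String segmentId, boolean useUniquePath) {
        String base = "deep-storage/" + segmentId;
        return useUniquePath
                ? base + "/" + UUID.randomUUID() + "/index.zip"
                : base + "/index.zip";
    }

    public static void main(String[] args) {
        String id = "wikipedia_2018-01-01_v1";
        // Non-unique: every retry targets the same file and overwrites it.
        System.out.println(pathFor(id, false).equals(pathFor(id, false)));  // true
        // Unique: retries never collide, at the cost of orphaned files that
        // the caller must clean up when the metadata write never happens.
        System.out.println(pathFor(id, true).equals(pathFor(id, true)));    // false
    }
}
```

The design choice mirrors the documentation above: colliding paths make correctness depend on overwrite semantics across retries, while unique paths trade that fragility for an explicit cleanup obligation.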