public class GroupByQueryEngineV2 extends Object

Class that can process a GroupByQuery on a single StorageAdapter. It returns a Sequence of ResultRow objects that are not guaranteed to be in any particular order, and may not even be fully grouped. It is expected that a downstream GroupByMergingQueryRunnerV2 will finish grouping these results.

This code runs on data servers, like Historicals.

Used by GroupByStrategyV2.process(GroupByQuery, StorageAdapter, GroupByQueryMetrics).

| Modifier and Type | Class and Description |
|---|---|
| static class | GroupByQueryEngineV2.GroupByEngineIterator<KeyType> |
| static class | GroupByQueryEngineV2.HashAggregateIterator |
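As the class description notes, rows emitted by this engine may not be fully grouped, and a downstream merging runner combines rows that share a grouping key. A minimal, self-contained sketch of that merge step, using a plain map of (key, aggregate) pairs as a stand-in for Druid's ResultRow (the class and method names here are illustrative, not Druid's actual GroupByMergingQueryRunnerV2):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for merging partially grouped engine output:
// each entry is a grouping key plus a numeric aggregate.
public class PartialGroupMerge
{
  public static Map<String, Long> mergePartialGroups(List<Map.Entry<String, Long>> partialRows)
  {
    // Rows with the same key may appear more than once and in any order;
    // the merge step sums their aggregates into one row per key.
    Map<String, Long> merged = new LinkedHashMap<>();
    for (Map.Entry<String, Long> row : partialRows) {
      merged.merge(row.getKey(), row.getValue(), Long::sum);
    }
    return merged;
  }

  public static void main(String[] args)
  {
    List<Map.Entry<String, Long>> rows = new ArrayList<>();
    rows.add(Map.entry("a", 1L));
    rows.add(Map.entry("b", 2L));
    rows.add(Map.entry("a", 3L)); // same key again: engine output was not fully grouped
    System.out.println(mergePartialGroups(rows)); // {a=4, b=2}
  }
}
```

The real merger also handles ordering and buffer management; this only shows why duplicate keys in the engine's output are acceptable.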
| Modifier and Type | Field and Description |
|---|---|
| static org.apache.druid.query.groupby.epinephelinae.GroupByQueryEngineV2.GroupByStrategyFactory | STRATEGY_FACTORY |
| Modifier and Type | Method and Description |
|---|---|
| static boolean | canPushDownLimit(ColumnSelectorFactory columnSelectorFactory, String columnName). Checks whether a column will operate correctly with LimitedBufferHashGrouper for query limit pushdown. |
| static void | convertRowTypesToOutputTypes(List<DimensionSpec> dimensionSpecs, ResultRow resultRow, int resultRowDimensionStart) |
| static GroupByColumnSelectorPlus[] | createGroupBySelectorPlus(ColumnSelectorPlus<GroupByColumnSelectorStrategy>[] baseSelectorPlus, int dimensionStart) |
| static int | getCardinalityForArrayAggregation(GroupByQueryConfig querySpecificConfig, GroupByQuery query, StorageAdapter storageAdapter, ByteBuffer buffer). Returns the cardinality of the array needed to do array-based aggregation, or -1 if array-based aggregation is impossible. |
| static boolean | hasNoExplodingDimensions(ColumnInspector inspector, List<DimensionSpec> dimensions). Checks whether all "dimensions" are either single-valued, or the input column or output dimension spec has specified a type that TypeSignature.isArray(). |
| static Sequence<ResultRow> | process(GroupByQuery query, StorageAdapter storageAdapter, NonBlockingPool<ByteBuffer> intermediateResultsBufferPool, GroupByQueryConfig querySpecificConfig, DruidProcessingConfig processingConfig, GroupByQueryMetrics groupByQueryMetrics) |
public static final org.apache.druid.query.groupby.epinephelinae.GroupByQueryEngineV2.GroupByStrategyFactory STRATEGY_FACTORY
public static GroupByColumnSelectorPlus[] createGroupBySelectorPlus(ColumnSelectorPlus<GroupByColumnSelectorStrategy>[] baseSelectorPlus, int dimensionStart)
public static Sequence<ResultRow> process(GroupByQuery query, @Nullable StorageAdapter storageAdapter, NonBlockingPool<ByteBuffer> intermediateResultsBufferPool, GroupByQueryConfig querySpecificConfig, DruidProcessingConfig processingConfig, @Nullable GroupByQueryMetrics groupByQueryMetrics)
public static int getCardinalityForArrayAggregation(GroupByQueryConfig querySpecificConfig, GroupByQuery query, StorageAdapter storageAdapter, ByteBuffer buffer)
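Per the method summary, getCardinalityForArrayAggregation returns the array cardinality needed for array-based aggregation, or -1 when array-based aggregation is impossible. A self-contained sketch of that sentinel pattern under simplified assumptions (a hypothetical helper taking per-dimension cardinalities and a buffer entry budget, not Druid's actual implementation):

```java
public class ArrayAggCardinality
{
  /**
   * Returns the array size needed for array-based aggregation over the given
   * per-dimension cardinalities, or -1 if array-based aggregation is impossible
   * (a cardinality is unknown, or the array would not fit within maxEntries).
   */
  public static int cardinalityForArrayAggregation(int[] dimensionCardinalities, int maxEntries)
  {
    long product = 1;
    for (int cardinality : dimensionCardinalities) {
      if (cardinality < 0) {
        return -1; // unknown cardinality: caller falls back to hash-based aggregation
      }
      product *= cardinality;
      if (product > maxEntries) {
        return -1; // array would be too large for the processing buffer
      }
    }
    return (int) product;
  }

  public static void main(String[] args)
  {
    // 10 x 20 = 200 slots fit in a 1000-entry buffer: array-based aggregation is possible.
    System.out.println(cardinalityForArrayAggregation(new int[]{10, 20}, 1000)); // 200
    // One dimension has unknown cardinality: impossible, signalled by -1.
    System.out.println(cardinalityForArrayAggregation(new int[]{10, -1}, 1000)); // -1
  }
}
```

The -1 sentinel lets the caller choose between a dense array of aggregation slots and a hash-based grouper with a single check.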
public static boolean hasNoExplodingDimensions(ColumnInspector inspector, List<DimensionSpec> dimensions)

Checks whether all "dimensions" are either single-valued, or the input column or output dimension spec has specified a type that TypeSignature.isArray(). Both cases indicate we don't want to explode the underlying multi-value column. Since selectors on non-existent columns will show up as full of nulls, they are effectively single-valued; however, capabilities on columns can also be null, for example during broker merge with an 'inline' datasource subquery, so we only return true from this method when capabilities are fully known.

public static void convertRowTypesToOutputTypes(List<DimensionSpec> dimensionSpecs, ResultRow resultRow, int resultRowDimensionStart)
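The hasNoExplodingDimensions rule is effectively a three-valued check: a dimension passes if it is known single-valued or declared with an array type, and any unknown capability fails the whole check. A simplified, self-contained sketch of that logic (the enum and method here are hypothetical stand-ins, not Druid's ColumnInspector or ColumnCapabilities):

```java
import java.util.List;

public class ExplodingDimensionsCheck
{
  // Simplified stand-in for per-column capabilities; UNKNOWN models a null
  // ColumnCapabilities (for example, during broker merge with an 'inline' subquery).
  public enum Capability { SINGLE_VALUED, ARRAY_TYPE, MULTI_VALUED, UNKNOWN }

  /**
   * Returns true only when every dimension is known single-valued or declared
   * as an array type. MULTI_VALUED would need exploding, and UNKNOWN fails the
   * check because we only answer true when capabilities are fully known.
   */
  public static boolean hasNoExplodingDimensions(List<Capability> dimensions)
  {
    for (Capability capability : dimensions) {
      switch (capability) {
        case SINGLE_VALUED:
        case ARRAY_TYPE:
          continue; // safe: no exploding needed
        default:
          return false; // MULTI_VALUED or UNKNOWN: be conservative
      }
    }
    return true;
  }

  public static void main(String[] args)
  {
    System.out.println(hasNoExplodingDimensions(
        List.of(Capability.SINGLE_VALUED, Capability.ARRAY_TYPE))); // true
    System.out.println(hasNoExplodingDimensions(
        List.of(Capability.SINGLE_VALUED, Capability.UNKNOWN))); // false
  }
}
```

Treating "unknown" as a failure rather than a pass is the conservative choice the description calls out: an optimization is skipped rather than risking incorrect results.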
public static boolean canPushDownLimit(ColumnSelectorFactory columnSelectorFactory, String columnName)

Checks whether a column will operate correctly with LimitedBufferHashGrouper for query limit pushdown.

Copyright © 2011–2022 The Apache Software Foundation. All rights reserved.
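Limit pushdown lets the grouper retain only a bounded set of groups instead of every distinct key, which is why each column must behave predictably with LimitedBufferHashGrouper. A toy, self-contained sketch of the bounded-grouping idea (ordering by key ascending; names are illustrative and this is not Druid's actual grouper, whose partial results are still merged downstream):

```java
import java.util.TreeMap;

public class LimitPushdownSketch
{
  /**
   * Aggregates (key, value) pairs while retaining at most 'limit' keys,
   * ordered by key ascending. Keys that sort past everything retained are
   * skipped once the limit is reached, bounding memory to 'limit' entries.
   */
  public static TreeMap<String, Long> aggregateWithLimit(String[] keys, long[] values, int limit)
  {
    TreeMap<String, Long> groups = new TreeMap<>();
    for (int i = 0; i < keys.length; i++) {
      if (groups.size() >= limit && !groups.containsKey(keys[i])
          && keys[i].compareTo(groups.lastKey()) > 0) {
        continue; // sorts after all retained keys: drop instead of growing the table
      }
      groups.merge(keys[i], values[i], Long::sum);
      if (groups.size() > limit) {
        groups.pollLastEntry(); // evict the worst retained key to stay within the limit
      }
    }
    return groups;
  }

  public static void main(String[] args)
  {
    String[] keys = {"d", "b", "c", "a", "b"};
    long[] values = {1, 2, 3, 4, 5};
    // Keep only the 2 smallest keys: a=4, b=7.
    System.out.println(aggregateWithLimit(keys, values, 2)); // {a=4, b=7}
  }
}
```

Because eviction can discard partial sums, this style of pushdown is only applied when it is safe for the column and query, and downstream merging reconciles the partial results; canPushDownLimit is the per-column safety check.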