Conversation
| package io.druid.segment.incremental; |
|
| public abstract class InternalDataIncrementalIndex<AggregatorType> extends IncrementalIndex<AggregatorType> |
Can we enlarge the scope of this class? Is there something in OffheapOakIncrementalIndex that can be shared between the off-heap and on-heap implementations and moved here?
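As one possible direction for the question above, logic that does not depend on where the bytes live could sit in the abstract parent. The sketch below is illustrative only: `SharedIndexBase`, `registerAggregator`, and `offsetOf` are hypothetical names, not code from this PR.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical parent class: bookkeeping of per-aggregator offsets within a
// serialized row is the same for on-heap and off-heap buffer-backed indexes,
// so it could be hoisted out of OffheapOakIncrementalIndex into a shared base.
abstract class SharedIndexBase
{
  private final Map<String, Integer> aggOffsets = new HashMap<>();

  protected void registerAggregator(String name, int offset)
  {
    aggOffsets.put(name, offset);
  }

  protected int offsetOf(String name)
  {
    return aggOffsets.get(name);
  }
}

// A concrete subclass only wires up its own aggregators; the offset
// bookkeeping itself lives in the shared parent.
class DemoIndex extends SharedIndexBase
{
  DemoIndex()
  {
    registerAggregator("count", 0);
    registerAggregator("sum", 8);
  }

  int demoOffset(String name)
  {
    return offsetOf(name);
  }
}
```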
| maxAggregatorIntermediateSize += Arrays.stream(incrementalIndexSchema.getMetrics()) |
|     .mapToLong(aggregator -> aggregator.getMaxIntermediateSize() + Long.BYTES * 2) |
|     .sum(); |
In general, there are many indentation changes between master and this version of the code... I wonder whether we will be asked to change them later for not following the existing coding conventions?
| new OffheapOakCreateKeyConsumer(dimensionDescsList), |
| new OffheapOakKeyCapacityCalculator(dimensionDescsList), |
| chunkMaxItems, |
| chunkBytesPerItem); |
Not really related to Druid, but why does the size of the chunk need to be provided here? It needs to be configurable, but passing it through the constructor doesn't look reasonable... ?
Actually, both chunkMaxItems and chunkBytesPerItem look weird to me as constructor parameters. They should be eliminated from the Oak constructor, and we should give the user other ways to influence the defaults (if needed).
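One common way to achieve what the comment above suggests is a builder with sensible defaults, so most callers never mention chunk sizing at all. This is a generic sketch, not Oak's actual API: `ChunkedMapBuilder` and the default values are invented for illustration.

```java
// Hypothetical builder: chunk sizing moves out of the constructor signature
// into optional setters with defaults, so only callers who care override it.
class ChunkedMapBuilder
{
  // Illustrative defaults; real values would come from benchmarking.
  private int chunkMaxItems = 2048;
  private int chunkBytesPerItem = 256;

  ChunkedMapBuilder chunkMaxItems(int items)
  {
    this.chunkMaxItems = items;
    return this;
  }

  ChunkedMapBuilder chunkBytesPerItem(int bytes)
  {
    this.chunkBytesPerItem = bytes;
    return this;
  }

  // Stand-in for build(): returns the resulting per-chunk capacity in bytes.
  int maxChunkBytes()
  {
    return chunkMaxItems * chunkBytesPerItem;
  }
}
```

With this shape, `new ChunkedMapBuilder().chunkMaxItems(1024)` tweaks one knob while keeping the other at its default, and the index constructor no longer needs to know about chunk internals.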
| public Row apply(@Nullable Map.Entry<ByteBuffer, ByteBuffer> entry) |
| { |
|   IncrementalIndexRow key = incrementalIndexRowDeserialization(entry.getKey()); |
|   ByteBuffer value = entry.getValue(); |
Should it return a read-only ByteBuffer?
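For reference, the standard way to do what the comment asks is `ByteBuffer.asReadOnlyBuffer()`, which returns a view that shares content with the original but rejects all mutating calls. A minimal sketch (the `exposeValue` helper is hypothetical, not from the PR):

```java
import java.nio.ByteBuffer;

class ReadOnlyValueDemo
{
  // Wrap the raw value buffer in a read-only view so callers cannot
  // mutate the underlying (possibly off-heap) memory through it.
  static ByteBuffer exposeValue(ByteBuffer raw)
  {
    return raw.asReadOnlyBuffer();
  }
}
```

Any `put()` on the returned buffer then throws `ReadOnlyBufferException`, while reads and position/limit manipulation still work independently of the source buffer.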
Creating one more pull request, again for internal review only. Thanks @galisheffi!