Thanks for reporting this @ilveroluca and @tiphaineruy!
Could you please give us some more details, e.g., how many fragments, their sizes, the subarrays you wrote into for the dense case, etc?
Some comments before we deep dive:
- It is important to understand all the `sm.consolidation.*` config parameters, since, if you are using the defaults on huge arrays, you will probably run out of memory (see the config sketch after this list).
- Please check the consolidation docs, and especially the topic on dense array amplification, in case you write to the array in non-contiguous subarray "patches".
- We have a known bug in that you should explicitly pass the `config` object to `tiledb.consolidate` (see discussion here); a workaround sketch follows this list.
- For the dense case, and especially if you write reasonably-sized data (e.g., 500MB-1GB) into disjoint subarrays, you can improve performance by consolidating just the fragment metadata instead of the actual fragments (example below).
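To make the first point concrete, here is a minimal sketch of a consolidation config that bounds memory use. The parameter names are standard TileDB `sm.consolidation.*` settings, but the specific values are illustrative assumptions only; you would tune them to your fragment count and sizes:

```python
import tiledb

# Illustrative values only -- tune to your workload.
cfg = tiledb.Config({
    # Consolidate in small batches instead of all fragments at once.
    "sm.consolidation.steps": "4",
    "sm.consolidation.step_min_frags": "2",
    "sm.consolidation.step_max_frags": "8",
    # Only merge fragments of comparable size in each step.
    "sm.consolidation.step_size_ratio": "0.5",
    # Per-attribute buffer size in bytes (~50MB here) -- the main memory knob.
    "sm.consolidation.buffer_size": "50000000",
})
```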
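Regarding the known bug: until it is fixed, the workaround is to pass the config object to `tiledb.consolidate` directly rather than relying on the context config. The array URI below is a placeholder:

```python
# Pass the config explicitly; setting it only on the ctx may be ignored
# due to the known bug mentioned above. "my_array" is a placeholder URI.
tiledb.consolidate("my_array", config=cfg)
```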
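And for the last point, a sketch of consolidating only the fragment metadata (same placeholder URI as above). This merges the lightweight per-fragment metadata without rewriting the actual data, which is much cheaper when the fragments themselves are reasonably sized and disjoint:

```python
# Switch the consolidation mode so only fragment metadata is consolidated,
# not the fragments themselves.
meta_cfg = tiledb.Config({"sm.consolidation.mode": "fragment_meta"})
tiledb.consolidate("my_array", config=meta_cfg)
```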
@tiphaineruy, @ihnorton could help us figure out why there is a regression. Could you please send us any information you can so that we can reproduce the issue on our side?
Thanks!