I have C# code that reads a TileDB array. I am trying to read a total of 4,480,000 cells, each containing 2 floats, which by my rough math comes out to around 32 MB of data. But when I try to read this slice I get the following error:
“DenseReader: Cannot process a single tile, increase memory budget”
Following is my current code:
public List<float> ReadArray(string arrayName, List<int> slice, string attribute)
{
    Console.WriteLine($"Reading from the slice has started. Started at {DateTime.Now}. Slice: {slice[0]}, {slice[1]}, {slice[2]}, {slice[3]}");

    var config = new Config();
    config.Set("sm.memory_budget", (32L * 1024 * 1024 * 1024).ToString());     // Increase total memory budget
    config.Set("sm.tile_cache_size", (32L * 1024 * 1024 * 1024).ToString());   // Increase tile cache size
    config.Set("sm.memory_budget_var", (32L * 1024 * 1024 * 1024).ToString());

    using var ctx = new Context(config);
    using var array = new Array(ctx, arrayName);
    array.Open(QueryType.Read);

    using var query = new Query(ctx, array);
    using var subArray = new Subarray(array);

    int dataSize = CalculateSizeOfData(slice);
    Console.WriteLine("dataSize is " + dataSize);
    var readData = new float[dataSize];

    subArray.SetSubarray(slice[0], slice[1], slice[2], slice[3]);
    query.SetLayout(LayoutType.RowMajor);
    query.SetDataBuffer(attribute, readData);
    query.SetSubarray(subArray);
    query.Submit();
    array.Close();

    Console.WriteLine($"Reading from the slice has completed. Completed at {DateTime.Now} Slice: {slice[0]}, {slice[1]}, {slice[2]}, {slice[3]}");
    return new List<float>(readData);
}
Please note that I did not have any custom config code before; I added it after seeing the error to try to increase the budget, but it does not seem to have any effect.
I also want to add that when this was C++ code, it was able to handle far bigger datasets than this, and I never saw this error with the C++ code. It is something we started seeing only after moving to C#.
Hello @kunaal_desai, you should be using the sm.mem.total_budget config option to set the memory budget.
Could you tell us more details about your array and its schema? The default memory budget is 10GB; it is unusual that reading 32MB of data would exhaust the budget.
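For reference, here is a minimal sketch of the change, assuming the same TileDB.CSharp API as in your snippet; the 16 GB value is only an illustration, not a recommendation:

    // Hedged sketch: only the config key changes relative to the code above;
    // the budget value is illustrative and should be tuned to your machine.
    var config = new Config();
    config.Set("sm.mem.total_budget", (16L * 1024 * 1024 * 1024).ToString());
    using var ctx = new Context(config);
    using var array = new Array(ctx, arrayName);
    array.Open(QueryType.Read);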
public void CreateArray(string uri, ArrayDimension rowDimension, ArrayDimension columnDimension)
{
    try
    {
        using var ctx = new Context();
        var domain = new Domain(ctx);
        domain.AddDimensions(
            Dimension.Create(ctx, "rows", rowDimension.Start, rowDimension.End, rowDimension.TileExtent),
            Dimension.Create(ctx, "cols", columnDimension.Start, columnDimension.End, columnDimension.TileExtent)
        );

        // The array will be dense.
        var schema = new ArraySchema(ctx, ArrayType.Dense);
        schema.SetDomain(domain);

        var xNorm = new TileDB.CSharp.Attribute(ctx, "x_norm", DataType.Float32);
        var yNorm = new TileDB.CSharp.Attribute(ctx, "y_norm", DataType.Float32);
        schema.AddAttributes(xNorm, yNorm);

        Array.Create(ctx, uri, schema);
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        throw;
    }
}
Two float attributes, and the chunk that I am reading has 440k cells.
Also, thanks for pointing me to the correct config value. Updating it to 15 GB worked, but I do notice some slowdown; I'm not sure if it is related to the memory usage problem.
The other thing is that most of this code is the same as the previous C++ version, but I have noticed that the C++ code was able to handle a much larger load than this without any issues.
Unfortunately the dump functions are not currently available in the C# API due to limitations of the underlying native C API. You can create a separate C++ program that dumps the array you created from C#.
You are also using an old version of TileDB. The latest versions are TileDB.CSharp 5.14.0 and TileDB.Native 2.24.1. Can you update your program to these versions and try to reproduce the issue again?
Thanks for providing the dumps @kunaal_desai. As @teo-tsirpanis said, they should be sufficient for the team to build an array that reproduces your issue. If you can give us access to the data, though, it would make the investigation much easier and help expedite it. Let us know. Also note that we are currently working on a new release, coming out in about a week, that might address your issue. We'll let you know when it's ready so you can try it with your array.
@kunaal_desai Actually, we won't need access to your data. I can see the issue just from looking at your schema and fragments, and there are probably a few easy tweaks we can make to fix everything for you. First, a little more information about what the problem is. From your schema, I see that you have two dimensions and that the tile extent covers the whole domain, so you really have one large tile for the whole dataset. This is not ideal, as TileDB always stores full tiles for each write in a dense array. This might be improved in the future, but it is done this way for now to simplify the read algorithms and to make them perform better. So, in your case, for each write, each attribute will have an in-memory tile size of ~280MB. This will likely compress well on disk, but in memory it will end up taking a lot of space, as you have 114 tiles to bring into memory. Since the read algorithm tries to load, at a minimum, everything it needs to process one tile, it is probably trying to load all your fragments into memory at once but does not have enough memory to decompress everything.
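To make those numbers concrete, here is a rough back-of-the-envelope sketch; the figures are the approximate ones quoted above, not measurements:

    // Rough arithmetic sketch using the approximate figures from this thread.
    const long tileBytes = 280L * 1024 * 1024;  // ~280 MB uncompressed tile per attribute per fragment
    const long fragmentTiles = 114;             // tiles that may need to be brought into memory
    long bytesNeeded = tileBytes * fragmentTiles;
    // ~31 GB, which comfortably exceeds the 10 GB default sm.mem.total_budget
    Console.WriteLine($"{bytesNeeded / (1024.0 * 1024 * 1024):F1} GB");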
Now, let's chat about a solution! Looking at your fragments, I notice that you store one full column per write, so for now, changing the tile extent of your rows dimension to 1 will greatly improve everything: you will only store tiles of a little over 2MB for each write, and bring roughly 280MB of uncompressed data into memory to do your read. Are you always going to write the data for this array one full row at a time? Also, may I ask how you plan to read this data? Depending on the slices you plan to access, I can recommend a better tile extent for the column dimension. There are also some things we can do with consolidation to improve read performance, but I need more information about your write and read patterns.
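For illustration only, a minimal sketch of what the suggested schema could look like, assuming int dimensions and placeholder names and bounds (uri, numSamples, and numLoci are hypothetical, not taken from your code):

    // Hypothetical sketch of the suggested tile extents; all sizes are placeholders.
    string uri = "my_array";                 // placeholder URI
    int numSamples = 1000, numLoci = 1000;   // placeholder domain sizes
    using var ctx = new Context();
    using var domain = new Domain(ctx);
    domain.AddDimensions(
        Dimension.Create(ctx, "rows", 0, numSamples - 1, 1),      // tile extent of 1 on the rows dimension
        Dimension.Create(ctx, "cols", 0, numLoci - 1, numLoci)    // cols extent depends on your read slices
    );
    using var schema = new ArraySchema(ctx, ArrayType.Dense);
    schema.SetDomain(domain);
    schema.AddAttributes(
        new TileDB.CSharp.Attribute(ctx, "x_norm", DataType.Float32),
        new TileDB.CSharp.Attribute(ctx, "y_norm", DataType.Float32)
    );
    Array.Create(ctx, uri, schema);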
@KiterLuc, thank you so much for your response. I have tried a couple of combinations; here are my observations:
Tile extents of 1 x 1: for some reason very slow when writing.
Tile extents of NumSamples/100 x NumOfLoci/100: pretty good read and write performance, and the memory issue is also resolved.
Since the original issue is resolved, I am only curious about one thing. This C# code was converted from the original C++/PInvoke layer, yet with the same dimensions the C# version runs into this issue. I was expecting the same read/write capacity across languages, since they are just access mechanisms and the data layout should drive the performance.
Nevertheless, thank you so much for your help. Your suggestions were very helpful.
@KiterLuc, @teo-tsirpanis, there is one more difference. I was inspecting the file produced by the C++ code, and its footprint is in line with what I would expect it to take to store that number of floats plus some metadata.
However, this particular array stores approximately 136,800,000 floats. I would guess it should not take more than 500MB plus some metadata, but it is taking almost 68GB. I think the original memory problem we ran into is related to this, because the reader now has to load much more data into memory. Even the fragment files are pretty large. Is it possible to get some info from the dump file, or do you need the array itself to inspect it?
Just for your reference, I have uploaded the full TileDB array that has the footprint issue (114 fragments):
Also attaching the dump file with the smaller footprint. The attached dump file is from a TileDB array that was generated using C++ and has 5k fragments, as opposed to the 114 fragments that the current one has.