We are using nodejs-polars to read Parquet (and CSV) files and iterate row by row to perform some processing. On some files I am running into `FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory` when I do this. The files in question don't have terribly many rows (about 40k), but they do have quite a few columns (about 1,400).
The code looks like this, more or less:
const fs = require("fs");
const pl = require("nodejs-polars");
const df = pl.readParquet(fs.readFileSync(file_path));
const date_cols = df
.getColumns()
.filter((c) => {
try {
return c.dtype.variant === pl.DataType.Datetime().variant;
} catch (e) {
return false;
}
})
.map((c) => pl.col(c.name).cast(pl.DataType.Datetime("ms")));
for (const record of df.withColumns(...date_cols).toRecords()) {
processor(record);
}
When I reproduce the OOM in the debugger, it stops on this line:

```js
for (const record of df.withColumns(...date_cols).toRecords()) {
```

so it seems like `toRecords()` is causing the OOM.
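My guess is that `toRecords()` materializes all ~40k row objects (each with ~1,400 keys) at once. To show the kind of chunked iteration I have in mind, here is a rough sketch; it assumes `DataFrame.slice(offset, length)` behaves as it does in Python Polars, and `CHUNK` is an arbitrary batch size I made up:

```js
const CHUNK = 1000; // hypothetical batch size
const casted = df.withColumns(...date_cols);

for (let offset = 0; offset < casted.height; offset += CHUNK) {
  // Convert only CHUNK rows to JS objects per iteration instead of
  // building the entire record array up front.
  for (const record of casted.slice(offset, CHUNK).toRecords()) {
    processor(record);
  }
}
```

I don't know whether this actually lowers peak memory, since `readParquet` has already loaded the whole table eagerly; that is part of what I'm asking below.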
This raises several questions I'd love answered:

- Is there a way to fix the OOM with the current code? Memory usage reached over 2.5 GB on a Parquet file that is only about 20 MB on disk. Do you think it's because of the large number of columns?
- I considered using `scanParquet()` to get a `LazyDataFrame`, but I was at a loss to understand how to do this while iterating over the rows like I need to (perhaps/presumably in some chunk size, so I'm not again trying to create all the rows in memory at once, which seems to be the problem). Can someone please show how this is done? I've put a rough sketch of what I imagine after this list.
- I left the ugly `pl.DataType.Datetime().variant` business in the question to show another problem we had: the date columns would not come through in JS without this hack. Is there a better way to do this? (My best guess at a simpler check is sketched at the end of this post.) Even if I do end up solving the OOM (using a `LazyDataFrame` or not), it would also be nice to solve this.
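To make the second question concrete, this is roughly what I imagine the lazy version looking like, though I don't know if it's right. It assumes `LazyDataFrame` has `slice()` and `collectSync()` analogous to the eager API (and to Python Polars); `CHUNK` is again a made-up batch size:

```js
const pl = require("nodejs-polars");

const CHUNK = 1000; // hypothetical batch size
const lf = pl.scanParquet(file_path);

for (let offset = 0; ; offset += CHUNK) {
  // Materialize only CHUNK rows of the lazy query per iteration.
  const batch = lf.slice(offset, CHUNK).collectSync();
  if (batch.height === 0) break; // ran out of rows
  for (const record of batch.toRecords()) {
    processor(record);
  }
}
```

I also suspect this would re-scan the file once per chunk, which is part of why I'm asking for the idiomatic way.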
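And for the third question, to show what I mean: as far as I can tell, `pl.DataType.Datetime().variant` is just the string `"Datetime"`, so the best simplification I could guess at is the following (unverified; I don't know if comparing `variant` strings is the intended API):

```js
// Guessed simplification: compare the variant string directly
// (assumes c.dtype.variant is the plain string "Datetime").
const date_cols = df
  .getColumns()
  .filter((c) => c.dtype.variant === "Datetime")
  .map((c) => pl.col(c.name).cast(pl.DataType.Datetime("ms")));
```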