I have a Bull queue on a server that accepts chunks of data. Previously these chunks were saved in MongoDB as individual documents.
The chunks share a common id, so I grouped them by that id and pushed their events together.
Sometimes there are multiple ids I need to fetch bulk chunks for, and that is causing me problems.
"$group": {
"_id": '$testID',
"events": {
"$push": '$events'
},
"videoID": {
"$first": "$testID"
},
'client' {
"$first": "$client"
}
}
}
], allowDiskUse=True))
Fetching these chunks exceeds MongoDB's 100MB aggregation memory limit, and I get the error about exceeding the memory allowed for returning objects (allowDiskUse does not seem to help).
So I have come up with a different solution.
Events come in like this example:

queue 1 [
    a: "434",
    b: "5454",
    id: "iduser"
]
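For context, the queue processor receives each chunk roughly like this (a minimal sketch; the queue name and the exact shape of job.data are assumptions based on the example above):

const Queue = require("bull");

const chunkQueue = new Queue("chunks"); // hypothetical queue name

chunkQueue.process(async (job) => {
    // Assumed: job.data carries one chunk, e.g. { a: "434", b: "5454", id: "iduser" }
    const events = job.data;
    // ...this is where the chunk needs to be written out, grouped by its id
});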
I have tried first checking whether the file exists:

const fs = require("fs");

fs.access(filename, fs.constants.F_OK, (err) => {
    if (err) {
        // File does not exist yet: create it with the first batch of events.
        fs.writeFile(filename, JSON.stringify(events, null, 2), (err) => {
            if (err) {
                console.error("Error writing file:", err);
            } else {
                console.log("File created successfully!");
            }
        });
    }
});

Then, on the second request, I just append to it with appendFile, but with this logic the file ends up in the wrong format.
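The append step looks roughly like this (a sketch of what I mean; the exact call may differ), and it is what produces the layout shown below:

fs.appendFile(filename, JSON.stringify(events, null, 2), (err) => {
    // Appending a second stringified array just writes it back to back
    // with the first one, so the file ends up holding two separate arrays.
    if (err) console.error("Error appending to file:", err);
});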
Example file.json:

[{
    a: 4334,
    b: 4334,
    id: 434
}]
[{
    a: 99,
    b: 43,
    id: 434
}]
But I want it in a different format:

[
    {
        a: 434,
        b: 4343
    },
    {
        a: 343,
        b: 434
    }
]

Meaning: all the objects in a single array.
That is because my plan is to build the file up chunk by chunk in this format.
One fix would be to read the file and then append to the parsed array, but the files may become quite large, so how do I prevent that?
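This is the read-then-rewrite fix I mean (a minimal sketch, assuming each chunk's events arrive as an array of objects; appendChunk is a made-up helper name). It keeps everything in one valid array, but it re-reads and rewrites the whole file on every chunk, which is exactly what worries me as the file grows:

const fs = require("fs/promises");

async function appendChunk(filename, events) {
    let all = [];
    try {
        all = JSON.parse(await fs.readFile(filename, "utf8"));
    } catch (err) {
        // No file yet: start with an empty array; rethrow anything else.
        if (err.code !== "ENOENT") throw err;
    }
    all.push(...events);
    // The whole array is rewritten every time, so the cost grows with the file size.
    await fs.writeFile(filename, JSON.stringify(all, null, 2));
}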