Environment:
- Java 17
- Spring Boot 3.2.4
- Apache Tomcat (default)
- G1 GC (default)
In my application I deal with huge byte arrays, 5 to 50 MB each, because I generate PDF files. The operations in code look like this:
byte[] pdfContent = generatePDFViaHttpCallOnAnotherMicroservice(); // 5-50 MB payload
byte[] optimizedPdfContent = optimizePdf(pdfContent); // second full-size copy in memory
return optimizedPdfContent; // returned to the user from a @RestController
When I analyze the performance of my application, I see that PDF generation is too heavy: about 10 PDF generations per minute will take down my pod. One of the problems I see is the huge byte[] arrays. As I understand it, they require large contiguous memory blocks, and in G1 any object bigger than half a region is a "humongous" allocation that is placed directly in the old generation. From the GC cycles I observe, allocating a 25 MB byte[] seems to require a GC and defragmentation of the old generation to free enough contiguous regions.
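To double-check my understanding, I print the effective region size and the humongous threshold at runtime. A minimal sketch; HotSpotDiagnosticMXBean is the standard JDK API, the class name and the 25 MB figure are just mine for illustration:

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class G1RegionCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        // effective G1 region size in bytes (chosen ergonomically unless set explicitly)
        long regionSize = Long.parseLong(hotspot.getVMOption("G1HeapRegionSize").getValue());

        // G1 treats any object bigger than half a region as "humongous":
        // it is allocated as a run of contiguous regions directly in the old generation
        long humongousThreshold = regionSize / 2;

        long pdfSize = 25L * 1024 * 1024; // my typical 25 MB PDF buffer
        long regionsNeeded = (pdfSize + regionSize - 1) / regionSize;

        System.out.printf("region size = %d MB, humongous threshold = %d MB%n",
                regionSize >> 20, humongousThreshold >> 20);
        System.out.printf("25 MB byte[] is humongous: %b, needs %d contiguous regions%n",
                pdfSize > humongousThreshold, regionsNeeded);
    }
}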
Solutions I see to improve the situation:
- Increasing the region size with -XX:G1HeapRegionSize is not an option, because my app handles a lot of other requests where the request/response size is ~10 KB.
- Can I have ~10% of RAM reserved in the old generation just for PDF generation? How can this be configured?
- Can I replace byte[] with another data type? The goal of the replacement is to avoid a huge block of contiguous memory: if G1HeapRegionSize is 1 MB and the byte[] is 25 MB, use 25 blocks of 1 MB scattered all around the heap (see the sketch after this list). I understand this will degrade performance, but at the moment that is not as critical as the GC problems, because the application simply can't handle even a modest load.
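To make the last idea concrete, this is the kind of structure I have in mind. The class name and chunk size are mine, just for illustration; each chunk stays well below half of any G1 region, so none of these allocations should be humongous:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

/** Holds a large payload as a list of small chunks instead of one contiguous byte[]. */
public final class ChunkedBytes {
    private static final int CHUNK_SIZE = 256 * 1024; // 256 KB, far below half a region

    private final List<byte[]> chunks = new ArrayList<>();
    private long size;

    /** Reads the whole stream into chunk-sized pieces; never allocates one big array. */
    public static ChunkedBytes readFrom(InputStream in) throws IOException {
        ChunkedBytes result = new ChunkedBytes();
        byte[] chunk = new byte[CHUNK_SIZE];
        int read;
        while ((read = in.readNBytes(chunk, 0, CHUNK_SIZE)) > 0) {
            byte[] copy = new byte[read];
            System.arraycopy(chunk, 0, copy, 0, read);
            result.chunks.add(copy);
            result.size += read;
        }
        return result;
    }

    /** Streams the content out chunk by chunk, without assembling one big array. */
    public void writeTo(OutputStream out) throws IOException {
        for (byte[] chunk : chunks) {
            out.write(chunk);
        }
    }

    public long size() {
        return size;
    }
}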
Could you please suggest the best solution and recommendations on how it can be implemented?
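For completeness, this is how I imagine wiring such a structure into the controller, so the response is streamed instead of materialized as one byte[]. StreamingResponseBody is standard Spring MVC; the controller class and generateAndOptimizePdf() are hypothetical placeholders for my real pipeline:

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBody;

@RestController
public class PdfController {

    @GetMapping(value = "/pdf", produces = MediaType.APPLICATION_PDF_VALUE)
    public StreamingResponseBody downloadPdf() {
        // hypothetical: would return the chunked structure above
        // instead of a single huge byte[]
        ChunkedBytes pdf = generateAndOptimizePdf();
        return pdf::writeTo; // chunks are written to the servlet output stream one by one
    }

    private ChunkedBytes generateAndOptimizePdf() {
        throw new UnsupportedOperationException("placeholder for the real pipeline");
    }
}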