As the data is migrated object by object, there has been a slow upward trend in memory usage. I can see the periodic GC activity, but the memory trend is still creeping up. In production I have currently set both the starting and max heap for the application to 14GB, but if we ever need a larger data set, my concern is that we will eventually hit that limit and the application will slow down (which is what happened).
What are some effective, proven strategies to help identify and remedy a potential memory leak in a running Java process?
There are generally two ways to approach this problem.
1.) Use a GC log analyser for post-mortem analysis of a log. There is a range of options for these, from free to commercial.
You’ll need to make sure your JVM starts with the flags -XX:+PrintGCDetails -XX:+PrintTenuringDistribution and -Xloggc:[name of your gc log] (an example launch command is shown at the end of this item).
When you analyse the log, you want to look at what happens after full GCs. Is the memory consumption dropping back to a fairly flat level? If not, you have a possible memory leak.
If it is falling back to a fairly flat level, then you simply have a large resident set size: increase the heap size or split the work into batches.
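For reference, a sketch of what the launch command might look like with those flags (the jar and log file names are placeholders, and the 14GB heap matches the question; on Java 9 and later these flags were replaced by the unified -Xlog:gc* option):

```
java -Xms14g -Xmx14g \
     -XX:+PrintGCDetails -XX:+PrintTenuringDistribution \
     -Xloggc:gc.log \
     -jar yourapp.jar
```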
2.) Fire up VisualVM (it ships with the JDK) and sample your object usage with the profiler. VisualVM tells you how many generations the objects have survived. Suspect objects in a memory leak tend to survive across all of the generations (look for the big number; it will be the big red bar at the top). Typically this is some basic Java class such as byte[] or String, but that is not the leak itself: the leak is your code that keeps creating those objects and never lets go of them.
So the next step is to follow the method profile down to the bit of code that is doing the creating, and look at the source to see why those references are never released (a typical culprit is sketched below).
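To make that last point concrete, here is a hypothetical example of the classic pattern: a long-lived (static) collection that keeps accumulating references, so the byte[] or String instances it holds can never be collected. The class and method names are made up for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example of a classic leak pattern: a process-lifetime cache
// that only ever grows. Everything it references stays reachable, so the
// profiler shows byte[] instances surviving across many generations.
public class RecordCache {

    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void remember(String key, byte[] payload) {
        // Entries are added on every call but never evicted.
        CACHE.put(key, payload);
    }

    // The fix is to release references once they are no longer needed,
    // e.g. CACHE.remove(key), or to use a bounded/evicting cache instead.
}
```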
I used to use BEA’s JRockit to find the memory leak points in Java programs. Now that BEA has been bought by Oracle, I do not know whether JRockit is alive or dead.