Some garbage collectors (at least Mono’s and .NET’s) have a short-term memory area that they scan often, and a secondary memory area that they scan less often. Mono calls the short-term area a nursery.
To find out which objects can be disposed of, they scan all objects starting from the roots (the stack and the registers) and dispose of every object that isn’t being referenced anymore.
My question is: how do they avoid scanning all in-use memory on every collection? In principle, the only way to find out which objects aren’t in use anymore is to scan all objects and all their references. However, this would prevent the OS from swapping out memory even though it isn’t in use by the application, and it feels like a huge amount of work that needs to be done, even for a “nursery collection”. It doesn’t feel like they’re winning much by using a nursery.
Am I missing something, or does the garbage collector actually scan every object and every reference every time it does a collection?
The fundamental observations that allow generational garbage collection to avoid scanning all older-generation objects are:
- After a collection, all objects that still exist will be of some minimum generation (e.g. in .NET, after a Gen0 collection, all objects are Gen1 or Gen2; after a Gen1 or Gen2 collection, all objects are Gen2).
- An object, or portion thereof, which has not been written since a collection that promoted everything to generation N or higher cannot contain any references to objects of lower generations.
- If an object has reached a certain generation, it need not be identified as reachable to ensure its retention when collecting lower generations.
In many GC frameworks, it’s possible for the garbage collector to flag objects or portions thereof in such a way that the first attempt to write to them will trigger special code to record the fact that they have been modified. An object or portion thereof which has been modified, regardless of its generation, must be scanned in the next collection, since it may contain references to newer objects. On the other hand, it’s very common for there to be a lot of older objects that do not get modified between collections. The fact that lower-generation scans can ignore such objects can allow such scans to complete much more quickly than they otherwise would.
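The dirty-flag scheme described above can be sketched in Python. This is a toy model with invented names, not how the CLR or Mono actually implements it; real collectors work on raw memory and use hardware or compiler support for the write trap:

```python
# Toy sketch of the "dirty flag" idea: a minor collection only scans
# the roots plus old objects that were written since the last
# collection. All names here are invented for illustration.

class Obj:
    def __init__(self):
        self.refs = []       # outgoing references
        self.old = False     # promoted to an older generation?
        self.dirty = False   # written since the last collection?

def write_ref(src, target):
    """All reference stores go through this hook (the write barrier)."""
    src.refs.append(target)
    if src.old:
        src.dirty = True     # old object may now point at a young one

def minor_mark(roots, old_objects):
    """Mark young objects reachable from roots and dirty old objects."""
    worklist = list(roots) + [o for o in old_objects if o.dirty]
    live_young = set()
    while worklist:
        obj = worklist.pop()
        for ref in obj.refs:
            if not ref.old and ref not in live_young:
                live_young.add(ref)
                worklist.append(ref)
    for o in old_objects:
        o.dirty = False      # clean the flags for the next cycle
    return live_young
```

The point of the sketch is the worklist seed: clean old objects are never visited at all, which is where the speedup comes from.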
Note, btw, that even if one cannot detect when objects are modified and would have to scan everything on each GC pass, generational garbage collection could still improve the “sweep” stage performance of a compacting collector. In some embedded environments (especially those where there is little or no difference in speed between sequential and random memory accesses), moving blocks of memory around is relatively expensive compared to tagging references. Consequently, even if the “mark” phase can’t be sped up using a generational collector, speeding up the “sweep” phase may be worthwhile.
The GCs you are referring to are generational garbage collectors. They are engineered to get the most out of an observation known as “infant mortality” or “the generational hypothesis”, which means that most objects become unreachable very quickly. They indeed scan starting from the roots, but ignore all old objects. Therefore, they do not need to scan most of the objects in memory, they only scan young objects (at the expense of not detecting unreachable old objects, at least not at that point).
“But that’s wrong”, I hear you scream, “old objects can and do refer to young objects”. You’re right, and there are several solutions to that, all of which revolve around gaining knowledge, quickly and efficiently, of which old objects must be checked and which are safe to ignore. They pretty much boil down to recording objects, or small ranges of memory (larger than objects, but much smaller than the whole heap), which contain pointers to younger generations. Others have described those far better than I can, so I’ll just give you a couple of keywords: card marking, remembered sets, write barriers. There are other techniques too (including hybrids), but these encompass the common approaches I’m aware of.
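As a rough illustration of one of those keywords, here is a toy remembered-set sketch in Python. All names are invented; a real collector does this at the level of raw memory and generated code, not Python objects:

```python
# Toy sketch of a remembered set: the write barrier records old
# objects that may hold pointers into the young generation, so a
# minor collection can treat them as extra roots.

remembered_set = set()

class HeapObj:
    def __init__(self, young=True):
        self.young = young
        self.refs = []

def store_ref(holder, target):
    """Write barrier: remember old objects that point at young ones."""
    holder.refs.append(target)
    if not holder.young and target.young:
        remembered_set.add(holder)

def minor_roots(stack_roots):
    """Roots for a minor collection: the stack plus the remembered set."""
    return list(stack_roots) + list(remembered_set)
```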
To find out which nursery objects are still live, the collector only needs to scan the root set and any old objects that have been mutated since the last collection, since an old object that has not been recently mutated cannot possibly point to a young object. There are different algorithms for maintaining this information at varying levels of precision (from an exact set of mutated fields to a set of pages where mutation may have occurred), but they all generally involve some sort of write barrier: code that runs on every reference-typed field mutation and updates the GC’s bookkeeping.
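A coarse-grained variant of such a write barrier is card marking, sketched below in Python. The card size and table size are arbitrary illustrative values, not anything a real runtime uses:

```python
# Toy sketch of card marking: the heap is divided into fixed-size
# "cards", and the write barrier marks the card containing the
# mutated field rather than tracking individual objects.

CARD_SIZE = 512                 # bytes covered by one card (power of two)
card_table = bytearray(1024)    # one byte per card, 0 = clean

def write_barrier(field_address):
    """Run on every reference store: mark the enclosing card dirty."""
    card_table[field_address // CARD_SIZE] = 1

def dirty_ranges():
    """A minor GC scans only the address ranges of dirty cards."""
    return [(i * CARD_SIZE, (i + 1) * CARD_SIZE)
            for i, flag in enumerate(card_table) if flag]

write_barrier(0x1234)           # a store at address 0x1234 dirties card 9
```

The trade-off versus an exact remembered set is precision for speed: the barrier is a single shift and store, but the collector must rescan everything on a dirty card.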
The oldest and simplest generation of garbage collectors actually did scan all of memory, and had to stop all other processing while they did it. Later algorithms improved on this in various ways: making the copy/scan incremental, or running it in parallel. Most modern garbage collectors segregate objects into generations, and carefully manage cross-generational pointers so newer generations can be collected without disturbing older ones.
The key point is that the garbage collector works in close collaboration with the compiler and with the rest of the runtime to maintain the illusion that it is watching all of memory.
Basically… GC uses “buckets” to separate what is in use and what isn’t.
Once it makes its check, it wipes out things that are not in use and moves everything else to the 2nd generation (which is checked less often than the 1st generation), and then moves things that are still in use in the 2nd gen to the 3rd gen.
So, things in 3rd generation are usually objects that are stuck open for some reason, and GC doesn’t check there very often.
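The promotion scheme described above can be sketched as a toy Python model. It assumes three generations like .NET’s Gen0/Gen1/Gen2, and `collect` takes the set of live objects as given rather than computing it from the roots as a real collector would:

```python
# Toy model of generational promotion: survivors of a collection move
# up to the next (less frequently scanned) generation.

gens = {0: set(), 1: set(), 2: set()}

def allocate(obj):
    gens[0].add(obj)                       # new objects start in Gen0

def collect(gen, live):
    """Collect generations 0..gen; survivors move up one generation."""
    for g in range(gen, -1, -1):           # oldest first, so objects
        survivors = gens[g] & live         # promoted this pass aren't
        gens[g] = set()                    # rescanned; the rest is garbage
        gens[min(g + 1, 2)] |= survivors   # Gen2 survivors stay in Gen2
```

Because `collect(0, ...)` never touches `gens[1]` or `gens[2]`, objects that survive a few collections stop being examined until a rarer full collection runs.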
The algorithm usually used by this kind of GC is naïve mark-and-sweep.
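For reference, a minimal Python sketch of naïve mark-and-sweep (illustrative only; a real collector traverses raw memory, not Python objects):

```python
# Minimal sketch of naive mark-and-sweep: mark everything reachable
# from the roots, then sweep (free) everything that was not marked.

class Node:
    def __init__(self, name):
        self.name = name
        self.refs = []
        self.marked = False

def mark(roots):
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if not obj.marked:
            obj.marked = True
            stack.extend(obj.refs)

def sweep(heap):
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False         # reset for the next cycle
    return live                  # everything unmarked is reclaimed

a, b, c = Node("a"), Node("b"), Node("c")
a.refs.append(b)                 # a -> b; c is unreachable
heap = [a, b, c]
mark([a])
heap = sweep(heap)               # c is collected
```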
You should also be aware that this is not managed by C# itself, but by the so-called CLR.