Generational Model
Gen 0, Gen 1, and Gen 2 heaps and the weak generational hypothesis
.NET Generational Garbage Collection Model
This lesson covers heap organization, generation promotion, collection strategies, and performance optimization: essential concepts for building high-performance .NET applications.
Welcome to Generational Garbage Collection ♻️
The .NET garbage collector doesn't treat all objects equally. Instead, it uses a sophisticated generational model based on empirical observation: most objects die young. By organizing memory into generations and collecting younger objects more frequently, the GC achieves remarkable performance. Understanding this model is crucial for writing efficient .NET applications and diagnosing memory issues.
Core Concepts
The Weak Generational Hypothesis 🧠
The generational model is built on a fundamental observation in computer science called the weak generational hypothesis:
"Most objects die young."
Research shows that in typical applications:
- 80-90% of objects become garbage shortly after allocation
- Only a small fraction survives long-term
- Older objects rarely reference newer objects
This insight allows the GC to optimize by focusing collection efforts on recently allocated objects, where most garbage resides.
💡 Memory Device: Think of it like a restaurant kitchen: most ingredients (objects) are used immediately and disposed of quickly, while only a few items (long-lived objects like spices) stay on the shelf for extended periods.
The Three Generations 🏺
.NET divides the managed heap into three generations: Gen 0, Gen 1, and Gen 2.
| Generation | Purpose | Typical Size | Collection Frequency |
|---|---|---|---|
| Gen 0 | Newly allocated objects | 256 KB - 4 MB | Very frequent |
| Gen 1 | Short-lived object buffer | 512 KB - 8 MB | Less frequent |
| Gen 2 | Long-lived objects | Limited only by memory | Infrequent |
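The generation count is fixed and exposed through the API; as a minimal sketch (on current runtimes GC.MaxGeneration returns 2):

using System;

// GC.MaxGeneration reports the oldest generation number, 2 on current runtimes.
Console.WriteLine($"Max generation: {GC.MaxGeneration}");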
Generation 0 (Gen 0): The Nursery 🌱
Gen 0 is where all new objects are born (with rare exceptions). It's the smallest and most frequently collected generation.
Characteristics:
- Fast allocation: Objects are allocated sequentially using a simple pointer increment
- High mortality rate: 80-90% of objects die here
- Frequent collections: Triggered when Gen 0 fills up
- Quick collections: Typically complete in microseconds
When Gen 0 fills up:
1. GC suspends execution threads
2. Identifies live objects (roots → reachable objects)
3. Compacts survivors to the beginning of Gen 0
4. Promotes survivors to Gen 1
5. Resets the Gen 0 allocation pointer
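You can watch the nursery from code with GC.GetGeneration, which reports the generation an object currently occupies; a minimal sketch (the result can shift if a collection happens between the allocation and the call):

using System;

// A fresh allocation lands in Gen 0 (barring an intervening collection).
var fresh = new object();
Console.WriteLine($"New object generation: {GC.GetGeneration(fresh)}"); // 0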
Generation 1 (Gen 1): The Buffer Zone 🛡️
Gen 1 serves as a buffer between short-lived and long-lived objects. It catches objects that survived one collection but may not survive much longer.
Characteristics:
- Medium size: Larger than Gen 0, smaller than Gen 2
- Filter function: Prevents premature promotion to Gen 2
- Collected with Gen 0: Often collected together (Gen 0+1 collection)
- Second chance: Objects get another opportunity to die
💡 Think of Gen 1 as a probationary period: objects must prove they'll live long enough to justify the cost of moving to Gen 2.
Generation 2 (Gen 2): The Tenured Generation 🏛️
Gen 2 holds long-lived objects that have survived at least two collections. This is where most of your application's persistent data lives.
Characteristics:
- Large size: Can grow to gigabytes
- Expensive collections: Full Gen 2 collections are costly
- Infrequent collections: Only when necessary
- Contains special objects: Large Object Heap (LOH) for objects ≥ 85,000 bytes
Objects typically in Gen 2:
- Static fields and singletons
- Long-lived caches
- Application configuration
- Framework objects
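This is also where the "rare exceptions" to Gen 0 allocation show up: anything of 85,000 bytes or more is allocated on the LOH, which GC.GetGeneration reports as generation 2. A quick sketch:

using System;

var small = new byte[1_000];   // ordinary allocation: starts in Gen 0
var large = new byte[100_000]; // ≥ 85,000 bytes: LOH, reported as Gen 2
Console.WriteLine(GC.GetGeneration(small)); // 0
Console.WriteLine(GC.GetGeneration(large)); // 2 (never lived in Gen 0)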
GENERATIONAL HEAP STRUCTURE

┌───────────────────────────────────────────────┐
│                 MANAGED HEAP                  │
├───────────────────────────────────────────────┤
│                                               │
│  Gen 0 [████████]        ← Fast, frequent     │
│        256 KB - 4 MB       collections        │
│                                               │
│  Gen 1 [██████████]      ← Buffer zone,       │
│        512 KB - 8 MB       medium frequency   │
│                                               │
│  Gen 2 [████████████████████████...]          │
│        Growing size, infrequent collections   │
│        (includes LOH)                         │
│                                               │
└───────────────────────────────────────────────┘

Allocation → Gen 0    Survival → Gen 1 → Gen 2
Object Promotion: The Journey Through Generations 🚀
Objects advance through generations by surviving collections. This process is called promotion.
Promotion Flow:
OBJECT LIFECYCLE

   New Object
       │
       ▼
┌──────────────┐
│    Gen 0     │ ← Allocation happens here
│   [Object]   │
└──────┬───────┘
       │ Survives Gen 0 collection
       ▼
┌──────────────┐
│    Gen 1     │ ← First survival
│   [Object]   │
└──────┬───────┘
       │ Survives Gen 1 collection
       ▼
┌──────────────┐
│    Gen 2     │ ← Long-term residence
│   [Object]   │   (stays here until GC'd)
└──────────────┘
Promotion Rules:
- Objects in Gen 0 that survive a Gen 0 collection → promoted to Gen 1
- Objects in Gen 1 that survive a Gen 1 collection → promoted to Gen 2
- Objects in Gen 2 stay in Gen 2 (no Gen 3)
⚠️ Important: Promotion is one-way. Objects never move back to younger generations. Once in Gen 2, always in Gen 2 (until collected).
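Promotion is easy to observe directly. A minimal sketch using GC.GetGeneration and forced collections (exact output can vary with runtime version and GC mode):

using System;

var survivor = new object();                   // rooted by this local
Console.WriteLine(GC.GetGeneration(survivor)); // 0: freshly allocated
GC.Collect(0);                                 // force a Gen 0 collection
Console.WriteLine(GC.GetGeneration(survivor)); // 1: survived, promoted
GC.Collect(1);                                 // force a Gen 0+1 collection
Console.WriteLine(GC.GetGeneration(survivor)); // 2: survived, promoted again
GC.Collect();                                  // full collection
Console.WriteLine(GC.GetGeneration(survivor)); // still 2: there is no Gen 3
GC.KeepAlive(survivor);                        // keep it rooted to the end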
Collection Types and Triggers 🎯
Not all garbage collections are created equal. The .NET GC performs different types of collections based on what's needed.
Collection Types:
| Collection Type | Generations Collected | Trigger | Cost |
|---|---|---|---|
| Gen 0 Collection | Gen 0 only | Gen 0 budget exceeded | Very low |
| Gen 1 Collection | Gen 0 + Gen 1 | Gen 1 budget exceeded | Low |
| Gen 2 Collection (Full GC) | All generations | Gen 2 budget exceeded, memory pressure, explicit call | High |
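Because a higher-generation collection always includes the younger generations, you can classify what ran by diffing GC.CollectionCount around a workload; a small sketch:

using System;

int g0 = GC.CollectionCount(0), g1 = GC.CollectionCount(1), g2 = GC.CollectionCount(2);
for (int i = 0; i < 1_000_000; i++)
{
    var tmp = new byte[64]; // short-lived garbage: drives Gen 0 collections
}
Console.WriteLine($"Gen 0: +{GC.CollectionCount(0) - g0}"); // typically many
Console.WriteLine($"Gen 1: +{GC.CollectionCount(1) - g1}"); // a few
Console.WriteLine($"Gen 2: +{GC.CollectionCount(2) - g2}"); // usually zero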
Ephemeral Collections (Gen 0 and Gen 1) ⚡
Ephemeral refers to Gen 0 and Gen 1 together. These collections are fast because:
- They examine only a small portion of the heap
- Most objects are already dead (high mortality)
- Compaction distance is minimal
Typical Gen 0 collection timeline:
1. Suspend threads: 10-100 microseconds
2. Mark live objects: 50-500 microseconds
3. Compact and promote: 50-300 microseconds
4. Resume threads: 10-50 microseconds
Total: usually < 1 millisecond
Full Collections (Gen 2) 🐌
Full collections examine the entire managed heap. They're expensive but necessary.
Why full collections are costly:
- Must scan all live objects (potentially millions)
- Large compaction operations
- Can take 10-100+ milliseconds
- May trigger OS paging if memory is tight
Triggers for full collections:
- Gen 2 budget exceeded: Gen 2 grew too large
- Memory pressure: Operating system reports low memory
- Explicit call: GC.Collect() was called
- Allocation failure: can't allocate a large object
💡 Performance Tip: Your application's performance is heavily influenced by Gen 2 collection frequency. Minimize Gen 2 collections by keeping object lifetimes either very short (die in Gen 0) or truly long-lived.
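The memory-pressure trigger can also be informed from your own code: if you hold significant unmanaged memory, GC.AddMemoryPressure tells the collector to factor it into its scheduling. A hedged sketch around a hypothetical 100 MB native buffer:

using System;
using System.Runtime.InteropServices;

const long size = 100 * 1024 * 1024;                // hypothetical native buffer
IntPtr native = Marshal.AllocHGlobal((IntPtr)size);
GC.AddMemoryPressure(size);                         // GC now counts this memory
try
{
    // ... use the native buffer ...
}
finally
{
    Marshal.FreeHGlobal(native);
    GC.RemoveMemoryPressure(size);                  // always pair with the Add call
}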
The Card Table and Write Barriers 🗂️
A critical challenge: How does the GC handle references from older generations to younger ones without scanning the entire heap?
The Problem:
class OldObject // Lives in Gen 2
{
public YoungObject Child; // Lives in Gen 0
}
During a Gen 0 collection, we must identify Child as live, but we don't want to scan all of Gen 2 to find this reference.
The Solution: Card Table 📇
The card table is a data structure that tracks which regions of older generations contain references to younger generations.
- Heap divided into 512-byte "cards"
- Each card has a 1-byte entry in the card table
- When a Gen 2/Gen 1 object writes to a reference field → card marked "dirty"
- During a Gen 0 collection → only scan dirty cards in older generations
Write Barrier:
Every reference field assignment goes through a write barrier, a small piece of code that updates the card table:
// Without write barrier (conceptual)
oldObject.field = newObject;
// With write barrier (what actually happens)
oldObject.field = newObject;
MarkCardTableDirty(AddressOf(oldObject));
CARD TABLE MECHANISM

Gen 2 memory (divided into cards):
┌──────┬──────┬──────┬──────┬──────┐
│ Card │ Card │ Card │ Card │ Card │
│  0   │  1 ● │  2   │  3 ● │  4   │   ● = references a Gen 0 object
└──────┴──────┴──────┴──────┴──────┘

Card table:
┌───┬───┬───┬───┬───┐
│ 0 │ 1 │ 0 │ 1 │ 0 │   1 = dirty (scan this card)
└───┴───┴───┴───┴───┘   0 = clean (skip this card)

During a Gen 0 collection:
→ Only scan cards 1 and 3 in Gen 2
→ Massive performance savings!
⚠️ Write barriers add overhead (typically 1-5 CPU cycles per reference write), but this is vastly cheaper than scanning entire older generations.
GC Budget and Tuning 🎛️
The GC dynamically adjusts generation sizes based on application behavior. This is called GC tuning or budget management.
How budgets work:
- Initial budgets: GC starts with default sizes
- Monitor survival rates: After each collection, measure how many objects survived
- Adjust budgets:
- If survival rate is high → increase budget (reduce collection frequency)
- If survival rate is low → decrease budget (collect more aggressively)
- Balance throughput vs. memory: Trade-off between pause times and memory usage
Example budget adjustment:
Scenario: High Gen 0 survival rate (50%)
Before:
Gen 0 budget: 2 MB
Collections per second: 100
Survival rate: 50%
GC Decision: "Too many objects surviving → increase budget"
After:
Gen 0 budget: 4 MB
Collections per second: 50
Survival rate: 30% (improved!)
Tuning factors:
- Survival rates: Lower is better (more efficient collections)
- Collection frequency: Balance pause frequency with pause duration
- Memory pressure: Available system memory constraints
- Workload type: Server vs. client, batch vs. interactive
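On .NET 5 and later you can inspect some of the inputs behind these decisions through GC.GetGCMemoryInfo; a sketch printing a few pressure-related numbers (property names are from System.GCMemoryInfo):

using System;

var info = GC.GetGCMemoryInfo();
Console.WriteLine($"Heap size:           {info.HeapSizeBytes / 1024 / 1024} MB");
Console.WriteLine($"Fragmented:          {info.FragmentedBytes / 1024 / 1024} MB");
Console.WriteLine($"Memory load:         {info.MemoryLoadBytes / 1024 / 1024} MB");
Console.WriteLine($"High-load threshold: {info.HighMemoryLoadThresholdBytes / 1024 / 1024} MB");
Console.WriteLine($"Total available:     {info.TotalAvailableMemoryBytes / 1024 / 1024} MB");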
Detailed Examples
Example 1: Tracing an Object's Journey 🗺️
Let's follow a Customer object through its lifecycle:
public class Customer
{
public string Name { get; set; }
public List<Order> Orders { get; set; } = new();
}
// Time T0: Allocation
var customer = new Customer { Name = "Alice" };
// Customer allocated in Gen 0
// Generation: 0, Age: 0
// Time T1: Gen 0 collection triggered (Gen 0 full)
// Customer is still referenced by 'customer' variable
// → Customer survives, promoted to Gen 1
// Generation: 1, Age: 1
// Time T2: More allocations...
for (int i = 0; i < 1000; i++)
{
var temp = new Customer { Name = $"Temp{i}" };
// These 'temp' objects die immediately (not referenced)
}
// Another Gen 0 collection occurs
// Customer still referenced → remains in Gen 1
// Generation: 1, Age: 1 (no promotion this time)
// Time T3: Gen 1 collection triggered
// Customer still referenced → promoted to Gen 2
// Generation: 2, Age: 2
// Time T4: Customer lives in Gen 2 for remainder of application
// Will only be collected during full GC (if unreferenced)
Key Observations:
- Age tracking: GC tracks how many collections an object survived
- Promotion timing: Objects promote when their generation is collected
- Gen 2 residence: Once in Gen 2, objects stay until the end (or until unreferenced)
Visualization:
CUSTOMER OBJECT TIMELINE

  T0          T1          T2          T3          T4
  │           │           │           │           │
  ▼           ▼           ▼           ▼           ▼
 Gen 0   →   Gen 1   →   Gen 1   →   Gen 2   →   Gen 2
[Customer]  [Customer]  [Customer]  [Customer]  [Customer]
 Age: 0      Age: 1      Age: 1      Age: 2      Age: 2
       GC 0        GC 0        GC 1       (lives here)
     Promoted                Promoted
Example 2: Generation Pressure and Performance 📊
Let's examine how different object allocation patterns affect GC behavior:
Pattern A: Short-lived objects (Good) ✅
public class GoodPattern
{
public void ProcessOrders()
{
foreach (var orderId in GetOrderIds())
{
// Create temporary objects
var processor = new OrderProcessor();
var result = processor.Process(orderId);
SaveResult(result);
// processor and intermediate objects die here
// → Collected in Gen 0, never promoted
}
}
}
Impact:
- 95% of objects die in Gen 0
- Gen 0 collections are fast and frequent
- Gen 1 and Gen 2 stay small
- Excellent performance
Pattern B: Mid-lived objects (Bad) ❌
public class BadPattern
{
private List<OrderProcessor> processors = new();
public void ProcessOrders()
{
foreach (var orderId in GetOrderIds())
{
var processor = new OrderProcessor();
processors.Add(processor); // Kept alive!
processor.Process(orderId);
}
// Clear after batch
processors.Clear();
}
}
Impact:
- Objects survive Gen 0 → promoted to Gen 1
- Objects survive Gen 1 → promoted to Gen 2
- Then all die at once when cleared
- Gen 2 grows unnecessarily
- Triggers expensive full collections
Performance Comparison:
| Metric | Good Pattern | Bad Pattern |
|---|---|---|
| Gen 0 collections/sec | 100 | 100 |
| Gen 1 collections/sec | 5 | 20 |
| Gen 2 collections/sec | 0.1 | 2 |
| Avg pause time | 0.5 ms | 15 ms |
| Memory usage | 50 MB | 250 MB |
💡 Design Principle: Keep object lifetimes either very short (die in Gen 0) or truly long-lived (worth keeping in Gen 2). Avoid mid-lived objects that pollute Gen 1 and Gen 2.
Example 3: Card Table in Action 🔎
Let's see how the card table enables efficient collections:
public class Container // Lives in Gen 2 (old)
{
public Item CurrentItem; // Will reference Gen 0 objects
public void Update()
{
// Allocate new item (goes to Gen 0)
var newItem = new Item { Data = "Fresh" };
// Assignment triggers write barrier
CurrentItem = newItem; // ← Write barrier here!
// What happens:
// 1. Reference updated: CurrentItem → newItem
// 2. Write barrier marks card containing Container as dirty
// 3. Next Gen 0 collection will scan this card
}
}
public class Item
{
public string Data { get; set; }
}
Step-by-step with card table:
BEFORE ASSIGNMENT:

Gen 2:
┌─────────────────────────┐
│ [Container]             │  Card 5
│   CurrentItem: null     │
└─────────────────────────┘
Card Table: [0][0][0][0][0][0]   ← Card 5 clean

Gen 0:
┌─────────────────────────┐
│ [Item "Fresh"]          │
└─────────────────────────┘

─────────────────────────────────

AFTER ASSIGNMENT (CurrentItem = newItem):

Gen 2:
┌─────────────────────────┐
│ [Container]             │  Card 5
│   CurrentItem ──────────┼──→ [Item "Fresh"]  (Gen 0)
└─────────────────────────┘

Card Table: [0][0][0][0][0][1]   ← Card 5 marked dirty!
                           ▲
                  Write barrier did this

─────────────────────────────────

DURING NEXT GEN 0 COLLECTION:
1. Scan Gen 0 roots (stack, registers)
2. Scan dirty cards in Gen 2 (only card 5!)
3. Find the Container.CurrentItem → Item reference
4. Mark Item as live
5. Item survives collection
Without card table:
- Would need to scan entire Gen 2 (potentially gigabytes)
- Collection time increases dramatically with heap size
With card table:
- Only scan dirty cards (typically < 1% of Gen 2)
- Collection time stays fast regardless of Gen 2 size
Example 4: Analyzing GC Behavior with Diagnostics 🔬
Let's use .NET diagnostic tools to understand generation behavior:
using System;
using System.Diagnostics;
public class GCAnalysis
{
public static void AnalyzeGenerations()
{
// Force clean state
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
Console.WriteLine("\nStarting GC analysis...");
PrintGCInfo("Initial state");
// Allocate short-lived objects
for (int i = 0; i < 100000; i++)
{
var temp = new byte[1024]; // 1 KB objects
}
PrintGCInfo("After short-lived allocations");
// Allocate and keep some objects
var longLived = new List<byte[]>();
for (int i = 0; i < 1000; i++)
{
longLived.Add(new byte[1024]);
}
PrintGCInfo("After long-lived allocations");
// Force collections to promote
GC.Collect(0);
PrintGCInfo("After Gen 0 collection");
GC.Collect(1);
PrintGCInfo("After Gen 1 collection");
GC.Collect(2);
PrintGCInfo("After Gen 2 collection");
}
private static void PrintGCInfo(string label)
{
Console.WriteLine($"\n{label}:");
Console.WriteLine($" Gen 0 collections: {GC.CollectionCount(0)}");
Console.WriteLine($" Gen 1 collections: {GC.CollectionCount(1)}");
Console.WriteLine($" Gen 2 collections: {GC.CollectionCount(2)}");
Console.WriteLine($" Total memory: {GC.GetTotalMemory(false) / 1024:N0} KB");
Console.WriteLine($" Gen 0 size: {GC.GetGCMemoryInfo().GenerationInfo[0].SizeBytes / 1024:N0} KB");
Console.WriteLine($" Gen 1 size: {GC.GetGCMemoryInfo().GenerationInfo[1].SizeBytes / 1024:N0} KB");
Console.WriteLine($" Gen 2 size: {GC.GetGCMemoryInfo().GenerationInfo[2].SizeBytes / 1024:N0} KB");
}
}
Sample Output:
Starting GC analysis...

Initial state:
  Gen 0 collections: 0
  Gen 1 collections: 0
  Gen 2 collections: 0
  Total memory: 124 KB
  Gen 0 size: 0 KB
  Gen 1 size: 0 KB
  Gen 2 size: 124 KB

After short-lived allocations:
  Gen 0 collections: 15    ← Multiple Gen 0 collections occurred
  Gen 1 collections: 3     ← Some Gen 1 collections triggered
  Gen 2 collections: 0
  Total memory: 1,156 KB   ← Minimal memory retained
  Gen 0 size: 256 KB
  Gen 1 size: 512 KB
  Gen 2 size: 388 KB

After long-lived allocations:
  Gen 0 collections: 16
  Gen 1 collections: 3
  Gen 2 collections: 0
  Total memory: 2,180 KB   ← ~1 MB kept alive (longLived list)
  Gen 0 size: 512 KB
  Gen 1 size: 512 KB
  Gen 2 size: 1,156 KB

After Gen 0 collection:
  Gen 0 collections: 17    ← Explicit collection
  Gen 1 collections: 3
  Gen 2 collections: 0
  Total memory: 2,180 KB
  Gen 0 size: 0 KB         ← Gen 0 cleared
  Gen 1 size: 1,024 KB     ← Objects promoted to Gen 1
  Gen 2 size: 1,156 KB

After Gen 1 collection:
  Gen 0 collections: 17
  Gen 1 collections: 4     ← Gen 1 collected
  Gen 2 collections: 0
  Total memory: 2,180 KB
  Gen 0 size: 0 KB
  Gen 1 size: 0 KB         ← Gen 1 cleared
  Gen 2 size: 2,180 KB     ← Objects promoted to Gen 2

After Gen 2 collection:
  Gen 0 collections: 17
  Gen 1 collections: 4
  Gen 2 collections: 1     ← Full collection
  Total memory: 1,048 KB   ← Only live objects remain
  Gen 0 size: 0 KB
  Gen 1 size: 0 KB
  Gen 2 size: 1,048 KB     ← Compacted Gen 2
Key Insights:
- Gen 0 collections dominate: 17 Gen 0 vs. 4 Gen 1 vs. 1 Gen 2
- Memory stays low: Short-lived objects don't accumulate
- Promotion visible: Objects move through generations
- Full GC compacts: Memory reduces after Gen 2 collection
Common Mistakes
⚠️ Mistake 1: Creating Mid-Lived Objects
Problem:
public class ReportGenerator
{
private List<DataRow> cache = new();
public void GenerateReport()
{
// Build cache for this report
for (int i = 0; i < 10000; i++)
{
cache.Add(LoadDataRow(i));
}
ProcessData(cache);
// Clear after use
cache.Clear(); // Too late! Objects already in Gen 2
}
}
Why it's bad:
- Objects survive long enough to reach Gen 2
- Then immediately die
- Gen 2 fills with garbage
- Triggers expensive full collections
Solution:
public class ReportGenerator
{
public void GenerateReport()
{
// Use local variable - scoped to method
var cache = new List<DataRow>(10000); // Pre-size!
for (int i = 0; i < 10000; i++)
{
cache.Add(LoadDataRow(i));
}
ProcessData(cache);
// cache dies when method returns (Gen 0 collection)
}
}
⚠️ Mistake 2: Calling GC.Collect() Unnecessarily
Problem:
public void ProcessBatch(List<Order> orders)
{
foreach (var order in orders)
{
ProcessOrder(order);
GC.Collect(); // ❌ DON'T DO THIS!
}
}
Why it's bad:
- Forces expensive full collections
- Disrupts GC's tuning algorithms
- Worse performance than letting GC decide
- Creates "stop-the-world" pauses
When to use GC.Collect():
- After loading large amounts of temporary data (rare)
- Before long idle periods
- In specific testing scenarios
- Almost never in production code
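If you do hit one of those rare cases, prefer the overload that lets the runtime decline or defer the work instead of a bare GC.Collect(); a hedged sketch:

using System;

// GCCollectionMode.Optimized lets the GC skip the collection if it judges it
// unproductive; blocking: false requests a background collection where possible.
GC.Collect(2, GCCollectionMode.Optimized, blocking: false, compacting: false);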
⚠️ Mistake 3: Ignoring Large Object Heap (LOH) Behavior
Problem:
public class ImageProcessor
{
public void ProcessImages()
{
foreach (var imagePath in GetImages())
{
// Allocates 10 MB buffer each time
var buffer = new byte[10 * 1024 * 1024];
LoadImage(imagePath, buffer);
ProcessBuffer(buffer);
// buffer becomes garbage
}
}
}
Why it's bad:
- Objects ≥ 85,000 bytes go directly to Gen 2 (LOH)
- LOH isn't compacted by default (fragmentation)
- Creates Gen 2 pressure
- Frequent large allocations → frequent full GCs
Solution:
public class ImageProcessor
{
// Reuse large buffer
private readonly byte[] buffer = new byte[10 * 1024 * 1024];
public void ProcessImages()
{
foreach (var imagePath in GetImages())
{
LoadImage(imagePath, buffer); // Reuse!
ProcessBuffer(buffer);
}
}
}
// Or use ArrayPool for temporary buffers (requires using System.Buffers;)
public class ImageProcessorWithPool
{
public void ProcessImages()
{
foreach (var imagePath in GetImages())
{
var buffer = ArrayPool<byte>.Shared.Rent(10 * 1024 * 1024);
try
{
LoadImage(imagePath, buffer);
ProcessBuffer(buffer);
}
finally
{
ArrayPool<byte>.Shared.Return(buffer);
}
}
}
}
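If LOH fragmentation has already built up, the runtime can compact the LOH on demand (opt-in, available since .NET Framework 4.5.1 and in .NET Core); a minimal sketch:

using System;
using System.Runtime;

// Request a one-time LOH compaction: the setting applies to the next
// blocking full collection, then resets itself automatically.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();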
⚠️ Mistake 4: Not Understanding Promotion Timing
Problem:
public class DataCache
{
private Dictionary<int, Data> cache = new();
public void PopulateCache()
{
for (int i = 0; i < 100000; i++)
{
cache[i] = new Data();
if (i % 10000 == 0)
{
// Thinking this prevents promotion
GC.Collect(0, GCCollectionMode.Optimized);
}
}
}
}
Misconception:
- Frequent Gen 0 collections will keep objects in Gen 0
Reality:
- Objects survive Gen 0 collection → promoted to Gen 1
- No way to prevent promotion of live objects
- Making objects die in Gen 0 is the only solution
Better approach:
public class DataCache
{
    // If the data must be long-lived, accept Gen 2 residence
    // and optimize the cache itself instead.
    private Dictionary<int, Data> cache = new(100000); // Pre-size!

    public void PopulateCache()
    {
        // No forced collections: entries are promoted naturally.
        // Pre-sizing the dictionary avoids repeated internal resizing,
        // which would otherwise create mid-lived garbage arrays.
        for (int i = 0; i < 100000; i++)
        {
            cache[i] = new Data();
        }
    }
}
⚠️ Mistake 5: Mixing Short and Long-Lived References
Problem:
public class EventLogger
{
// Long-lived singleton
private List<LogEntry> recentLogs = new();
public void Log(string message)
{
var entry = new LogEntry
{
Message = message,
Timestamp = DateTime.Now,
Context = CaptureFullContext() // Captures everything!
};
recentLogs.Add(entry);
if (recentLogs.Count > 1000)
recentLogs.RemoveAt(0);
}
}
Why it's bad:
- Long-lived list (Gen 2) holds references to Gen 0 objects
- Prevents Gen 0 objects from being collected
- Everything gets promoted to Gen 2
- Card table overhead increases
Solution:
public class EventLogger
{
// Store only essential data
private List<string> recentMessages = new(1000);
private List<DateTime> recentTimestamps = new(1000);
public void Log(string message)
{
// Keep only the small message string, not a captured context graph
recentMessages.Add(message);
recentTimestamps.Add(DateTime.Now);
if (recentMessages.Count > 1000)
{
recentMessages.RemoveAt(0);
recentTimestamps.RemoveAt(0);
}
}
}
Key Takeaways
✅ The generational model optimizes for the common case: Most objects die young, so focus collection efforts on Gen 0.
✅ Three generations serve different purposes:
- Gen 0: Nursery for new objects (fast, frequent collections)
- Gen 1: Buffer zone (prevents premature Gen 2 promotion)
- Gen 2: Long-term storage (expensive, infrequent collections)
✅ Promotion is automatic and one-way: Objects advance through generations by surviving collections. No demotion.
✅ Card tables enable efficient ephemeral collections: Track old→young references without scanning the entire heap.
✅ Design for generational efficiency:
- Keep lifetimes short (die in Gen 0) or long (worth Gen 2)
- Avoid mid-lived objects that pollute Gen 1 and Gen 2
- Reuse large buffers instead of reallocating
✅ GC tuning is dynamic: The runtime adjusts generation budgets based on survival rates and memory pressure.
✅ Full Gen 2 collections are expensive: Minimize by reducing Gen 2 pressure and preventing unnecessary promotions.
✅ Tools for analysis: Use GC.CollectionCount(), GC.GetGCMemoryInfo(), and profilers to understand your app's GC behavior.
📋 Quick Reference Card: Generational GC
| Concept | Key Points |
|---|---|
| Gen 0 | New objects, 256KB-4MB, collected most frequently, ~80-90% mortality |
| Gen 1 | Buffer zone, 512KB-8MB, medium frequency, filters premature Gen 2 promotion |
| Gen 2 | Long-lived objects, large/growing size, least frequent, includes LOH (≥85 KB) |
| Promotion | One-way: Gen 0 → Gen 1 → Gen 2. Happens when an object survives a collection. |
| Collection Types | Gen 0 only (~1 ms); Gen 0+1 (~2-5 ms); full GC (~10-100+ ms) |
| Card Table | Tracks old→young references, 512-byte cards, updated by write barriers |
| Performance Tips | Short or long lifetimes (not mid). Reuse large buffers. Avoid GC.Collect(). |
| Diagnostics | `GC.CollectionCount(gen)`, `GC.GetGCMemoryInfo()`, PerfView, dotMemory |
📚 Further Study
Microsoft Docs - Fundamentals of Garbage Collection: https://learn.microsoft.com/en-us/dotnet/standard/garbage-collection/fundamentals - Official documentation with detailed explanations of GC internals
Maoni Stephens' Blog (GC Team Lead): https://devblogs.microsoft.com/dotnet/author/maoni/ - Deep technical insights from the .NET GC team, including performance optimization techniques
PerfView Tutorial - GC Analysis: https://github.com/microsoft/perfview/blob/main/documentation/Tutorial.md - Learn to use PerfView for analyzing GC behavior and diagnosing memory issues in production applications