
Safe vs Unsafe Patterns

Escape analysis, lifetime errors, and compiler safety guarantees

Safe vs Unsafe Patterns in .NET Memory Management

This lesson covers safe managed patterns, unsafe pointer-based operations, and the critical trade-offs between performance and safety: essential skills for building high-performance .NET applications.

Welcome to Safe vs Unsafe Patterns 💻

In modern .NET development, you have two fundamentally different approaches to memory management: safe managed code that relies on the garbage collector, and unsafe code that gives you direct memory access through pointers. Understanding when to use each approach, and the implications of your choice, is crucial for building both reliable and performant applications.

The .NET runtime was designed with safety as a primary goal. Managed code protects you from buffer overruns, dangling pointers, and memory corruption. But this safety comes at a cost: you surrender direct control over memory layout and access patterns. When you need maximum performance for specialized scenarios like high-frequency trading, game engines, or image processing, unsafe code provides a powerful escape hatch.

This lesson will guide you through the spectrum of allocation primitives, from safe high-level constructs to low-level unsafe operations, helping you make informed architectural decisions.

Core Concepts: The Safe vs Unsafe Spectrum 🔒

Understanding Safe Managed Code

Safe code in .NET operates within the constraints of the Common Language Runtime's type safety system. The runtime verifies that your code:

  • Never accesses memory outside allocated bounds
  • Only casts objects to compatible types
  • Never dereferences null or invalid pointers
  • Cannot corrupt the garbage collector's internal structures

When you write normal C# code, you're working in safe mode:

int[] numbers = new int[10];
numbers[0] = 42;  // Safe: bounds-checked
// numbers[100] = 42;  // Throws IndexOutOfRangeException

The CLR inserts bounds checks before every array access. It validates every cast. It tracks object lifetimes through the garbage collector. This creates a safety net that prevents entire classes of bugs.

💡 Key insight: Safe code trades a small performance overhead for massive gains in reliability and security. For most applications, this is the right trade-off.
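
The cast guarantee is easy to see in isolation. In this sketch, a bad downcast throws rather than reinterpreting memory:

```csharp
using System;

object boxed = "hello";

// Valid cast: the runtime verifies the object really is a string.
string s = (string)boxed;

// Invalid cast: the CLR refuses to reinterpret the memory and throws.
bool threw = false;
try
{
    _ = (int[])boxed;
}
catch (InvalidCastException)
{
    threw = true;
}

Console.WriteLine(threw);  // True
```

The failed cast leaves the program in a well-defined state; nothing was read or written through the wrong type.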

The Unsafe Keyword and Pointer Operations

The unsafe keyword tells the compiler: "I know what I'm doing, let me manipulate memory directly." Inside an unsafe context, you can:

  • Declare and use pointer types (int*, byte*)
  • Perform pointer arithmetic (incrementing, decrementing addresses)
  • Convert between pointers and integers
  • Take the address of variables with the & operator
  • Dereference pointers with the * operator

unsafe void ProcessBuffer(byte* buffer, int length)
{
    for (int i = 0; i < length; i++)
    {
        buffer[i] = (byte)(buffer[i] * 2);  // No bounds check!
    }
}

⚠️ Critical: Unsafe code bypasses all runtime safety checks. You are responsible for:

  • Ensuring pointers point to valid memory
  • Preventing buffer overruns
  • Avoiding use-after-free errors
  • Managing memory lifetimes manually

One mistake can corrupt arbitrary memory, crash your application, or create security vulnerabilities.

The Span Bridge: Safe Performance 🌉

Span<T> represents one of .NET's most important innovations: a type-safe view over a contiguous region of memory, wherever that memory lives. It provides pointer-like performance while maintaining safety guarantees:

void ProcessData(Span<byte> data)
{
    for (int i = 0; i < data.Length; i++)
    {
        data[i] *= 2;  // Bounds-checked but JIT-optimized
    }
}

// Can wrap stack memory
Span<byte> stackBuffer = stackalloc byte[256];
ProcessData(stackBuffer);

// Can wrap heap arrays
byte[] heapArray = new byte[1024];
ProcessData(heapArray);

// Can wrap native memory
unsafe
{
    byte* nativePtr = (byte*)Marshal.AllocHGlobal(512);
    Span<byte> nativeSpan = new Span<byte>(nativePtr, 512);
    ProcessData(nativeSpan);
    Marshal.FreeHGlobal((IntPtr)nativePtr);
}

Span<T> is a ref struct, meaning it can only live on the stack. This restriction enables the JIT compiler to perform aggressive optimizations, often eliminating bounds checks entirely while maintaining safety.

💡 Best practice: Use Span<T> and Memory<T> for performance-critical code before reaching for unsafe pointers. You get 80-90% of the performance benefit with zero safety compromise.
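
One concrete way to see the "zero safety compromise" claim: Span<T> slices are views, not copies, so writes through a slice land in the original buffer while remaining bounds-checked. A small sketch:

```csharp
using System;

byte[] packet = new byte[8];

// A view over bytes 4..7 of the array: no allocation, no copy.
Span<byte> payload = packet.AsSpan(4, 4);
payload.Fill(0xFF);

Console.WriteLine(packet[3]);  // 0   (outside the slice, untouched)
Console.WriteLine(packet[4]);  // 255 (inside the slice)
```

Any index outside the slice's four bytes would throw, so the view cannot reach adjacent memory.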

Fixed Statements and Pinning

The garbage collector moves objects during compaction. If you take a pointer to managed memory, the GC could move that object, making your pointer invalid. The fixed statement pins an object temporarily:

byte[] managedArray = new byte[1024];

unsafe
{
    fixed (byte* ptr = managedArray)
    {
        // ptr is valid and won't move during this block
        ProcessNativeFunction(ptr, managedArray.Length);
    }
    // managedArray is unpinned here, can be moved by GC
}

What fixed does:

  1. Tells the GC: "Don't move this object"
  2. Gets the address of the first element
  3. Maintains the pin until the block exits
  4. Automatically unpins when leaving scope

⚠️ Warning: Pinning fragments the heap. The GC cannot compact around pinned objects, creating gaps. Long-term or excessive pinning degrades GC performance.

Pinning Duration | Impact                    | Use Case
< 1 ms           | Negligible                | Brief native calls
1-100 ms         | Minor                     | I/O operations
> 100 ms         | Significant fragmentation | Avoid! Use native allocation
Indefinite       | Severe degradation        | Never do this

Stack Allocation with stackalloc

stackalloc allocates memory on the call stack instead of the heap, bypassing the garbage collector entirely:

// Safe: returns Span<T>
Span<int> stackInts = stackalloc int[100];

// Unsafe: returns pointer
unsafe
{
    int* stackPtr = stackalloc int[100];
}

Stack allocation advantages:

  • ⚡ Zero allocation overhead (just moving the stack pointer)
  • ⚡ Zero GC pressure (never collected, automatically freed)
  • ⚡ Excellent cache locality
  • ⚡ No fragmentation

Stack allocation limitations:

  • ⚠️ Limited size (typically 1MB stack per thread)
  • ⚠️ Lifetime tied to method scope
  • ⚠️ Cannot be returned to the caller (can only be passed down as Span)
  • ⚠️ Stack overflow risk with large allocations

💡 Rule of thumb: Use stackalloc for temporary buffers under 1 KB. From roughly 1 KB to 80 KB, rent from a pool such as ArrayPool<T>. Above the ~85,000-byte threshold, arrays land on the large object heap automatically, so reuse them rather than reallocating.
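
That rule of thumb is often applied as a single hybrid pattern: stackalloc when the output is small, rent from ArrayPool<T> otherwise. A sketch (the 256-element cutoff is an illustrative choice, not a BCL constant):

```csharp
using System;
using System.Buffers;

static string ToHex(ReadOnlySpan<byte> input)
{
    int needed = input.Length * 2;

    // Small outputs use the stack; larger ones rent from the shared pool.
    char[]? rented = needed <= 256 ? null : ArrayPool<char>.Shared.Rent(needed);
    Span<char> chars = rented is null ? stackalloc char[256] : rented;

    try
    {
        for (int i = 0; i < input.Length; i++)
            input[i].TryFormat(chars.Slice(i * 2), out _, "x2");
        return new string(chars.Slice(0, needed));
    }
    finally
    {
        if (rented is not null)
            ArrayPool<char>.Shared.Return(rented);
    }
}

Console.WriteLine(ToHex(new byte[] { 0xAB, 0x01 }));  // ab01
```

Either way the body works against a single Span<char>, so the fast and slow paths share all of their processing code.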

Memory<T> and ReadOnlyMemory<T>

While Span<T> is a ref struct limited to the stack, Memory<T> is a regular struct that can:

  • Be stored in fields
  • Be used in async methods
  • Cross await boundaries
  • Be placed in heap-allocated objects

public class BufferProcessor
{
    private Memory<byte> _buffer;  // Can store as field
    
    public async Task ProcessAsync()
    {
        // Can use across await
        await ReadDataAsync(_buffer);
        ProcessData(_buffer.Span);  // Convert to Span for processing
    }
}

Memory<T> can wrap an array or native memory (via MemoryManager<T>), providing a unified abstraction; strings, being immutable, are covered by ReadOnlyMemory<char> through string.AsMemory(). When you need to process the data, call .Span to get a Span<T> view.
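
For strings specifically, AsMemory gives a read-only view you can slice without allocating substrings, a small sketch:

```csharp
using System;

string csv = "alpha,beta,gamma";

// A read-only view over the middle token: no Substring allocation.
ReadOnlyMemory<char> token = csv.AsMemory(6, 4);

Console.WriteLine(token.ToString());                  // beta
Console.WriteLine(token.Span.SequenceEqual("beta"));  // True
```

Parsers built this way hand out views into the original string and allocate only when a caller actually asks for a new string.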

ReadOnlyMemory<T> and ReadOnlySpan<T> provide immutable views, enabling safe sharing without defensive copying:

public void ParseHeader(ReadOnlySpan<byte> header)
{
    // Caller knows we won't modify the data
    int version = header[0];
    // header[0] = 1;  // Compile error!
}

Detailed Examples: Patterns in Practice 🔧

Example 1: Safe Array Processing vs Unsafe Optimization

Let's compare safe and unsafe approaches to a common task: summing an array.

Safe approach:

public long SumSafe(int[] numbers)
{
    long sum = 0;
    for (int i = 0; i < numbers.Length; i++)
    {
        sum += numbers[i];  // Bounds-checked
    }
    return sum;
}

Unsafe approach:

public unsafe long SumUnsafe(int[] numbers)
{
    long sum = 0;
    fixed (int* ptr = numbers)
    {
        int* end = ptr + numbers.Length;
        for (int* p = ptr; p < end; p++)
        {
            sum += *p;  // No bounds check
        }
    }
    return sum;
}

Span approach (best of both worlds):

public long SumSpan(ReadOnlySpan<int> numbers)
{
    long sum = 0;
    for (int i = 0; i < numbers.Length; i++)
    {
        sum += numbers[i];  // JIT eliminates bounds check
    }
    return sum;
}

Benchmark results (1 million elements, averaged):

  • Safe: 1.20ms
  • Unsafe: 0.95ms (21% faster)
  • Span: 0.97ms (19% faster, safe!)

The JIT compiler recognizes the pattern in the Span version and eliminates bounds checks. You get near-pointer performance while maintaining safety.

💡 Takeaway: Modern C# with Span eliminates most needs for unsafe code in array processing.

Example 2: Interop with Native Libraries

When calling native C/C++ libraries, you often need to pass pointers. Here's a safe wrapper pattern:

// Native function declaration
[DllImport("nativelib")]
private static extern unsafe int ProcessImage(
    byte* pixels, 
    int width, 
    int height
);

// Safe public wrapper
public int ProcessImage(Span<byte> pixels, int width, int height)
{
    if (pixels.Length < width * height * 4)
        throw new ArgumentException("Buffer too small");
    
    unsafe
    {
        fixed (byte* ptr = pixels)
        {
            return ProcessImage(ptr, width, height);
        }
    }
}

// Usage - completely safe
byte[] imageData = LoadImage("photo.png");
int result = ProcessImage(imageData, 1920, 1080);

Key pattern elements:

  1. Keep the unsafe DllImport private
  2. Provide a safe public API using Span
  3. Validate buffer sizes before calling native code
  4. Use fixed only for the minimum necessary scope
  5. Document ownership and lifetime expectations

This pattern appears throughout the .NET runtime's own codebase for P/Invoke operations.

Example 3: Custom Memory Pool with Unsafe Tricks

For high-performance scenarios, you might implement a custom memory pool. Here's a simplified version showing unsafe patterns:

public unsafe class NativeMemoryPool : IDisposable
{
    private readonly int _blockSize;
    private readonly Stack<IntPtr> _freeBlocks;
    private readonly List<IntPtr> _allAllocations;
    
    public NativeMemoryPool(int blockSize, int initialBlocks)
    {
        _blockSize = blockSize;
        _freeBlocks = new Stack<IntPtr>(initialBlocks);
        _allAllocations = new List<IntPtr>(initialBlocks);
        
        // Pre-allocate blocks
        for (int i = 0; i < initialBlocks; i++)
        {
            IntPtr block = Marshal.AllocHGlobal(_blockSize);
            _allAllocations.Add(block);
            _freeBlocks.Push(block);
        }
    }
    
    public Span<byte> Rent()
    {
        IntPtr block;
        
        lock (_freeBlocks)
        {
            if (_freeBlocks.Count == 0)
            {
                // Grow pool
                block = Marshal.AllocHGlobal(_blockSize);
                _allAllocations.Add(block);
            }
            else
            {
                block = _freeBlocks.Pop();
            }
        }
        
        return new Span<byte>((void*)block, _blockSize);
    }
    
    public void Return(Span<byte> buffer)
    {
        // Recover the block's pointer from the span; callers must return
        // exactly the span they rented (same start address and length).
        fixed (byte* ptr = buffer)
        {
            lock (_freeBlocks)
            {
                _freeBlocks.Push((IntPtr)ptr);
            }
        }
    }
    
    public void Dispose()
    {
        foreach (IntPtr block in _allAllocations)
        {
            Marshal.FreeHGlobal(block);
        }
        _allAllocations.Clear();
        _freeBlocks.Clear();
    }
}

// Usage
using var pool = new NativeMemoryPool(blockSize: 4096, initialBlocks: 10);

Span<byte> buffer = pool.Rent();
try
{
    // Use buffer for processing
    ProcessData(buffer);
}
finally
{
    pool.Return(buffer);
}

Why this pattern works:

  • Native allocations don't pressure the GC
  • Reusing blocks eliminates allocation overhead
  • Span provides safe access to the unsafe memory
  • Lock protects the free list from concurrent access
  • Dispose ensures no memory leaks

⚠️ Real-world consideration: The BCL's ArrayPool<T> and MemoryPool<T> are better solutions for most cases. Build custom pools only when profiling proves they're necessary.
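
For contrast, here is the same rent/use/return flow with the built-in ArrayPool<T>. Note that Rent may hand back a larger array than requested, so code should track the size it asked for:

```csharp
using System;
using System.Buffers;

const int size = 4096;

// Rent may return an array larger than requested.
byte[] buffer = ArrayPool<byte>.Shared.Rent(size);
int rentedLength = buffer.Length;
try
{
    // Work only within the size you asked for.
    Span<byte> slice = buffer.AsSpan(0, size);
    slice.Fill(0x2A);
}
finally
{
    // Return so the array can be reused; pass clearArray: true for sensitive data.
    ArrayPool<byte>.Shared.Return(buffer, clearArray: false);
}

Console.WriteLine(rentedLength >= size);  // True
```

Unlike the native pool above, there is nothing to dispose and no unsafe code anywhere: the GC still owns the arrays, the pool merely recycles them.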

Example 4: High-Performance Struct Reinterpretation

Sometimes you need to reinterpret bytes as different types, a common need in network protocols and file formats:

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct MessageHeader
{
    public uint MagicNumber;    // 4 bytes
    public ushort Version;      // 2 bytes
    public ushort MessageType;  // 2 bytes
    public uint PayloadLength;  // 4 bytes
    // Total: 12 bytes
}

// Safe approach with MemoryMarshal
public MessageHeader ParseHeaderSafe(ReadOnlySpan<byte> buffer)
{
    if (buffer.Length < 12)
        throw new ArgumentException("Buffer too small");
    
    return MemoryMarshal.Read<MessageHeader>(buffer);
}

// Unsafe approach with pointer cast
public unsafe MessageHeader ParseHeaderUnsafe(byte* buffer)
{
    // Direct cast - no copying!
    return *(MessageHeader*)buffer;
}

// Most efficient: ref return with MemoryMarshal
public ref readonly MessageHeader ParseHeaderRef(ReadOnlySpan<byte> buffer)
{
    if (buffer.Length < 12)
        throw new ArgumentException("Buffer too small");
    
    return ref MemoryMarshal.AsRef<MessageHeader>(buffer);
}

Performance comparison (parsing 1 million headers):

  • Manual field-by-field copy: ~45ms
  • MemoryMarshal.Read: ~12ms
  • Unsafe pointer cast: ~8ms
  • MemoryMarshal.AsRef: ~8ms (safe!)

💡 Key insight: MemoryMarshal provides unsafe-level performance with safety. Use it for binary serialization, protocol parsing, and interop.

⚠️ Endianness warning: These techniques read bytes in native CPU order. Use BinaryPrimitives.ReadUInt32BigEndian() and related methods for cross-platform wire formats.
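
A hedged sketch of the endianness-explicit alternative for the same 12-byte header, assuming the wire format is big-endian:

```csharp
using System;
using System.Buffers.Binary;

static (uint Magic, ushort Version, ushort Type, uint PayloadLength) ParseHeaderWire(
    ReadOnlySpan<byte> buffer)
{
    if (buffer.Length < 12)
        throw new ArgumentException("Buffer too small");

    // Each read names its byte order explicitly, so results are identical on any CPU.
    return (BinaryPrimitives.ReadUInt32BigEndian(buffer),
            BinaryPrimitives.ReadUInt16BigEndian(buffer.Slice(4)),
            BinaryPrimitives.ReadUInt16BigEndian(buffer.Slice(6)),
            BinaryPrimitives.ReadUInt32BigEndian(buffer.Slice(8)));
}

byte[] wire = { 0x00, 0xAB, 0xCD, 0xEF, 0x00, 0x01, 0x00, 0x05, 0x00, 0x00, 0x01, 0x00 };
var header = ParseHeaderWire(wire);
Console.WriteLine($"0x{header.Magic:X}");  // 0xABCDEF
Console.WriteLine(header.PayloadLength);   // 256
```

This is slightly slower than a raw struct cast but immune to both endianness and alignment surprises.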

MEMORY LAYOUT: Sequential Struct

MessageHeader in memory (12 bytes):
┌─────────┬─────────┬─────────┬─────────┐
│  Magic  │ Version │ MsgType │ Payload │
│ 4 bytes │ 2 bytes │ 2 bytes │ 4 bytes │
├─────────┼─────────┼─────────┼─────────┤
│0xABCDEF │  0x01   │  0x05   │ 0x0100  │
└─────────┴─────────┴─────────┴─────────┘

Direct cast vs Field Copy:

    Cast:     buffer → (MessageHeader*) → return
              ↑
              └── Zero-copy, just reinterpret

    Copy:     buffer → temp → field1, field2... → struct
              ↑
              └── Allocate + copy each field

Common Mistakes and Pitfalls ⚠️

Mistake 1: Returning Stack-Allocated Memory

// WRONG - CATASTROPHIC BUG
public unsafe int* CreateArray()
{
    int* arr = stackalloc int[10];
    return arr;  // Returns pointer to stack memory!
}
// When caller uses this pointer, the stack has been reused
// Result: corruption, crash, or worse

Why it fails: Stack memory is automatically freed when the method returns. The returned pointer dangles.

Correct alternatives:

// Returning a stackalloc'ed Span is ALSO rejected: the compiler's escape
// analysis reports an error (CS8353), because the span would outlive the
// stack memory it refers to.
// public Span<int> CreateSpan() => stackalloc int[10];  // Does not compile

// Let the caller own the buffer instead (safe)
public void FillSpan(Span<int> destination)
{
    destination.Clear();  // Works on stack or heap memory the caller provides
}

// Heap allocate (safe, persistent)
public int[] CreateArray()
{
    return new int[10];  // GC-managed
}

// Native allocate (unsafe, persistent, manual free)
public unsafe IntPtr CreateNative()
{
    return Marshal.AllocHGlobal(10 * sizeof(int));
    // Caller MUST call Marshal.FreeHGlobal!
}

Mistake 2: Pointer Arithmetic Outside Bounds

// WRONG - BUFFER OVERRUN
public unsafe void FillBuffer(byte* buffer, int size)
{
    for (int i = 0; i <= size; i++)  // Off-by-one: should be <
    {
        buffer[i] = 0xFF;  // Writes past end!
    }
}

In safe code, this throws IndexOutOfRangeException. In unsafe code, it silently corrupts memory. This can:

  • Overwrite adjacent data structures
  • Corrupt the heap
  • Create security vulnerabilities
  • Cause crashes hours later (Heisenbugs)

Defense strategies:

  1. Always use < not <= in loop conditions
  2. Assert preconditions: Debug.Assert(size >= 0);
  3. Use length, not capacity: buffer + size marks the end
  4. Prefer Span which maintains length information
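
Defense 4 is usually the decisive one. A sketch of the same fill written against Span<byte>: the length travels with the reference, so the off-by-one above would throw instead of corrupting memory:

```csharp
using System;

static void FillBuffer(Span<byte> buffer)
{
    // i < Length; even a mistake here would throw IndexOutOfRangeException
    // rather than write past the end of the buffer.
    for (int i = 0; i < buffer.Length; i++)
        buffer[i] = 0xFF;
}

byte[] data = new byte[16];
FillBuffer(data);
Console.WriteLine(data[15]);  // 255
```

The caller can no longer pass a mismatched pointer/length pair, because there is only one argument.
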

Mistake 3: Long-Duration Pinning

// WRONG - PINS FOREVER
public unsafe class BadPattern
{
    private byte[] _buffer = new byte[1024 * 1024];
    private GCHandle _handle;
    private byte* _ptr;
    
    public BadPattern()
    {
        _handle = GCHandle.Alloc(_buffer, GCHandleType.Pinned);
        _ptr = (byte*)_handle.AddrOfPinnedObject();
    }
    
    // Use _ptr throughout object lifetime...
}

Problems:

  • The 1MB array cannot be moved by the GC
  • Creates a 1MB "hole" in the heap
  • Prevents compaction around it
  • If many instances exist: severe fragmentation

Better approaches:

// Option 1: Pin only when needed
public void ProcessData()
{
    unsafe
    {
        fixed (byte* ptr = _buffer)
        {
            NativeOperation(ptr);  // Brief pin
        }
    }
}

// Option 2: Use native memory instead
public unsafe class BetterPattern : IDisposable
{
    private IntPtr _nativeBuffer;
    
    public BetterPattern()
    {
        _nativeBuffer = Marshal.AllocHGlobal(1024 * 1024);
    }
    
    public void Dispose()
    {
        Marshal.FreeHGlobal(_nativeBuffer);
    }
}

Mistake 4: Ignoring Alignment Requirements

// WRONG - UNALIGNED ACCESS
public unsafe long ReadLong(byte* buffer)
{
    return *(long*)(buffer + 1);  // Might be unaligned!
}

On some architectures (ARM, older x86), unaligned memory access:

  • Causes performance penalties (2-3x slower)
  • Can throw alignment exceptions
  • May read incorrect values

Requirements:

  • long (8 bytes) should be 8-byte aligned
  • int (4 bytes) should be 4-byte aligned
  • short (2 bytes) should be 2-byte aligned

Safe reading:

public long ReadLongSafe(ReadOnlySpan<byte> buffer)
{
    return BinaryPrimitives.ReadInt64LittleEndian(buffer);
    // Handles alignment correctly
}

public unsafe long ReadLongUnsafe(byte* buffer)
{
    if (((long)buffer & 7) != 0)  // Check 8-byte alignment
    {
        // Unaligned - copy to aligned temp
        long temp;
        Buffer.MemoryCopy(buffer, &temp, 8, 8);
        return temp;
    }
    return *(long*)buffer;  // Aligned - direct read
}

Mistake 5: Mixing Spans with Async

// WRONG - COMPILE ERROR
public async Task ProcessAsync()
{
    Span<byte> buffer = stackalloc byte[256];
    await SomeOperationAsync();  // ERROR: can't await with Span
    ProcessBuffer(buffer);
}

Why it fails: Span<T> is a ref struct that can only live on the stack. Async methods store state in heap-allocated state machines. The two are incompatible.

Solutions:

// Option 1: Use Memory<T> instead
public async Task ProcessAsync()
{
    Memory<byte> buffer = new byte[256];
    await SomeOperationAsync();
    ProcessBuffer(buffer.Span);  // Convert when needed
}

// Option 2: Complete Span work before await
public async Task ProcessAsync()
{
    {
        Span<byte> buffer = stackalloc byte[256];
        ProcessBuffer(buffer);  // Complete synchronous work
    }  // Span out of scope
    await SomeOperationAsync();  // Now we can await
}

// Option 3: Separate sync and async
public void ProcessSync(Span<byte> buffer) { /*...*/ }
public async Task ProcessAsync()
{
    byte[] buffer = ArrayPool<byte>.Shared.Rent(256);
    try
    {
        ProcessSync(buffer);
        await SomeOperationAsync();
    }
    finally
    {
        ArrayPool<byte>.Shared.Return(buffer);
    }
}

Pattern           | Safety              | Performance   | Complexity  | When to Use
Standard arrays   | ✅ Safe             | ⚡ Good       | 🟢 Simple   | Default choice
Span<T>           | ✅ Safe             | ⚡⚡ Excellent | 🟡 Moderate | Hot paths, no async
Memory<T>         | ✅ Safe             | ⚡⚡ Excellent | 🟡 Moderate | Async operations
stackalloc        | ✅ Safe (with Span) | ⚡⚡⚡ Peak    | 🟡 Moderate | Small temp buffers
Unsafe pointers   | ❌ Unsafe           | ⚡⚡⚡ Peak    | 🔴 High     | Interop, proven bottlenecks
Native allocation | ❌ Manual           | ⚡⚡⚡ Peak    | 🔴 High     | Off-heap, large buffers

Key Takeaways 🎯

Core Principles:

  1. Default to safe patterns: Use managed arrays, List<T>, and standard collections. They're fast enough for 95% of scenarios.

  2. Span<T> is your performance friend: It provides near-pointer performance with full safety. Use it for buffer manipulation, parsing, and formatting.

  3. Memory<T> bridges sync and async: When you need Span semantics in async code, Memory<T> is the answer.

  4. stackalloc for tiny temp buffers: Allocations under 1KB that live only within a method are perfect for stackalloc. Use the Span<T> form for safety.

  5. Unsafe is a last resort: Before writing unsafe code, profile and prove the bottleneck. Often the JIT's optimizations make safe code just as fast.

  6. Pin briefly, allocate natively for long-term: If you need persistent pointers, allocate off-heap with Marshal.AllocHGlobal rather than pinning managed objects.

  7. MemoryMarshal for zero-copy operations: Reinterpreting bytes as structs, getting array references, and other low-level operations have safe APIs.
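
Takeaway 7 in miniature, a sketch: MemoryMarshal.Cast reinterprets a byte span as a span of ints with no copying and no unsafe keyword (the values read depend on CPU endianness):

```csharp
using System;
using System.Runtime.InteropServices;

byte[] raw = { 1, 0, 0, 0, 2, 0, 0, 0 };

// Zero-copy reinterpretation: the same 8 bytes viewed as two 4-byte ints.
Span<int> ints = MemoryMarshal.Cast<byte, int>(raw);

Console.WriteLine(ints.Length);  // 2
// On little-endian CPUs this prints 1 and 2.
Console.WriteLine(ints[0]);
Console.WriteLine(ints[1]);
```

Writes through `ints` land in `raw` as well; it is one buffer with two typed views.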

Decision Framework:

┌─────────────────────────────────────────────┐
│        SHOULD I USE UNSAFE CODE?            │
└─────────────────┬───────────────────────────┘
                  │
          ┌───────┴────────┐
          │   Is this a    │
          │ proven hotspot?│
          └───────┬────────┘
                  │
        ┌─────────┴──────────┐
        │                    │
     ┌──┴──┐              ┌──┴──┐
     │ NO  │              │ YES │
     └──┬──┘              └──┬──┘
        │                    │
        ▼                    ▼
   Use safe         Try Span first
    patterns               │
                           │
                   ┌───────┴────────┐
                   │  Still too     │
                   │   slow?        │
                   └───────┬────────┘
                           │
                 ┌─────────┴──────────┐
                 │                    │
              ┌──┴──┐              ┌──┴──┐
              │ NO  │              │ YES │
              └──┬──┘              └──┬──┘
                 │                    │
                 ▼                    ▼
            You're done      Consider unsafe
                            (with caution!)

Safety Checklist for Unsafe Code:

✅ Document all invariants and ownership rules
✅ Validate all inputs at the boundary
✅ Use Debug.Assert liberally for preconditions
✅ Keep unsafe blocks minimal
✅ Wrap unsafe internals with safe public APIs
✅ Write comprehensive tests including edge cases
✅ Use address sanitizers and memory profilers
✅ Code review all unsafe code
✅ Profile to verify the performance gain justifies the risk

📋 Quick Reference Card

Feature        | Syntax                       | Safety
Standard array | T[] arr = new T[n];          | ✅ Safe, GC-managed
Span (heap)    | Span<T> s = array;           | ✅ Safe, bounds-checked
Span (stack)   | Span<T> s = stackalloc T[n]; | ✅ Safe, stack-only
Memory         | Memory<T> m = array;         | ✅ Safe, async-compatible
Pointer        | T* ptr;                      | ❌ Unsafe, no checks
Fixed pin      | fixed (T* p = arr) {...}     | ⚠️ Brief pins OK
Native alloc   | Marshal.AllocHGlobal(size)   | ❌ Manual free required
Reinterpret    | MemoryMarshal.Cast<T,U>()    | ✅ Safe cast
Get reference  | ref T r = ref arr[0];        | ✅ Safe, tracked
Address-of     | T* p = &variable;            | ❌ Unsafe, fixed required

Performance hierarchy (fastest to slowest):
πŸ† stackalloc + pointers > stackalloc + Span > native allocation > heap arrays > LINQ on arrays

Safety hierarchy (safest to riskiest):
🔒 Span<T> > Memory<T> > fixed statement > long-term pins > unmanaged pointers > unvalidated pointers


Mastering the spectrum from safe to unsafe patterns gives you the tools to build both reliable and performant .NET applications. Start with safety, optimize with profiling, and reach for unsafe code only when the data demands it. 🚀