Key Takeaways
1. Measure Everything: Performance is Data-Driven.
You do NOT know where your performance problems are if you have not measured accurately.
Performance is not guesswork. Gut feelings and code inspection can offer hints, but only accurate measurement reveals the true bottlenecks. Premature optimization of non-critical code is a waste of time; focus efforts where they yield the biggest impact, guided by data.
Define quantifiable goals. Vague notions of "fast" or "responsive" are useless; performance requirements must be specific and measurable. Track metrics like latency (using percentiles, not just averages), memory usage (working set vs. private bytes), and CPU time under defined load conditions to know if you're meeting your goals.
Automate measurement. Integrate performance monitoring into your development, testing, and production environments. Tools like Performance Counters and ETW events allow for continuous tracking and historical analysis, providing solid data to back up claims of improvement and quickly identify regressions.
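A minimal sketch of the percentile-based latency tracking described above. The workload (`DoWork`), sample count, and nearest-rank percentile method are illustrative placeholders, not anything prescribed by the book:

```csharp
using System;
using System.Diagnostics;

class LatencyDemo
{
    // Nearest-rank percentile over a sorted sample.
    public static double Percentile(double[] sortedMs, double p)
    {
        int rank = (int)Math.Ceiling(p / 100.0 * sortedMs.Length);
        return sortedMs[Math.Max(0, rank - 1)];
    }

    static void Main()
    {
        var samples = new double[1000];
        var sw = new Stopwatch();
        for (int i = 0; i < samples.Length; i++)
        {
            sw.Restart();
            DoWork(i);                  // stand-in for the operation under test
            sw.Stop();
            samples[i] = sw.Elapsed.TotalMilliseconds;
        }
        Array.Sort(samples);
        // Report percentiles, not just the average: tail latency is what users feel.
        Console.WriteLine($"p50={Percentile(samples, 50):F3}ms " +
                          $"p95={Percentile(samples, 95):F3}ms " +
                          $"p99={Percentile(samples, 99):F3}ms");
    }

    static void DoWork(int i)
    {
        // Stand-in workload: a small allocation-free computation.
        double acc = 0;
        for (int j = 0; j < 1000; j++) acc += Math.Sqrt(i + j);
    }
}
```

In production you would feed the same samples into Performance Counters or ETW events rather than the console.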
2. Master Memory: Work With the Garbage Collector.
Collect objects in gen 0 or not at all.
GC is a feature, not a bug. .NET's garbage collector (GC) simplifies memory management but requires understanding to optimize performance. The core principle is to make objects either extremely short-lived (cleaned up in fast Gen 0 collections) or very long-lived (promoted to Gen 2 and kept forever, often via pooling).
Reduce allocation rate and object lifetime. The time GC takes depends on live objects, not allocated ones. Minimize memory allocation, especially for large objects (>= 85,000 bytes) which go to the Large Object Heap (LOH) and are expensive to collect and prone to fragmentation. Pool large or frequently used objects to avoid repeated allocations.
Understand GC configuration. Choose Workstation GC for desktop apps and Server GC for dedicated servers to leverage parallel collection. Background GC (default) allows Gen 2 collections concurrently. Use low-latency modes or LOH compaction sparingly and with careful measurement, as they have significant trade-offs.
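The pooling advice above can be sketched as a minimal buffer pool: each large buffer is allocated once (landing on the LOH if ≥ 85,000 bytes), then reused forever instead of being repeatedly allocated and collected. This is an illustrative sketch; on newer runtimes `System.Buffers.ArrayPool<byte>` provides a production-ready equivalent:

```csharp
using System.Collections.Concurrent;

// Minimal pooling sketch: reuse large buffers so they are allocated once
// and never collected, rather than churning through the LOH.
class BufferPool
{
    private readonly ConcurrentBag<byte[]> _buffers = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize) { _bufferSize = bufferSize; }

    public byte[] Rent()
    {
        byte[] buffer;
        // Hand out a pooled buffer if one is free; otherwise allocate a new one.
        return _buffers.TryTake(out buffer) ? buffer : new byte[_bufferSize];
    }

    public void Return(byte[] buffer)
    {
        // Only take back buffers of the expected size.
        if (buffer != null && buffer.Length == _bufferSize)
            _buffers.Add(buffer);
    }
}
```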
3. Optimize JIT: Control Startup and Code Generation.
The first time any method is called, there is always a performance hit.
JITting adds startup cost. .NET code is compiled to Intermediate Language (IL) and then Just-In-Time (JIT) compiled to native code on first execution. This initial JIT cost can impact application startup time or the responsiveness of the first call to a method.
Reduce JIT time. Minimize the amount of code that needs JITting, especially during critical startup paths. Be aware that certain language features and APIs generate significant amounts of hidden IL, such as:
- The dynamic keyword
- async and await (though benefits often outweigh costs)
- Regular Expressions (especially uncompiled or complex ones)
- Code Generation (e.g., serializers)
Leverage pre-compilation. For critical startup performance, use Profile Optimization (Multicore JIT) to pre-JIT frequently used code based on profiling. For Universal Windows Platform apps, .NET Native compiles to native code ahead of time. For other scenarios, NGEN (Native Image Generator) can pre-compile assemblies, but has trade-offs in code locality and size.
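Enabling Multicore JIT is a two-line call to the real `System.Runtime.ProfileOptimization` API, placed as early as possible in startup. The profile directory and file name here are placeholders:

```csharp
using System.Runtime;

static class StartupOptimizer
{
    // Call at the very start of Main. On first run the runtime records which
    // methods get JITted; on later runs it pre-JITs them on background threads.
    public static void Enable(string profileDir)
    {
        ProfileOptimization.SetProfileRoot(profileDir);      // must be writable
        ProfileOptimization.StartProfile("Startup.profile"); // profile file name
    }
}
```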
4. Embrace Asynchrony: Avoid Blocking, Maximize Throughput.
To obtain the highest performance you must ensure that your program never wastes one resource while waiting for another.
Parallelism is key to throughput. Modern applications must leverage multiple CPU cores. Asynchronous programming is essential to prevent threads from blocking on I/O (network, disk, database) or other resources, allowing the system to utilize CPU cycles effectively while waiting.
Use Tasks and async/await. The Task Parallel Library (TPL) and the async/await keywords are the preferred way to manage concurrency in .NET. They abstract away thread pool management and simplify complex asynchronous workflows, making code look linear while avoiding blocking.
Never block on I/O or Tasks. Avoid synchronous I/O calls and never call .Wait() or .Result on a Task in performance-critical code. Instead, use await or .ContinueWith() to schedule subsequent work, allowing the current thread to return to the thread pool and handle other tasks.
5. Code Smart: Choose Types and Patterns Wisely.
In-depth performance optimization will often defy code abstractions.
Understand type costs. Classes (reference types) carry per-instance overhead (an object header and method table pointer) and live on the heap, adding GC pressure. Structs (value types) have no per-instance overhead and live on the stack or inline within other objects, offering better memory locality, especially in arrays. Choose structs for small, frequently used data to reduce GC pressure and improve cache performance.
Be wary of hidden costs. Language features and APIs can hide expensive operations. Properties are method calls, not just field access. foreach on IEnumerable can be slower than for on arrays due to enumerator overhead. Casting objects, especially down the hierarchy or to interfaces, has performance costs.
Optimize common operations. For structs, always implement Equals, GetHashCode (IEquatable&lt;T&gt;), and CompareTo (IComparable&lt;T&gt;) efficiently to avoid expensive reflection-based defaults and enable optimized collection operations. Use ref returns and locals in C# 7+ to avoid copying large structs or accessing array elements repeatedly.
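A sketch of a struct that follows this advice. `Point3` is a made-up example type; the point is that implementing `IEquatable<T>` and `IComparable<T>` lets collections compare values directly, avoiding the reflection-based `ValueType.Equals` and the boxing it causes:

```csharp
using System;

struct Point3 : IEquatable<Point3>, IComparable<Point3>
{
    public int X, Y, Z;

    public Point3(int x, int y, int z) { X = x; Y = y; Z = z; }

    // Strongly-typed equality: no boxing, no reflection.
    public bool Equals(Point3 other) => X == other.X && Y == other.Y && Z == other.Z;

    public override bool Equals(object obj) => obj is Point3 && Equals((Point3)obj);

    // Cheap hash combining all fields (a simple illustrative mix).
    public override int GetHashCode() => X ^ (Y << 10) ^ (Z << 20);

    // Ordering by X, then Y, then Z.
    public int CompareTo(Point3 other)
    {
        int c = X.CompareTo(other.X);
        if (c != 0) return c;
        c = Y.CompareTo(other.Y);
        return c != 0 ? c : Z.CompareTo(other.Z);
    }
}
```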
6. Know Your Framework: Understand API Costs.
You must understand the code executing behind every called API.
Framework APIs have trade-offs. The .NET Framework is general-purpose; its APIs prioritize correctness and usability over raw performance in many cases. Don't assume a simple API call is cheap, especially in performance-critical paths.
Inspect and question APIs. Use decompilers (like ILSpy) to examine the implementation of Framework methods you use frequently. Look for hidden costs like:
- Memory allocations (especially on the LOH)
- Expensive loops or algorithms
- Reliance on reflection or dynamic behavior
- Unnecessary validation or error handling
Choose the right tool. For common tasks like collections, strings, or I/O, .NET offers multiple APIs with varying performance characteristics. Benchmark alternatives (e.g., different XML parsers, string concatenation methods) to find the best fit for your specific scenario.
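As a crude illustration of benchmarking alternatives, the classic string-concatenation comparison: repeated `+=` allocates a new string per pass, while `StringBuilder` grows its buffer amortized. (A real comparison should add warm-up runs and many iterations, or use a harness like BenchmarkDotNet; this sketch only shows the shape of the experiment.)

```csharp
using System;
using System.Diagnostics;
using System.Text;

class ConcatBenchmark
{
    public static void Main()
    {
        const int n = 20000;

        var sw = Stopwatch.StartNew();
        string s = "";
        for (int i = 0; i < n; i++) s += "x";       // new string allocated each pass
        long concatMs = sw.ElapsedMilliseconds;

        sw.Restart();
        var sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.Append("x"); // amortized buffer growth
        string s2 = sb.ToString();
        long builderMs = sw.ElapsedMilliseconds;

        Console.WriteLine($"concat={concatMs}ms builder={builderMs}ms");
    }
}
```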
7. Leverage Tools: ETW, Profilers, and Debuggers are Your Friends.
PerfView, originally written by Microsoft .NET performance architect (and writer of this book’s Foreword) Vance Morrison, is one of the best for its sheer power.
Tools are essential for diagnosis. Effective performance analysis relies on powerful tools to collect and interpret data. Don't rely solely on basic IDE profilers; learn to use more advanced system-level tools.
Key tools and their uses:
- PerfView: Collects and analyzes ETW events (CPU, GC, JIT, custom). Excellent for stack analysis, finding allocation hot spots, and understanding GC behavior.
- WinDbg + SOS: Powerful debugger for examining managed heap state, object roots, pinned objects, and thread stacks. Essential for deep memory leak analysis.
- Visual Studio Profiler: User-friendly tools for CPU and Memory usage analysis during development.
- Performance Counters: System-wide metrics for monitoring overall application health and resource usage over time.
- ETW Events: Low-overhead logging mechanism used by OS and CLR. Define custom events to correlate application behavior with system performance.
Master the data. These tools often provide raw data (like ETW events or heap dumps). Learning to interpret this data, often by correlating information from multiple sources, is key to effective performance debugging.
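The custom ETW events mentioned above are defined by subclassing the real `System.Diagnostics.Tracing.EventSource` API; tools like PerfView can then collect them alongside CLR and OS events. The source name and event shapes here are illustrative:

```csharp
using System.Diagnostics.Tracing;

// Low-overhead custom events for correlating app behavior with system events.
[EventSource(Name = "MyCompany-MyApp")]
sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    [Event(1)]
    public void RequestStart(string url) { WriteEvent(1, url); }

    [Event(2)]
    public void RequestStop(string url, long elapsedMs) { WriteEvent(2, url, elapsedMs); }
}

// Usage:
//   AppEventSource.Log.RequestStart("/home");
//   ... handle the request ...
//   AppEventSource.Log.RequestStop("/home", 12);
```

When no ETW session is listening, `WriteEvent` is nearly free, so the events can stay in production code.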
8. Performance is Engineering: Design, Measure, Iterate.
Performance work should never be left for the end, especially in a macro or architectural sense.
Performance is a design feature. Like security or usability, performance must be considered from the outset, particularly for large or complex systems. Architectural decisions have the most significant impact and are hardest to change later.
Follow an iterative process. Performance optimization is not a one-time task. It requires continuous monitoring and refinement throughout the application lifecycle.
- Define goals and metrics.
- Design/implement with performance in mind.
- Measure against goals.
- Identify bottlenecks.
- Optimize (macro first, then micro).
- Repeat.
Build a performance culture. Encourage performance awareness within your team. Automate testing and monitoring, review code for performance anti-patterns, and prioritize performance fixes based on data.
9. Avoid Common Pitfalls: Exceptions, Boxing, Dynamic, Reflection.
Exceptions are very expensive to throw.
Exceptions are for exceptional cases. Throwing exceptions involves significant overhead (stack walks, object creation) and should not be used for control flow or expected error conditions. Use TryParse methods instead of Parse where input format is uncertain.
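A small sketch of the `TryParse` pattern, with an illustrative helper (`ParseOrZero` and its fallback value are made up for the example):

```csharp
using System;

class ParseDemo
{
    // int.Parse throws FormatException on bad input: expensive, and wrong
    // when malformed input is an *expected* case rather than an exceptional one.
    public static int ParseOrZero(string text)
    {
        int value;
        return int.TryParse(text, out value) ? value : 0;  // no exception on failure
    }

    static void Main()
    {
        Console.WriteLine(ParseOrZero("42"));           // 42
        Console.WriteLine(ParseOrZero("not-a-number")); // 0
    }
}
```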
Minimize boxing. Wrapping value types in objects (int to object) creates heap allocations and GC pressure. Avoid APIs that implicitly box (e.g., String.Format with value types, old non-generic collections).
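The boxing cases above, side by side: the non-generic `ArrayList` boxes every `int` into a heap object, while the generic `List<int>` stores them inline:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        // Non-generic collection: Add(object) boxes each int onto the heap.
        var boxed = new ArrayList();
        for (int i = 0; i < 3; i++) boxed.Add(i);

        // Generic collection: ints stored inline, no boxing, no GC pressure.
        var unboxed = new List<int>();
        for (int i = 0; i < 3; i++) unboxed.Add(i);

        // String.Format takes object parameters, so the value type 42 is boxed:
        string s = string.Format("{0}", 42);
        Console.WriteLine(s); // 42
    }
}
```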
Avoid dynamic and reflection in hot paths. The dynamic keyword and reflection APIs (like MethodInfo.Invoke) involve significant overhead due to runtime type resolution and code generation. Use them sparingly, especially in performance-critical code. If dynamic invocation is necessary for performance, consider code generation (System.Reflection.Emit) as an alternative.
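A sketch of the cost difference. Alongside `Reflection.Emit`, a lighter alternative for repeated dynamic invocation is binding the `MethodInfo` once to a typed delegate via `MethodInfo.CreateDelegate` (a real API, .NET 4.5+); the example method is made up:

```csharp
using System;
using System.Reflection;

class ReflectionDemo
{
    public static int Double(int x) { return x * 2; }

    static void Main()
    {
        MethodInfo method = typeof(ReflectionDemo).GetMethod("Double");

        // Slow path: every Invoke resolves and marshals arguments at runtime,
        // and boxes the int argument and result.
        object slow = method.Invoke(null, new object[] { 21 });

        // Faster for repeated calls: bind once to a typed delegate,
        // then invoke it like an ordinary method.
        var fast = (Func<int, int>)method.CreateDelegate(typeof(Func<int, int>));
        int result = fast(21);

        Console.WriteLine($"{slow} {result}"); // 42 42
    }
}
```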
10. Macro Over Micro: Focus on Architecture First.
Macro-optimizations are almost always more beneficial than micro-optimizations.
Prioritize optimization efforts. When performance issues arise, start by examining the highest levels of your system:
- Architecture: Is the overall design efficient? Are you using the right technologies?
- Algorithms: Are the core algorithms used appropriate for the data size and access patterns (Big O complexity)?
- Data Structures: Are you using collections and types that match your usage patterns and memory needs?
Micro-optimizations come last. Only after addressing high-level issues should you drill down into micro-optimizations like tweaking individual method implementations, reducing minor allocations, or optimizing small loops. These yield smaller gains and can obscure larger problems if done prematurely.
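A concrete instance of the algorithm/data-structure point: membership tests against a `List<T>` are O(n) scans (O(n²) for the loop below), while a `HashSet<T>` does O(1) hash lookups. The sizes here are arbitrary for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class LookupDemo
{
    public static void Main()
    {
        const int n = 20000;
        var list = new List<int>();
        var set = new HashSet<int>();
        for (int i = 0; i < n; i++) { list.Add(i); set.Add(i); }

        var sw = Stopwatch.StartNew();
        int hits = 0;
        for (int i = 0; i < n; i++)
            if (list.Contains(i)) hits++;   // O(n) scan per lookup
        long listMs = sw.ElapsedMilliseconds;

        sw.Restart();
        for (int i = 0; i < n; i++)
            if (set.Contains(i)) hits++;    // O(1) hash lookup per item
        long setMs = sw.ElapsedMilliseconds;

        Console.WriteLine($"List={listMs}ms HashSet={setMs}ms hits={hits}");
    }
}
```

No amount of micro-tuning the `List` loop would match simply picking the right data structure.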
The Seductiveness of Simplicity: .NET's ease of use can lead to quickly writing inefficient code. Understanding the underlying costs of seemingly simple constructs is crucial to avoid building slow systems rapidly.
Review Summary
Writing High-Performance .NET Code receives positive reviews, with an average rating of 4.31/5. Readers appreciate its practical advice on optimizing .NET applications, noting its value for advanced C# programmers. The book is praised for covering various performance-related topics, including garbage collection and JIT. While some find it essential for performance-critical systems, others mention that not all applications require such optimization. A few reviewers expected more in-depth insights given the author's Microsoft background, but overall, it's considered a solid resource for .NET developers.