Writing High-Performance .NET Code


by Ben Watson · 2014 · 280 pages · 4.31 (290 ratings)

Key Takeaways

1. Measure Everything: Performance is Data-Driven.

You do NOT know where your performance problems are if you have not measured accurately.

Performance is not guesswork. Gut feelings and code inspection can offer hints, but only accurate measurement reveals the true bottlenecks. Premature optimization of non-critical code is a waste of time; focus efforts where they yield the biggest impact, guided by data.

Define quantifiable goals. Vague notions of "fast" or "responsive" are useless; performance requirements must be specific and measurable. Track metrics like latency (using percentiles, not just averages), memory usage (working set vs. private bytes), and CPU time under defined load conditions to know if you're meeting your goals.

Automate measurement. Integrate performance monitoring into your development, testing, and production environments. Tools like Performance Counters and ETW events allow for continuous tracking and historical analysis, providing solid data to back up claims of improvement and quickly identify regressions.
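As a small sketch of the percentile advice above (the helper names here are mine, not the book's), latency can be sampled with `Stopwatch` and reported at p50/p95/p99 rather than as an average:

```csharp
using System;
using System.Diagnostics;

static class LatencyStats
{
    // Time an action repeatedly and return the sorted samples in milliseconds.
    public static double[] Measure(Action action, int iterations)
    {
        var samples = new double[iterations];
        var sw = new Stopwatch();
        for (int i = 0; i < iterations; i++)
        {
            sw.Restart();
            action();
            sw.Stop();
            samples[i] = sw.Elapsed.TotalMilliseconds;
        }
        Array.Sort(samples);
        return samples;
    }

    // Nearest-rank percentile over a pre-sorted array (p in 0..1).
    public static double Percentile(double[] sorted, double p)
    {
        int index = (int)Math.Ceiling(p * sorted.Length) - 1;
        return sorted[Math.Max(index, 0)];
    }
}
// Usage (MyOperation is a placeholder for the code under measurement):
//   var s = LatencyStats.Measure(() => MyOperation(), 1000);
//   Console.WriteLine($"p99: {LatencyStats.Percentile(s, 0.99):F3} ms");
```

The p99 value exposes tail latency that an average would hide entirely.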

2. Master Memory: Work With the Garbage Collector.

Collect objects in gen 0 or not at all.

GC is a feature, not a bug. .NET's garbage collector (GC) simplifies memory management but requires understanding to optimize performance. The core principle is to make objects either extremely short-lived (cleaned up in fast Gen 0 collections) or very long-lived (promoted to Gen 2 and kept forever, often via pooling).

Reduce allocation rate and object lifetime. A collection's cost is proportional to the objects that survive it, not to those that were allocated, so keeping objects short-lived keeps collections cheap. Minimize memory allocation, especially of large objects (>= 85,000 bytes), which go to the Large Object Heap (LOH) and are expensive to collect and prone to fragmentation. Pool large or frequently used objects to avoid repeated allocations.

Understand GC configuration. Choose Workstation GC for desktop apps and Server GC for dedicated servers to leverage parallel collection. Background GC (the default) runs Gen 2 collections concurrently with application threads. Use low-latency modes or LOH compaction sparingly and with careful measurement, as both carry significant trade-offs.
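A minimal pooling sketch illustrating "collect in gen 0 or not at all": large buffers are allocated once, eventually promoted to Gen 2, and then reused forever instead of being collected. (This is a hand-rolled illustration; `System.Buffers.ArrayPool<T>` provides a production-quality equivalent on newer frameworks.)

```csharp
using System.Collections.Concurrent;

// Pools fixed-size byte buffers so large allocations happen once, not per use.
class BufferPool
{
    private readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize) { _bufferSize = bufferSize; }

    public byte[] Rent()
    {
        // Reuse a pooled buffer when one is available; otherwise allocate.
        return _pool.TryTake(out var buffer) ? buffer : new byte[_bufferSize];
    }

    public void Return(byte[] buffer)
    {
        // Only accept buffers of the expected size back into the pool.
        if (buffer.Length == _bufferSize)
            _pool.Add(buffer);
    }
}
```

Pooled buffers live forever in Gen 2, so they add nothing to ongoing GC cost; the trade-off is the code to rent and return them correctly.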

3. Optimize JIT: Control Startup and Code Generation.

The first time any method is called, there is always a performance hit.

JITting adds startup cost. .NET code is compiled to Intermediate Language (IL) and then Just-In-Time (JIT) compiled to native code on first execution. This initial JIT cost can impact application startup time or the responsiveness of the first call to a method.

Reduce JIT time. Minimize the amount of code that needs JITting, especially during critical startup paths. Be aware that certain language features and APIs generate significant amounts of hidden IL, such as:

  • dynamic keyword
  • async and await (though benefits often outweigh costs)
  • Regular Expressions (especially uncompiled or complex ones)
  • Code Generation (e.g., serializers)

Leverage pre-compilation. For critical startup performance, use Profile Optimization (Multicore JIT) to pre-JIT frequently used code based on profiling. For Universal Windows Platform apps, .NET Native compiles to native code ahead of time. For other scenarios, NGEN (Native Image Generator) can pre-compile assemblies, but has trade-offs in code locality and size.
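Enabling Multicore JIT is a two-line change at the top of `Main` (the folder and profile names below are illustrative): the first run records which methods get JITted, and later runs replay that profile on background threads during startup.

```csharp
using System.IO;
using System.Runtime;

class Program
{
    static void Main()
    {
        // Any writable folder works for the profile cache.
        ProfileOptimization.SetProfileRoot(Path.Combine(Path.GetTempPath(), "MyAppProfiles"));
        ProfileOptimization.StartProfile("Startup.profile");

        // ... rest of application startup runs with background pre-JITting ...
    }
}
```

The calls are safe to leave in release builds; on platforms where the feature is unavailable they are simply no-ops.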

4. Embrace Asynchrony: Avoid Blocking, Maximize Throughput.

To obtain the highest performance you must ensure that your program never wastes one resource while waiting for another.

Parallelism is key to throughput. Modern applications must leverage multiple CPU cores. Asynchronous programming is essential to prevent threads from blocking on I/O (network, disk, database) or other resources, allowing the system to utilize CPU cycles effectively while waiting.

Use Tasks and async/await. The Task Parallel Library (TPL) and the async/await keywords are the preferred way to manage concurrency in .NET. They abstract away thread pool management and simplify complex asynchronous workflows, making code look linear while avoiding blocking.

Never block on I/O or Tasks. Avoid synchronous I/O calls and never call .Wait() or .Result on a Task in performance-critical code. Instead, use await or .ContinueWith() to schedule subsequent work, allowing the current thread to return to the thread pool and handle other tasks.
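A minimal illustration of the blocking pitfall (the method names are mine): the first variant parks a thread until the task completes and can deadlock under a synchronization context; the second releases the thread back to the pool while the I/O is in flight.

```csharp
using System.IO;
using System.Threading.Tasks;

static class FileReading
{
    // Bad: .Result blocks the calling thread until the task finishes,
    // and can deadlock when a synchronization context is present.
    public static string ReadBlocking(string path)
    {
        return ReadAsync(path).Result;
    }

    // Good: await frees the thread during the I/O wait.
    public static async Task<string> ReadAsync(string path)
    {
        using (var reader = new StreamReader(path))
            return await reader.ReadToEndAsync();
    }
}
```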

5. Code Smart: Choose Types and Patterns Wisely.

In-depth performance optimization will often defy code abstractions.

Understand type costs. Classes (reference types) carry per-instance overhead (object header and method table pointer) and live on the heap, adding GC work. Structs (value types) carry no per-instance overhead and are stored on the stack or inline within containing objects, offering better memory locality, especially in arrays. Choose structs for small, frequently used data to reduce GC pressure and improve cache performance.

Be wary of hidden costs. Language features and APIs can hide expensive operations. Properties are method calls, not just field access. foreach on IEnumerable can be slower than for on arrays due to enumerator overhead. Casting objects, especially down the hierarchy or to interfaces, has performance costs.

Optimize common operations. For structs, always implement Equals, GetHashCode (IEquatable<T>), and CompareTo (IComparable<T>) efficiently to avoid expensive reflection-based defaults and enable optimized collection operations. Use ref returns and locals in C# 7+ to avoid copying large structs or accessing array elements repeatedly.
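A sketch of a struct done right (the type itself is illustrative): implementing `IEquatable<T>` and overriding `GetHashCode` avoids the reflection-based `ValueType.Equals` default and the boxing it incurs.

```csharp
using System;

struct Point3D : IEquatable<Point3D>
{
    public readonly int X, Y, Z;

    public Point3D(int x, int y, int z) { X = x; Y = y; Z = z; }

    // Strongly-typed equality: no boxing, no reflection.
    public bool Equals(Point3D other) =>
        X == other.X && Y == other.Y && Z == other.Z;

    public override bool Equals(object obj) =>
        obj is Point3D other && Equals(other);

    // A simple mixing scheme; any cheap, well-distributed hash works.
    public override int GetHashCode() =>
        X ^ (Y << 8) ^ (Z << 16);
}
```

With these in place, `Dictionary<Point3D, T>` and `List<Point3D>.Contains` use the fast typed paths instead of boxing each element.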

6. Know Your Framework: Understand API Costs.

You must understand the code executing behind every called API.

Framework APIs have trade-offs. The .NET Framework is general-purpose; its APIs prioritize correctness and usability over raw performance in many cases. Don't assume a simple API call is cheap, especially in performance-critical paths.

Inspect and question APIs. Use decompilers (like ILSpy) to examine the implementation of Framework methods you use frequently. Look for hidden costs like:

  • Memory allocations (especially on the LOH)
  • Expensive loops or algorithms
  • Reliance on reflection or dynamic behavior
  • Unnecessary validation or error handling

Choose the right tool. For common tasks like collections, strings, or I/O, .NET offers multiple APIs with varying performance characteristics. Benchmark alternatives (e.g., different XML parsers, string concatenation methods) to find the best fit for your specific scenario.
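String concatenation is the classic example of such alternatives. A sketch of the two approaches (method names are mine): repeated `+=` copies every character accumulated so far on each iteration, giving O(n²) work, while `StringBuilder` grows a buffer and copies once at the end.

```csharp
using System.Text;

static class StringBuilding
{
    // O(n^2): each += allocates a new string and copies all prior content.
    public static string ConcatNaive(string[] parts)
    {
        string result = "";
        foreach (var part in parts)
            result += part;
        return result;
    }

    // O(n): appends into an internal buffer, one final copy in ToString().
    public static string ConcatBuffered(string[] parts)
    {
        var sb = new StringBuilder();
        foreach (var part in parts)
            sb.Append(part);
        return sb.ToString();
    }
}
```

For a handful of parts the naive form is fine (and `String.Concat`/`String.Join` are often better still); the gap only matters in loops over many pieces, which is exactly why benchmarking your specific scenario matters.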

7. Leverage Tools: ETW, Profilers, and Debuggers are Your Friends.

PerfView, originally written by Microsoft .NET performance architect (and writer of this book’s Foreword) Vance Morrison, is one of the best for its sheer power.

Tools are essential for diagnosis. Effective performance analysis relies on powerful tools to collect and interpret data. Don't rely solely on basic IDE profilers; learn to use more advanced system-level tools.

Key tools and their uses:

  • PerfView: Collects and analyzes ETW events (CPU, GC, JIT, custom). Excellent for stack analysis, finding allocation hot spots, and understanding GC behavior.
  • WinDbg + SOS: Powerful debugger for examining managed heap state, object roots, pinned objects, and thread stacks. Essential for deep memory leak analysis.
  • Visual Studio Profiler: User-friendly tools for CPU and Memory usage analysis during development.
  • Performance Counters: System-wide metrics for monitoring overall application health and resource usage over time.
  • ETW Events: Low-overhead logging mechanism used by OS and CLR. Define custom events to correlate application behavior with system performance.

Master the data. These tools often provide raw data (like ETW events or heap dumps). Learning to interpret this data, often by correlating information from multiple sources, is key to effective performance debugging.

8. Performance is Engineering: Design, Measure, Iterate.

Performance work should never be left for the end, especially in a macro or architectural sense.

Performance is a design feature. Like security or usability, performance must be considered from the outset, particularly for large or complex systems. Architectural decisions have the most significant impact and are hardest to change later.

Follow an iterative process. Performance optimization is not a one-time task. It requires continuous monitoring and refinement throughout the application lifecycle.

  1. Define goals and metrics.
  2. Design/implement with performance in mind.
  3. Measure against goals.
  4. Identify bottlenecks.
  5. Optimize (macro first, then micro).
  6. Repeat.

Build a performance culture. Encourage performance awareness within your team. Automate testing and monitoring, review code for performance anti-patterns, and prioritize performance fixes based on data.

9. Avoid Common Pitfalls: Exceptions, Boxing, Dynamic, Reflection.

exceptions are very expensive to throw.

Exceptions are for exceptional cases. Throwing exceptions involves significant overhead (stack walks, object creation) and should not be used for control flow or expected error conditions. Use TryParse methods instead of Parse where input format is uncertain.

Minimize boxing. Wrapping value types in objects (int to object) creates heap allocations and GC pressure. Avoid APIs that implicitly box (e.g., String.Format with value types, old non-generic collections).
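Both pitfalls in miniature (helper names are mine): `TryParse` treats bad input as an expected condition rather than throwing, and generic collections keep value types unboxed.

```csharp
using System.Collections;
using System.Collections.Generic;

static class Pitfalls
{
    // TryParse: invalid input is an expected condition, not an exception.
    public static int ParseOrDefault(string text, int fallback) =>
        int.TryParse(text, out int value) ? value : fallback;

    public static void BoxingDemo()
    {
        // ArrayList stores object: adding an int boxes it onto the heap.
        var boxed = new ArrayList();
        boxed.Add(42);                  // allocation + GC pressure

        // List<int> stores the values inline: no boxing, no extra GC work.
        var unboxed = new List<int> { 42 };
    }
}
```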

Avoid dynamic and reflection in hot paths. The dynamic keyword and reflection APIs (like MethodInfo.Invoke) involve significant overhead due to runtime type resolution and code generation. Use them sparingly, especially in performance-critical code. If dynamic invocation is necessary for performance, consider code generation (System.Reflection.Emit) as an alternative.
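When a method must be resolved dynamically but called repeatedly, a lighter alternative to `MethodInfo.Invoke` on every call is to pay the reflection cost once and cache a typed delegate (the types below are illustrative):

```csharp
using System;
using System.Reflection;

class Calculator
{
    public int Square(int x) => x * x;
}

static class DynamicInvocation
{
    // Resolve via reflection once, then return a delegate that calls
    // the method at normal invocation cost.
    public static Func<int, int> BindSquare(Calculator target)
    {
        MethodInfo method = typeof(Calculator).GetMethod("Square");
        // Slow path, for comparison: method.Invoke(target, new object[] { 5 })
        // pays argument boxing and reflection dispatch on every call.
        return (Func<int, int>)method.CreateDelegate(typeof(Func<int, int>), target);
    }
}
```

Each call through the cached delegate avoids the per-call boxing and dispatch overhead of `Invoke`; `System.Reflection.Emit` goes further still when even delegate binding is too slow.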

10. Macro Over Micro: Focus on Architecture First.

Macro-optimizations are almost always more beneficial than micro-optimizations.

Prioritize optimization efforts. When performance issues arise, start by examining the highest levels of your system:

  • Architecture: Is the overall design efficient? Are you using the right technologies?
  • Algorithms: Are the core algorithms used appropriate for the data size and access patterns (Big O complexity)?
  • Data Structures: Are you using collections and types that match your usage patterns and memory needs?

Micro-optimizations come last. Only after addressing high-level issues should you drill down into micro-optimizations like tweaking individual method implementations, reducing minor allocations, or optimizing small loops. These yield smaller gains and can obscure larger problems if done prematurely.

The Seductiveness of Simplicity: .NET's ease of use can lead to quickly writing inefficient code. Understanding the underlying costs of seemingly simple constructs is crucial to avoid building slow systems rapidly.


Review Summary

4.31 out of 5
Average of 290 ratings from Goodreads and Amazon.

Writing High-Performance .NET Code receives positive reviews, with an average rating of 4.31/5. Readers appreciate its practical advice on optimizing .NET applications, noting its value for advanced C# programmers. The book is praised for covering various performance-related topics, including garbage collection and JIT. While some find it essential for performance-critical systems, others mention that not all applications require such optimization. A few reviewers expected more in-depth insights given the author's Microsoft background, but overall, it's considered a solid resource for .NET developers.


About the Author

Ben Watson has been a software engineer at Microsoft since 2008, specializing in high-performance server applications. He has contributed significantly to the Bing platform, developing a .NET-based system that handles high-volume, low-latency requests across thousands of machines. His expertise in .NET performance optimization is reflected in his two technical books. Beyond his professional work, he enjoys geocaching, reading, classical music, and family time. His experience building large-scale, efficient systems at Microsoft has established him as an authority on .NET performance optimization.
