Java Concurrency in Practice

by Brian Goetz, 2006, 432 pages

Key Takeaways

1. Thread Safety is About Managing Shared, Mutable State

Writing thread-safe code is, at its core, about managing access to state, and in particular to shared, mutable state.

Defining Thread Safety. Thread safety means a class behaves correctly when accessed from multiple threads, regardless of scheduling or interleaving, without requiring additional synchronization from the caller. It's about ensuring that invariants and postconditions hold true even in a concurrent environment. The key is to protect data from uncontrolled concurrent access.

Three Ways to Fix Broken Programs. When multiple threads access the same mutable state variable without appropriate synchronization, the program is broken. There are three ways to fix it: (1) Don't share the state variable across threads; (2) Make the state variable immutable; or (3) Use synchronization whenever accessing the state variable.

Encapsulation is Key. Good object-oriented techniques like encapsulation and data hiding are crucial for creating thread-safe classes. The less code that has access to a particular variable, the easier it is to ensure that all of it uses the proper synchronization, and the easier it is to reason about the conditions under which a given variable might be accessed.
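
As a minimal illustration of these ideas (a hypothetical class, not code from the book), the counter below keeps its only state variable private and guards every access path with the object's intrinsic lock:

```java
// A minimal thread-safe counter: its single state variable is private
// and every access path is synchronized on the same (intrinsic) lock.
public final class SafeCounter {
    private long count = 0; // shared, mutable state -- guarded by "this"

    public synchronized void increment() {
        count++; // read-modify-write, atomic with respect to this lock
    }

    public synchronized long get() {
        return count; // a synchronized read also guarantees visibility
    }
}
```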

2. Synchronization Guarantees Atomicity and Visibility

To preserve state consistency, update related state variables in a single atomic operation.

Atomicity and Race Conditions. Synchronization ensures that operations execute atomically, preventing race conditions where the correctness of a computation depends on the unpredictable timing of multiple threads. Without atomicity, operations like incrementing a counter or lazy initialization can produce incorrect results.

Intrinsic Locks for Atomicity. Java provides intrinsic locks (using the synchronized keyword) to enforce atomicity. Only one thread can execute a block of code guarded by a given lock at a time. This ensures that compound actions, like check-then-act or read-modify-write sequences, are executed as a single, indivisible unit.

Locking and Memory Visibility. Synchronization is not just about mutual exclusion; it also ensures memory visibility. To guarantee that all threads see the most up-to-date values of shared mutable variables, both reading and writing threads must synchronize on a common lock. This prevents stale data and ensures that changes made by one thread are visible to others.
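
A rough sketch of a check-then-act compound action (lazy initialization; the ExpensiveObject placeholder is hypothetical) shows why the whole sequence must run under one lock, and why readers must use that same lock to see the update:

```java
// Lazy initialization done safely: the check and the act form one atomic
// unit, and readers synchronize on the same lock, so they see the write.
public final class LazyHolder {
    private ExpensiveObject instance = null; // guarded by "this"

    public synchronized ExpensiveObject getInstance() {
        if (instance == null) {               // check...
            instance = new ExpensiveObject(); // ...then act, atomically with the check
        }
        return instance;
    }

    // Placeholder for whatever object is expensive to construct.
    static final class ExpensiveObject { }
}
```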

3. Safe Publication is Essential for Sharing Objects

Immutable objects can be used safely by any thread without additional synchronization, even when synchronization is not used to publish them.

What is Safe Publication? Publishing an object means making it available to code outside its current scope. Safe publication ensures that both the reference to the object and the object's state are visible to other threads at the same time. Without safe publication, threads may see stale or inconsistent data.

Safe Publication Idioms:

  • Initializing an object reference from a static initializer
  • Storing a reference to it into a volatile field or AtomicReference
  • Storing a reference to it into a final field of a properly constructed object
  • Storing a reference to it into a field that is properly guarded by a lock

Immutability and Safe Publication. Immutable objects can be published through any mechanism, even without synchronization. This is because their state cannot be modified after construction, eliminating the risk of data races. Effectively immutable objects, whose state will not be modified after publication, must be safely published. Mutable objects must be safely published and be either thread-safe or guarded by a lock.
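
The sketch below (hypothetical class names, following the idioms listed above) contrasts publication through a final field of a properly constructed object with publication through a volatile field:

```java
import java.util.HashMap;
import java.util.Map;

public final class Holder {
    // Safe publication via a final field: any thread that sees a reference to
    // this Holder also sees the fully constructed map, provided "this" does
    // not escape during construction.
    private final Map<String, String> config;

    public Holder(Map<String, String> config) {
        this.config = new HashMap<>(config);
    }

    public String get(String key) {
        return config.get(key);
    }
}

class Publisher {
    // Safe publication via a volatile field: the write to "holder" and the
    // writes made while constructing it become visible to readers together.
    private volatile Holder holder;

    void initialize(Map<String, String> config) {
        holder = new Holder(config);
    }

    Holder current() {
        return holder; // may be null until initialize() has run
    }
}
```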

4. Compose Thread-Safe Classes for Robust Concurrency

For every invariant that involves more than one variable, all the variables involved in that invariant must be guarded by the same lock.

Designing Thread-Safe Classes. Designing a thread-safe class involves identifying state variables, defining invariants, and establishing a synchronization policy. Encapsulation is crucial for managing complexity and ensuring that state is accessed with the appropriate lock held.

Instance Confinement. Instance confinement involves encapsulating mutable state within an object and protecting it from concurrent access by synchronizing any code path that accesses the state using the object's intrinsic lock. This simplifies thread safety analysis and allows for flexible locking strategies.
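
As a short sketch of instance confinement (a hypothetical class modelled on the pattern), the non-thread-safe set below never escapes the object that owns it, and every path to it holds the owner's intrinsic lock:

```java
import java.util.HashSet;
import java.util.Set;

// Instance confinement: the non-thread-safe HashSet never escapes this
// class, and every code path that touches it holds the intrinsic lock.
public final class PersonRegistry {
    private final Set<String> names = new HashSet<>(); // confined, guarded by "this"

    public synchronized void add(String name) {
        names.add(name);
    }

    public synchronized boolean contains(String name) {
        return names.contains(name);
    }
}
```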

Delegating Thread Safety. Thread safety can be delegated to thread-safe objects, but this requires careful consideration of invariants and state dependencies. If a class has compound actions, it must provide its own locking to ensure atomicity.
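
A sketch of the limits of delegation (hypothetical, loosely modelled on the book's number-range discussion): two individually thread-safe variables cannot preserve an invariant that ties them together, so the class must supply its own locking:

```java
// The invariant spans two variables (lower <= upper), so delegating to two
// independently thread-safe holders is not enough; the class guards the pair.
public final class NumberRange {
    private int lower = 0; // guarded by "this"
    private int upper = 0; // guarded by "this"

    public synchronized void setLower(int value) {
        if (value > upper) {
            throw new IllegalArgumentException("lower cannot exceed upper");
        }
        lower = value;
    }

    public synchronized void setUpper(int value) {
        if (value < lower) {
            throw new IllegalArgumentException("upper cannot be below lower");
        }
        upper = value;
    }

    public synchronized boolean contains(int value) {
        return value >= lower && value <= upper;
    }
}
```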

5. Leverage Concurrent Collections for Scalable Performance

Replacing synchronized collections with concurrent collections can offer dramatic scalability improvements with little risk.

Limitations of Synchronized Collections. Synchronized collections, like Vector and Hashtable, achieve thread safety by serializing all access to the collection's state. This can lead to poor concurrency and scalability issues, especially under heavy load.

Advantages of Concurrent Collections. Concurrent collections, such as ConcurrentHashMap and CopyOnWriteArrayList, are designed for concurrent access from multiple threads. They use finer-grained locking mechanisms and nonblocking algorithms to allow greater concurrency and scalability.

ConcurrentHashMap and CopyOnWriteArrayList. ConcurrentHashMap is a concurrent replacement for synchronized hash-based Map implementations, while CopyOnWriteArrayList is a concurrent replacement for synchronized List implementations for cases where traversal is the dominant operation. These classes provide iterators that do not throw ConcurrentModificationException, eliminating the need to lock the collection during iteration.
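
A short sketch of ConcurrentHashMap replacing a synchronized map for a simple cache (the cache class is hypothetical; computeIfAbsent postdates the book, arriving in Java 8, but illustrates the same atomic put-if-absent idea):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// A concurrent cache: ConcurrentHashMap allows many readers and writers at
// once, and computeIfAbsent performs the check-then-insert atomically
// without locking the whole map.
public final class UserCache {
    private final ConcurrentMap<Long, String> nameById = new ConcurrentHashMap<>();

    public String nameFor(long id) {
        return nameById.computeIfAbsent(id, this::loadFromDatabase);
    }

    private String loadFromDatabase(long id) {
        // Placeholder for a real lookup.
        return "user-" + id;
    }
}
```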

6. Use Blocking Queues to Implement the Producer-Consumer Pattern

Bounded queues are a powerful resource management tool for building reliable applications: they make your program more robust to overload by throttling activities that threaten to produce more work than can be handled.

Producer-Consumer Pattern. Blocking queues are ideal for implementing the producer-consumer pattern, where producers place data onto the queue and consumers retrieve data from the queue. This pattern decouples the identification of work from its execution, simplifying development and workload management.

BlockingQueue Implementations. The class library contains several implementations of BlockingQueue, including LinkedBlockingQueue, ArrayBlockingQueue, and PriorityBlockingQueue. SynchronousQueue is a special type of blocking queue that maintains no storage space for queued elements, facilitating direct handoff between producers and consumers.

Serial Thread Confinement. Blocking queues facilitate serial thread confinement for handing off ownership of objects from producers to consumers. This allows mutable objects to be safely transferred between threads without additional synchronization.
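
A minimal producer-consumer sketch using ArrayBlockingQueue (the pipeline and its names are illustrative): the bounded queue blocks producers when it is full and consumers when it is empty, which is exactly the throttling behavior described above:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public final class LogPipeline {
    // A bounded queue throttles producers when consumers fall behind.
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);

    public void produce(String line) throws InterruptedException {
        queue.put(line); // blocks if the queue is full
    }

    public void startConsumer() {
        Thread consumer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String line = queue.take(); // blocks if the queue is empty
                    System.out.println(line);   // stand-in for real processing
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore status and exit
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }
}
```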

7. Cancellation and Shutdown Require Cooperative Mechanisms

Interruption is usually the most sensible way to implement cancellation.

Cooperative Cancellation. Java does not provide a mechanism for safely forcing a thread to stop. Instead, it provides interruption, a cooperative mechanism that lets one thread ask another to stop what it is doing.

Interruption Policies. Threads should have an interruption policy that determines how they respond to interruption requests. The most sensible policy is some form of thread-level or service-level cancellation: exit as quickly as practical, cleaning up if necessary, and possibly notifying some owning entity that the thread is exiting.

ExecutorService Shutdown. ExecutorService provides methods for lifecycle management, including shutdown (graceful shutdown) and shutdownNow (abrupt shutdown). These methods allow applications to terminate thread pools and other services in a controlled manner.
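
A hedged sketch of cooperative cancellation plus orderly shutdown (the task and pool are illustrative): the task polls its interrupted status, while the owner calls shutdown, waits, and falls back to shutdownNow:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public final class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        pool.submit(() -> {
            // Cooperative cancellation: check the interrupted flag regularly.
            while (!Thread.currentThread().isInterrupted()) {
                doSomeWork();
            }
        });

        TimeUnit.SECONDS.sleep(1);

        pool.shutdown();                                  // stop accepting new tasks
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            pool.shutdownNow();                           // interrupt running tasks
        }
    }

    private static void doSomeWork() {
        // Placeholder for a unit of interruptible work.
    }
}
```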

8. Amdahl's Law Limits Scalability; Reduce Serialization

The principal threat to scalability in concurrent applications is the exclusive resource lock.

Amdahl's Law. Amdahl's law describes how much a program can theoretically be sped up by additional computing resources, based on the proportion of parallelizable and serial components. It highlights the importance of reducing serialization to improve scalability.
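
In the form given in the book, the speedup is bounded by 1 / (F + (1 − F) / N), where F is the fraction of the computation that must execute serially and N is the number of processors. Even a serial fraction of just 10% caps the speedup at 10×, no matter how many processors are added.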

Reducing Lock Contention. There are three ways to reduce lock contention: (1) Reduce the duration for which locks are held; (2) Reduce the frequency with which locks are requested; or (3) Replace exclusive locks with coordination mechanisms that permit greater concurrency.

Lock Splitting and Striping. Lock splitting involves using separate locks to guard multiple independent state variables previously guarded by a single lock. Lock striping extends this concept to a variable-sized set of objects, using multiple locks to guard different subsets of the objects.
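
A rough sketch of lock striping (a hypothetical class; ConcurrentHashMap uses a more sophisticated variant of the same idea): the key's hash selects one of N locks, so threads working on different stripes never contend with each other:

```java
import java.util.HashMap;
import java.util.Map;

// Lock striping: 16 locks guard 16 disjoint maps, chosen by the key's hash,
// so operations on keys in different stripes proceed in parallel.
public final class StripedHitCounter {
    private static final int STRIPES = 16;
    private final Object[] locks = new Object[STRIPES];
    private final Map<String, Long>[] maps;

    @SuppressWarnings("unchecked")
    public StripedHitCounter() {
        maps = new Map[STRIPES];
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
            maps[i] = new HashMap<>();
        }
    }

    public void hit(String key) {
        int s = stripeFor(key);
        synchronized (locks[s]) {
            maps[s].merge(key, 1L, Long::sum);
        }
    }

    public long count(String key) {
        int s = stripeFor(key);
        synchronized (locks[s]) {
            return maps[s].getOrDefault(key, 0L);
        }
    }

    private int stripeFor(String key) {
        return Math.floorMod(key.hashCode(), STRIPES);
    }
}
```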

9. Understand the Costs Introduced by Threads

Allocating objects is usually cheaper than synchronizing.

Context Switching Overhead. Using multiple threads always introduces some performance costs compared to the single-threaded approach. These include the overhead associated with coordinating between threads (locking, signaling, and memory synchronization), increased context switching, thread creation and teardown, and scheduling overhead.

Memory Synchronization Costs. Synchronization creates traffic on the shared memory bus, which has limited bandwidth and is shared across all processors. This can inhibit compiler optimizations and introduce additional performance costs.

Blocking and Responsiveness. When locking is contended, the losing thread(s) must block. The JVM can implement blocking either via spin-waiting or by suspending the blocked thread through the operating system. Both approaches have performance costs.

10. Testing Concurrent Programs Requires Specific Strategies

The goal of testing is not so much to find errors as it is to increase confidence that the code works as expected.

Challenges of Concurrent Testing. Concurrent programs have a degree of nondeterminism that sequential programs do not, increasing the number of potential interactions and failure modes that must be planned for and analyzed. Potential failures may be rare probabilistic occurrences rather than deterministic ones.

Testing for Correctness. Tests of safety verify that a class's behavior conforms to its specification, usually taking the form of testing invariants. Tests of liveness ensure that "something good eventually happens," including tests of progress and nonprogress.

Testing for Performance. Performance tests measure end-to-end performance metrics for representative use cases, such as throughput, responsiveness, and scalability. These tests should be run under realistic conditions and with sufficient load to expose potential bottlenecks.
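
A minimal safety-test sketch (test-framework plumbing omitted; the counter under test is a simple synchronized class defined inline): a CountDownLatch releases all worker threads at once to maximize interleaving, and the test checks the invariant afterwards:

```java
import java.util.concurrent.CountDownLatch;

// Safety test: N threads each increment a shared counter M times, started
// together via a latch; the final value must equal N * M if the counter is
// thread-safe.
public final class CounterStressTest {
    static final class Counter {
        private long value;
        synchronized void increment() { value++; }
        synchronized long get() { return value; }
    }

    public static void main(String[] args) throws InterruptedException {
        final int threads = 8, incrementsPerThread = 100_000;
        final Counter counter = new Counter();
        final CountDownLatch startGate = new CountDownLatch(1);
        final CountDownLatch endGate = new CountDownLatch(threads);

        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                try {
                    startGate.await();              // wait for the starting gun
                    for (int j = 0; j < incrementsPerThread; j++) {
                        counter.increment();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    endGate.countDown();
                }
            }).start();
        }

        startGate.countDown();                       // release all threads at once
        endGate.await();

        long expected = (long) threads * incrementsPerThread;
        System.out.println(counter.get() == expected ? "PASS" : "FAIL");
    }
}
```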

11. Explicit Locks Offer Advanced Control Over Synchronization

ReentrantLock is an advanced tool for situations where intrinsic locking is not practical. Use it if you need its advanced features: timed, polled, or interruptible lock acquisition, fair queueing, or non-block-structured locking. Otherwise, prefer synchronized.

Lock Interface. The Lock interface defines abstract locking operations, offering a choice of unconditional, polled, timed, and interruptible lock acquisition. Unlike intrinsic locking, all lock and unlock operations are explicit.

ReentrantLock. ReentrantLock implements Lock, providing the same mutual exclusion and memory-visibility guarantees as synchronized. It also offers reentrant locking semantics and supports all of the lock-acquisition modes defined by Lock.
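
A sketch of the explicit-lock idiom with ReentrantLock (the account class is hypothetical): the lock is always released in a finally block, and a timed tryLock avoids waiting forever:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public final class Account {
    private final Lock lock = new ReentrantLock();
    private long balance; // guarded by lock

    public void deposit(long amount) {
        lock.lock();                 // unconditional acquisition
        try {
            balance += amount;
        } finally {
            lock.unlock();           // always release in finally
        }
    }

    public boolean tryWithdraw(long amount, long timeout, TimeUnit unit)
            throws InterruptedException {
        if (!lock.tryLock(timeout, unit)) {   // timed, interruptible acquisition
            return false;                     // give up instead of blocking forever
        }
        try {
            if (balance < amount) {
                return false;
            }
            balance -= amount;
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```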

ReadWriteLock. ReadWriteLock exposes two Lock objects, one for reading and one for writing. This allows multiple simultaneous readers but only a single writer, improving concurrency for read-mostly data structures.
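
And a short ReadWriteLock sketch (hypothetical class) for a read-mostly map: many threads may hold the read lock at once, while the write lock is exclusive:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public final class ReadMostlyDirectory {
    private final Map<String, String> entries = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public String lookup(String key) {
        lock.readLock().lock();      // shared: many readers at once
        try {
            return entries.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void update(String key, String value) {
        lock.writeLock().lock();     // exclusive: blocks readers and writers
        try {
            entries.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```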

12. AbstractQueuedSynchronizer (AQS) Simplifies Synchronizer Development

A synchronizer is any object that coordinates the control flow of threads based on its state.

AQS Framework. AQS is a framework for building locks and synchronizers, providing a common base class for many of the synchronizers in java.util.concurrent. It handles many of the details of implementing a synchronizer, such as FIFO queuing of waiting threads.

AQS Operations. The basic operations that an AQS-based synchronizer performs are some variants of acquire and release. Acquisition is the state-dependent operation and can always block. Release is not a blocking operation; a release may allow threads blocked in acquire to proceed.

AQS in Practice. Many of the blocking classes in java.util.concurrent, such as ReentrantLock, Semaphore, ReentrantReadWriteLock, CountDownLatch, SynchronousQueue, and FutureTask, are built using AQS.
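
A sketch of a one-shot binary latch built on AQS, in the spirit of the book's OneShotLatch example: the synchronizer state is 0 (closed) or 1 (open), acquisition blocks until the latch opens, and release opens it:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A one-shot latch built on AQS: state 0 means closed, 1 means open.
// await() blocks until signal() flips the state; once open it stays open.
public final class OneShotLatch {
    private final Sync sync = new Sync();

    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(0); // blocks while tryAcquireShared < 0
    }

    public void signal() {
        sync.releaseShared(0);              // opens the latch, waking waiters
    }

    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected int tryAcquireShared(int ignored) {
            return getState() == 1 ? 1 : -1; // succeed only if the latch is open
        }

        @Override
        protected boolean tryReleaseShared(int ignored) {
            setState(1);                     // open the latch
            return true;                     // allow waiting threads to acquire
        }
    }
}
```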

Review Summary

4.48 out of 5
Average of 2k+ ratings from Goodreads and Amazon.

Java Concurrency in Practice is highly praised as an essential read for Java developers. Reviewers commend its comprehensive coverage of concurrency concepts, from basic to advanced topics. The book is lauded for its clear explanations, practical examples, and gradual buildup of knowledge. Many readers appreciate its insights into the Java memory model and concurrency-related APIs. While some note it's slightly dated, the core principles remain relevant. Readers emphasize its value in understanding and implementing safe, efficient concurrent code, with many considering it a must-read for Java programmers.

About the Author

Brian Goetz is a renowned expert in Java concurrency and performance optimization. He serves as the Java Language Architect at Oracle, where he plays a crucial role in evolving the Java programming language. Goetz has made significant contributions to Java's concurrency libraries and language features. He is widely respected in the Java community for his deep technical knowledge and ability to explain complex concepts clearly. Besides authoring "Java Concurrency in Practice," Goetz has written numerous articles on Java programming and is a frequent speaker at technology conferences. His work has greatly influenced the development of concurrent programming practices in Java.
