Multi-Threading
Modes
Light Weight Process
- each Light Weight Process is backed by a kernel thread, so the number of threads is limited
- requires system calls and User mode <---> Kernel mode switching, so the cost is high
User Thread
- built entirely in User mode, no kernel involvement
- the OS kernel is NOT aware of user threads, so there is no mode switching
- efficient and fast, supports more threads
- the app must manage threads itself; more complicated and error-prone, so not widely used
-
Scheduling
Cooperative
- when its task is done, a thread proactively notifies the OS to switch to another thread
- execution time is uncontrolled, so one thread can block all the others
-
State Transition
-
-
Waiting
not executed; needs to be notified by other threads
Timed Waiting
not executed; the OS notifies it after the timeout
Runnable
start() has been called and run() is executing and not blocked
does not necessarily mean being executed by a CPU; it may be waiting for a CPU, as decided by the OS
-
Terminated
a finished thread; cannot be started again
-
Thread-Safety approaches
-
-
No synchronization
-
Thread Local Storage
if the shared data is guaranteed to stay on the same thread, restrict access to within that thread so no synchronization is needed
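In Java this is what ThreadLocal provides. A minimal sketch, with a hypothetical per-thread string buffer (each thread gets its own copy, so no synchronization is needed):

```java
class UserContext {
    // each thread sees its own StringBuilder; no locking needed (hypothetical holder for illustration)
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    static String append(String s) {
        StringBuilder sb = BUFFER.get();  // this thread's private instance
        sb.append(s);
        return sb.toString();
    }
}
```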
-
-
Memory Visibility
-
because a change may exist only in a CPU cache, not yet written back to main memory, so other threads read a stale value
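A minimal sketch of how a volatile field addresses this, using a hypothetical flag/payload pair; the volatile write and the later volatile read establish a happens-before edge that also publishes the plain field:

```java
class Flag {
    // the volatile write/read pair publishes the payload to other threads
    private volatile boolean ready = false;
    private int payload;

    void publish(int value) {
        payload = value;   // ordinary write...
        ready = true;      // ...made visible by the volatile write (happens-before)
    }

    Integer tryRead() {
        return ready ? payload : null; // volatile read sees a fresh value of ready (and payload)
    }
}
```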
-
-
Lock optimisation
Adaptive Spinning
- spin (busy-wait) instead of suspending immediately; by default, suspend after spinning 10 times
- adaptively adjust the spin count according to the results of previous spins on the same lock
Lock Elimination
- based on escape analysis, remove locks that are unnecessary because the code is proven thread-safe
-
-
-
Thread suspending/resuming is done by the kernel and requires user <-> kernel mode switches, which are costly
API
currentThread() to get current thread
getId() and getName() to get detail of thread
set/getPriority() to set priority 1-10, default 5
-
setDaemon() - mark the thread as a daemon; the JVM exits once all non-daemon threads have finished, terminating any daemon threads.
-
-
-
-
wait/notify
wait()
gives up the CPU and releases the monitor lock; blocks and puts the current thread on the object's wait set (condition queue)
always call wait() in a while loop with a condition check, because a thread can be woken up unexpectedly while the condition is not met yet
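The wait-in-a-while-loop pattern can be sketched with a hypothetical Signal class; the boolean guard handles both spurious wakeups and the missed-signal problem discussed later:

```java
class Signal {
    private final Object monitor = new Object();
    private boolean ready = false;

    void waitForSignal() throws InterruptedException {
        synchronized (monitor) {
            while (!ready) {      // loop, not if: guards against spurious wakeups
                monitor.wait();   // releases the monitor while waiting
            }
            ready = false;        // consume the signal
        }
    }

    void signal() {
        synchronized (monitor) {
            ready = true;         // remembered even if nobody is waiting yet
            monitor.notify();     // wake one waiter; it re-checks the condition
        }
    }
}
```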
-
-
Definition
-
-
runs only once; a thread is not restartable
-
-
Context Switching
costly: when the OS switches which thread a CPU executes, it must store the old thread's program counter/registers and load the new thread's
-
Problems
-
-
occurs when multiple threads want the same locks at the same time, but obtain them in different orders
-
Database Deadlock
a transaction locks the records it will update, so when multiple transactions execute at the same time, there is a chance they deadlock on records that are locked by one and required by another
-
Lock Timeout
if a thread cannot obtain all locks within the timeout, it frees all locks it holds and retries after a random time period
if too many threads are competing, they will likely still deadlock each other even when retrying at random times
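The timeout-and-back-off idea can be sketched with ReentrantLock.tryLock(); the Account class and amounts are made up for illustration. Plain tryLock() never blocks, so no deadlock can form, only retries:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.locks.ReentrantLock;

class Account {
    final ReentrantLock lock = new ReentrantLock();
    int balance;
    Account(int balance) { this.balance = balance; }
}

class TimeoutTransfer {
    // take both locks or neither; on failure, release everything and retry after a random pause
    static void transfer(Account from, Account to, int amount) throws InterruptedException {
        while (true) {
            if (from.lock.tryLock()) {
                try {
                    if (to.lock.tryLock()) {
                        try {
                            from.balance -= amount;
                            to.balance += amount;
                            return;
                        } finally { to.lock.unlock(); }
                    }
                } finally { from.lock.unlock(); }
            }
            // could not get both locks: back off for a random period and retry
            Thread.sleep(ThreadLocalRandom.current().nextInt(5));
        }
    }
}
```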
Deadlock Detection
-
use a data structure (map, graph, etc.) to track which threads hold/request which locks
when thread A requests a lock but is denied because thread B holds it, A traverses the map to check whether B is (transitively) requesting any lock that A itself holds, which would be a deadlock
if there is a deadlock,
a) let all threads in the deadlock release all their locks, then each waits a random period and retries
b) let only one thread release its locks, chosen by priority
Deadlock: two or more threads each waiting for a lock that some of the other threads hold, i.e. mutually waiting for a lock release while holding the lock the other thread needs
- Threads with high priority use the CPU all the time, ahead of lower-priority ones
- Threads are blocked indefinitely waiting to enter a synchronized block
- there is no guarantee about the order in which waiting threads are woken up
- a thread can keep failing the competition to enter the synchronized block
- Threads waiting via wait() on an object may wait forever
- notify() wakes an arbitrary thread, with no guaranteed order
Solution
- use a Lock instead of a synchronized block
- this allows multiple threads to enter the method, call lock.lock(), and wait() on the lock object, instead of waiting to enter the method at all
- Fair Lock - every thread calling lock() creates a queueObject that is queued and notified in FIFO order;
wait()/notify() is called on the queueObject instead of on the lock
- do NOT call alien methods while holding a lock
locks were obtained in a nested style, and only the inner lock was released (while still holding the outer one), leaving other threads blocked
from the time a thread checks the condition until it acts upon it, other threads can change the condition, making the action erroneous
the testing and setting of the condition must be done atomically by the thread doing it,
so that no other thread can change the condition between the test and the set
if notify() is called before wait(), the signal is lost, and a wait() called after it may wait forever
use a boolean field to record whether notify() has been called; if it has, do not call wait()
Common Patterns
Producer/Consumer
BlockingQueue
take() - wait() while the queue is empty; notifyAll() after taking
put() - wait() while the queue is full; notifyAll() after putting
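A minimal wait/notify blocking queue along those lines (a sketch, not java.util.concurrent's BlockingQueue):

```java
import java.util.LinkedList;
import java.util.Queue;

class SimpleBlockingQueue<T> {
    private final Queue<T> items = new LinkedList<>();
    private final int capacity;

    SimpleBlockingQueue(int capacity) { this.capacity = capacity; }

    synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) {  // wait while full
            wait();
        }
        items.add(item);
        notifyAll();                        // wake blocked take() callers
    }

    synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {           // wait while empty
            wait();
        }
        T item = items.remove();
        notifyAll();                        // wake blocked put() callers
        return item;
    }
}
```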
Lock
Reentrance
if a thread holds a lock, it can enter any code block synchronized on the same monitor object
keep a reference to the thread currently holding the lock, and a hold count;
do not block if the same thread re-enters the synchronized code block, just increment the count
decrement the lock count when unlock() is called; release the lock only when the count reaches 0
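A sketch of such a reentrant lock (the class and method names are made up; java.util.concurrent.locks.ReentrantLock provides this for real):

```java
class ReentrantishLock {
    private Thread owner = null;  // thread currently holding the lock
    private int holdCount = 0;    // how many times the owner has locked

    synchronized void lock() throws InterruptedException {
        Thread current = Thread.currentThread();
        while (owner != null && owner != current) {
            wait();                       // someone else holds the lock
        }
        owner = current;                  // same thread re-enters without blocking
        holdCount++;
    }

    synchronized void unlock() {
        if (Thread.currentThread() != owner) throw new IllegalMonitorStateException();
        if (--holdCount == 0) {           // fully released only when the count hits 0
            owner = null;
            notify();
        }
    }

    synchronized int holds() { return holdCount; }
}
```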
-
lockRead() - obtain read access only when there is no writer and no write request (to avoid writer starvation by masses of read threads)
lockWrite() - obtain the write lock only when nothing is read- or write-locked; increment the writeRequest count when requesting the write lock,
and decrement it (and increment the writer count) once the write lock is obtained
use notifyAll() in unlockRead()/unlockWrite(), because notify() wakes only one arbitrary thread:
if that thread is requesting a read lock while a write request is waiting, it is put straight back into wait()
with notifyAll(), all write requests are woken and processed, and all read requests can proceed at the same time when no write request is pending
Reentrant
-
read reentrance - A thread is granted read reentrance if it
a) can get read access (no writers or write requests);
b) it already has read access (regardless of write requests)
-
-
-
a thread synchronization construct that can
1) send signals between threads to avoid missed signals,
2) guard a code section like a lock
take()
set the signal to true and notify a waiting thread
release()
wait() while the signal is false; proceed when it is true and reset it to false
Counting Semaphore
keep a count of signals; release() waits while the count is 0 and decrements it when proceeding
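A sketch of the counting semaphore described above, where take() records a signal and release() consumes one (the available() getter is added just for inspection):

```java
class CountingSemaphore {
    private int signals = 0;

    synchronized void take() {
        signals++;               // record one signal
        notify();                // wake a thread blocked in release()
    }

    synchronized void release() throws InterruptedException {
        while (signals == 0) {   // wait while there is no signal to consume
            wait();
        }
        signals--;               // consume one signal
    }

    synchronized int available() { return signals; }
}
```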
-
-
Access Condition
determines whether a thread calling test-and-set-state is allowed to set the state,
e.g. whether it may leave the spin lock's while loop
-
-
Test and Set Method
- set state before test if necessary
- test state against access condition
- if condition is not met, wait
- if met, set state and notify waiting threads if necessary
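The steps above can be sketched as a simple lock; the synchronized methods make the test and the set atomic (the locked() getter is added just for inspection):

```java
class SimpleLock {
    private boolean isLocked = false;

    synchronized void lock() throws InterruptedException {
        while (isLocked) {     // test the state against the access condition
            wait();            // condition not met: wait
        }
        isLocked = true;       // condition met: set the state (monitor still held)
    }

    synchronized void unlock() {
        isLocked = false;      // reset the state
        notify();              // notify waiting threads
    }

    synchronized boolean locked() { return isLocked; }
}
```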
instead of copying and modifying the whole shared data structure, a thread can share its intended modification of the shared data. steps below:
- check if another thread has submitted an intended modification to the data structure
- if none, create an intended modification (as an object) and submit (using compare and swap)
- carry out modification to shared data structure
- remove the reference to the intended modification object to signal other threads it's done
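The compare-and-swap submission step can be sketched with AtomicReference; this hypothetical counter retries its CAS until no other thread has changed the value in between:

```java
import java.util.concurrent.atomic.AtomicReference;

class NonBlockingCounter {
    private final AtomicReference<Integer> value = new AtomicReference<>(0);

    int increment() {
        while (true) {
            Integer current = value.get();           // read the current state
            Integer next = current + 1;              // compute the intended modification
            if (value.compareAndSet(current, next)) {
                return next;                         // CAS succeeded: nobody changed it meanwhile
            }
            // CAS failed: another thread got in first, re-read and retry
        }
    }

    int get() { return value.get(); }
}
```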
-
a variable is changed from A to B and then back to A,
so another thread cannot detect via CAS that anything changed
Solution - do not just swap a pointer to an intended modification object,
but combine the pointer with a counter and swap both together in a single CAS;
this way, the counter value differs if the A-B-A problem happens
-
while the thread holding the lock is doing initialization work,
other threads can access the resource and see undefined state, due to instruction reordering etc.
even if thread A finishes initializing before thread B accesses the resource,
A could be mid-way through flushing local memory to main memory, so B sees stale values in the resource's member fields
unless all fields are volatile, thread B can still access stale data
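This is why double-checked locking needs a volatile field. A sketch (the Resource class and its field are made up); the volatile write publishes the fully constructed object, so readers cannot see a half-initialized one:

```java
class Resource {
    int state = 42;  // stands in for real initialization work
}

class Holder {
    // volatile prevents other threads from seeing a partially constructed Resource
    private static volatile Resource instance;

    static Resource getInstance() {
        Resource local = instance;           // one volatile read on the fast path
        if (local == null) {
            synchronized (Holder.class) {
                local = instance;
                if (local == null) {         // second check, under the lock
                    local = new Resource();
                    instance = local;        // volatile write publishes the built object
                }
            }
        }
        return local;
    }
}
```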
Hand-Over-Hand Locking
do NOT lock the whole linked list; lock only the nodes involved in the insert/remove/...
-
Java Concurrency Utils
Implementation
schedule(Callable/Runnable task, long delay, TimeUnit unit): ScheduledFuture
scheduleAtFixedRate(Runnable, initialDelay, interval, timeunit)
-
if the current run is still executing when the next interval arrives, the next run waits and starts ONLY when the current one finishes
-
scheduleWithFixedDelay(Runnable, initialDelay, interval, timeunit)
the delay is measured from the end of one execution to the start of the next
-
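A minimal ScheduledExecutorService sketch (the 10 ms delay and the returned value are arbitrary); schedule() with a Callable returns a ScheduledFuture carrying the result:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

class ScheduledDemo {
    static int runOnce() throws Exception {
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(1);
        // schedule a Callable to run after a short delay
        ScheduledFuture<Integer> f = ses.schedule(() -> 42, 10, TimeUnit.MILLISECONDS);
        int result = f.get();   // blocks until the task has run
        ses.shutdown();
        return result;
    }
}
```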
API
execute(Runnable)
no return value; just runs the Runnable asynchronously
-
-
-
-
enqueue method calls will block when queue is full, notify waiting dequeue calls when done
dequeue method calls will block when queue is empty, notify waiting enqueue calls when done
Implementations
ArrayBlockingQueue - bounded, stores elems in an array, FIFO
DelayQueue - dequeues in order of expiration time; elems must implement Delayed { getDelay(timeUnit) }
LinkedBlockingQueue - stores elems in a linked list, with an optional bound (default Integer.MAX_VALUE), FIFO
PriorityBlockingQueue - unbounded, concurrent heap; elems must implement Comparable, and order is undefined for equal elems
-
-
-
-
await()
block until all threads have reached the barrier, i.e. the count of parties still awaited is reduced to 0
-
-
-
-
BarrierAction
new CyclicBarrier(int n, Runnable action)
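A sketch of a two-party CyclicBarrier with a barrier action; the counter only exists to prove the action ran exactly once, after both parties arrived:

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

class BarrierDemo {
    static int run() throws Exception {
        AtomicInteger actionRuns = new AtomicInteger();
        // the barrier action runs once per generation, after all parties have called await()
        CyclicBarrier barrier = new CyclicBarrier(2, actionRuns::incrementAndGet);
        Runnable worker = () -> {
            try {
                barrier.await();   // blocks until both parties arrive
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };
        Thread t1 = new Thread(worker), t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return actionRuns.get();
    }
}
```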
-
a rendezvous point where 2 threads exchange objects
-
acquire()
request permit before entering critical section code, block if not enough permits available
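A sketch with java.util.concurrent.Semaphore (the permit count of 2 is arbitrary); acquire() blocks while no permit is available, release() returns the permit:

```java
import java.util.concurrent.Semaphore;

class BoundedSection {
    // at most 2 threads inside the critical section at once (2 is an arbitrary choice)
    private static final Semaphore permits = new Semaphore(2);

    static int enterAndExit() throws InterruptedException {
        permits.acquire();                     // blocks if no permit is available
        try {
            return permits.availablePermits(); // inside the critical section
        } finally {
            permits.release();                 // always return the permit
        }
    }
}
```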
-
-
-
-
-
-
keeps both an object reference and a stamp,
and CAS them together atomically
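A sketch of AtomicStampedReference catching an A-B-A change (the values and stamps are made up); a plain CAS on the reference alone would succeed at the end, but the stamp exposes the intervening changes:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

class StampedDemo {
    static boolean run() {
        String a = "A", b = "B";
        AtomicStampedReference<String> ref = new AtomicStampedReference<>(a, 0);
        int stamp = ref.getStamp();                     // observe reference + stamp together

        // another thread performs A -> B -> A, bumping the stamp each time
        ref.compareAndSet(a, b, stamp, stamp + 1);
        ref.compareAndSet(b, a, stamp + 1, stamp + 2);

        // the reference is back to A, but the stamp has moved on, so this CAS fails
        return ref.compareAndSet(a, "C", stamp, stamp + 1);
    }
}
```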
-
You generally want to refer to the variable "normally" (that is, without always going through the get/set methods of the atomic classes) but occasionally need an atomic get-set operation (which you can't do with a plain volatile field);
or you are going to create a large number of objects of the given type, and don't want every single instance to carry an extra embedded object just for atomic access.
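A sketch of the second case with AtomicIntegerFieldUpdater: one static updater is shared by all instances, while the field stays a plain volatile int that can be read and written normally:

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

class Counter {
    // the field stays a plain volatile int, readable/writable "normally"
    volatile int hits;

    // one shared updater per class, instead of an AtomicInteger embedded per instance
    private static final AtomicIntegerFieldUpdater<Counter> HITS =
            AtomicIntegerFieldUpdater.newUpdater(Counter.class, "hits");

    int recordHit() {
        return HITS.incrementAndGet(this); // atomic read-modify-write on the plain field
    }
}
```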