Chapter 4: Threads & Concurrency
Multithreading Models
Three common ways of mapping user threads to kernel threads:
Many-to-One
One-to-One
Many-to-Many
Many-to-One
Maps many user-level threads to one kernel thread.
Thread management is done by the thread library in user space, so it is efficient.
However, the entire process will block if any thread makes a blocking system call.
Green threads: a thread library that followed this model, available for Solaris systems and adopted in early versions of Java.
One-to-One
Maps each user thread to a kernel thread.
Provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call.
Allows multiple threads to run in parallel on multiprocessors.
Drawback: a large number of kernel threads may burden the performance of a system.
Examples:
Linux
Windows
Many-to-Many
Multiplexes many user-level threads to a smaller or equal number of kernel threads.
The number of kernel threads may be specific to either a particular application or a particular machine.
Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
When a thread performs a blocking system call, the kernel can schedule another thread for execution.
One variation, sometimes referred to as the two-level model, is similar except that it also allows a user thread to be bound to a kernel thread.
Threading Issues
The fork() and exec() System Calls
fork(): system call used to create a separate, duplicate process.
exec(): if a thread invokes the exec() system call, the program specified in the parameter to exec() replaces the entire process, including all threads.
Whether fork() should duplicate all of a process's threads or only the calling thread depends on the application:
If exec() is called immediately after forking, then duplicating all threads is unnecessary.
If the separate process does not call exec() after forking, the separate process should duplicate all threads.
Signal Handling
Signals are used to notify a process that a particular event has occurred.
A signal may be received either synchronously or asynchronously.
A signal is generated by the occurrence of a particular event.
The signal is delivered to a process
Once delivered, the signal must be handled.
A synchronous signal is delivered to the same process that caused it; an asynchronous signal is sent to another process.
A signal may be handled by one of two possible handlers:
A default signal handler
A user-defined signal handler
Every signal has a default signal handler that the kernel runs when handling that signal.
This default action can be overridden by a user-defined signal handler that is called to handle the signal.
Where should a signal be delivered in a multithreaded process? The options are:
Deliver the signal to the thread to which the signal applies.
Deliver the signal to every thread in the process
Deliver the signal to certain threads in the process
Assign a specific thread to receive all signals for the process
The method for delivering a signal depends on the type of signal generated.
Asynchronous procedure calls (APCs): enable a user thread to specify a function that is to be called when the user thread receives notification of a particular event.
Thread Cancellation
Thread cancellation involves terminating a thread before it has completed.
A thread that is to be canceled is often referred to as the target thread.
Cancellation of a target thread may occur in two different scenarios:
Asynchronous cancellation: one thread immediately terminates the target thread.
Deferred cancellation: the target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
Implicit Threading
Thread Pools
Creating a new thread every time one is needed and then deleting it when it is done can be inefficient, and can also lead to a very large (effectively unlimited) number of threads being created.
An alternative solution is to create a number of threads when the process first starts and put those threads into a thread pool.
Threads are allocated from the pool as needed, and returned to the pool when no longer needed.
When no threads are available in the pool, the process may have to wait until one becomes available.
Multithreaded Programming
The benefits of multithreaded programming :
Responsiveness – may allow continued execution if part of the process is blocked; especially important for user interfaces.
Resource Sharing – threads share the resources of their process, which is easier than shared memory or message passing between processes.
Economy – thread creation is cheaper than process creation, and thread switching has lower overhead than context switching between processes.
Scalability – a multithreaded process can take advantage of multicore architectures.
If a process has multiple threads of control, it can perform more than one task at a time.
Multicore Programming
Five areas where multicore chips present new challenges for application programmers:
Identifying tasks – examining applications to find activities that can be performed concurrently.
Balance – finding tasks to run concurrently that provide roughly equal work; i.e., don't waste a thread on a trivial task.
Data splitting – dividing the data so that the threads do not interfere with one another.
Data dependency – if one task depends on the results of another, the tasks need to be synchronized to ensure access in the proper order.
Testing and debugging – inherently more difficult in parallel programs, as race conditions become much more complex and difficult to identify.
Types of Parallelism:
Data parallelism – divides the data among multiple cores (threads) and performs the same task on each subset of the data.
Task parallelism – divides the different tasks to be performed among the different cores and performs them simultaneously.
Thread Libraries
A thread library provides the programmer with an API for creating and managing threads.
There are two primary ways of implementing a thread library:
Provide a library entirely in user space with no kernel support.
Implement a kernel-level library supported directly by the operating system.
Three main thread libraries are in use today:
POSIX Pthreads
Windows
Java