471 principles of operating systems, concurrency, scheduling and dispatch
The two fundamental units of execution managed by an operating system:
thread
unit of cpu utilization
composed of
a program counter
a stack
a set of registers
a thread id
types of threads
user threads
application level thread
user threads must be mapped to kernel threads
thread mapping models
many-to-one
multiple user threads to one kernel thread
2 more items...
one-to-one
one user thread to one kernel thread (as in the pthreads sketch below)
2 more items...
many-to-many
multiple user threads to equal or smaller number of kernel threads
1 more item...
kernel threads
kernel (OS) level thread
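As a concrete illustration of the one-to-one model mentioned above, here is a minimal sketch of creating and joining a thread with POSIX pthreads (on Linux, each pthread is backed by its own kernel thread); the worker function and its argument are made up for illustration:

#include <pthread.h>
#include <stdio.h>

/* hypothetical worker: each thread has its own stack, registers, and
   program counter, but shares the process's code, globals, and heap */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int id = 1;

    /* create a user thread; on Linux it maps one-to-one to a kernel thread */
    if (pthread_create(&tid, NULL, worker, &id) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(tid, NULL);   /* wait for the thread to finish */
    return 0;
}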
process
unit of resource allocation
five states a process can be in:
new
ready
ready to running when the scheduler dispatches the process
running
terminated
1 more item...
running back to ready when an interrupt occurs
2 more items...
running to terminated when exit
1 more item...
running to waiting when the process waits for i/o or an event
1 more item...
meaning
1 more item...
meaning
the cpu scheduler allocates a processor to the process
waiting
waiting to ready when the i/o or event completes
new to ready when admitted
meaning
the OS admits the process and allocates memory for it
three states where a process has resources
ready (memory), waiting, running (processor)
only in these states does a process exist (i.e., have a process control block tracked by the operating system)
OS keeps track of process info using process control blocks
process control block (PCB)
stored in
OS's memory
contains at least 6 things and one optional thing (top to bottom; a rough struct sketch follows below):
process state
1 more item...
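As a rough sketch only, a PCB can be pictured as a C struct like the one below; every field name here is an assumption drawn from typical textbook PCB contents, not a reproduction of the exact list in these notes:

/* hypothetical PCB layout -- illustrative assumptions only */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    enum proc_state state;          /* process state */
    int             pid;            /* process id */
    void           *pc;             /* saved program counter */
    long            regs[16];       /* saved cpu registers */
    int             priority;       /* cpu-scheduling information */
    void           *page_table;     /* memory-management information */
    int             open_files[16]; /* i/o status / open files */
};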
those resources are:
processors
memory
registers
a program in execution
process execution consists of
cpu-i/o burst cycle
cpu bursts
instruction execution
followed by
i/o bursts
i/o wait
cpu scheduling
short-term scheduler
selects a process from the ready queue to be assigned to a cpu whenever that cpu becomes available (e.g., its current process terminates or waits)
types
preemptive
the scheduler can stop a running process and move it back to the ready state, then reallocate that cpu to a different process in the ready queue
when the state moves from
waiting to ready
running to ready
(i.e., any transition into the ready state)
non-preemptive
a running process cannot be moved back to the ready queue and have its cpu given to a different process by the scheduler until it either waits or terminates
when the state moves from
running to terminated
running to waiting
(i.e., any transition out of the running state)
dispatcher module
gives control of the cpu to the process selected by the short-term scheduler
algorithms
maximize
maximize throughput
throughput
1 more item...
maximize cpu utilization
cpu utilization
2 more items...
minimize
minimize waiting time
waiting time
1 more item...
minimize response time
response time
1 more item...
minimize turnaround time
turnaround time
2 more items...
471-2
5 more items...
first-come, first served
FIFO queue: whichever process is at the front of the ready queue is selected
non-preemptive
disadvantages
convoy effect
1 more item...
shortest-job-first
disadvantages
requires that the system knows the length of the next cpu request
the process with the shortest next cpu burst goes first
advantages
gives the shortest average waiting time for a set of processes
can be preemptive or nonpreemptive
preemptive
remaining-time-first scheduling
1 more item...
non preemptive
when a process with a burst shorter than the currently running process's remaining time arrives, that newly arrived process must wait for the current process to finish
priority scheduling
highest priority(lowest integer) is selected first
advantages
can be preemptive or nonpreemptive
disadvantages
possibility of starvation
1 more item...
preemptive
when a process with a priority higher (integer lower) than the currently running process's priority arrives, the newly arrived process preempts it and begins execution
non-preemptive
when a process with a priority higher (integer lower) than the currently running process's priority arrives, that newly arrived process must wait for the current process to finish
round-robin scheduling
preemptive
advantages
for low q
2 more items...
each process gets a unit of cpu time; when that time is up, the process is preempted and placed at the back of the ready queue; the cpu is released early if a process finishes before its time is up (see the simulation sketch after this branch)
unit of cpu time each process gets is called
1 more item...
disadvantages
high q
1 more item...
low q
1 more item...
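To make the preemption behaviour concrete, here is a small sketch that simulates round-robin for a made-up workload (the burst times and quantum q are assumptions) and prints when each process completes:

#include <stdio.h>

int main(void)
{
    /* hypothetical cpu burst times in ms and a hypothetical quantum */
    int remaining[] = {24, 3, 3};     /* P1, P2, P3 */
    const int n = 3, q = 4;
    int done = 0, time = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;             /* this process already finished */
            int slice = remaining[i] < q ? remaining[i] : q;
            time += slice;            /* run for one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d completes at t=%d ms\n", i + 1, time);
                done++;
            }
            /* otherwise the process is preempted and rejoins the back of
               the ready queue (picked up again on the next pass) */
        }
    }
    return 0;
}

With q = 4 this workload finishes P2 at t = 7 and P3 at t = 10 while P1 finishes at t = 30; a very large q degenerates toward FCFS, while a very small q wastes time on context switches.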
multilevel queue scheduling
multiple ready queues for different categories of jobs, each queue with its own scheduling algorithm
ex:
2 more items...
multilevel feedback queue scheduling
multiple ready queues for different categories of jobs where a process can move between queues; there is some method for upgrading or demoting a process between queues, some method for deciding which queue a process starts in when it first needs service, and each queue has its own scheduling algorithm
ex:
1 more item...
methods of choosing cpu schedulers
deterministic modeling
calculate the performance of each algorithm for a specific, predetermined workload (see the sketch after this list)
queuing models
use queueing theory
implementation
implement the algorithm, then measure its performance
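A sketch of deterministic modeling as described above: for one made-up workload (the burst times are assumptions), compute the average waiting time under FCFS (arrival order) and under non-preemptive SJF (shortest burst first):

#include <stdio.h>

/* average waiting time when the bursts run in the given order,
   assuming every process arrives at t = 0 */
static double avg_wait(const int *burst, int n)
{
    int wait = 0, elapsed = 0;
    for (int i = 0; i < n; i++) {
        wait += elapsed;      /* this process waited for all earlier ones */
        elapsed += burst[i];
    }
    return (double)wait / n;
}

int main(void)
{
    int fcfs[] = {6, 8, 7, 3};   /* hypothetical bursts, in arrival order */
    int sjf[]  = {3, 6, 7, 8};   /* the same jobs, shortest first */

    printf("FCFS average waiting time: %.2f ms\n", avg_wait(fcfs, 4));
    printf("SJF  average waiting time: %.2f ms\n", avg_wait(sjf, 4));
    return 0;
}

For this particular workload FCFS gives 10.25 ms and SJF gives 7.00 ms, illustrating why SJF minimizes average waiting time.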
time it takes for the dispatcher to stop one process and start another running
dispatch latency
how to create a process
pid_t pid;
pid = fork();
fork
creates a child process whose state is an identical copy of the parent's at the time of the fork call; both processes then run the next instruction after fork() concurrently, unless one of them is made to wait (a complete example combining fork(), wait(), and exit() appears after the exit() section below)
child process
can get its parent's pid by
getppid()
1 more item...
can run before its parent by
if (pid == 0) {
    /* child code runs here while the parent waits below */
} else {
    wait(NULL);   /* parent blocks until the child terminates */
    exit(0);
}
the parent is made to wait by checking the value returned by fork(): it is 0 only in the child process
what happens when a child runs exit()?
1 more item...
possible return values
returns 0
to the newly created child process
returns a positive value
the pid of the child, to the parent process
returns a negative value
if the child could not be created
how to make a process wait
wait()
parameters
NULL if parent is waiting for any child process to terminate
how to terminate a process
exit()
parameters
1 if an error occurred
0 if termination was successful
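Putting the pieces above together, a minimal self-contained sketch of fork(), wait(), and exit() (the printed messages are just for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {                /* negative return: fork failed */
        perror("fork");
        exit(1);                  /* 1: an error occurred */
    } else if (pid == 0) {        /* 0: we are in the child */
        printf("child: pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
        exit(0);                  /* 0: successful termination */
    } else {                      /* positive: parent receives the child's pid */
        wait(NULL);               /* wait for any child to terminate */
        printf("parent: child %d finished\n", (int)pid);
        exit(0);
    }
}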
types of processes
independent
processes that run concurrently but cannot affect or be affected by other processes
cooperating
processes that run concurrently and can affect one another
types of process communication
message passing
when processes send messages to one another through the OS (a pipe-based sketch follows after this branch)
requires:
OS support
system calls for each message
advantages
simple to set up, works well across multiple computers
disadvantages
slower due to requiring
1 more item...
useful for
process communication across computers
infrequent or small number of data transfers
types of message passing systems
direct
1 more item...
indirect
1 more item...
synchronous
3 more items...
asynchronous
3 more items...
automatic buffering
2 more items...
explicit buffering
2 more items...
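One simple way to see the message-passing style on a single machine is a pipe between a parent and child: every transfer goes through the kernel via a system call, which is the overhead referred to above. A minimal sketch (the message text is made up):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) < 0)             /* fd[0] = read end, fd[1] = write end */
        return 1;

    if (fork() == 0) {            /* child: the sender */
        close(fd[0]);
        const char *msg = "hello from the child";   /* hypothetical message */
        write(fd[1], msg, strlen(msg) + 1);          /* one system call per message */
        close(fd[1]);
        return 0;
    }

    /* parent: the receiver */
    close(fd[1]);
    char buf[64];
    read(fd[0], buf, sizeof buf); /* blocks until the message arrives */
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}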
shared memory
processes communicate through a region of shared memory, located in the address space of one of the processes, that the other process can also access (a minimal sketch follows after this branch)
similar to how two pointers can point to the same address
disadvantages
not simple to set up, does not work well across multiple computers
advantages
faster due to not requiring
1 more item...
useful for
when large amounts of information must be shared quickly on the same computer between processes
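A minimal shared-memory sketch using an anonymous mmap region set up before fork() (a Linux/BSD-style approach; the value written is made up): once the mapping exists, both processes read and write the same memory directly, with no system call per access:

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* a region mapped MAP_SHARED before fork() is visible to both processes */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return 1;
    *shared = 0;

    if (fork() == 0) {            /* child writes straight into the region */
        *shared = 42;             /* hypothetical value; no system call needed */
        return 0;
    }

    wait(NULL);                   /* make sure the child has written first */
    printf("parent sees %d\n", *shared);
    munmap(shared, sizeof(int));
    return 0;
}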
zombie
a child process that has terminated but still has an entry in the process table because its parent has not yet called wait() on it (see the sketch below)
orphan
a child process that continues to run after its parent has terminated
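A small sketch that produces a zombie on purpose (the 10-second sleep is arbitrary): the child exits immediately, but until the parent calls wait() its process-table entry lingers and ps shows it as defunct:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0)
        exit(0);                  /* child terminates right away */

    /* during this sleep the child is a zombie: it has exited, but its
       exit status has not been collected because the parent hasn't waited */
    sleep(10);
    wait(NULL);                   /* reaping the child removes the zombie */
    printf("child %d reaped\n", (int)pid);
    return 0;
}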
parent
single threaded process
single program counter, stack, set of registers, set of instructions, data, files
multi-threaded process
threads each have their own unique pc, stack, and set of registers, but share common set of instructions, data, open files
useful for
avoiding the need to block/wait for one task to finish before moving on to the next instructions
cpu-bound
duration of cpu-bursts > duration of i/o bursts
example of cpu-bound jobs
scientific computation apps
i/o-bound
duration of i/o bursts > duration of cpu bursts
example of io-bound jobs
data processing apps