Analysis of algorithm efficiency
Analysis framework
Measuring an input's size
Depending on how the algorithm works, there will be an appropriate size metric
For checking the primality of a positive integer n, it is preferable to measure size by the number b of bits in n's binary representation
b = ⌊log2 n⌋ + 1
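A quick Python check of this formula (my own sketch; it assumes n ≥ 1 and relies on the floating-point math.log2, which is fine for moderate n):

import math

def bit_size(n):
    # number of bits in n's binary representation: b = floor(log2 n) + 1
    return math.floor(math.log2(n)) + 1

print(bit_size(1000))       # 10, since 1000 is 1111101000 in binary
print(len(bin(1000)) - 2)   # 10 as well, cross-checking via Python's bin()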
Units for measuring running time
The metric used cannot depend on the type of computer or the compiler
What we have to do is identify the algorithm's basic operation, the operation contributing the most to the total running time, and compute the number of times it is executed
T(n) ≈ cop × C(n)
cop >> the execution time of the algorithm's basic operation on a particular computer
C(n) >> the number of times this operation needs to be executed for this algorithm
T(n) >> the running time of a program implementing this algorithm on that computer
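For example (a standard use of this formula, assuming some algorithm with C(n) = ½ n(n − 1) ≈ ½ n²): T(2n)/T(n) ≈ (cop × ½ (2n)²) / (cop × ½ n²) = 4, so doubling the input size makes the program run about four times longer, and the constant cop cancels out, which is why it can be left unknown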
orders of growth
approximation of the time required to run a computer program as the input size increases
It is hard to decide which algorithm is best for small inputs
It is only for large inputs that the difference in algorithm efficiencies becomes both clear and important. For large values of n, it is the function's order of growth that counts
The slowest-growing function (among the common efficiency classes) is the logarithmic function
Algorithms that require an exponential number of operations are practical for solving only problems of very small sizes
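A small Python sketch (my own, with arbitrarily chosen values of n) makes the gap between these growth rates visible:

import math

# compare log2 n, n log2 n, n^2 and 2^n for a few input sizes
for n in (10, 20, 30):
    print(n, round(math.log2(n), 1), round(n * math.log2(n), 1), n ** 2, 2 ** n)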
Worst-case, Best-case and Average-case efficiencies
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size
The way to determine the worst-case efficiency of an algorithm is to analyze the algorithm to see what kind of inputs yield the largest value of the basic operation's count C(n) among all possible inputs of size n and then compute this worst-case value Cworst(n)
For any instance of size n, the running time will not exceed
Cworst(n), its running time on the worst-case inputs
The best-case efficiency of an algorithm is its efficiency for the best-case input
of size n, which is an input (or inputs) of size n for which the algorithm runs the
fastest among all possible inputs of that size
First, we determine the kind of inputs for which the count C(n) will be the smallest among all possible inputs of size n
Then we ascertain the value of C(n) on these most convenient inputs
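Sequential search is the usual concrete example here; the comparison-counting wrapper below is only an illustrative sketch:

def sequential_search(a, key):
    # basic operation: the comparison x == key
    comparisons = 0
    for x in a:
        comparisons += 1
        if x == key:
            break
    return comparisons

a = [3, 1, 4, 1, 5, 9, 2, 6]
print(sequential_search(a, 3))   # 1 comparison:  Cbest(n) = 1 (key in the first position)
print(sequential_search(a, 7))   # 8 comparisons: Cworst(n) = n (key absent or in the last position)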
The average-case efficiency seeks to provide information about an algorithm's behavior on a random input
To analyze the algorithm's average-case efficiency, we must make some assumptions about possible inputs of size n
The investigation of the average-case efficiency is considerably more difficult than analysis of the worst-case and best-case efficiencies
The direct approach for doing this involves dividing all instances of size n into several classes so that for each instance of a class the number of times the algorithm's basic operation is executed is the same
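For sequential search, for instance, under the standard assumptions (a successful search occurs with probability p, and a successful search is equally likely to find the key in any of the n positions):

Cavg(n) = (1 + 2 + ... + n) × p/n + n × (1 − p) = p(n + 1)/2 + n(1 − p)

so Cavg(n) = (n + 1)/2 if p = 1 (the search always succeeds) and Cavg(n) = n if p = 0 (the search always fails)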
Amortized efficiency
It applies not to a single run of an algorithm but rather to a sequence of operations performed on the same data structure
A single operation in the sequence can be expensive, but its high worst-case cost can be spread ("amortized") over the entire sequence of operations
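A common illustration (not taken from the map) is a dynamic array that doubles its capacity whenever it fills up: an individual append occasionally triggers an expensive copy of all elements, yet n appends perform fewer than 2n copies in total, so the amortized cost per append is constant. A Python sketch that counts the copies:

class DynamicArray:
    def __init__(self):
        self.capacity, self.size, self.copies = 1, 0, 0
        self.data = [None]

    def append(self, x):
        if self.size == self.capacity:
            # expensive step: copy every existing element into a twice-as-large array
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
        self.data[self.size] = x
        self.size += 1

arr = DynamicArray()
for i in range(1000):
    arr.append(i)
print(arr.copies)   # 1023 copies for 1000 appends, about one copy per append on average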
Two resources
memory space
running time
Asymptotic notations and basic efficiency classes
3 notations to compare and rank orders of growth
Ω (big omega)
Ω(g(n)) stands for the set of all functions with a higher or same order of growth as g(n)
Θ (big theta)
Θ(g(n)) is the set of all functions that have the same order of growth as g(n)
O (big oh)
O(g(n)) is the set of all functions with a lower or same order of growth as g(n)
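Formally (the standard definitions behind these informal descriptions): t(n) ∈ O(g(n)) if there exist a constant c > 0 and a nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0; t(n) ∈ Ω(g(n)) if t(n) ≥ c·g(n) for all n ≥ n0; and t(n) ∈ Θ(g(n)) if c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0, for some positive constants c1 and c2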
Usually, multiplicative constants do not alter a function's order of growth
As a rule, you should expect an algorithm from a better asymptotic efficiency class
to outperform an algorithm from a worse class even for moderately sized inputs
Mathematical Analysis of nonrecursive algorithms
General plan
Decide on a parameter (or parameters) indicating an input’s size
Identify the algorithm’s basic operation (As a rule, it is located in the innermost loop)
Check whether the number of times the basic operation is executed depends only on the size of an input. If it also depends on some additional property, the worst-case, average-case, and, if necessary, best-case efficiencies have to be investigated separately
Set up a sum expressing the number of times the algorithm's basic operation is executed
Using standard formulas and rules of sum manipulation, either find a closed-form formula for the count or, at the very least, establish its order of growth (a worked example of the whole plan follows below)
This plan will not always be successful. An irregular change in a loop variable, a sum too complicated to analyze, and the difficulties intrinsic to the average-case analysis are just some of the obstacles that can prove to be insurmountable
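As a worked instance of this plan, the usual largest-element example, sketched here in Python:

def max_element(a):
    # input size: n = len(a); basic operation: the comparison a[i] > maxval
    maxval = a[0]
    for i in range(1, len(a)):
        if a[i] > maxval:        # executed once for each i = 1, ..., n-1
            maxval = a[i]
    return maxval

# C(n) = sum of 1 for i = 1 to n-1 = n - 1, so C(n) ∈ Θ(n),
# and the count does not depend on which input of size n is given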
Mathematical Analysis of recursive algorithms
To determine a solution, we need an initial condition that tells us the value with which the sequence starts. We can obtain this value by inspecting the condition that makes the algorithm stop its recursive calls
Thus, we succeed in setting up the recurrence relation and initial condition for the algorithm's number of multiplications
General Plan
Decide on a parameter (or parameters) indicating an input's size
Identify the algorithm's basic operation
Check whether the number of times the basic operation is executed can vary on different inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be investigated separately
Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed
Solve the recurrence or, at least, ascertain the order of growth of its solution (see the factorial sketch below)
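For the recursive factorial algorithm, the standard example, the plan works out as in this sketch:

def factorial(n):
    # M(n) = M(n-1) + 1 for n > 0 (one multiplication per recursive call), M(0) = 0
    if n == 0:
        return 1
    return factorial(n - 1) * n

# Backward substitution: M(n) = M(n-1) + 1 = M(n-2) + 2 = ... = M(0) + n = n,
# so the number of multiplications grows as Θ(n)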
Empirical analysis of algorithms
General plan
Understand the experiment’s purpose
Decide on the efficiency metric M to be measured and the measurement unit (an operation count vs. a time unit)
Decide on characteristics of the input sample (its range, size, and so on)
Prepare a program implementing the algorithm (or algorithms) for the experimentation
Generate a sample of inputs
Run the algorithm (or algorithms) on the sample's inputs and record the data observed
Analyze the data obtained
possible goals
checking the accuracy of a theoretical assertion about the algorithm's efficiency
comparing the efficiency of several algorithms for solving the same problem or different implementations of the same algorithm
developing a hypothesis about the algorithm's efficiency class
ascertaining the efficiency of the program implementing the algorithm on a particular machine
Generating random numbers on a digital computer is known to present a difficult problem because, in principle, the problem can be solved only approximately
This is the reason computer scientists prefer to call such numbers pseudorandom
Even if you decide to use a pattern for input sizes, you will typically want the instances themselves generated randomly when experimenting with the algorithm
As a practical matter, the easiest and most natural way of getting such numbers is to take advantage of a random number generator available in computer language libraries
The most widely used and thoroughly studied algorithm for generating pseudorandom numbers is the linear congruential method
Random(n, m, seed, a, b)
//Generates a sequence of n pseudorandom numbers according to the linear
// congruential method
//Input: A positive integer n and positive integer parameters m, seed, a, b
//Output: A sequence r1,...,rn of n pseudorandom integers uniformly
// distributed among integer values between 0 and m − 1
//Note: Pseudorandom numbers between 0 and 1 can be obtained
// by treating the integers generated as digits after the decimal point
r0 ← seed
for i ← 1 to n do
ri ← (a ∗ ri−1 + b) mod m
seed may be chosen arbitrarily and is often set to the current date and time; m should be large and may be conveniently taken as 2^w, where w is the computer's word size; a should be selected as an integer between 0.01m and 0.99m with no particular pattern in its digits but such that a mod 8 = 5; and the value of b can be chosen as 1
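A direct Python rendering of the pseudocode above (the particular values of m, a, b in the usage line are only an illustration of the stated rules, not recommended constants):

def random_lcg(n, m, seed, a, b):
    # linear congruential method: r_i = (a * r_(i-1) + b) mod m, starting from r_0 = seed
    r = seed
    sequence = []
    for _ in range(n):
        r = (a * r + b) % m
        sequence.append(r)
    return sequence

# m = 2^32, a between 0.01m and 0.99m with a mod 8 = 5, b = 1
print(random_lcg(5, 2 ** 32, seed=12345, a=1103515245, b=1))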
algorithm visualization
The use of images to convey some useful information about algorithms
That information can be a visual illustration of an algorithm’s operation, of its performance on different kinds of inputs, or of its execution speed versus that of other algorithms for the same problem
There are two principal variations of algorithm visualization
Static algorithm visualization
It shows an algorithm's progress through a series of still images
Dynamic algorithm visualization, also called algorithm animation
It shows a continuous, movie-like presentation of an algorithm's operations
basic operation
The operation contributing the most to the total running time
Then, we compute the number of times the basic operation is executed