Fundamentals of the Analysis of Algorithm Efficiency - Coggle Diagram
The Analysis Framework
Both time and space efficiencies are measured as functions of the algorithm’s input size.
Time efficiency is measured by counting the number of times the algorithm’s basic operation is executed. It indicates how fast the algorithm runs.
Space efficiency is measured by counting the number of extra memory units the algorithm consumes; it deals with the extra space the algorithm requires.
The efficiencies of some algorithms may differ significantly for inputs of the
same size. For such algorithms, we need to distinguish between the **worst-case,
average-case, and best-case efficiencies.**
An alternative: empirical analysis, especially for the average case.
The framework’s primary interest lies in the order of growth of the algorithm’s running time (or of the extra memory units consumed) as its input size goes to infinity.
Units for measuring running time:
SLIDE: https://drive.google.com/file/d/1hZXR8BwnKC9DlO08pSWxVI3Aw6_LgEHK/view
Absolute: T(n)
Relative: C(n) (in terms of the basic operation op)
Let c_op be the execution time of op on a particular computer; then T(n) ≈ c_op · C(n)
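The approximation T(n) ≈ c_op · C(n) can be sketched numerically. The constant (c_op = 10 ns per basic operation) and the quadratic count C(n) = n(n − 1)/2 below are hypothetical illustration values, not from the source:

```python
# Hypothetical illustration of T(n) ≈ c_op * C(n): c_op (10 ns per basic
# operation) and the quadratic count C(n) = n(n-1)/2 are assumed values.
def estimated_time(n, c_op=1e-8):
    c = n * (n - 1) / 2   # C(n) of a hypothetical quadratic algorithm
    return c_op * c       # estimated running time T(n), in seconds

# Doubling the input size roughly quadruples the estimate for quadratic C(n).
print(estimated_time(10_000))   # ~0.5 seconds
print(estimated_time(20_000))   # ~2.0 seconds
```

Note how the framework lets one predict running times without executing the algorithm, as long as c_op and C(n) are known.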
An algorithm’s time efficiency is principally measured as a function of its input
size by counting the number of times its basic operation is executed. A basic
operation is the operation that contributes the most to running time. Typically it is the most time-consuming operation in the algorithm’s innermost loop.
Asymptotic Notations and Basic Efficiency Classes
We compare the order of growth
O: big oh (no worse than)
Ω: big omega (no better than)
Θ: big theta (similar to)
Examples
O(n²) = all non-negative functions with an order of growth no worse than n²
Ω(n²) = all non-negative functions with an order of growth no better than n²
Θ(n²) = all non-negative functions with an order of growth similar to n²
Basic asymptotic efficiency classes
constant = 1
logarithmic = log n
linear = n
linearithmic = n log n
quadratic = n²
cubic = n³
exponential = 2^n
factorial = n!
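The ordering of these classes can be made concrete by evaluating each one at a small input size (a quick illustrative sketch, not from the source):

```python
import math

# Evaluate each basic efficiency class at n = 10 to make the ordering
# constant < log n < n < n log n < n^2 < n^3 < 2^n < n! concrete.
classes = {
    "constant":     lambda n: 1,
    "logarithmic":  lambda n: math.log2(n),
    "linear":       lambda n: n,
    "linearithmic": lambda n: n * math.log2(n),
    "quadratic":    lambda n: n ** 2,
    "cubic":        lambda n: n ** 3,
    "exponential":  lambda n: 2 ** n,
    "factorial":    lambda n: math.factorial(n),
}

for name, f in classes.items():
    print(f"{name:>12}: n=10 -> {f(10):,.0f}")
```

Even at n = 10 the exponential and factorial classes dwarf the polynomial ones, which is why they are considered impractical for all but the smallest inputs.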
Theorem:
∀ t1, t2, g1, g2 • {t1, t2, g1, g2} ⊆ dom(O) ∧ t1 ∈ O(g1) ∧ t2 ∈ O(g2) ⇒ t1 + t2 ∈ O(max(g1, g2))
The same applies for Ω and Θ
Practical implication: the algorithm’s overall efficiency is determined by the part with the higher order of growth.
Example: algorithm = sorting + binary search
Let Cworst(n) ∈ O(n log n) for sorting and Cworst(n) ∈ O(log n) for binary search
Therefore, Cworst(n) ∈ O(max(n log n, log n)) = O(n log n) for the whole algorithm
The order of growth of n log n is larger than that of n (sequential search)
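A sketch of the example, assuming Python’s built-in `sorted` (an O(n log n) sort) and the standard-library `bisect` module for the O(log n) binary search:

```python
from bisect import bisect_left

# "Presort, then binary-search": an O(n log n) sort followed by an O(log n)
# search. By the theorem, the sort's higher order of growth determines the
# overall O(n log n) efficiency.
def search_with_presort(a, key):
    s = sorted(a)             # O(n log n) sorting step
    i = bisect_left(s, key)   # O(log n) binary search
    return i < len(s) and s[i] == key

print(search_with_presort([31, 4, 15, 9, 26], 15))   # True
print(search_with_presort([31, 4, 15, 9, 26], 7))    # False
```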
Mathematical Analysis of Recursive Algorithms
Typically, C(n) can be defined by a recurrence relation
Algorithm: F(n) (Factorial)
if n = 0 then return 1;
else return F(n − 1) ∗ n;
Recurrence relation:
C(0) = 0, for n = 0
C(n) = C(n − 1) + 1, for n > 0
Solving the recurrence using the method of backward substitutions:
C(n) = C(n − 1) + 1
= (C(n − 1 − 1) + 1) + 1 = C(n − 2) + 2
= (C(n − 2 − 1) + 1) + 2 = C(n − 3) + 3
= · · ·
= C(n − i) + i
= · · ·
= C(n − n) + n
= C(0) + n
= 0 + n
= n
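The solution C(n) = n can be checked by instrumenting the recursive factorial with a counter for its basic operation (the multiplication); the counter-passing style here is our own:

```python
# Recursive factorial instrumented to count executions of the basic
# operation (the multiplication), so the recurrence's solution C(n) = n
# can be verified directly.
def factorial(n, counter):
    if n == 0:
        return 1
    counter[0] += 1                   # one execution of the basic operation
    return factorial(n - 1, counter) * n

for n in (0, 1, 5, 10):
    c = [0]
    factorial(n, c)
    print(n, c[0])                    # the count equals n, i.e. C(n) = n
```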
BinarySearch
Recurrence relation:
Cworst(0) = 0, for n = 0
Cworst(n) = Cworst(⌊n/2⌋) + 1, for n > 0
Assuming (for simplicity) that both halves of A have the same size, i.e. ⌊n/2⌋ = (n − 1)/2:
Cworst(n) = Cworst((n − 2^0) / 2^1) + 1
= Cworst((n − 2^0 − 2^1) / 2^2) + 2
. . .
The recurrence ends when the argument of Cworst is equal to 0:
Cworst((n − 2^0 − 2^1 − ... − 2^(j+1)) / 2^(j+2)) = Cworst(0) = 0
This happens when n = 2^0 + 2^1 + ... + 2^(j+1) = 2^(j+2) − 1, and Cworst(n) can be defined as follows:
Cworst(n) = j + 2
= log2(n + 1) − 2 + 2
= log2(n + 1)
(the process is on the slide!)
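The solution Cworst(n) = log2(n + 1) can be checked for sizes n = 2^k − 1 with an iterative binary search driven into its worst case; searching for a key larger than every element, so the algorithm always recurses into a half, is our way of forcing that case:

```python
import math

# Binary search instrumented to count worst-case iterations. Under the
# simplifying assumption n = 2^k - 1 (both halves the same size), the
# count should match the recurrence's solution Cworst(n) = log2(n + 1).
def binary_search_count(n):
    lo, hi, count = 0, n - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1        # one execution of the basic operation
        lo = mid + 1      # key larger than all elements: always go right
    return count

for k in (1, 3, 5, 10):
    n = 2 ** k - 1
    print(n, binary_search_count(n), int(math.log2(n + 1)))   # counts agree
```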
The main tool for analyzing the time efficiency of a recursive algorithm is to set up a recurrence relation expressing the number of executions of its basic operation and to ascertain the solution’s order of growth.
Succinctness of a recursive algorithm may mask its inefficiency
Mathematical Analysis of Nonrecursive Algorithms
Unique Elements
Algorithm: UniqueElements(A[0..n−1])
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] = A[j] then return false;
return true;
Note: S(i=a, b) denotes the summation for i from a to b
Cworst(n) = S(i=0, n-2) S(j=i+1, n-1) 1
= S(i=0, n-2) ((n − 1) − (i + 1) + 1)
= S(i=0, n-2) (n − 1 − i)
= S(i=0, n-2) (n − 1) - S(i=0, n-2) i
= (n − 1) S(i=0, n-2) 1 - ((n−2)(n−1)) / 2
= (n − 1)² - ((n−2)(n−1)) / 2
= (2(n²-2n+1) - (n²-3n+2)) / 2
= (2n² − 4n + 2 − n² + 3n − 2) / 2
= (n²-n ) / 2
= ((n-1)n) / 2
Remember that:
S(i=l, u) c * ai
= c * S(i=l, u) ai
S(i=l, u) (ai ± bi)
= S(i=l, u) ai ± S(i=l, u) bi
S(i=1, n) ai = ((a1 + an) * n) / 2, for an arithmetic progression ai
For n ≤ 1:
Cbest(0) = 0 (unique elements: true)
Cbest(1) = 0 (unique elements: true)
For an arbitrary n > 1:
Cbest(n) = 1 (unique elements: false)
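Both cases can be checked with an instrumented implementation (the comparison counter is our addition): in the worst case, all elements distinct, the count matches the derived n(n − 1)/2; in the best case, a repeated pair at the front, a single comparison suffices.

```python
# UniqueElements with a counter for the basic operation (the comparison
# A[i] = A[j]), returning both the verdict and the comparison count.
def unique_elements_count(a):
    n, count = len(a), 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            count += 1          # basic operation: the comparison A[i] = A[j]
            if a[i] == a[j]:
                return False, count
    return True, count

print(unique_elements_count([1, 2, 3, 4, 5]))   # (True, 10) = 5*4/2
print(unique_elements_count([7, 7, 1, 2, 3]))   # (False, 1)
```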
Sequential Search
Algorithm: SequentialSearch(A[0..n−1], K)
i ← 0;
while i < n ∧ A[i] ≠ K do
    i ← i + 1;
if i < n then return i;
else return −1;
For n = 0
Cworst(0) = 0
For an arbitrary n ≥ 1
Cworst(n) = S(i=0,n-1) 1
= (n − 1) − 0 + 1
= n
For n = 0
Cbest(0) = 0 (sequential search: -1)
For an arbitrary n ≥ 1
Cbest(n) = 1 (sequential search: 0)
Probability of successful search: p, where 0 ≤ p ≤ 1
Same probability for every i
Cavg(n) = (1 · p/n + 2 · p/n + ... + i · p/n + ... + n · p/n) + n · (1 − p)
= p/n (1 + 2 + ... + i + ... + n) + n(1 − p)
= p/n · ((1 + n)n) / 2 + n(1 − p)
= (p(n + 1)) / 2 + n(1 − p)
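The formula can be checked empirically by simulation; the trial count, the seed, and the way inputs and keys are generated below are our assumptions:

```python
import random

# Empirical check of Cavg(n) = p(n+1)/2 + n(1-p): with probability p the
# search succeeds, with the key equally likely at every position; otherwise
# it fails after scanning the whole array.
def sequential_search_count(a, key):
    i = 0
    while i < len(a) and a[i] != key:
        i += 1
    return i + 1 if i < len(a) else len(a)   # comparisons of A[i] with K

def average_comparisons(n, p, trials=20000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a = list(range(n))
        key = rng.randrange(n) if rng.random() < p else -1
        total += sequential_search_count(a, key)
    return total / trials

n, p = 100, 0.5
print(average_comparisons(n, p))          # simulated average
print(p * (n + 1) / 2 + n * (1 - p))      # formula: 75.25
```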
Typically, Cavg(n) is more difficult to determine
For some algorithms, Cavg << Cworst
The main tool for analyzing the time efficiency of a nonrecursive algorithm is to set up a sum expressing the number of executions of its basic operation and to ascertain the sum’s order of growth.
Empirical Analysis of Algorithms
General Plan for the Empirical Analysis of Algorithm Time Efficiency
Understand the experiment’s purpose
Decide on the efficiency metric M to be measured and the measurement unit
(an operation count vs. a time unit)
Decide on characteristics of the input sample (its range, size, and so on)
Prepare a program implementing the algorithm (or algorithms) for the experimentation.
Generate a sample of inputs
Run the algorithm (or algorithms) on the sample’s inputs and record the data observed.
Analyze the data obtained
This process often involves generating pseudorandom numbers. The applicability to any
algorithm is the principal strength of this approach; the dependence of results
on the particular computer and instance sample is its main weakness.
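The plan above can be sketched as a small measurement harness; the choice of `time.perf_counter` as the measurement unit, the pseudorandom input generator, and `sorted` as the algorithm under test are our assumptions:

```python
import random
import time

# Minimal sketch of the empirical-analysis plan: pick a metric (wall-clock
# time), generate a sample of pseudorandom inputs of increasing size, run
# the algorithm on each, and record the observations for later analysis.
def measure(algorithm, sizes, seed=42):
    rng = random.Random(seed)
    observations = []
    for n in sizes:
        sample = [rng.randrange(n) for _ in range(n)]   # pseudorandom input
        start = time.perf_counter()
        algorithm(sample)
        observations.append((n, time.perf_counter() - start))
    return observations

for n, t in measure(sorted, [1000, 2000, 4000, 8000]):
    print(f"n={n:5d}  time={t:.6f}s")
```

Note the weakness stated above: the recorded times depend on this particular computer and this particular input sample, which is why operation counts are sometimes preferred as the metric M.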
Algorithm Visualization
Algorithm visualization is the use of images to convey useful information
about algorithms. The two principal variations of algorithm visualization are
static algorithm visualization and dynamic algorithm visualization (also called
algorithm animation).