M4 - Algorithms and Data Structures
2 Fundamentals of the Analysis of Algorithm Efficiency
2.2 Asymptotic Notations and Basic Efficiency Classes
To compare and rank such orders of growth, computer scientists use three notations:
O (big oh), Ω (big omega), and Θ (big theta)
In what follows, t(n) will be an algorithm's running time (usually indicated by its basic operation count C(n)), and g(n) will be some simple function to compare the count with.
O: big oh (no worse than)
Θ: big theta (similar to)
Ω: big omega (no better than)
Informal Introduction
O(g(n)) is the set of all functions with a lower or same order of growth as g(n)
n ∈ O(n²), 100n + 5 ∈ O(n²), n(n − 1)/2 ∈ O(n²)
n³ ∉ O(n²), 0.00001n³ ∉ O(n²), n⁴ + n + 1 ∉ O(n²)
Ω(g(n)) stands for the set of all functions with a higher or same order of growth as g(n)
n³ ∈ Ω(n²), n(n − 1)/2 ∈ Ω(n²), but 100n + 5 ∉ Ω(n²)
Θ(g(n)) is the set of all functions that have the same order of growth as g(n)
Thus, every quadratic function an² + bn + c with a > 0 is in Θ(n²).
O-notation
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≤ cg(n) for all n ≥ n0.
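The definition can be probed numerically. A minimal sketch (mine, not the text's) checks the inequality over a finite range; for the text's example 100n + 5 ∈ O(n²), one valid pair of witnesses is c = 105 and n0 = 1, since 100n + 5 ≤ 105n ≤ 105n² for n ≥ 1.

```python
def is_bounded_above(t, g, c, n0, n_max=10_000):
    """Check t(n) <= c*g(n) for every n in [n0, n_max] (evidence, not a proof)."""
    return all(t(n) <= c * g(n) for n in range(n0, n_max + 1))

print(is_bounded_above(lambda n: 100 * n + 5, lambda n: n * n, c=105, n0=1))  # True
```

A finite probe like this cannot prove membership in O(g(n)), but it is a quick way to sanity-check a proposed pair (c, n0).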
Ω-notation
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≥ cg(n) for all n ≥ n0.
Θ-notation
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that
c2g(n) ≤ t(n) ≤ c1g(n) for all n ≥ n0.
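A sketch (mine) probing the two-sided bound the same way. For the example n(n − 1)/2 ∈ Θ(n²), one valid choice of witnesses is c1 = 1/2, c2 = 1/4, n0 = 2:

```python
def is_theta_witness(t, g, c1, c2, n0, n_max=10_000):
    """Check c2*g(n) <= t(n) <= c1*g(n) for every n in [n0, n_max]."""
    return all(c2 * g(n) <= t(n) <= c1 * g(n) for n in range(n0, n_max + 1))

print(is_theta_witness(lambda n: n * (n - 1) / 2, lambda n: n * n,
                       c1=0.5, c2=0.25, n0=2))  # True
```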
Useful Property Involving the Asymptotic Notations
THEOREM
If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). (The analogous assertions are true for the Ω and Θ notations as well.)
It implies that the algorithm’s overall efficiency is determined by the part with a higher order of growth, i.e., its least efficient part.
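As an illustration (example mine, not the text's): an algorithm whose first part sorts in O(n log n) and whose second part scans adjacent pairs in O(n) is in O(max{n log n, n}) = O(n log n) by the theorem, since the less efficient part dominates.

```python
def has_duplicates(a):
    b = sorted(a)                                # part 1: O(n log n) sort
    return any(b[i] == b[i + 1]                  # part 2: O(n) adjacent scan
               for i in range(len(b) - 1))

print(has_duplicates([3, 1, 4, 1, 5]))  # True
print(has_duplicates([2, 7, 1, 8]))     # False
```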
Using Limits for Comparing Orders of Growth
The limit lim (n→∞) t(n)/g(n) can have three cases:
1. 0, which implies that t(n) has a smaller order of growth than g(n)
2. a positive constant c, which implies that t(n) has the same order of growth as g(n)
3. ∞, which implies that t(n) has a larger order of growth than g(n)
The first two cases mean that t(n) ∈ O(g(n)), the last two mean that t(n) ∈ Ω(g(n)), and the second case means that t(n) ∈ Θ(g(n)).
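A numeric sketch (mine): approximating the limit by evaluating t(n)/g(n) at one large n. The sample size 10**6 is an arbitrary choice, and the ratio only suggests which case applies; it does not prove it.

```python
import math

def ratio(t, g, n=10**6):
    return t(n) / g(n)

print(ratio(math.log2, lambda n: n))                      # near 0: case 1
print(ratio(lambda n: n * (n - 1) / 2, lambda n: n * n))  # near 1/2: case 2
print(ratio(lambda n: n ** 3, lambda n: n * n))           # huge: case 3
```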
Basic Efficiency Classes
As a rule, you should expect an algorithm from a better asymptotic efficiency class to outperform an algorithm from a worse class even for moderately sized inputs.
This observation is especially true for an algorithm with a better than exponential running time versus an exponential (or worse) algorithm.
2.4 Mathematical Analysis of Recursive Algorithms
General Plan for Analyzing the Time Efficiency of Recursive Algorithms
Decide on a parameter (or parameters) indicating an input’s size
Identify the algorithm’s basic operation.
Check whether the number of times the basic operation is executed can vary on different inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be investigated separately.
Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
Solve the recurrence or, at least, ascertain the order of growth of its solution.
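The classic example of this plan is computing n! recursively: the basic operation is multiplication, the recurrence is M(n) = M(n − 1) + 1 with initial condition M(0) = 0, and it solves to M(n) = n. A sketch with an explicit counter (counter name mine):

```python
def factorial(n):
    """Return (n!, number of multiplications performed)."""
    if n == 0:
        return 1, 0                  # initial condition: M(0) = 0
    f, m = factorial(n - 1)
    return n * f, m + 1              # one multiplication per level: M(n) = M(n-1) + 1

print(factorial(5))  # (120, 5)
```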
2.6 Empirical Analysis of Algorithms
General Plan for the Empirical Analysis of Algorithm Time Efficiency
Understand the experiment’s purpose.
Decide on the efficiency metric M to be measured and the measurement unit (an operation count vs. a time unit).
Decide on characteristics of the input sample (its range, size, and so on).
Prepare a program implementing the algorithm (or algorithms) for the experimentation.
Generate a sample of inputs
Run the algorithm (or algorithms) on the sample's inputs and record the data observed.
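A minimal sketch (mine, not the text's program) of the plan above: time Python's built-in sorted() on random inputs of growing size and record the observations. The sizes, trial count, and use of time.perf_counter are my choices.

```python
import random
import time

def measure(algorithm, sizes, trials=3):
    observations = []
    for n in sizes:
        sample = [random.random() for _ in range(n)]       # generate an input
        start = time.perf_counter()
        for _ in range(trials):
            algorithm(sample)                              # run and time
        observations.append((n, (time.perf_counter() - start) / trials))
    return observations

for n, t in measure(sorted, [1000, 2000, 4000]):
    print(f"n = {n:5d}  avg time = {t:.6f} s")
```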
2.7 Algorithm Visualization
There are two principal variations of algorithm visualization:
Static algorithm visualization
Dynamic algorithm visualization, also called algorithm animation
There are two principal applications of algorithm visualization: research and education.
Analysis of algorithms is an investigation of an algorithm's efficiency with respect to two resources: time efficiency (time complexity) and space efficiency (space complexity).
2.1 The Analysis Framework
Units for Measuring Running Time
The thing to do is to identify the most important operation of the algorithm, called the basic operation: the operation contributing the most to the total running time, and compute the number of times the basic operation is executed. It is usually the most time-consuming operation in the algorithm's innermost loop. Thus, time efficiency is measured by counting the number of times the algorithm's basic operation is executed on inputs of size n.
Let cop be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula:
T(n) ≈ cop C(n)
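A typical use of this formula (the numbers here are my own illustration): with C(n) = n(n − 1)/2, estimate how much longer the algorithm runs when the input size doubles. The constant cop cancels in the ratio, so no machine-specific details are needed.

```python
def C(n):
    return n * (n - 1) / 2

print(C(2000) / C(1000))  # about 4: doubling n roughly quadruples T(n)
```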
the efficiency analysis framework ignores multiplicative constants and concentrates on the count’s order of growth to within a constant multiple for large-size inputs.
Orders of Growth
We omit a logarithm's base and write simply log n in situations where we are interested just in a function's order of growth to within a multiplicative constant.
Algorithms that require an exponential number of operations are practical for solving only problems of very small sizes
1 < log n < n < n log n < n² < n³ < 2^n < n!
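A quick look (table mine) at why this ordering matters: values of several efficiency classes at n = 10 and n = 20.

```python
import math

functions = [
    ("log n",   lambda n: round(math.log2(n))),
    ("n",       lambda n: n),
    ("n log n", lambda n: round(n * math.log2(n))),
    ("n^2",     lambda n: n ** 2),
    ("2^n",     lambda n: 2 ** n),
]

for name, f in functions:
    print(f"{name:8s} {f(10):>8} {f(20):>8}")
```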
Measuring an Input’s Size
Worst-Case, Best-Case, and Average-Case Efficiencies
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n: an input (or inputs) for which the algorithm runs the longest among all possible inputs of that size. For sequential search, Cworst(n) = n.
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n: an input (or inputs) for which the algorithm runs the fastest among all possible inputs of that size. For sequential search, Cbest(n) = 1.
Neither the worst-case analysis nor its best-case counterpart yields the necessary information about an algorithm's behavior on a "typical" or "random" input. This is the information that the average-case efficiency seeks to provide. For sequential search, Cavg(n) = p(n + 1)/2 + n(1 − p), where p is the probability of a successful search.
The direct approach for doing this involves dividing all instances of size n into several classes so that for each instance of the class the number of times the algorithm’s basic operation is executed is the same
It should be clear from the preceding discussion that the average-case efficiency cannot be obtained by taking the average of the worst-case and the best-case efficiencies
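For sequential search, where each of the n positions is equally likely in a successful search, the class-by-class approach above can be carried out exactly. A sketch (mine) checking it against the closed formula:

```python
def c_avg_direct(n, p):
    # class 1: found at position i costs i comparisons, each with probability p/n
    successful = sum(i for i in range(1, n + 1)) / n   # = (n + 1) / 2
    # class 2: unsuccessful search costs n comparisons, with probability 1 - p
    unsuccessful = n
    return p * successful + (1 - p) * unsuccessful

n, p = 10, 0.5
print(c_avg_direct(n, p), p * (n + 1) / 2 + n * (1 - p))  # both 7.75
```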
Amortized efficiency applies not to a single run of an algorithm but rather to a sequence of operations performed on the same data structure.
It turns out that in some situations a single operation can be expensive, but the total time for an entire sequence of n such operations is always significantly better than the worst-case efficiency of that single operation multiplied by n
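A standard illustration (code mine, not the text's) uses a dynamic array that doubles its capacity when full: a single append can cost n element moves, yet n appends together cost fewer than 3n moves, so the amortized cost per append is a small constant.

```python
class DynamicArray:
    def __init__(self):
        self.capacity, self.size, self.moves = 1, 0, 0
        self.data = [None]

    def append(self, x):
        if self.size == self.capacity:           # expensive step: grow and copy
            self.capacity *= 2
            self.data = self.data + [None] * (self.capacity - self.size)
            self.moves += self.size              # count the copied elements
        self.data[self.size] = x
        self.size += 1
        self.moves += 1                          # count writing x itself

a = DynamicArray()
for i in range(1000):
    a.append(i)
print(a.moves)  # 2023 moves in total, well under 3 * 1000
```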
2.3 Mathematical Analysis of Nonrecursive Algorithms
General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms
Decide on a parameter (or parameters) indicating an input’s size.
Identify the algorithm’s basic operation. (As a rule, it is located in the innermost loop.)
Check whether the number of times the basic operation is executed depends only on the size of an input. If it also depends on some additional property, the worst-case, average-case, and, if necessary, best-case efficiencies have to be investigated separately.
Set up a sum expressing the number of times the algorithm’s basic operation is executed.
Using standard formulas and rules of sum manipulation, either find a closed form formula for the count or, at the very least, establish its order of growth.
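The standard example of this plan is finding the largest element of a list: the basic operation is the comparison in the loop, executed once for each of the n − 1 remaining elements, so C(n) = n − 1 ∈ Θ(n). A sketch with an explicit counter (counter name mine):

```python
def max_element(a):
    """Return (largest element of a, number of comparisons made)."""
    maxval, comparisons = a[0], 0
    for x in a[1:]:
        comparisons += 1             # the basic operation
        if x > maxval:
            maxval = x
    return maxval, comparisons

print(max_element([3, 9, 2, 7]))  # (9, 3): C(n) = n - 1
```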