Dictionary Advanced / Algorithm Complexity
Algorithm Complexity
Memory Complexity / space complexity
Memory (space) complexity is another way to estimate an algorithm's cost. Unlike time complexity, it depends on the size of the algorithm's working memory, i.e. it refers to the memory occupied by the algorithm.
Time Complexity
as the name suggests, refers to the time taken by the algorithm to complete its execution.
An efficient program may take more time to execute on a slower computer than an inefficient program on a fast computer.
So clock time is not a good metric to find time complexity.
The answer is Big(O) notation.
Big(O) Notation
Orders of Complexity
Good
Constant time – O (1)
If the function performs a constant number of operations regardless of the size of the input data, f(n) = C, it means that the function runs in constant time (it does not scale with the input).
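For instance, fetching the first element of a list costs the same whether the list holds ten items or ten million. A minimal sketch (the function name is illustrative):

def get_first(items):
    # A single operation regardless of len(items): O(1).
    return items[0]

print(get_first([7, 3, 9]))  # 7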
Logarithmic time – O (log n)
An algorithm is 'logarithmic' when each step reduces the remaining input by a constant factor (for example, halving it), so the number of operations grows as log n.
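Binary search is the classic case: every comparison halves the remaining range. A minimal sketch:

def binary_search(sorted_items, target):
    # Each iteration halves the search interval: O(log n) comparisons.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 8, 13], 8))  # 3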
square root time - O(√n)
An algorithm has 'sqrt' (√n) time complexity when the number of operations grows with the square root of the input size; the classic case is checking a number's divisors only up to its square root.
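Trial-division primality testing is the usual illustration, since divisors only need to be checked up to √n. A minimal sketch:

import math

def is_prime(n):
    if n < 2:
        return False
    # Only divisors up to sqrt(n) need checking: O(sqrt(n)) iterations.
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(97))  # True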
Awful
Polynomial Time – O(n^k) (where k > 0)
Cubic time – O (n^3)
Quadratic time – O (n^2)
An algorithm is 'quadratic' when the number of operations is proportional to the square of the number of elements.
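Comparing every element with every other one is the typical quadratic pattern, as in this sketch (the function name is illustrative):

def count_equal_pairs(items):
    # Nested loops over the same input: about n^2 / 2 comparisons, O(n^2).
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                count += 1
    return count

print(count_equal_pairs([1, 2, 1, 2, 1]))  # 4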
...
exponential time - O(2^n)
An algorithm is 'exponential' when the number of operations grows exponentially with the number of elements: adding one element multiplies the amount of work by a constant factor.
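The textbook illustration is the naive recursive Fibonacci, where each call spawns two more calls:

def fib(n):
    # Two recursive calls per level: roughly 2^n calls in total, O(2^n).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55; this becomes painfully slow around n = 35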
factorial time - O(n!)
An algorithm is 'factorial' when the number of operations grows with the number of permutations of the elements, i.e. n!.
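Listing every permutation of n elements is the canonical factorial-time job; a minimal sketch using the standard library:

from itertools import permutations

def all_orderings(items):
    # There are n! orderings, so merely producing them all is O(n!).
    return list(permutations(items))

print(len(all_orderings([1, 2, 3, 4])))  # 24 == 4!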
Ok
Linear time – O (n)
If the number of operations in a function is directly proportional to the size of the input data, then such a function is called linear.
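For example, summing a list touches each element exactly once:

def total(items):
    # One pass over the input: O(n).
    result = 0
    for value in items:
        result += value
    return result

print(total([1, 2, 3, 4]))  # 10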
Bad
linearithmic time - O(n log n)
O(log n) here means log₂(n); for example, log₂(8) = 3.
An algorithm is 'linearithmic' when the number of operations grows as the number of elements (linear time) multiplied by log n (logarithmic time).
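Merge sort is the standard example: the list is halved log n times, and each level does O(n) merging work. A compact sketch:

def merge_sort(items):
    # Splitting gives log n levels; merging at each level costs O(n).
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]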
It defines the relationship between the number of inputs and the steps taken by the algorithm to process those inputs.
We can use Big(O) to define both time and space complexity.
The reason for selecting Big O over other notations is that it is the most relevant for performance analysis, as it helps us understand the worst-case behaviour.
Big-O Cheat Sheet Poster
https://www.bigocheatsheet.com/
An algorithm's complexity is a function describing how the number of operations depends on n (the size of the input data).
Algorithm complexity is used to measure the performance of an algorithm in terms of time taken and the space consumed.
Best case complexity
refers to the complexity of an algorithm in the ideal situation. For instance, if you search for an item X in a list of N items, the best case is that the item is found at the first index, in which case the algorithm complexity is O(1).
Worst Case Complexity
is that we find the item at the nth (or last) index, in which case the algorithm complexity would be O(n) (where n is the total number of items in the list).
When we use the term "algorithmic complexity", we generally refer to the worst case complexity.
This is to ensure that we are optimizing for the least ideal situation.
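A linear search makes both cases concrete (the function name is illustrative):

def find_index(items, target):
    # Best case: target sits at index 0, one comparison -> O(1).
    # Worst case: target is last or absent, n comparisons -> O(n).
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(find_index([4, 8, 15, 16], 4))   # best case: found immediately
print(find_index([4, 8, 15, 16], 16))  # worst case: scanned to the end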
Dictionary
Hash Tables
Collision
But what if two different keys or objects produce the same index from their hashes? Such a case has a special name: a collision.
Python then probes other cells in a pseudo-random sequence until it finds a free one (open addressing).
In the worst case, collisions can increase the algorithmic complexity of the hash table to O(N), but on average, the complexity will be equal to O(1).
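A quick illustration of how two different keys can collide: in a table of 8 cells where index = hash % 8, the integer keys 1 and 9 map to the same cell (the table size here is purely illustrative).

TABLE_SIZE = 8  # illustrative size

print(hash(1) % TABLE_SIZE)  # 1
print(hash(9) % TABLE_SIZE)  # 1: same index, so the keys collide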
Table Resize
In Python, hash tables have the property that when a table is filled to more than ⅔ of its size, it grows by a factor of 2. How it happens:
the number of elements reaches ⅔ of the table size + 1 (which is the same as the number of elements > ⅔ of the table size);
the size of the table is doubled;
the old values are rehashed one by one (key and value are preserved, but the index changes).
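You can observe this growth indirectly with sys.getsizeof; the exact byte counts and thresholds are CPython implementation details that vary between versions:

import sys

d = {}
last = sys.getsizeof(d)
for i in range(20):
    d[i] = i
    size = sys.getsizeof(d)
    if size != last:
        # A jump in the memory footprint marks a resize-and-rehash.
        print(f"resize after inserting element {i + 1}: {last} -> {size} bytes")
        last = size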
Typically, a hash table stores a key, a value, and a hash of the key in each cell.
In order to store a value in the table, Python first performs certain operations:
finds the hash value from the key;
finds the index, which equals the remainder when the hash is divided by the size of the hash table;
writes the key, value, and hash into the cell whose number equals the index.
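A toy sketch of these three steps (the class name is hypothetical, and plain linear probing stands in for CPython's real pseudo-random probing; there is no resizing here):

class ToyHashTable:
    def __init__(self, size=8):
        self.size = size
        self.cells = [None] * size  # each cell holds (hash, key, value)

    def set(self, key, value):
        h = hash(key)          # step 1: hash the key
        index = h % self.size  # step 2: remainder of division by table size
        # On a collision, probe the following cells (toy linear probing).
        while self.cells[index] is not None and self.cells[index][1] != key:
            index = (index + 1) % self.size
        self.cells[index] = (h, key, value)  # step 3: write hash, key, value

    def get(self, key):
        index = hash(key) % self.size
        while self.cells[index] is not None:
            if self.cells[index][1] == key:
                return self.cells[index][2]
            index = (index + 1) % self.size
        raise KeyError(key)

t = ToyHashTable()
t.set("spam", 42)
print(t.get("spam"))  # 42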
Searching for an element in the dictionary takes a constant time and has an algorithmic complexity equal to O(1). Such algorithmic complexity is typical for most operations on dictionaries. This is a significant advantage over other types of collections for data storage because working with dictionaries is fast.
This is the reason data in classes is stored as a dictionary (for example, an object's __dict__).
The algorithmic complexity of the __setitem__() and __getitem__() methods is O(1).
Dictionaries in Python use hash tables.
Hash
A hash function is a function that accepts a hashable object and returns a numeric value. Python has a built-in hash() function that performs hashing.
For an integer, the hash function returns that integer itself.
Please note: the hash value of -1 equals -2. This is an exception: in CPython, -1 is reserved as an internal error indicator, so hash functions are not allowed to return -1.
For boolean values, the hash function returns 1 for True and 0 for False, because bool is an integer type in which True has the value 1 and False has the value 0.
For some other immutable data types, such as str and bytes, the hash value changes each time you run your program because of hash randomization.
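These behaviours can be checked in an interactive session (the string hash below will differ between runs):

print(hash(42))      # 42: an integer hashes to itself
print(hash(-1))      # -2: the reserved-value exception
print(hash(True))    # 1
print(hash(False))   # 0
print(hash("spam"))  # varies between runs due to hash randomization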
Hashing is only for immutable data types: for list, dict, and set we cannot get a hash value, and trying to hash a mutable object raises an error. But it is not quite that simple; there are two special cases.
The first: an immutable tuple that stores a mutable value (such as a list) cannot be hashed.
The second: an object of a custom class is mutable, yet it can be hashed.
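Both special cases in one short sketch:

# Case 1: a tuple that stores a mutable list is not hashable.
try:
    hash((1, [2, 3]))
except TypeError as error:
    print(error)  # unhashable type: 'list'

# Case 2: a plain class instance is mutable, yet hashable by default
# (the default hash is derived from the object's identity).
class Box:
    pass

print(hash(Box()))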
the execution time of the hash function is constant — O(1)
the hash function always returns the same values for the same objects
How does hash() work for custom objects?
As stated above, the built-in hash() function internally calls the __hash__() method. So, any object can override __hash__() to provide custom hash values.
But for a correct hash implementation, __hash__() should always return an integer, and both the __eq__() and __hash__() methods have to be implemented.
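A minimal sketch of a correctly hashable custom class (the class and attribute names are illustrative):

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Equal objects must have equal hashes, so hash exactly
        # the same fields that __eq__ compares.
        return hash((self.x, self.y))

points = {Point(1, 2): "a"}
print(points[Point(1, 2)])  # "a": lookup works because __eq__ and __hash__ agree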