M17 - Algorithms and Data Structures
8 Dynamic Programming
8.1 Three Basic Examples
EXAMPLE 1 Coin-row problem
There is a row of n coins whose values are some positive integers, not necessarily distinct. The goal is to pick up the maximum amount of money subject to the constraint that no two coins adjacent in the initial row can be picked up.
The algorithm takes Θ(n) time and Θ(n) space.
This is by far superior to the alternatives: the straightforward top-down application of the recurrence and solving the problem by exhaustive search.
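A minimal bottom-up sketch of the underlying recurrence F(i) = max(cᵢ + F(i−2), F(i−1)); the function name and the sample values are illustrative, not from the notes:

```python
def coin_row(coins):
    """Maximum amount pickable from a row of coin values with no two adjacent coins.

    Bottom-up evaluation of F(i) = max(coins[i-1] + F(i-2), F(i-1)),
    with F(0) = 0 and F(1) = coins[0]; Theta(n) time and Theta(n) space.
    """
    n = len(coins)
    F = [0] * (n + 1)
    if n > 0:
        F[1] = coins[0]
    for i in range(2, n + 1):
        F[i] = max(coins[i - 1] + F[i - 2], F[i - 1])
    return F[n]

# Illustrative row of coins: the best pick here is 5 + 10 + 2 = 17.
print(coin_row([5, 1, 2, 10, 6, 2]))
```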
EXAMPLE 2 Change-making problem
Give change for amount n using the minimum number of coins of denominations d1 < d2 < ... < dm.
Let F(n) be the minimum number of coins whose values add up to n.
The amount n can only be obtained by adding one coin of denomination dj to the amount n − dj, for each j with dj ≤ n.
The time and space efficiencies of the algorithm are obviously O(nm) and Θ(n), respectively.
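A sketch of the corresponding bottom-up computation, assuming the denominations can represent every amount (e.g. d1 = 1); names and sample values are illustrative:

```python
def change_making(denominations, amount):
    """Minimum number of coins of the given denominations adding up to `amount`.

    Bottom-up evaluation of F(n) = 1 + min over all d_j <= n of F(n - d_j),
    with F(0) = 0; O(n*m) time and Theta(n) space for m denominations.
    """
    INF = float("inf")
    F = [0] + [INF] * amount
    for n in range(1, amount + 1):
        for d in denominations:
            if d <= n and F[n - d] + 1 < F[n]:
                F[n] = F[n - d] + 1
    return F[amount]

# Illustrative instance: 6 = 3 + 3, so two coins suffice.
print(change_making([1, 3, 4], 6))
```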
EXAMPLE 3 Coin-collecting problem
Several coins are placed in cells of an n × m board, no more than one coin per cell.
A robot, located in the upper left cell of the board, needs to collect as many of the coins as possible and bring them to the bottom right cell
On each step, the robot can move either one cell to the right or one cell down from its current location.
When the robot visits a cell with a coin, it always picks up that coin.
Design an algorithm to find the maximum number of coins the robot can collect and a path it needs to follow to do this
the time efficiency of the algorithm is Θ(nm). Its space efficiency is, obviously, also Θ(nm).
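A sketch of the table computation with a simple backward trace of one optimal path; the board encoding (1 = coin, 0 = empty) and the names are illustrative:

```python
def robot_coin_collection(board):
    """Maximum coins a robot can collect moving only right or down
    from the top-left to the bottom-right cell of an n x m board.

    F[i][j] = max(F[i-1][j], F[i][j-1]) + board[i][j]; Theta(n*m) time and space.
    Returns the maximum count and one optimal path of (row, col) cells.
    """
    n, m = len(board), len(board[0])
    F = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            up = F[i - 1][j] if i > 0 else 0
            left = F[i][j - 1] if j > 0 else 0
            F[i][j] = max(up, left) + board[i][j]
    # Trace one optimal path backwards from the bottom-right cell.
    path, i, j = [(n - 1, m - 1)], n - 1, m - 1
    while i > 0 or j > 0:
        if i > 0 and (j == 0 or F[i - 1][j] >= F[i][j - 1]):
            i -= 1
        else:
            j -= 1
        path.append((i, j))
    return F[n - 1][m - 1], path[::-1]

# Small illustrative board.
best, path = robot_coin_collection([[0, 1, 0],
                                    [1, 0, 1],
                                    [0, 1, 1]])
print(best, path)
```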
8.2 The Knapsack Problem and Memory Functions
given n items of known weights w1,...,wn and values v1,...,vn and a knapsack of capacity W, find the most valuable subset of the items that fit into the knapsack
We assume here that all the weights and the knapsack capacity are positive integers
The time efficiency and space efficiency of this algorithm are both in Θ(nW).
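A bottom-up sketch of this table; the instance in the example is illustrative:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack by bottom-up dynamic programming.

    F[i][j] = best value using the first i items with capacity j:
    F[i][j] = max(F[i-1][j], values[i-1] + F[i-1][j - weights[i-1]]) when the
    i-th item fits, else F[i-1][j]; Theta(n*W) time and space.
    """
    n = len(weights)
    F = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(capacity + 1):
            F[i][j] = F[i - 1][j]
            if weights[i - 1] <= j:
                F[i][j] = max(F[i][j], values[i - 1] + F[i - 1][j - weights[i - 1]])
    return F[n][capacity]

# Illustrative instance: items (w, v) = (2,12), (1,10), (3,20), (2,15), W = 5.
print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # -> 37
```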
The goal is to get a method that solves only the subproblems that are necessary and does so only once. Such a method exists; it is based on using memory functions.
a memory function algorithm may be less space-efficient than a space-efficient version of a bottom-up algorithm.
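A sketch of the same recurrence evaluated top-down with a memory function; using Python's functools.lru_cache as the table of previously computed values is an implementation choice, not part of the notes:

```python
from functools import lru_cache

def knapsack_memo(weights, values, capacity):
    """0/1 knapsack using a memory function: the recurrence is evaluated
    top-down, but each subproblem (i, j) is solved only once and cached,
    so only the subproblems that are actually needed get computed.
    """
    @lru_cache(maxsize=None)
    def mf(i, j):
        if i == 0 or j == 0:
            return 0
        without_i = mf(i - 1, j)
        if weights[i - 1] > j:
            return without_i
        return max(without_i, values[i - 1] + mf(i - 1, j - weights[i - 1]))

    return mf(len(weights), capacity)

# Same illustrative instance as above; the cached recursion also returns 37.
print(knapsack_memo((2, 1, 3, 2), (12, 10, 20, 15), 5))
```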
Dynamic programming was invented as a general method for optimizing multistage decision processes.
It is a technique for solving problems with overlapping subproblems.
Dynamic programming suggests solving each of the smaller subproblems only once and recording the results in a table from which a solution to the original problem can then be obtained.
a dynamic programming algorithm can sometimes be refined to avoid using extra space
Since a majority of dynamic programming applications deal with optimization problems, we also need to mention a general principle that underlies such applications: the principle of optimality.
The principle of optimality says that an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.
12 Coping with the Limitations of Algorithm Power
12.3 Approximation Algorithms for NP -Hard Problems
Approximation Algorithms for the Traveling Salesman Problem
Greedy Algorithms for the TSP
Nearest-neighbor algorithm
The following well-known greedy algorithm is based on the nearest-neighbor heuristic: always go next to the nearest unvisited city
Choose an arbitrary city as the start.
Repeat the following operation until all the cities have been visited: go to the unvisited city nearest the one visited last (ties can be broken arbitrarily).
Return to the starting city.
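A minimal sketch of the nearest-neighbor heuristic, assuming the instance is given as a symmetric matrix of positive intercity distances; names and the sample instance are illustrative:

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor heuristic for the TSP on a complete graph.

    Starting from `start`, repeatedly visit the nearest unvisited city,
    then return to the start. Ties are broken by the min() call.
    Returns the closed tour and its total length.
    """
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[last][c])
        tour.append(nxt)
        visited.add(nxt)
    tour.append(start)  # close the tour
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length

# Illustrative 4-city instance.
d = [[0, 1, 2, 6],
     [1, 0, 3, 3],
     [2, 3, 0, 4],
     [6, 3, 4, 0]]
print(nearest_neighbor_tour(d))
```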
Multifragment-heuristic algorithm
It considers tour construction as the problem of finding a minimum-weight collection of edges in a given complete weighted graph so that all the vertices have degree 2.
Sort the edges in increasing order of their weights. Initialize the set of tour edges to be constructed to the empty set
Repeat this step n times, where n is the number of cities in the instance being solved: add the next edge on the sorted edge list to the set of tour edges, provided this addition does not create a vertex of degree 3 or a cycle of length less than n; otherwise, skip the edge.
Return the set of tour edges.
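A sketch of the multifragment heuristic under the same distance-matrix assumption; using union-find to detect cycles shorter than n is an implementation choice, not part of the steps above:

```python
def multifragment_tour(dist):
    """Multifragment heuristic for the TSP.

    Scans the edges of the complete graph in increasing weight order and
    accepts an edge unless it would give some vertex degree 3 or close a
    cycle of length less than n; the n accepted edges form the tour.
    """
    n = len(dist)
    edges = sorted((dist[u][v], u, v) for u in range(n) for v in range(u + 1, n))
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    degree, tour_edges = [0] * n, []
    for w, u, v in edges:
        if degree[u] == 2 or degree[v] == 2:
            continue                  # would create a vertex of degree 3
        if find(u) == find(v) and len(tour_edges) < n - 1:
            continue                  # would close a cycle of length < n
        tour_edges.append((u, v))
        degree[u] += 1
        degree[v] += 1
        parent[find(u)] = find(v)
        if len(tour_edges) == n:
            break
    return tour_edges

d = [[0, 1, 2, 6],
     [1, 0, 3, 3],
     [2, 3, 0, 4],
     [6, 3, 4, 0]]
print(multifragment_tour(d))
```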
Minimum-Spanning-Tree–Based Algorithms
Twice-around-the-tree algorithm
Construct a minimum spanning tree of the graph corresponding to a given instance of the traveling salesman problem.
Starting at an arbitrary vertex, perform a walk around the minimum spanning tree recording all the vertices passed by. (This can be done by a DFS traversal.)
Scan the vertex list obtained in Step 2 and eliminate from it all repeated occurrences of the same vertex except the starting one at the end of the list. The vertices remaining on the list will form a Hamiltonian circuit, which is the output of the algorithm.
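A sketch of the twice-around-the-tree algorithm, using Prim's algorithm for the minimum spanning tree and a DFS preorder as the walk with repeated vertices already shortcut away; names and the instance are illustrative:

```python
def twice_around_the_tree(dist):
    """Twice-around-the-tree TSP approximation.

    Builds a minimum spanning tree (Prim's algorithm), walks around it by DFS,
    and shortcuts repeated vertices to obtain a Hamiltonian circuit. For
    instances satisfying the triangle inequality the tour is at most twice
    as long as an optimal one.
    """
    n = len(dist)
    # Prim's algorithm: grow the MST from vertex 0.
    in_tree, best, parent = [False] * n, [float("inf")] * n, [None] * n
    best[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if parent[u] is not None:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    # DFS preorder of the MST = the walk with repeated vertices eliminated.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    tour.append(0)  # return to the starting vertex
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, length

d = [[0, 1, 2, 6],
     [1, 0, 3, 3],
     [2, 3, 0, 4],
     [6, 3, 4, 0]]
print(twice_around_the_tree(d))
```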
Christofides Algorithm
It also uses a minimum spanning tree but does this in a more sophisticated way than the twice-around-the-tree algorithm.
Local Search Heuristics
For Euclidean instances, surprisingly good approximations to optimal tours can be obtained by iterative-improvement algorithms
These algorithms start with some initial tour, e.g., constructed randomly or by some simpler approximation algorithm such as the nearest-neighbor.
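As one example of such an iterative-improvement algorithm, a sketch of 2-opt local search, which keeps replacing a pair of tour edges by the pair of non-tour edges reconnecting the tour whenever that shortens it; the instance and starting tour are illustrative:

```python
def two_opt(dist, tour):
    """2-opt local search for the TSP.

    `tour` is a closed tour (a list of cities ending at its starting city),
    e.g. one produced by the nearest-neighbor heuristic. Repeatedly reverse
    a segment of the tour whenever doing so shortens it, until no improving
    exchange remains.
    """
    tour = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[j + 1]
                # Replacing edges (a,b) and (c,d) by (a,c) and (b,d)
                # amounts to reversing the segment tour[i..j].
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

# Illustrative instance and a deliberately poor starting tour.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
print(two_opt(d, [0, 2, 1, 3, 0]))  # improves to the tour 0-1-3-2-0
```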
Approximation Algorithms for the Knapsack Problem
another well-known NP-hard problem
Greedy Algorithms for the Knapsack Problem
Greedy algorithm for the discrete knapsack problem
Compute the value-to-weight ratios ri = vi/wi, i = 1, ..., n, for the items given.
Sort the items in nonincreasing order of the ratios computed in Step 1. (Ties can be broken arbitrarily.)
Repeat the following operation until no item is left in the sorted list: if the current item on the list fits into the knapsack, place it in the knapsack and proceed to the next item; otherwise, just proceed to the next item.
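A sketch of this greedy heuristic; the instance is illustrative and the returned item indices are 0-based:

```python
def greedy_discrete_knapsack(weights, values, capacity):
    """Greedy heuristic for the 0/1 (discrete) knapsack problem: consider
    items in nonincreasing order of value-to-weight ratio and take each one
    that still fits. This does not always produce an optimal subset.
    """
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    taken, total_value, remaining = [], 0, capacity
    for i in order:
        if weights[i] <= remaining:
            taken.append(i)
            total_value += values[i]
            remaining -= weights[i]
    return taken, total_value

# Illustrative instance: weights (7, 3, 4, 5), values (42, 12, 40, 25), W = 10.
print(greedy_discrete_knapsack([7, 3, 4, 5], [42, 12, 40, 25], 10))
```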
Greedy algorithm for the continuous knapsack problem
Compute the value-to-weight ratios vi/wi, i = 1, ..., n, for the items given.
Sort the items in nonincreasing order of the ratios computed in Step 1. (Ties can be broken arbitrarily.)
Repeat the following operation until the knapsack is filled to its full capacity or no item is left in the sorted list: if the current item on the list fits into the knapsack in its entirety, take it and proceed to the next item; otherwise, take its largest fraction to fill the knapsack to its full capacity and stop
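A sketch of the continuous version on the same illustrative instance, returning the fraction taken of each item:

```python
def greedy_continuous_knapsack(weights, values, capacity):
    """Greedy algorithm for the continuous (fractional) knapsack problem:
    take items in nonincreasing value-to-weight order, splitting the last
    one if needed. For this version the greedy choice yields an optimal value.
    Returns a list of (item index, fraction taken) and the total value.
    """
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    taken, total_value, remaining = [], 0.0, capacity
    for i in order:
        if remaining <= 0:
            break
        fraction = min(1.0, remaining / weights[i])
        taken.append((i, fraction))
        total_value += fraction * values[i]
        remaining -= fraction * weights[i]
    return taken, total_value

# Same illustrative instance as above, now allowing fractions of items.
print(greedy_continuous_knapsack([7, 3, 4, 5], [42, 12, 40, 25], 10))
```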
A polynomial-time approximation algorithm is said to be a c-approximation algorithm, where c ≥ 1, if the accuracy ratio of the approximation it produces does not exceed c for any instance of the problem in question: r(sₐ) ≤ c.
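For reference, a sketch of how the accuracy ratio r(sₐ) is usually defined (f denotes the objective function and s* an exact optimal solution; this is the standard definition, not quoted from the notes):

```latex
% Accuracy ratio of an approximate solution s_a (usual definition):
r(s_a) = \frac{f(s_a)}{f(s^{*})} \quad \text{for minimization problems}, \qquad
r(s_a) = \frac{f(s^{*})}{f(s_a)} \quad \text{for maximization problems},
\qquad r(s_a) \ge 1 .
```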