Dynamic Programming (DP)
Definition
Dynamic programming is a technique for solving problems with overlapping subproblems. Rather than solving overlapping subproblems again and again, dynamic programming suggests solving each of the smaller subproblems only once and recording the results in a table from which a solution to the original problem can then be obtained.
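A minimal Python sketch of the idea, using the Fibonacci numbers purely as an illustration (they are not one of the problems below): each subproblem is solved once and its result recorded in a table.

```python
# Illustrative sketch: Fibonacci numbers computed bottom-up.
# table[i] stores F(i), so every subproblem is solved exactly once.
def fib(n: int) -> int:
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))  # 55
```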
Problems that we can solve using dynamic programming
Coin-row problem
There is a row of n coins whose values are some positive integers c1, c2, ..., cn, not necessarily distinct. The goal is to pick up the maximum amount of money subject to the constraint that no two coins adjacent in the initial row can be picked up.
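A minimal Python sketch of the usual coin-row recurrence F(i) = max(ci + F(i-2), F(i-1)); the function name and example row are assumptions for illustration.

```python
# F[i] = maximum amount that can be picked up from the first i coins.
def coin_row(coins):
    n = len(coins)
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = coins[0]
    for i in range(2, n + 1):
        # either take coin i (and skip coin i-1) or leave coin i out
        F[i] = max(coins[i - 1] + F[i - 2], F[i - 1])
    return F[n]

print(coin_row([5, 1, 2, 10, 6, 2]))  # 17 (coins 5, 10, 2)
```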
Change-making problem
Give change for amount n using the minimum number of coins of denominations d1 < d2 < ... < dm.
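A sketch of the standard recurrence F(j) = min over all dk ≤ j of F(j - dk) + 1, with F(0) = 0; the function name and example instance are assumptions.

```python
# F[j] = minimum number of coins that add up to amount j.
def change_making(denominations, amount):
    INF = float("inf")
    F = [0] + [INF] * amount
    for j in range(1, amount + 1):
        for d in denominations:
            if d <= j and F[j - d] + 1 < F[j]:
                F[j] = F[j - d] + 1
    return F[amount]  # stays INF if exact change is impossible

print(change_making([1, 3, 4], 6))  # 2 (3 + 3)
```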
Coin-collecting problem
Several coins are placed in cells of an n × m board, no more than one coin per cell. A robot, located in the upper left cell of the board, needs to collect as many of the coins as possible and bring them to the bottom right cell. On each step, the robot can move either one cell to the right or one cell down from its current location. When the robot visits a cell with a coin, it always picks up that coin. The problem is to find the maximum number of coins the robot can collect and a path it needs to follow to do this.
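A sketch of the usual DP over the board, F(i, j) = max(F(i-1, j), F(i, j-1)) + cij, where cij is 1 if cell (i, j) holds a coin; recovering the path itself by backtracking through the table is not shown. The example board is an assumption.

```python
# F[i][j] = largest number of coins the robot can bring to cell (i, j).
def coin_collecting(board):
    n, m = len(board), len(board[0])
    F = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            up = F[i - 1][j] if i > 0 else 0
            left = F[i][j - 1] if j > 0 else 0
            F[i][j] = max(up, left) + board[i][j]
    return F[n - 1][m - 1]

# 1 marks a cell with a coin, 0 an empty cell.
print(coin_collecting([[0, 0, 1],
                       [0, 1, 0],
                       [1, 0, 1]]))  # 2
```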
Knapsack problem
Given n items of known weights wi and values vi, i = 1, 2, ..., n, and a knapsack of capacity W, find the most valuable subset of the items that fits into the knapsack.
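A sketch of the classic DP table for this problem, F(i, j) = max(F(i-1, j), vi + F(i-1, j-wi)) when item i fits into capacity j; the sample instance is assumed for illustration.

```python
# F[i][j] = value of the best subset of the first i items with total weight <= j.
def knapsack(weights, values, W):
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            F[i][j] = F[i - 1][j]                     # item i left out
            if weights[i - 1] <= j:                   # item i taken, if it fits
                F[i][j] = max(F[i][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    return F[n][W]

print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # 37
```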
Approximation Algorithms for NP-Hard Problems
If an algorithm achieves an approximation ratio of P(n), then we call it a P(n)-approximation algorithm. For a maximization problem, 0 < C ≤ C*, and the ratio C*/C gives the factor by which the cost of an optimal solution is larger than the cost of the approximate solution.
"Solving" the problem using
approximation algorithms
means that the solution doesn't need to be optimal, but only accetable.
Approximation algorithms for the Traveling Salesman problem (TSP)
Nearest-neighbor algorithm
Step 1: Choose an arbitrary city as the start.
Step 2: Repeat the following operation until all the cities have been visited: go to the unvisited city nearest the one visited last.
Step 3: Return to the starting city.
It is a greedy algorithm.
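A minimal sketch of the heuristic on a symmetric distance matrix; the matrix, function name, and starting city are assumptions for illustration.

```python
# Build a tour by always moving to the nearest unvisited city.
def nearest_neighbor(dist, start=0):
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[last][c])
        tour.append(nxt)
        visited.add(nxt)
    tour.append(start)  # Step 3: return to the starting city
    return tour

dist = [[0, 1, 2, 5],
        [1, 0, 8, 3],
        [2, 8, 0, 4],
        [5, 3, 4, 0]]
print(nearest_neighbor(dist))  # [0, 1, 3, 2, 0]
```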
Multifragment-heuristic algorithm
Step 1: Sort the edges in increasing order of their weights. Initialize the set of tour edges to be constructed to the empty set.
Step 2: Repeat this step n times, where n is the number of cities in the instance being solved: add the next edge on the sorted edge list to the set of tour edges, provided this addition does not create a vertex of degree 3 or a cycle of length less than n; otherwise, skip the edge.
Step 3: Return the set of tour edges.
It is a greedy algorithm.
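A hedged sketch: edges are scanned in nondecreasing weight order and accepted only if no vertex reaches degree 3 and no cycle shorter than n appears (checked here with a small union-find); the edge-list format and example instance are assumptions.

```python
# edges: list of (weight, u, v) tuples; returns the accepted tour edges.
def multifragment(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    degree = [0] * n
    tour_edges = []
    for w, u, v in sorted(edges):
        if degree[u] == 2 or degree[v] == 2:
            continue  # would create a vertex of degree 3
        if find(u) == find(v) and len(tour_edges) < n - 1:
            continue  # would close a cycle of length less than n
        parent[find(u)] = find(v)
        degree[u] += 1
        degree[v] += 1
        tour_edges.append((u, v))
        if len(tour_edges) == n:
            break
    return tour_edges

edges = [(1, 0, 1), (2, 0, 2), (3, 1, 3), (4, 2, 3), (5, 0, 3), (8, 1, 2)]
print(multifragment(4, edges))  # [(0, 1), (0, 2), (1, 3), (2, 3)]
```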
Twice-around-the-tree algorithm
Step 1: Construct a minimum spanning tree of the graph corresponding to a given instance of the traveling salesman problem.
Step 2: Starting at an arbitrary vertex, perform a walk around the MST recording all the vertices passed by.
Step 3: Scan the vertex list obtained in Step 2 and eliminate from it all repeated occurrences of the same vertex except the starting one at the end of the list. The vertices remaining on the list will form a Hamiltonian circuit, which is the output of the algorithm.
It is an MST-based algorithm.
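A sketch under the assumption that the instance is given as a distance matrix: Prim's algorithm builds the MST, a depth-first walk goes around it, and repeated vertices are shortcut to obtain the Hamiltonian circuit.

```python
# Twice-around-the-tree: MST walk with repeated vertices removed.
def twice_around_the_tree(dist, start=0):
    n = len(dist)
    # Step 1: Prim's algorithm, MST kept as adjacency lists.
    in_tree, tree = {start}, {v: [] for v in range(n)}
    while len(in_tree) < n:
        u, v = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        tree[u].append(v)
        tree[v].append(u)
        in_tree.add(v)
    # Steps 2-3: depth-first walk, keeping only the first visit of each vertex.
    circuit, seen = [], set()
    def walk(u):
        circuit.append(u)
        seen.add(u)
        for w in tree[u]:
            if w not in seen:
                walk(w)
    walk(start)
    circuit.append(start)
    return circuit

dist = [[0, 1, 2, 5],
        [1, 0, 8, 3],
        [2, 8, 0, 4],
        [5, 3, 4, 0]]
print(twice_around_the_tree(dist))  # [0, 1, 3, 2, 0]
```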
Approximation algorithms for the Knapsack problem
Greedy algorithm for the discrete knapsack problem
Step 1: Compute the value-to-weight ratios ri = vi/wi, i = 1, ..., n, for the items given.
Step 2: Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3: Repeat the following operation until no item is left in the sorted list: if the current item on the list fits into the knapsack, place it in the knapsack and proceed to the next item. Otherwise, just proceed to the next item.
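A minimal sketch of these three steps; the function name and sample instance are assumptions for illustration.

```python
# Greedy heuristic for the discrete knapsack: items in nonincreasing
# value-to-weight order, skipping anything that no longer fits.
def greedy_knapsack(weights, values, W):
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_value, remaining, taken = 0, W, []
    for i in order:
        if weights[i] <= remaining:
            taken.append(i)
            total_value += values[i]
            remaining -= weights[i]
    return total_value, taken

print(greedy_knapsack([7, 3, 4, 5], [42, 12, 40, 25], 10))  # (65, [2, 3])
```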
Greedy algorithm for the continuous knapsack problem
Step 1: Compute the value-to-weight ratios vi/wi, i = 1, ..., n, for the items given.
Step 2: Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3: Repeat the following operation until the knapsack is filled to its full capacity or no item is left in the sorted list: if the current item on the list fits into the knapsack in its entirety, take it and proceed to the next item. Otherwise, take its largest fraction to fill the knapsack to its full capacity and stop.
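The same greedy idea, except that the first item that does not fit in full is taken fractionally; the names and example instance are assumptions.

```python
# Greedy algorithm for the continuous (fractional) knapsack.
def greedy_continuous_knapsack(weights, values, W):
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_value, remaining = 0.0, W
    for i in order:
        if weights[i] <= remaining:
            total_value += values[i]
            remaining -= weights[i]
        else:
            total_value += values[i] * (remaining / weights[i])  # largest fitting fraction
            break
    return total_value

print(greedy_continuous_knapsack([7, 3, 4, 5], [42, 12, 40, 25], 10))  # 76.0
```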