Dynamic Programming
Examples
Change-making problem
Coin-collecting problem
Coin-row problem
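The coin-row problem is the simplest of these to sketch: pick up the maximum amount of money from a row of coins, with no two adjacent coins picked. A minimal Python sketch, assuming the standard recurrence F(n) = max(c_n + F(n − 2), F(n − 1)) with F(0) = 0 (the function name and sample coins are illustrative):

    def coin_row(coins):
        """Maximum amount from a row of coins, no two adjacent coins picked."""
        prev, curr = 0, 0  # F(i-2) and F(i-1)
        for c in coins:
            # F(i) = max(c_i + F(i-2), F(i-1))
            prev, curr = curr, max(c + prev, curr)
        return curr

    # Sample instance: coins 5, 1, 2, 10, 6, 2 -> maximum amount 17
    print(coin_row([5, 1, 2, 10, 6, 2]))  # 17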
It was invented as a general method for optimizing multistage decision processes
Dynamic programming has eventually come to be considered a general algorithm design technique that is not limited to special types of optimization problems
It is a technique for solving problems with overlapping subproblems
A majority of dynamic programming applications deal with optimization problems
The principle of optimality, formulated by Richard Bellman, states that an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances
Although its applicability to a particular problem needs to be checked, of course, such a check is usually not a principal difficulty in developing a dynamic programming algorithm
The knapsack problem
All weights and knapsack capacity are positive integers; item values do not need to be integers
To design a dynamic programming algorithm, we need to derive a recurrence relation that expresses a solution to an instance of the knapsack problem in terms of solutions to its smaller subinstances
We can divide all the subsets of the first i items that fit the knapsack of capacity j into two categories: those that do not include the ith item and those that do
Subsets that do not include the ith item
The value of an optimal subset is, by definition, F(i − 1, j)
Subsets that do include the ith item
An optimal subset is made up of this item and an optimal subset of the first i − 1 items that fits into the knapsack of capacity j − w_i. The value of such an optimal subset is v_i + F(i − 1, j − w_i)
The value of an optimal solution among all feasible subsets of the first i items is the maximum of these two values: F(i, j) = max{F(i − 1, j), v_i + F(i − 1, j − w_i)} if j ≥ w_i, and F(i, j) = F(i − 1, j) if j < w_i
We can find the composition of an optimal subset by backtracing the computations of the maximal value entry in the table
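A minimal bottom-up Python sketch of this recurrence, including the backtrace that recovers the composition of an optimal subset (the function name and sample instance are illustrative):

    def knapsack(weights, values, capacity):
        n = len(weights)
        # F[i][j] = value of an optimal subset of the first i items, capacity j
        F = [[0] * (capacity + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(capacity + 1):
                F[i][j] = F[i - 1][j]  # subsets not including the ith item
                if j >= weights[i - 1]:  # subsets including the ith item
                    F[i][j] = max(F[i][j],
                                  values[i - 1] + F[i - 1][j - weights[i - 1]])
        # Backtrace the computation of the maximal value entry F[n][capacity]
        subset, j = [], capacity
        for i in range(n, 0, -1):
            if F[i][j] != F[i - 1][j]:  # the ith item was included
                subset.append(i)
                j -= weights[i - 1]
        return F[n][capacity], sorted(subset)

    # Sample instance: w = (2, 1, 3, 2), v = (12, 10, 20, 15), W = 5
    print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # (37, [1, 2, 4])

Filling the table takes Θ(nW) time for n items and capacity W; the backtrace adds only O(n).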
Memory functions
The classic dynamic programming approach works bottom up: it fills a table with solutions to all smaller subproblems, but each of them is solved only once
The goal is to get a method that solves only subproblems that are necessary and does so only once. Such a method exists; it is based on using memory functions
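A minimal Python sketch of a memory function for the same knapsack recurrence, assuming the table is initialized with −1 as a "not yet computed" sentinel so that only the entries actually needed are ever filled in (names are illustrative):

    def mf_knapsack_factory(weights, values, capacity):
        n = len(weights)
        # -1 marks entries not yet computed; row 0 and column 0 are 0 by definition
        F = [[-1] * (capacity + 1) for _ in range(n + 1)]
        for j in range(capacity + 1):
            F[0][j] = 0
        for i in range(n + 1):
            F[i][0] = 0

        def mf(i, j):
            # compute F(i, j) only if it has not been computed before
            if F[i][j] < 0:
                if j < weights[i - 1]:
                    value = mf(i - 1, j)
                else:
                    value = max(mf(i - 1, j),
                                values[i - 1] + mf(i - 1, j - weights[i - 1]))
                F[i][j] = value
            return F[i][j]

        return mf

    mf = mf_knapsack_factory([2, 1, 3, 2], [12, 10, 20, 15], 5)
    print(mf(4, 5))  # 37, filling in only the entries that are actually needed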
Approximation Algorithms for NP-Hard Problems
There is a radically different way of dealing with difficult optimization problems: solve them approximately by a fast algorithm. This approach is particularly appealing for applications where a good but not necessarily optimal solution will suffice
Although approximation algorithms run the gamut in level of sophistication, most of them are based on some problem-specific heuristic
Some problems, such as the traveling salesman problem, have special classes of instances that are both particularly important for real-life applications and easier to solve than their general counterparts
The simplest approximation algorithms for the traveling salesman problem are based on the greedy technique
Two important such techniques
Nearest-neighbor algorithm
Multifragment-heuristic algorithm
Their performance ratios are unbounded
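A minimal Python sketch of the nearest-neighbor algorithm, assuming the instance is given as a symmetric distance matrix for a complete graph (the distances below are made up for illustration; the multifragment heuristic, which repeatedly adds the shortest edge that keeps the chosen edges a union of paths, is slightly longer to code):

    def nearest_neighbor_tour(dist, start=0):
        """Greedy TSP tour: always visit the nearest unvisited city next."""
        n = len(dist)
        tour, visited = [start], {start}
        while len(tour) < n:
            last = tour[-1]
            # pick the closest city not yet visited
            nxt = min((c for c in range(n) if c not in visited),
                      key=lambda c: dist[last][c])
            tour.append(nxt)
            visited.add(nxt)
        length = sum(dist[tour[k]][tour[k + 1]] for k in range(n - 1))
        length += dist[tour[-1]][start]  # return to the starting city
        return tour, length

    # Small symmetric sample instance
    d = [[0, 1, 2, 5],
         [1, 0, 8, 2],
         [2, 8, 0, 3],
         [5, 2, 3, 0]]
    print(nearest_neighbor_tour(d))  # ([0, 1, 3, 2], 8)

Instances can be constructed on which the tour this produces is arbitrarily worse than an optimal one, which is what the unbounded performance ratio means.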
Approximation Algorithms for the Knapsack Problem (greedy)
Step 1: Compute the value-to-weight ratios r_i = v_i/w_i, i = 1, ..., n, for the items given
Step 2: Sort the items in nonincreasing order of the ratios computed in Step 1
Step 3: Repeat the following operation until no item is left in the sorted list: if the current item on the list fits into the knapsack, place it in the knapsack and proceed to the next item; otherwise, just proceed to the next item
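A minimal Python sketch of these three steps (the function name and sample instance are illustrative):

    def greedy_knapsack(weights, values, capacity):
        n = len(weights)
        # Step 1: compute the value-to-weight ratios r_i = v_i / w_i
        ratios = [(values[i] / weights[i], i) for i in range(n)]
        # Step 2: sort the items in nonincreasing order of the ratios
        ratios.sort(reverse=True)
        # Step 3: scan the sorted list, taking every item that still fits
        taken, total_value, remaining = [], 0, capacity
        for _, i in ratios:
            if weights[i] <= remaining:
                taken.append(i)
                total_value += values[i]
                remaining -= weights[i]
        return total_value, taken

    # Sample instance: w = (7, 3, 4, 5), v = (42, 12, 40, 25), W = 10
    print(greedy_knapsack([7, 3, 4, 5], [42, 12, 40, 25], 10))  # (65, [2, 3])

On this instance the greedy value happens to be optimal, but in general the greedy solution can be arbitrarily far from an optimal one, so this algorithm's performance ratio is unbounded as well.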
A heuristic is a common-sense rule drawn from experience rather than from a mathematically proved assertion