Dynamic Programming

Three Basic Examples

The Knapsack Problem and Memory Functions

Memory Functions

The goal is to obtain a method that solves only the subproblems that are necessary and solves each of them only once. Such a method exists; it is based on using memory functions.
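One way to realize a memory function is sketched below for the knapsack problem: the table entry for subproblem F(i, j) (the best value achievable with the first i items and capacity j) is computed on demand and cached, so each needed subproblem is solved exactly once. The function name and 0-based Python lists are implementation choices, not part of any standard API.

```python
def knapsack_memoized(weights, values, capacity):
    """Top-down knapsack via a memory function: subproblem F(i, j) is
    computed only if it is actually needed, and only once."""
    memo = {}

    def f(i, j):
        # F(i, j): best value using the first i items with capacity j
        if i == 0 or j == 0:
            return 0
        if (i, j) not in memo:
            if weights[i - 1] > j:                 # item i does not fit
                memo[(i, j)] = f(i - 1, j)
            else:                                  # skip item i vs. take it
                memo[(i, j)] = max(f(i - 1, j),
                                   values[i - 1] + f(i - 1, j - weights[i - 1]))
        return memo[(i, j)]

    return f(len(weights), capacity)
```

Unlike the bottom-up table-filling version, this computes no entry that the final answer does not depend on.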

Since a majority of dynamic programming applications deal with optimization problems, we also need to mention a general principle that underlies such applications. Richard Bellman called it the principle of optimality.


Approximation Algorithms for NP-Hard Problems

In this section, we discuss a different approach to handling difficult problems of combinatorial optimization, such as the traveling salesman problem and the knapsack problem.

Approximation Algorithms for the Traveling Salesman Problem

Greedy Algorithms for the TSP

Nearest-neighbor algorithm
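The nearest-neighbor algorithm starts at some city and repeatedly moves to the closest as-yet-unvisited city, returning to the start at the end. A minimal sketch, assuming a symmetric distance matrix (the function name is illustrative):

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor heuristic: always visit the closest
    unvisited city next, then return to the starting city."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)        # close the tour
    return tour
```

The tour produced can be arbitrarily far from optimal in general, which is why it is classified as a heuristic rather than an approximation algorithm with a guaranteed ratio.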

Multifragment-heuristic algorithm
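The multifragment heuristic considers all edges in nondecreasing order of weight and accepts an edge as long as it neither gives some city degree three nor closes a cycle before all cities are included. A sketch under those rules, using a small union-find structure for the cycle test (all names are illustrative):

```python
def multifragment_edges(dist):
    """Multifragment heuristic: greedily pick cheap edges, rejecting
    any that would create a vertex of degree 3 or a premature cycle.
    Returns the n chosen edges, which form a Hamiltonian tour."""
    n = len(dist)
    edges = sorted((dist[u][v], u, v)
                   for u in range(n) for v in range(u + 1, n))
    degree = [0] * n
    comp = list(range(n))             # union-find parents

    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]   # path halving
            x = comp[x]
        return x

    chosen = []
    for w, u, v in edges:
        if degree[u] < 2 and degree[v] < 2:
            ru, rv = find(u), find(v)
            # the very last edge is allowed to close the tour
            if ru != rv or len(chosen) == n - 1:
                comp[ru] = rv
                degree[u] += 1
                degree[v] += 1
                chosen.append((u, v))
                if len(chosen) == n:
                    break
    return chosen
```

Returning the edge set rather than a vertex sequence keeps the sketch short; extracting the cyclic order from the n degree-2 edges is straightforward.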

Minimum-Spanning-Tree–Based Algorithms

Twice-around-the-tree algorithm
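The twice-around-the-tree algorithm builds a minimum spanning tree, walks around it, and shortcuts already-visited cities; for instances satisfying the triangle inequality the resulting tour is at most twice the length of an optimal one. A sketch combining Prim's algorithm with a preorder walk (names are illustrative):

```python
def twice_around_the_tree(dist):
    """Twice-around-the-tree heuristic: MST + preorder walk with
    shortcuts.  2-approximation on metric instances."""
    n = len(dist)
    # Prim's algorithm for the MST, rooted at vertex 0
    in_tree = [False] * n
    parent = [0] * n
    best = [float('inf')] * n
    best[0] = 0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: best[v])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    # preorder traversal of the MST = walk with repeated cities shortcut
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    tour.append(0)                    # close the tour
    return tour
```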

Christofides Algorithm

Local Search Heuristics
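The best-known local search heuristic for the TSP is 2-opt: starting from any tour, repeatedly replace a pair of tour edges by the pair that reconnects the two resulting paths, whenever that shortens the tour, until no improving exchange remains. A sketch assuming a closed tour (first city repeated at the end); the small tolerance guards against floating-point noise:

```python
def two_opt(tour, dist):
    """2-opt local search: reverse a tour segment whenever doing so
    shortens the tour; stop at a local optimum."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 2):
            for j in range(i + 1, n - 1):
                # swap edges (i-1, i) and (j, j+1) for (i-1, j) and (i, j+1)
                delta = (dist[tour[i - 1]][tour[j]]
                         + dist[tour[i]][tour[j + 1]]
                         - dist[tour[i - 1]][tour[i]]
                         - dist[tour[j]][tour[j + 1]])
                if delta < -1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour
```

Each accepted move strictly decreases the tour length, so the procedure terminates, but only at a local optimum with respect to 2-opt exchanges, not necessarily at a global one.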

Approximation Algorithms for the Knapsack Problem

Greedy Algorithms for the Knapsack Problem

Greedy algorithm for the discrete knapsack problem

Greedy algorithm for the continuous knapsack problem
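Both greedy versions process the items in nonincreasing order of their value-to-weight ratios; the continuous version may additionally take a fraction of the first item that does not fit, and is known to be optimal, whereas the discrete version is only a heuristic. A combined sketch (the function name and the `continuous` flag are implementation choices):

```python
def greedy_knapsack(weights, values, capacity, continuous=False):
    """Greedy knapsack: take items in nonincreasing value/weight order.
    With continuous=True, a fraction of the first non-fitting item is
    taken and the result is optimal; the discrete version (default)
    is a heuristic with no accuracy guarantee."""
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total = 0.0
    for i in order:
        if weights[i] <= capacity:        # item fits: take all of it
            total += values[i]
            capacity -= weights[i]
        elif continuous:                  # take the fitting fraction
            total += values[i] * capacity / weights[i]
            break
    return total
```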