Dynamic Programming:

Finds the global optimum --> achieved with memoization.

Memoization -> saving the intermediate values computed while solving the problem in a data structure such as an array or hash table, so that the same values aren't calculated repeatedly. This helps a lot for problems where a naive recursive solution has exponential complexity.

  • Bottom up
  • Makes choices knowing the optimal solutions to subproblems.
  • e.g. 0/1 knapsack
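As a sketch of the bottom-up DP idea, here is one common way to solve 0/1 knapsack (the function name and sample items are illustrative, not from the notes):

```python
def knapsack_01(weights, values, capacity):
    """Bottom-up DP for 0/1 knapsack: each item is taken whole or not at all."""
    n = len(weights)
    # dp[w] = best value achievable with capacity w using the items seen so far
    dp = [0] * (capacity + 1)
    for i in range(n):
        # iterate capacities downward so each item is used at most once
        for w in range(capacity, weights[i] - 1, -1):
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[capacity]

print(knapsack_01([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (take items of weight 3 and 4)
```

Each entry of the table is filled once from smaller subproblems, which is exactly the "optimal solution to subproblems" property the bullet above refers to.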

Greedy Approach:

Finds the local optimum of every subproblem. It is greedy in its approach, in the sense that it picks the best solution for each subproblem at the moment, which might not be the best solution globally.

  • Local optimization of each sub-problem.
  • Top down
  • Cannot handle overlapping subproblems.
  • e.g. fractional knapsack
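A minimal sketch of the fractional knapsack greedy strategy (function name and sample data are hypothetical): sort items by value density and keep taking the densest item, splitting the last one if needed.

```python
def fractional_knapsack(weights, values, capacity):
    """Greedy fractional knapsack: items may be taken in fractions."""
    # Greedy choice: highest value-per-weight first.
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0.0
    for w, v in items:
        if capacity <= 0:
            break
        take = min(w, capacity)      # take as much of the best item as fits
        total += v * (take / w)
        capacity -= take
    return total

print(fractional_knapsack([10, 20, 30], [60, 100, 120], 50))  # 240.0
```

The locally best choice (highest density) happens to be globally optimal here, which is why the fractional variant is solvable greedily while 0/1 knapsack is not.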

Main difference between Dynamic and Greedy:

Dynamic programming gets you the global optimum, while greedy gets you the local optimum for every subproblem, which in effect might not be the globally optimal solution to the problem.

When to use DP?

  • We examine the two key ingredients that an optimization problem must have in order for dynamic programming to apply: optimal substructure and overlapping subproblems.
  • A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. Whenever a problem exhibits optimal substructure, we have a good clue that dynamic programming might apply.
  • The second ingredient that an optimization problem must have for dynamic programming to apply is that the space of subproblems must be “small” in the sense that a recursive algorithm for the problem solves the same subproblems over and over, rather than always generating new subproblems. Typically, the total number of distinct subproblems is a polynomial in the input size. When a recursive algorithm revisits the same problem repeatedly, we say that the optimization problem has overlapping subproblems.
  • Dynamic-programming algorithms typically take advantage of overlapping subproblems by solving each subproblem once and then storing the solution in a table where it can be looked up when needed, using constant time per lookup.
  • Like divide and conquer, dynamic programming combines solutions to sub-problems. Dynamic programming is mainly used when solutions to the same subproblems are needed again and again. In dynamic programming, computed solutions to subproblems are stored in a table so that they don’t have to be recomputed. So dynamic programming is not useful when there are no common (overlapping) subproblems, because there is no point storing solutions that are never needed again. For example, binary search has no common subproblems. If we take the example of the following recursive program for Fibonacci numbers, many subproblems are solved again and again.
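The Fibonacci case can be sketched as follows: the naive recursion recomputes the same subproblems exponentially often, while a memoized version (here using the standard-library `lru_cache` as the table) solves each distinct subproblem once.

```python
from functools import lru_cache

def fib_naive(n):
    # Exponential: fib_naive(n-1) and fib_naive(n-2) both re-derive
    # the same smaller subproblems over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoized: each distinct subproblem is solved once and cached, so
    # computing fib_memo(n) takes only O(n) recursive calls.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(40))  # 102334155 — instant; fib_naive(40) would take many seconds
```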

Greedy example - Chvátal’s greedy set-covering heuristic
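A rough sketch of the heuristic (the function name and sample sets are illustrative): repeatedly pick the subset that covers the most still-uncovered elements. This yields a logarithmic-factor approximation, not necessarily the optimal cover.

```python
def greedy_set_cover(universe, subsets):
    """Chvátal-style greedy heuristic: pick the subset covering the most
    uncovered elements until the universe is covered. Assumes the given
    subsets can cover the whole universe."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        cover.append(best)
        uncovered -= best
    return cover

print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
# [{1, 2, 3}, {4, 5}]
```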

When to use Greedy:

  • The greedy-choice property and optimal substructure are the two key ingredients. If we can demonstrate that the problem has these properties, then we are well on the way to developing a greedy algorithm for it.
  • The first key ingredient is the greedy-choice property: we can assemble a globally optimal solution by making locally optimal (greedy) choices. In other words, when we are considering which choice to make, we make the choice that looks best in the current problem, without considering results from subproblems.
  • A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. This property is a key ingredient of assessing the applicability of dynamic programming as well as greedy algorithms.
