Q1. Use your own words to illustrate in what scenarios we should use a greedy algorithm or dynamic programming. Please give examples of when each paradigm works. (For example, we can apply dynamic programming to rod cutting; a greedy algorithm cannot work there because rod cutting needs the results of sub-rod-cutting cases to calculate the final result. Use examples outside the course material or try to illustrate in a general way.)

Ans:

What is Greedy Algorithm?

Greedy Algorithm finds the local optimum of every subproblem. It is greedy in its approach, in the sense that it takes the best available choice at each step, which might not lead to the best solution globally.

Greedy gives us a feasible solution which is not necessarily the global optimum. It uses a top-down approach.

When to use the Greedy Approach?

If the problem exhibits the following two properties:

  • greedy-choice property
  • optimal substructure

then, we can use the greedy approach for the problem.

Greedy-choice property means we can construct the solution by making locally optimal choices (taking the best option available at the moment). In doing so, it does not consider the results of other subproblems.

Optimal substructure means that the final optimal solution contains the optimal solutions of its subproblems.

Example:

Suppose a hungry person stands in front of a tree that bears fruit at two levels, lower and upper, and the lower-level fruits are worth less than the upper-level ones. Because the person is greedy, he consumes all the lower-level fruits, since they look like the best choice at the moment, and fills up his stomach with them.

This approach is greedy because the person chooses the local optimum solution instead of thinking ahead in the future.

Another example: you have to get from your home to a restaurant and there are 20 possible paths.

A greedy approach would be to leave the house, take the path you believe will take the least time, and keep changing your route along the way according to whatever looks like the best option at that very moment.

What is Dynamic Programming?

Dynamic Programming is a way of solving problems where we break the problem down into smaller subproblems, find the optimal solution to each subproblem only once, and store the results in some kind of data structure, essentially using memoization.

DP gives us a global optimum and uses a bottom-up approach.

When to use Dynamic Programming?

If the problem exhibits the following two properties:

  • Optimal substructure
  • Overlapping subproblems

then, we can use dynamic programming for the problem.

Optimal Substructure: the same as in the greedy approach.

Overlapping subproblems: Dynamic Programming breaks the problem down into subproblems and then combines their solutions. The computed results are stored in a table or some other data structure and reused whenever they are needed. If a problem generates brand-new subproblems at every stage, there is no point in storing the results, since they will never be needed again. Hence, dynamic programming is useful when the problem has overlapping subproblems.

Example:

DP:

Staircase problem:

There are n stairs, and a person standing at the bottom wants to reach the top. The person can climb either 1 or 2 stairs at a time. Count the number of ways the person can reach the top.

Assuming there are n stairs,

the person can reach the nth stair from either (n-1)th stair or (n-2)th stair.

So we can write it as :

ways(n) = ways(n-1) + ways(n-2)

It's easy to observe that the above recurrence generates the Fibonacci sequence.

Since we know that this recurrence computes the same subproblem values again and again, we can use dynamic programming's memoization technique to solve the problem.
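A minimal memoized sketch of the recurrence above (the function name `count_ways` is my own):

```python
def count_ways(n, memo=None):
    # Number of ways to climb n stairs taking 1 or 2 steps at a time:
    # ways(n) = ways(n-1) + ways(n-2), with ways(0) = ways(1) = 1.
    if memo is None:
        memo = {}
    if n <= 1:
        return 1
    if n not in memo:  # compute each subproblem only once
        memo[n] = count_ways(n - 1, memo) + count_ways(n - 2, memo)
    return memo[n]

print(count_ways(10))  # 89
```

Without the memo dictionary the same calls would be recomputed exponentially many times; with it, each value from 2 to n is computed once.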

_________________________________________________________________________________________________________________________

Q2. Greedy algorithms are also used widely in memory management. Given a list of blocks, in which the sizes are { 100, 150, 180, 200, 300 }, how would a greedy algorithm (grab the smallest block that works first) fit processes with sizes {10, 222, 198, 147, 178 }?

Ans:

Blocks : 100, 150, 180, 200, 300

(D) process 1: block 1, process 2: block 5, process 3: block 4, process 4: block 2, process 5: block 3
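The allocation above can be reproduced with a short best-fit sketch (the function and variable names are my own, not from the question):

```python
def best_fit(blocks, processes):
    # For each process, grab the smallest free block that is large enough.
    allocation = {}
    free = list(blocks)
    for p in processes:
        candidates = [b for b in free if b >= p]
        if candidates:
            chosen = min(candidates)   # smallest block that works
            allocation[p] = chosen
            free.remove(chosen)        # block is no longer free
    return allocation

print(best_fit([100, 150, 180, 200, 300], [10, 222, 198, 147, 178]))
# {10: 100, 222: 300, 198: 200, 147: 150, 178: 180}
```

This matches answer (D): process 1 → block 1, process 2 → block 5, process 3 → block 4, process 4 → block 2, process 5 → block 3.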


Q3. Which of the following is/are property/properties of a dynamic programming problem?

  • A. Optimal substructure
  • B. Overlapping subproblems
  • C. Greedy approach
  • D. Both optimal substructure and overlapping subproblems

Q4. If a problem can be broken into subproblems which are solved several times, the problem possesses ____________ property.

  • A. Overlapping subproblems
  • B. Optimal substructure
  • C. Memoization
  • D. Greedy

Q5. When dynamic programming is applied to a problem with overlapping subproblems, the time complexity of the resulting program typically will be significantly less than a straight-forward recursive approach.

A. True

B. False


Q6. In dynamic programming, the technique of storing the previously calculated values is called ___________

  • A. Saving value property
  • B. Hashing
  • C. Memoization
  • D. Mapping

Q8. Describe the Huffman encoding that will deal with the following letters and their corresponding frequencies. (g, 6), (d,13), (f,17), (b,18), (c, 29), (e, 30), (a, 37)

Ans:

Huffman encoding is a greedy algorithm for lossless compression.

It assigns variable-length codes to input characters, making sure that the character with the highest frequency gets the shortest code. The least frequent character gets the longest code.

Steps to build Huffman Tree
Input is an array of unique characters along with their frequencies of occurrence, and output is the Huffman Tree.

  1. Take all the unique characters and create a leaf node for each. Build a min heap of all the leaf nodes.
  2. Choose the two nodes with the minimum frequencies from the min heap.
  3. Construct a new internal node whose frequency is the sum of the frequencies of the two nodes chosen in the previous step. Make the first extracted node its left child and the other its right child. Add this node to the min heap.
  4. Repeat steps 2 and 3 until the heap contains only one node. That node is the root, and the tree is complete.
  5. Finally, assign 0 to every left edge of the tree and 1 to every right edge; the code for each character is obtained by traversing the tree from the root to that character's leaf.
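The steps above can be sketched with Python's `heapq` module (a sketch: left/right tie-breaking may differ between implementations, but for these frequencies the code lengths are forced):

```python
import heapq

def huffman_codes(freqs):
    # freqs: dict mapping symbol -> frequency. Returns symbol -> bit string.
    # Heap entries are (frequency, tie_breaker, node); a node is either a
    # symbol (leaf) or a (left, right) tuple (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two lowest-frequency nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    codes = {}
    def assign(node, code):
        if isinstance(node, tuple):          # internal node
            assign(node[0], code + "0")      # 0 on the left edge
            assign(node[1], code + "1")      # 1 on the right edge
        else:
            codes[node] = code or "0"
    assign(heap[0][2], "")
    return codes

codes = huffman_codes({"g": 6, "d": 13, "f": 17, "b": 18,
                       "c": 29, "e": 30, "a": 37})
print({s: len(c) for s, c in sorted(codes.items())})
# code lengths: a=2, e=2, b=3, c=3, f=3, d=4, g=4
```

For these frequencies the merges are 6+13=19, 17+18=35, 19+29=48, 30+35=65, 37+48=85, 65+85=150, so the most frequent letters (a, e) get 2-bit codes and the least frequent (g, d) get 4-bit codes.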

Q15. Explain why the time complexity of Huffman coding is O(n lg n) (this could be a full formal proof, but a good intuitive explanation is sufficient)?

Ans:

The simplest construction algorithm uses a min heap in which the node with the lowest frequency has the highest priority:

  1. Create a leaf node for each symbol and add it to the min heap.
  2. While there is more than one node in the heap (this loop runs n - 1 times):
    1. Extract the two nodes of lowest frequency (highest priority) from the heap (two extract-min operations).
    2. Create a new internal node with these two nodes as children and with frequency equal to the sum of the two nodes' frequencies.
    3. Add the new node to the heap (it is sifted into place, i.e., min-heapify).
  3. The remaining node is the root node and the tree is complete.

Each insertion into the heap (min-heapify) costs O(log n), and so does each extract-min. A finished tree has n leaf nodes and n - 1 internal nodes, and each of the n - 1 loop iterations performs two extract-min operations and one insertion. That is O(n) heap operations at O(log n) each, so the total complexity is O(n log n), where n is the number of symbols.


Q9. Huffman’s algorithm is a greedy algorithm because:

  • A. The frequencies are combined at each step
  • B. The two smallest probability nodes are chosen at each step
  • C. It involves drawing a Huffman tree
  • D. Both a and c.

Q10. A thief enters a store and sees the following items: Bag A worth $100 weighing 2 pounds, Bag B worth $10 weighing 2 pounds and Bag C worth $120 weighing 3 pounds. His Knapsack holds 4 pounds. What is his maximum profit according to the Fractional Knapsack Problem?

  • A. 120
  • B. 195
  • C. 180
  • D. None of the above

Explanation:

Fractional knapsack : https://www.youtube.com/watch?v=m1peWxrt6g&index=8&list=PLqM7alHXFySESatj68JKWHRVhoJ1BxtLW

Compute value/weight for each bag, sort in descending order, then take whole bags and finally a fraction of the next one:

Bag A: 100/2 = 50

Bag B: 10/2 = 5

Bag C: 120/3 = 40

Descending order: Bag A (50), Bag C (40), Bag B (5)

Capacity = 4 pounds: take all of Bag A (2 lb, $100), then 2/3 of Bag C (2 of its 3 lb, $120 * 2/3 = $80): 100 + 80 = 180, which is option C.
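A small sketch of the fractional knapsack computation above (names are mine):

```python
def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs. Sort by value/weight descending,
    # then take whole items until only a fraction of the next one fits.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    profit = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)     # whole item, or the fraction that fits
        profit += value * take / weight
        capacity -= take
    return profit

# Bag A ($100, 2 lb), Bag B ($10, 2 lb), Bag C ($120, 3 lb), capacity 4 lb
print(fractional_knapsack([(100, 2), (10, 2), (120, 3)], 4))  # 180.0
```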


Q12. The Lucas series is similar to the Fibonacci series, except that its base cases are L(0) = 2 and L(1) = 1. Please write memoized code for computing the n-th Lucas number. (Pseudo-code is fine here, but you can also write code in any common language like C, Java, Python, Ruby, etc.)

Ans:

Memoized code (Python):

memo = {0: 2, 1: 1}

def lucas(n):
    if n not in memo:
        memo[n] = lucas(n - 1) + lucas(n - 2)
    return memo[n]


Q13. Suppose you were to drive from St. Louis to Denver along I-70. Your gas tank, when full, holds enough gas to travel m miles, and you have a map that gives distances between gas stations along the route. Let d1 < d2 <... < dn be the locations of all the gas stations along the route where d-sub-i is the distance from St. Louis to the gas station. You can assume that the distance between neighboring gas stations is at most m miles. Your goal is to make as few gas stops as possible along the way. Give the most efficient algorithm you can find to determine at which gas stations you should stop and prove that your strategy yields an optimal solution. Be sure to give the time complexity of your algorithm as a function of n. (Pseudo code expected.)

Ans:

Algorithm:

The approach followed here is 'greedy': with a full tank, drive to the farthest gas station that is still within range, refuel there, and repeat until Denver is reachable.

Let d[1] < d[2] < ... < d[n] be the station distances from St. Louis, let dest be the total distance to Denver, and let m be the range of a full tank.

Pseudo code:

pos = 0            // distance at which we last had a full tank

i = 1

S = {}             // solution set of stops

while pos + m < dest {

  /* Greedy step: scan forward to the farthest station within range */

  while i <= n and d[i] <= pos + m {

    j = i

    i = i + 1

  }

  put station j in S

  pos = d[j]

}

Time complexity: each station is examined at most once by the forward scan, so the algorithm runs in O(n).

Proof of optimality (exchange argument): consider any optimal solution and suppose its first stop is at a station earlier than the greedy first stop. Replacing that stop with the greedy one keeps the solution feasible, since the greedy stop is farther along the route and every later stop stays reachable, and it uses no more stops. Repeating the exchange for each subsequent stop turns the optimal solution into the greedy one without increasing the number of stops, so the greedy strategy is optimal.
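A runnable sketch of the same greedy strategy (function and variable names are mine); each station is scanned once, so it runs in O(n):

```python
def gas_stops(stations, dest, m):
    # stations: sorted distances d1 < d2 < ... < dn from the start.
    # dest: total trip distance; m: miles on a full tank.
    # Greedy: from each refuel point, drive to the farthest station in range.
    stops = []
    pos = 0          # where we last had a full tank
    i = 0
    while pos + m < dest:
        farthest = None
        while i < len(stations) and stations[i] <= pos + m:
            farthest = i
            i += 1
        if farthest is None:
            raise ValueError("trip is infeasible")  # a gap exceeds m
        stops.append(farthest)
        pos = stations[farthest]
    return stops

print(gas_stops([50, 120, 200, 260, 350], dest=400, m=150))  # [1, 3]
```

In the example run, the car refuels at the stations 120 and 260 miles out (indices 1 and 3), the farthest reachable stations at each step.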


Q14. An example of set of coin denominations for which the greedy algorithm does not yield an optimal solution is {_________________}.

Ans.

Consider the following values:

Coin denominations: 25, 20, 10, 5, 1

Sum = 40

According to the greedy approach:

we would get: 1 * 25 + 1 * 10 + 1 * 5 = 3 coins summing up to 40.

Whereas the optimal solution would have been: 2 * 20 = 2 coins summing up to 40.

So in this case the Greedy Algorithm didn't yield an optimal solution.
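A quick sketch comparing the two approaches on this instance (function names are mine):

```python
def greedy_coins(denoms, total):
    # Greedy: repeatedly take the largest coin that still fits.
    coins = []
    for d in sorted(denoms, reverse=True):
        while total >= d:
            coins.append(d)
            total -= d
    return coins

def dp_min_coins(denoms, total):
    # Classic coin-change DP: best[t] = fewest coins summing to t.
    INF = float("inf")
    best = [0] + [INF] * total
    for t in range(1, total + 1):
        best[t] = min((best[t - d] + 1 for d in denoms if d <= t), default=INF)
    return best[total]

print(greedy_coins([25, 20, 10, 5, 1], 40))   # [25, 10, 5] -> 3 coins
print(dp_min_coins([25, 20, 10, 5, 1], 40))   # 2 (coins 20 + 20)
```

The DP considers every denomination at every amount, so it finds the 2-coin answer the greedy choice of 25 rules out.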

Memoized rod-cutting implementation (Python), as mentioned in Q1:

def cut_rod_sub(p, n, r, s):
    # r[n] caches the best revenue for a rod of length n (-1 = not yet computed);
    # s[n] records the first-piece length that achieves it.
    if r[n] >= 0:
        return r[n]
    r[n] = 0
    for i in range(1, n + 1):
        ret = p[i] + cut_rod_sub(p, n - i, r, s)
        if r[n] < ret:
            r[n] = ret
            s[n] = i
    return r[n]


def cut_rod(p, n):
    r = [-1 for _ in range(n + 1)]
    s = [i for i in range(n + 1)]
    cut_rod_sub(p, n, r, s)
    r = r[n]
    subs = []
    while n > 0:          # recover the list of piece lengths
        subs.append(s[n])
        n -= s[n]
    return r, subs
