An Analysis Using the Fibonacci Series in C++
Recursion and Dynamic Programming are like two sides of the same coin. Both are widely used, and in this article we are going to explore both approaches using a simple, well-known example.
First of all, what is recursion?
Recursion is an approach where the solution of a problem depends on the solutions of smaller instances of the same problem. In short, in recursion, a function calls itself to solve a problem.
Example: Finding the factorial of an integer, say 4.
We all know that 4! = 4x3x2x1 = 24. It can be done using a for loop like this.
```cpp
int fact = 1;
for (int i = 4; i > 0; i--) {   // count down from 4 to 1
    fact = fact * i;            // fact ends up as 4 * 3 * 2 * 1 = 24
}
```
But how does it look with recursion? The process is simple.
We can write 4! = 4x3!
Similarly;
3! = 3x2!
2! = 2x1!
1! = 1
Here, each factorial calculation happens recursively.
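As C++ code, the same idea could look like this minimal sketch (the function name factorial is my own choice, not from the original example):

```cpp
#include <iostream>

// Recursive factorial: n! = n * (n-1)!, with 1! = 1 as the base case.
long long factorial(int n) {
    if (n <= 1) {
        return 1;                     // base case
    }
    return n * factorial(n - 1);      // the function calls itself on a smaller instance
}

int main() {
    std::cout << factorial(4) << '\n';  // prints 24
    return 0;
}
```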
Now, let us see how we can find an element of the Fibonacci series, given its index, using this recursive approach, i.e. the n-th Fibonacci element.
When n = 5;
0, 1, 1, 2, 3, 5, 8, 13, 21, 34,....
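A minimal recursive sketch (assuming 0-based indexing, so fib(0) = 0 and fib(1) = 1; the function name fib is my own choice) looks like this:

```cpp
#include <iostream>

// Naive recursive Fibonacci: directly mirrors the recurrence F(n) = F(n-1) + F(n-2).
int fib(int n) {
    if (n <= 1) {
        return n;                     // base cases: fib(0) = 0, fib(1) = 1
    }
    return fib(n - 1) + fib(n - 2);   // two recursive calls per invocation
}

int main() {
    std::cout << fib(5) << '\n';      // prints 5
    return 0;
}
```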
How does it work?
Picture 5 - Recursion tree
In this implementation, you can see that the same fib(i) calls are repeated many times.
Thus, the program does redundant work that costs both memory and time. Even though it is a correct implementation, it is an inefficient one.
Time Complexity:
T(n) = T(n-1) + T(n-2) + O(1), which grows exponentially, roughly O(2^n).
Space Complexity:
The space complexity of the above program is O(n) because the recursion depth on the call stack reaches n.
What about Dynamic Programming?
Dynamic programming is a technique used to solve recursive problems with a highly overlapping subproblem structure. "Highly overlapping" means that the same subproblems recur again and again.
In Dynamic Programming, we trade space for time. That is, instead of recomputing the same states over and over (which costs a lot of time but no extra space), we use extra space to store the results of the sub-problems and save time.
The majority of the Dynamic Programming problems can be categorized into two types:
- Optimization problems (to select the most feasible solution so that the value of the required function is minimized or maximized)
- Combinatorial problems (to figure out the number of ways to do something or the probability of some event happening)
When developing a dynamic-programming algorithm, we follow a sequence of four steps:
- Characterize the structure of an optimal solution.
- Recursively define the value of an optimal solution (by expressing it in terms of optimal solutions for smaller sub-problems).
- Compute the value of an optimal solution, typically in a bottom-up fashion.
- Construct an optimal solution from computed information.
When we solve a problem with dynamic programming, we need to follow three steps:
- Determine the recurrence relationship that applies to the problem in question
- Initialize the start value of memory/array/matrix
- Make sure that whenever we make a "recursive call" (access the memoized solution of a subproblem), that subproblem has already been solved.
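For the Fibonacci example, those three steps are: the recurrence F(n) = F(n-1) + F(n-2); the start values F(0) = 0 and F(1) = 1; and a left-to-right fill order from index 2 upward, which guarantees that F(n-1) and F(n-2) are already stored when F(n) is computed.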
Top-down vs Bottom-up
Bottom-Up: I'll make hot water, then I'll add tea powder, then I'll add sugar, then I'll add milk powder, and I'll have a nice tea.
Top-Down: I'll have a nice tea. How? I'll add milk powder to a nice plain tea. How? I'll add enough sugar. How? I'll add enough tea powder. How? I'll make hot water.
That is not a great example, but I hope you get what I'm trying to say. In Top-Down, you start from the large solution straight away and describe how it is built from smaller solutions. In Bottom-Up, you start with the small solutions and build up. One can think of dynamic programming as a table-filling algorithm: you know the calculations you need to do, so you can choose the best order to do them in and skip the entries you don't need to fill.
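To make the contrast concrete, here is a hedged top-down sketch of the same Fibonacci problem: the recursion keeps the shape of the naive version, but a memo table (my own addition, not part of the original example) caches each answer the first time it is computed.

```cpp
#include <iostream>
#include <vector>

// Top-down (memoized) Fibonacci: recurse as usual, but cache every result.
int fibMemo(int n, std::vector<int>& memo) {
    if (n <= 1) {
        return n;                 // base cases: fib(0) = 0, fib(1) = 1
    }
    if (memo[n] != -1) {
        return memo[n];           // already solved: reuse instead of recomputing
    }
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
    return memo[n];
}

int main() {
    int n = 9;
    std::vector<int> memo(n + 1, -1);       // -1 marks "not computed yet"
    std::cout << fibMemo(n, memo) << '\n';  // prints 34
    return 0;
}
```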
Let us see how the same problem (the n-th Fibonacci element) can be solved using the Dynamic Programming approach.
As we have seen in the definition, Dynamic Programming deals with repeating subproblems.
In this methodology, we design the solution as if we were going to solve it recursively, but we solve it from the ground up, memoizing the solutions to the sub-problems (steps) on the way to the top.
Therefore, for the Fibonacci sequence, we first solve and memoize F(1) and F(2), then calculate F(3) using the two memoized values, and so on. This means that calculating each element of the sequence is O(1) because we already know the previous two.
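A bottom-up sketch along those lines (again assuming 0-based indexing; fibDP and table are my own names) could be:

```cpp
#include <iostream>
#include <vector>

// Bottom-up Fibonacci: fill a table from the smallest subproblems upward.
int fibDP(int n) {
    if (n <= 1) {
        return n;
    }
    std::vector<int> table(n + 1);
    table[0] = 0;                                // start values
    table[1] = 1;
    for (int i = 2; i <= n; ++i) {
        table[i] = table[i - 1] + table[i - 2];  // each step is O(1)
    }
    return table[n];
}

int main() {
    std::cout << fibDP(5) << '\n';   // prints 5
    return 0;
}
```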
What would be the output?
What happens here?
Unlike the recursive approach, Dynamic Programming avoids repeating the same computations. The nodes in red in the recursion tree would not be executed in Dynamic Programming, because those sub-problems have already been solved once (the blue nodes). This keeps the whole program from becoming exponential: each sub-problem is computed exactly once and then reused.
Time Complexity: O(n)
Space Complexity: O(n) for storing computed values.
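As a side note: since each Fibonacci value depends only on the previous two, the table can be shrunk to two variables, bringing the space down to O(1) while keeping the O(n) time. A minimal sketch (fibTwoVars is my own name):

```cpp
#include <iostream>

// Space-optimized bottom-up Fibonacci: keep only the last two values.
int fibTwoVars(int n) {
    if (n <= 1) {
        return n;
    }
    int prev = 0, curr = 1;           // fib(0) and fib(1)
    for (int i = 2; i <= n; ++i) {
        int next = prev + curr;       // fib(i) = fib(i-1) + fib(i-2)
        prev = curr;
        curr = next;
    }
    return curr;
}

int main() {
    std::cout << fibTwoVars(9) << '\n';   // prints 34
    return 0;
}
```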
One very last piece of wisdom: keep practicing Dynamic Programming. No matter how frustrating these algorithms may seem, repeatedly writing dynamic programs will make sub-problems and recurrences feel more natural.
The following are some of the most popular Dynamic Programming problems asked in interviews and assessments:
- Given a matrix consisting of 0's and 1's, find the maximum size sub-matrix consisting of only 1's.
- Given an array containing both positive and negative integers, find the contiguous subarray with the maximum sum.
- Longest Increasing Subsequence - Find the length of the longest subsequence of a given sequence such that all the elements are sorted in increasing/non-decreasing order. Many problems, such as box stacking and building bridges, reduce to this one, and these days interviewers often expect an O(N log N) solution.
- Edit Distance - Given two strings and a set of operations Change (C), insert (I), and delete (D), find the minimum number of edits (operations) required to transform one string into another.
- 0/1 Knapsack Problem - A thief robbing a store can carry a maximum weight of W in their knapsack. There are n items; the i-th item weighs wi and is worth vi dollars. Which items should the thief take to maximize the total value?
- Balanced Partition - You have a set of n integers each in the range 0 … K. Partition these integers into two subsets such that you minimize |S1 – S2|, where S1 and S2 denote the sums of the elements in each of the two subsets.
- Coin Change - Given a value N, if we want to make a change for N cents, and we have an infinite supply of each of S = { S1, S2, .., Sm} valued coins, how many ways can we make the change?
- Longest Common Subsequence - Find the longest common subsequence of two strings A and B where the elements are letters from the two strings and they should be in the same order.
- Longest Palindromic Subsequence - The question is the same as above but the subsequence should be palindromic as well.
- Minimum Number of Jumps - Given an array of integers where each element represents the maximum number of steps that can be made forward from that element, find the minimum number of jumps to reach the end of the array (starting from the first element).
Try to solve these and start to fall in love with Dynamic Programming!
Hope this helps. Share your thoughts too.