Introduction
I have dreamt of blogging about what I know for about two years. That is far too long, but today it finally comes alive. So, as a start, I thought I would write about my favorite field, "Algorithms".
Wikipedia defines an algorithm as follows:
"In mathematics and computer science, an algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning."
Introduction to Dynamic Programming (DP)
This method is somewhat similar to the divide-and-conquer method (hopefully you have an idea of what the divide-and-conquer approach is), as it also solves the main problem by solving subproblems. In the divide-and-conquer method, we first divide the main problem into disjoint subproblems, solve those subproblems recursively, and then combine their solutions to solve the original problem.
The dynamic programming (DP) approach, on the other hand, works well when the subproblems overlap (i.e., when subproblems share sub-subproblems). While a divide-and-conquer algorithm repeatedly solves the common subproblems, a DP algorithm solves each subproblem only once, saves its answer in a table, and retrieves that answer when the same subproblem appears again, rather than recomputing it. DP thus uses additional memory to save computation time.
So, as a summary, unlike divide and conquer, in dynamic programming:
- Subproblems overlap
- Subproblems may share subproblems
- Can store results and re-use them
- Divide and conquer, by contrast, performs redundant calculations (see the Fibonacci sketch just after this list)
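To see why overlapping subproblems matter, here is a minimal sketch in Java using the classic Fibonacci numbers. This is a standard illustration of my own choosing, not part of the triangle problem we will solve later:

public class FibNaive {
    // Plain recursion: fib(n - 2) is solved again inside fib(n - 1),
    // so the same subproblems are recomputed an exponential number of times.
    static long fib(int n) {
        if (n < 2) {
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(40)); // noticeably slow already at n = 40
    }
}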
Typically, when developing a DP algorithm, we follow four main steps:
- Characterize the structure of an optimal solution.
- Recursively define the value of an optimal solution.
- Compute the value of an optimal solution, typically in a bottom-up fashion.
- Construct an optimal solution from computed information.
There are usually two similar methods for implementing a dynamic programming approach.
The first approach is 'top-down with memoization'. In this approach, we write the procedure recursively in a natural manner, but modified to save the result of each subproblem (usually in an array or hash table). The procedure now first checks to see whether it has previously solved this subproblem. If so, it returns the saved value, saving further computation at this level; if not, the procedure computes the value in the usual manner. We say that the recursive procedure has been memoized; it “remembers” what results it has computed previously.
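Here is a minimal sketch of top-down memoization in Java, again using Fibonacci as the example (the memo array and the fibMemo name are my own illustrative choices):

public class FibMemo {
    // memo[n] caches fib(n); 0 means "not computed yet", which is safe
    // here because fib(n) > 0 for every n >= 1, and n < 2 is a base case.
    static long[] memo;

    static long fibMemo(int n) {
        if (n < 2) {
            return n;
        }
        if (memo[n] != 0) {
            return memo[n]; // previously solved: return the saved value
        }
        memo[n] = fibMemo(n - 1) + fibMemo(n - 2); // solve once, then save
        return memo[n];
    }

    public static void main(String[] args) {
        int n = 50;
        memo = new long[n + 1];
        System.out.println(fibMemo(n)); // prints 12586269025 for n = 50
    }
}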
The second approach is the bottom-up method. This approach typically depends on some natural notion of the “size” of a subproblem, such that solving any particular subproblem depends only on solving “smaller” subproblems. We sort the subproblems by size and solve them in size order, smallest first. When solving a particular subproblem, we have already solved all of the smaller subproblems its solution depends upon, and we have saved their solutions. We solve each subproblem only once, and when we first see it, we have already solved all of its prerequisite subproblems.
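And the same computation in the bottom-up style, as a sketch: subproblems are solved in increasing order of n, so every value a subproblem depends on is already in the table when it is needed:

public class FibBottomUp {
    static long fib(int n) {
        if (n < 2) {
            return n;
        }
        long[] table = new long[n + 1];
        table[1] = 1; // base cases: table[0] = 0, table[1] = 1
        for (int i = 2; i <= n; i++) {
            // Both smaller subproblems are already solved and stored.
            table[i] = table[i - 1] + table[i - 2];
        }
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025, with no recursion at all
    }
}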
These two approaches yield algorithms with the same asymptotic running time, except in unusual circumstances where the top-down approach does not actually recurse to examine all possible subproblems. The bottom-up approach often has much better constant factors, since it has less overhead for procedure calls.
However, in this article we'll only discuss the memoization approach further.
Memoization
Wikipedia defines memoization as follows:
"In computing, memoization is an optimization technique used primarily to speed up computer programs by having function calls avoid repeating the calculation of results for previously processed inputs. Memoization has also been used in other contexts (and for purposes other than speed gains), such as in simple mutually recursive descent parsing in a general top-down parsing algorithm that accommodates ambiguity and left recursion in polynomial time and space. Although related to caching, memoization refers to a specific case of this optimization, distinguishing it from forms of caching such as buffering or page replacement. In the context of some logic programming languages, memoization is also known as tabling"
However, I think it is easier to understand this concept through an example, so I chose problem 67 from ProjectEuler.net. The problem is to find the maximum path sum in a given pyramid of numbers.
For example:
By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.
   3
  7 4
 2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Yes, it is possible to find the answer for this small triangle using a brute-force method (by trying out every route). But the actual problem gives a triangle with one hundred rows, so there are 2^99 different paths in total. Even if you could check one trillion (10^12) routes every second, it would take over twenty billion years to check them all: 2^99 is about 6.3 × 10^29 routes, and 6.3 × 10^29 / 10^12 = 6.3 × 10^17 seconds, which is roughly 2 × 10^10 years.
So now you can see that brute force is not a feasible method. Here is a solution to this problem coded in Java; it saves each subproblem's answer (the best path sum down to each position) in a table and reuses it, in the spirit of memoization, although the table here is filled bottom-up:
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class Euler67 {

    public void solution() throws FileNotFoundException {
        // Read the flattened triangle (row by row, left to right) from input.txt.
        Scanner sc = new Scanner(new File("input.txt"));
        int[] tree = new int[10000];
        int k = 0;
        while (sc.hasNextInt()) {
            int tmp = sc.nextInt();
            if (tmp == -1) {
                break; // optional -1 sentinel marks the end of the input
            }
            tree[k++] = tmp;
        }
        sc.close();

        // trn[i] is the i-th triangular number: the number of elements in the
        // first i rows of the triangle. It maps row boundaries to flat indices.
        int[] trn = new int[10000];
        for (int i = 1; i < trn.length; i++) {
            trn[i] = trn[i - 1] + i;
        }

        // val[i] holds the maximum path sum from the apex down to the element
        // stored at tree[i - 1]; each subproblem is solved exactly once and
        // its answer saved here for reuse (val is 1-based).
        int[] val = new int[tree.length];
        val[1] = tree[0];
        int j = 1; // index of the row currently being filled in
        int one, two;
        for (int i = 2; i < val.length; i++) {
            // Flat indices (into val) of the two candidate parents
            // directly above the current element.
            one = trn[j - 1] - trn[j] + i;
            two = one - 1;
            if (one - 1 == trn[j - 1]) {
                // Leftmost element of its row: only one parent above.
                val[i] = val[one] + tree[i - 1];
            } else if (two == trn[j]) {
                // Rightmost element of its row: only one parent above.
                val[i] = val[two] + tree[i - 1];
            } else {
                // Interior element: take the better of the two parents.
                val[i] = Math.max(val[one], val[two]) + tree[i - 1];
            }
            if (i >= trn[j + 1]) {
                j++; // we have reached the end of the current row
            }
        }

        // Every stored value is a valid top-to-somewhere path sum, and the
        // best path ends on the bottom row, so the answer is the maximum.
        int best = 0;
        for (int v : val) {
            best = Math.max(best, v);
        }
        System.out.println(best);
    }

    public static void main(String[] args) throws FileNotFoundException {
        new Euler67().solution();
    }
}
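To run this, input.txt should contain the triangle's numbers in row order, separated by whitespace, optionally ending with a -1 sentinel (the file name and the sentinel are just this program's own conventions; Project Euler supplies the actual 100-row triangle as a downloadable text file). For the small example triangle above, input.txt would be:

3
7 4
2 4 6
8 5 9 3
-1

and the program prints 23.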