There are four towers, named A, B, C, and D respectively.
Initially there are $$$n$$$ disks. All disks are different in size.
The disks are initially stacked on tower A, increasing in size from top to bottom.
You can't put one disk onto another disk that is smaller than it.
The goal is to transfer all of the disks from tower A to tower D in the smallest number of moves.
The above is the description of the problem.
The solution uses $$$f(n)$$$ to denote the minimum number of moves to transfer $$$n$$$ disks, increasing in size from top to bottom, from tower A to tower D. It uses $$$d(n)$$$ to denote the minimum number of moves to transfer $$$n$$$ such disks from tower A to tower C without the help of tower D (that is, you may never place a disk on tower D).
Then the solution claims that $$$f(n) = \min\limits_{1 \le i < n} (2 \cdot f(i) + d(n - i))$$$, but I don't know why. Please give me a proof, thanks!
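For concreteness (my addition, not part of the original post): since $$$d$$$ is just the classic 3-peg Hanoi problem played on towers A, B, C, we have $$$d(n) = 2^n - 1$$$, and the recurrence is easy to tabulate:

```python
from functools import lru_cache

def d(n):
    # 3-peg Tower of Hanoi: moving n disks from A to C without tower D
    # takes 2^n - 1 moves (classic result).
    return 2 ** n - 1

@lru_cache(maxsize=None)
def f(n):
    # The recurrence claimed by the solution:
    # f(n) = min over 1 <= i < n of 2*f(i) + d(n - i)
    if n == 1:
        return 1
    return min(2 * f(i) + d(n - i) for i in range(1, n))

print([f(n) for n in range(1, 7)])  # -> [1, 3, 5, 9, 13, 17]
```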

Consider the smallest $$$i$$$ disks on tower A. We first move them to some tower other than D, say B; this takes at least $$$f(i)$$$ moves. Then we have to move the remaining $$$n - i$$$ disks to tower D without using tower B, which takes $$$d(n - i)$$$ moves. Finally, the $$$i$$$ disks on tower B are moved onto D in another $$$f(i)$$$ moves. Thus for a fixed $$$i$$$ the cost is $$$2f(i) + d(n - i)$$$; iterating over $$$i$$$, we pick the one that gives the minimum value of this expression.
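As a sanity check (my own sketch, not part of the answer above), a brute-force breadth-first search over all configurations of a small number of disks on 4 pegs confirms that the recurrence really does give the optimum, assuming $$$d(n) = 2^n - 1$$$:

```python
from collections import deque
from functools import lru_cache

def d(n):
    # 3-peg Hanoi count, matching the post's definition of d.
    return 2 ** n - 1

@lru_cache(maxsize=None)
def f(n):
    # The recurrence under discussion.
    if n == 1:
        return 1
    return min(2 * f(i) + d(n - i) for i in range(1, n))

def optimal_by_bfs(n):
    """True optimum for n disks on 4 pegs, by breadth-first search.

    A state is a tuple s where s[k] is the peg (0..3) holding disk k,
    with disk 0 the smallest.
    """
    start, goal = (0,) * n, (3,) * n
    dist = {start: 0}
    q = deque([start])
    while q:
        s = q.popleft()
        if s == goal:
            return dist[s]
        blocked = set()  # pegs already topped by a smaller disk
        for k in range(n):  # consider disks from smallest to largest
            peg = s[k]
            if peg not in blocked:
                # disk k is on top of its peg; it may move to any peg
                # holding no smaller disk
                for dest in range(4):
                    if dest != peg and dest not in blocked:
                        t = s[:k] + (dest,) + s[k + 1:]
                        if t not in dist:
                            dist[t] = dist[s] + 1
                            q.append(t)
                blocked.add(peg)

print([optimal_by_bfs(n) for n in range(1, 6)])  # -> [1, 3, 5, 9, 13]
```

For small $$$n$$$ the BFS values coincide with $$$f(n)$$$, which at least rules out the recurrence overcounting on these instances.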
But I wonder why I should move the smallest $$$i$$$ disks from A to B first. What if I move disk 1, disk 2, ..., disk $$$j$$$ ($$$j < i - 1$$$) and disk $$$i$$$ onto tower B? In short, what if the sequence of disks on tower B is not contiguous?
Let me make the question clear. Suppose we first move the smallest disk from tower A to tower B, and then move the rest of the disks from tower A to tower D without using tower B. That gives $$$f(n) = 2 + d(n - 1)$$$, but this is known to be wrong. Why is that solution right while this one is wrong?
What you describe is taking $$$i = 1$$$ in the given equation. Why should that particular choice necessarily achieve the minimum?
Yes, but please note that I used the same pattern you used to prove the algorithm's correctness. Since this pattern doesn't give the optimum when $$$i = 1$$$, why must it be correct for an arbitrary $$$i$$$?
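To see the difference numerically (my own illustration, assuming $$$d(n) = 2^n - 1$$$): forcing $$$i = 1$$$ gives $$$2 + d(n-1)$$$, which first falls behind the full minimization at $$$n = 5$$$:

```python
from functools import lru_cache

def d(n):
    # 3-peg Hanoi count, as in the post's definition of d.
    return 2 ** n - 1

@lru_cache(maxsize=None)
def f(n):
    if n == 1:
        return 1
    return min(2 * f(i) + d(n - i) for i in range(1, n))

for n in range(2, 8):
    forced = 2 * f(1) + d(n - 1)  # always splitting off only the smallest disk
    print(n, forced, f(n))
# At n = 5 the forced choice costs 17 while f(5) = 13 (best split is i = 2),
# and the gap grows: 33 vs 17 at n = 6, 65 vs 25 at n = 7.
```

So the recurrence doesn't assert any single $$$i$$$ is right; it takes the minimum over all splits, and the optimal split size grows with $$$n$$$.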
Auto comment: topic has been updated by 040902fyn (previous revision, new revision, compare).