I am getting WA on sample case 2 (with an error of 0.01) for this question (Problem A — Casino): http://mirror.codeforces.com/gym/100357
My approach is as follows:
Since the player always chooses the best possible bet, for any amount of money we have, losing a bet and falling back to a smaller amount can never be part of the best strategy. Suppose dp1[a] denotes the maximum probability of winning starting from amount a, and we also store the index of the best bet at each amount.
dp1[a] = max over all bets j (with a + w[j] in range) { dp1[a + w[j]] * p[j]/s };
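For concreteness, here is a minimal sketch of how I read this first DP. The toy instance, the target T, and the bet semantics (bet j pays w[j] extra units with probability p[j]/s, and you can't overshoot T) are my assumptions from the description above, not from the statement:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Hypothetical instance so the sketch runs stand-alone: n bets,
    // bet j pays w[j] extra units on a win, which happens with
    // probability p[j]/s; the player wins on reaching exactly T.
    const int n = 2, s = 2, T = 4;
    vector<int> w = {1, 2}, p = {1, 1};

    // dp1[a]  = max probability of climbing from amount a to T through
    //           wins only (per the "losing never helps" argument above)
    // best[a] = index of the bet achieving that maximum at amount a
    vector<double> dp1(T + 1, 0.0);
    vector<int> best(T + 1, -1);
    dp1[T] = 1.0;
    for (int a = T - 1; a >= 0; a--)
        for (int j = 0; j < n; j++) {
            if (a + w[j] > T) continue;          // assumed: no overshooting T
            double cand = dp1[a + w[j]] * p[j] / s;
            if (cand > dp1[a]) { dp1[a] = cand; best[a] = j; }
        }

    for (int a = 0; a <= T; a++)
        printf("a=%d  dp1=%.6f  best=%d\n", a, dp1[a], best[a]);
}
```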
Now we have the best bet stored for every amount of money. Let dp[a][b] denote the probability that at time "a" we hold "b" units of money. We can reach amount b from any state whose money plus the winnings of its best bet equals b, which gives a DP relation over time. Finally, we can run this DP for a large number of time steps, say 10000, until the distribution becomes stable to within an error of 10^-9 (see the sketch below). Alternatively, we can apply matrix exponentiation to the transition and compute the result for a much larger time limit.
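And here is a sketch of that second phase, the fixed-point iteration over time. Where a lost bet sends the player is not specified in the description above, so this sketch assumes a loss resets the money to the starting amount; that line is a placeholder to adjust to the actual statement. The best[] values are taken from the first sketch's toy instance:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Same toy instance as the first sketch; best[] is what that DP
    // produces for it.  ASSUMPTION: a lost bet resets money to `start`.
    const int T = 4, s = 2, start = 0;
    vector<int> w = {1, 2}, p = {1, 1};
    vector<int> best = {1, 0, 1, 0};

    vector<double> cur(T + 1, 0.0), nxt(T + 1);
    cur[start] = 1.0;                       // all mass at the start amount
    double won = 0.0;                       // total mass absorbed at T so far

    for (int step = 0; step < 100000; step++) {
        fill(nxt.begin(), nxt.end(), 0.0);
        double inPlay = 0.0;
        for (int b = 0; b < T; b++) {
            if (cur[b] == 0.0) continue;
            int j = best[b];
            double pw = (double)p[j] / s;
            int up = min(b + w[j], T);       // capped defensively; best[]
            nxt[up]    += cur[b] * pw;       //   here never overshoots
            nxt[start] += cur[b] * (1.0 - pw); // loss branch (assumption)
            inPlay += cur[b];
        }
        won += nxt[T];                       // newly absorbed probability
        nxt[T] = 0.0;                        // T is absorbing; remove it
        swap(cur, nxt);
        if (inPlay < 1e-9) break;            // distribution has stabilized
    }
    printf("%.9f\n", won);
}
```

The same per-step transition could instead be packed into a (T+1)x(T+1) matrix and raised to a large power, which is the matrix-exponentiation variant mentioned above.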
Please help me; I can't understand why my method is wrong. Even if you cannot follow my approach, it would be great if you could explain how to solve the question.