Last night I worked on a problem from Codeforces Round 823: 1730B - Meeting on the Line.
I made two submissions: the accepted one uses setprecision, while the wrong-answer one does not.
When I looked into the cause, I stumbled upon the following message from the checker: wrong answer 36th numbers differ - expected: '40759558.0000000', found: '40759600.0000000', error = '0.0000010'
How come the difference is so large? I know floating point has its own "weaknesses" when it comes to precision, but how can using setprecision versus not using it produce such a different outcome?
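For reference, here is a minimal sketch of the kind of difference I mean. The value 40759558.0 is taken from the checker message above; the variable name is just for illustration, not my actual submission:

```cpp
#include <iomanip>
#include <iostream>

int main() {
    double ans = 40759558.0;  // the value the checker expected

    // Without setprecision, cout keeps only 6 significant digits by default,
    // so this prints 4.07596e+07, which reads back as 40759600.
    std::cout << ans << '\n';

    // With a higher precision, all digits survive and this prints 40759558.
    std::cout << std::setprecision(10) << ans << '\n';
}
```

The second line matches the expected answer, while the first rounds to exactly the value the checker reported as "found".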
I'd appreciate any answers because I'm curious. Thanks!