### DCSekhara's blog

By DCSekhara, history, 22 months ago,

Suppose we are working in a setting where we only have integers, and the only operations we perform on them are multiplication, addition, subtraction, and comparisons.

In such a scenario, is it safe to replace all integers with doubles? (Safe in the sense of precision issues.)

• +3

 » 22 months ago, # |   -23 I think it is safe, since I've never had any problems. Only comparisons might be a problem because of decimal places, but you'll never have those decimal places anyway, since you are working only with integers.
 » 22 months ago, # | ← Rev. 2 →   +18 There are so-called safe integers: the integers which are exactly representable in a double type. The name comes from ECMAScript, which does not even have an integer type and always uses floating-point numbers (barring some corner cases). All integers from $-2^{53}$ to $2^{53}$ are safe, which means you can treat double just like an integral type as long as you stay within these bounds. For an 80-bit IEEE 754 type, that is, long double on *nix GCC, the range is $-2^{64}$ to $2^{64}$.
 » 22 months ago, # |   +3 Hmm, it probably depends on what kind of integers and what kind of doubles we're talking about, right? Because double has gaps between very big integers, and for sufficiently big integers we could probably use that to break a comparison.
•  » » 22 months ago, # ^ | ← Rev. 2 →   0 Yeah, I missed the point that doubles have gaps between very large integers. Thanks!
 » 22 months ago, # |   +16 Why would you want to do this, though?
•  » » 22 months ago, # ^ |   +4 I saw a question which had to deal with 100-bit integers. I did it using a custom BigInt class, but some other solution that just used double passed. Hence the doubt.
 » 22 months ago, # |   0 nice blog