I know that floating point numbers have limited precision, and that digits beyond that precision are not reliable.
But what if the equation used to calculate a number is the same? Can I assume the outcome would be the same too?
For example, say we have two float numbers `x` and `y`. Can we assume the result of `x/y` from machine 1 is exactly the same as the result from machine 2, i.e. that an `==` comparison would return `true`?
> But what if the equation used to calculate the number is the same? Can I assume the outcome would be the same too?
No, not necessarily.
In particular, in some situations the JIT is permitted to use a more accurate intermediate representation - e.g. 80 bits when your original data is 64 bits - whereas in other situations it won't. That can result in seeing different results when the processor architecture, runtime, or JIT version differs, or when details of the method influence what the JIT does (e.g. whether there's a `try` block in the method...).

From the C# 5 specification, section 4.1.6:
Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form `x * y / z`, where the multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a higher range format may cause a finite result to be produced instead of an infinity.
See more on this question at Stack Overflow.