Mathematically, 0.9 recurring can be shown to be equal to 1. This question, however, is not about infinity, convergence, or the maths behind that identity.
That equality can be reproduced using doubles in C# with the following code.
var oneOverNine = 1d / 9d;
var resultTimesNine = oneOverNine * 9d;
Using the code above, (resultTimesNine == 1d) evaluates to true.
When decimals are used instead, the same comparison evaluates to false; my question, however, is not about the differing precision of double and decimal.
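For completeness, here is a minimal sketch of the decimal version I am comparing against (the Dec-suffixed names are just for illustration):

var oneOverNineDec = 1m / 9m;                   // decimal division of 1 by 9
var resultTimesNineDec = oneOverNineDec * 9m;   // multiply back by 9

Console.WriteLine(resultTimesNineDec == 1m);    // prints False here, unlike the double case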
Since neither type has infinite precision, how and why does double preserve this equality where decimal does not? What is happening, literally 'between the lines' of the code above, with regard to how the oneOverNine variable is stored in memory?
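To frame what I mean by 'how it is stored': I assume the stored representation can be inspected along roughly these lines, using BitConverter and the round-trip format specifier (the bit pattern in the comment is my expectation, not something I am asserting):

var oneOverNine = 1d / 9d;

// Raw IEEE 754 bit pattern of the stored double
// (I expect something along the lines of 3FBC71C71C71C71C).
Console.WriteLine(BitConverter.DoubleToInt64Bits(oneOverNine).ToString("X16"));

// Round-trip string; the stored value is a binary approximation
// of one ninth, not an exact 0.111... recurring.
Console.WriteLine(oneOverNine.ToString("R"));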