Random.NextDouble() (which returns a Double in the range [0.0, 1.0)) is sometimes multiplied by a large Int64 (say, long big = 9000000000L), and the result floored, to obtain a random Int64 larger than anything Random.Next() (which returns an Int32 in the range [0, Int32.MaxValue)) can produce:
Random r = new Random();
long big = 9000000000L;
// NextDouble() is in [0.0, 1.0), so the product lies in [0.0, big);
// the cast truncates toward zero, which floors a nonnegative value.
long answer = (long) (r.NextDouble() * big);
It seems to me that the total number of unique Double values in the range [0.0, 1.0) provides an upper bound on the number of unique Int64 values this can generate. A loose upper bound, in fact, since many distinct Doubles will map to the same Int64.
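To make the collisions concrete, here is a small check (assuming the usual IEEE 754 bit layout, where adding one to the bit pattern of a positive double yields the next representable double):

long big = 9000000000L;
double d1 = 0.5;
// the next representable double above 0.5 (bit pattern plus one)
double d2 = BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(d1) + 1);
Console.WriteLine((long)(d1 * big)); // prints 4500000000
Console.WriteLine((long)(d2 * big)); // prints 4500000000 as well: two Doubles, one Int64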
Hence, I would like to know: what is the total number of unique values for a double in the range [0.0, 1.0)?
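My hunch is that they can be counted straight from the bit patterns, since nonnegative IEEE 754 doubles order the same way as their bit patterns do when read as integers; a sketch of that count (which includes +0.0 and the subnormals), though I would welcome confirmation that the reasoning is sound:

long loBits = BitConverter.DoubleToInt64Bits(0.0); // bit pattern 0x0000000000000000
long hiBits = BitConverter.DoubleToInt64Bits(1.0); // bit pattern 0x3FF0000000000000
// every Int64 in [loBits, hiBits) is the bit pattern of exactly one Double in [0.0, 1.0)
Console.WriteLine(hiBits - loBits); // 4607182418800017408, i.e. 1023 * 2^52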
Even better if you can tell me the largest value "big" can take such that every value in the range [0, big) is a possible "answer", and whether the distribution of "answer" is uniform, assuming that Random.NextDouble() is uniform.
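For the uniformity half, the rough empirical probe I have in mind looks like this (a sketch only; the bucket count and sample size are arbitrary choices):

Random r = new Random();
long big = 9000000000L;
int buckets = 10;
long[] counts = new long[buckets];
for (int i = 0; i < 1000000; i++)
{
    long answer = (long)(r.NextDouble() * big);
    counts[answer * buckets / big]++; // map [0, big) onto ten equal buckets
}
for (int b = 0; b < buckets; b++)
    Console.WriteLine("bucket {0}: {1}", b, counts[b]);

A coarse histogram like this can only catch gross bias, though; it says nothing about whether every value in [0, big) is even reachable, which is the other half of what I am asking.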
Edit: Double (double) here refers to an IEEE 754 double-precision floating-point number, while Int64 (long) and Int32 (int) refer to 64-bit and 32-bit signed two's-complement integers, respectively.
Inspired by this question: Generating 10 digits unique random number in java
While I used C#, this question is language-agnostic and is more about discrete mathematics than programming. It bothers me not mainly out of mathematical curiosity, but as a programmer who wants to use a formula only if it does what it is supposed to do, and from a security viewpoint.