How to divide two signed 64-bit numbers on a 32-bit architecture?
I'm not asking for any code, just an algorithm I can implement. Assume I have instructions for unsigned and signed division available (div and idiv).
I'm open to dividing by repeated subtraction, if it can be done that way. It needs to work for signed numbers.
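For concreteness, here's roughly the shape of algorithm I'm picturing, written as plain C standing in for assembly. The `u64p` struct and helper names are placeholders I made up for two 32-bit halves: strip the signs, do unsigned shift-and-subtract (restoring) division, then negate the quotient if the operand signs differed.

```c
#include <stdint.h>

/* A 64-bit value as two 32-bit halves, the way a 32-bit machine holds it. */
typedef struct { uint32_t hi, lo; } u64p;

static int u64p_ge(u64p a, u64p b) {            /* a >= b ? */
    return a.hi != b.hi ? a.hi > b.hi : a.lo >= b.lo;
}

static u64p u64p_sub(u64p a, u64p b) {          /* a - b, propagating the borrow */
    u64p r = { a.hi - b.hi - (a.lo < b.lo), a.lo - b.lo };
    return r;
}

static u64p u64p_neg(u64p a) {                  /* two's-complement negate: ~a + 1 */
    u64p r = { ~a.hi + (a.lo == 0), ~a.lo + 1 };
    return r;
}

/* Unsigned 64/64 restoring (shift-and-subtract) division: one bit per step. */
static u64p u64p_div(u64p n, u64p d) {
    u64p q = {0, 0}, r = {0, 0};
    for (int i = 63; i >= 0; i--) {
        /* r = (r << 1) | bit i of the dividend */
        r.hi = (r.hi << 1) | (r.lo >> 31);
        r.lo = (r.lo << 1) | ((i >= 32 ? n.hi >> (i - 32) : n.lo >> i) & 1);
        if (u64p_ge(r, d)) {                    /* divisor fits: subtract, set bit */
            r = u64p_sub(r, d);
            if (i >= 32) q.hi |= 1u << (i - 32);
            else         q.lo |= 1u << i;
        }
    }
    return q;
}

/* Signed version: divide the magnitudes, restore the sign of the quotient. */
static u64p s64p_div(u64p n, u64p d) {
    int neg = ((n.hi ^ d.hi) & 0x80000000u) != 0;
    if (n.hi & 0x80000000u) n = u64p_neg(n);
    if (d.hi & 0x80000000u) d = u64p_neg(d);
    u64p q = u64p_div(n, d);
    return neg ? u64p_neg(q) : q;
}
```

(This sketch ignores division by zero and the INT64_MIN / -1 overflow case, and it costs one iteration per bit, so I'd hope an answer can do better than that.)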
(editor's note: the first version of this question used the term "double precision", hence the comments asking for clarification on floating point vs. BigInteger).