3

When my friends asked me why the size of an integer changes from architecture to architecture, I told them that an integer is handled by an assembly instruction such as MOVW (move word), and the word size is 16 bits on some architectures and 32 bits on others, so that is why it changes from architecture to architecture.

Is the above explanation correct? Please tell me if I am wrong.

And another question:

Why is the size of a float always 32 bits regardless of architecture? Is it handled by the hardware?

  • possible duplicate of [Why is float not a double on a 64-bit system?](http://stackoverflow.com/questions/12255625/why-is-float-not-a-double-on-a-64-bit-system) – Patrick Hofman Feb 23 '15 at 10:48
  • What do you mean by "architecture"? Can you give a couple of examples of different ones? – musefan Feb 23 '15 at 10:56

3 Answers

3

Because the IEEE-754 standard describes 32-bit and 64-bit floating-point numbers in full detail, down to the last bit. I don't know whether the C definition mandates IEEE-754 (probably not), but surely any serious mathematical programmer expects IEEE-754.

The difference from int: int is defined as the most efficient number type (most of the time, the size of a register). Some compilers seem to favor 32-bit ints on 64-bit machines due to backward compatibility. But floating-point sizes were never subject to compatibility issues in the same way.

Markus Kull
  • 1,471
  • 13
  • 16
  • Floating-point sizes are subject to compatibility issues. When he was consulting for Intel, helping design the 8087, Kahan foresaw successive generations of 10-byte, then 12-byte floating-point types before arriving to cheap quad-precision for everyone. The fact this did not happen can be attributed to compatibility concerns, and even the 10-byte extended format is becoming obsolete (although it's still very useful when used correctly) because it is not double-precision. – Pascal Cuoq Feb 23 '15 at 12:45
  • The quote starts “For now the 10-byte Extended format is a tolerable compromise between the value of extra-precise arithmetic and the price of implementing it to run fast, …” – Pascal Cuoq Feb 23 '15 at 12:47
2

The current level of agreement on floating point formats is a consequence of the IEEE 754 standard, as others have pointed out.

Before that standard became dominant, there was no such agreement. For example, the Cray X-MP computers had a 64-bit floating-point type as their single-precision float.

Patricia Shanahan
  • 25,849
  • 4
  • 38
  • 75
  • 1
    Borland's Turbo Pascal had a Real type that had 48 bits. Don't remember the exact details. Only when built-in FPUs became en vogue, they introduced Single (32 bit), Double (64 bit) and Extended (80 bit) FPU-supported types. – Rudy Velthuis Feb 24 '15 at 16:36
1

An integer can still be 32 bits on a 64-bit machine. It depends mostly on the compiler, rather than on the machine. The size of a pointer, on the other hand, does typically change to 64 bits on 64-bit machines, since memory addresses are 64 bits wide. That means your "move word" argument isn't valid on its own.

A float is then still a float: usually it is 32 bits, but implementations are free to deviate from that. I think this has to do with the processor architecture standards we commonly target.

Patrick Hofman
  • 153,850
  • 22
  • 249
  • 325