I am having trouble understanding how exactly my computer does bitwise operations depending on endianness. I've read this thread and this article, and I think I have confirmed that my machine is little endian (I have tried several programs described in both sources, and all of them report that my machine is indeed little endian). I have the following macros defined that use SDL to swap 2-, 4- and 8-byte values when needed:
#if SDL_BYTEORDER == SDL_BIG_ENDIAN /* host already uses network (big-endian) byte order */
#define HTON16(n) (n)
#define NTOH16(n) (n)
#define HTON32(n) (n)
#define NTOH32(n) (n)
#define HTON64(n) (n)
#define NTOH64(n) (n)
#else /* little-endian host: swap to/from network byte order */
#define HTON16(n) SDL_Swap16(n)
#define NTOH16(n) SDL_Swap16(n)
#define HTON32(n) SDL_Swap32(n)
#define NTOH32(n) SDL_Swap32(n)
#define HTON64(n) SDL_Swap64(n)
#define NTOH64(n) SDL_Swap64(n)
#endif
My problem is this: when writing a 2-byte number (in this case 43981 = 0xabcd) to a char[], say at index 0, the following code produces the first two bytes of data in little-endian order, i.e. data[0] = 0xcd and data[1] = 0xab, when I'm trying to do the opposite:
char data[100];
int host_value = 43981; // 0xabcd
int net_value = HTON16(host_value);
data[0] = (net_value & 0xff00) >> 8;
data[1] = (net_value & 0xff);
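For reference, this is roughly the self-contained test I am compiling. I am assuming SDL2 and its SDL_endian.h header here and only reproducing the 16-bit macro; the values in the comments are what I observe on my little-endian machine:

#include <stdio.h>
#include <SDL2/SDL_endian.h>

#if SDL_BYTEORDER == SDL_BIG_ENDIAN
#define HTON16(n) (n)
#else
#define HTON16(n) SDL_Swap16(n)
#endif

int main(void)
{
    char data[100];
    int host_value = 43981;              /* 0xabcd */
    int net_value = HTON16(host_value);  /* 0xcdab after the swap on my machine */

    data[0] = (net_value & 0xff00) >> 8; /* high byte of the swapped value: 0xcd */
    data[1] = (net_value & 0xff);        /* low byte of the swapped value:  0xab */

    printf("%02x %02x\n", (unsigned char)data[0], (unsigned char)data[1]); /* prints "cd ab" */
    return 0;
}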
My solution to the previous problem is simply not to use HTON16 on the host value and to operate on it as if my machine were big endian (sketched after the next snippet). Also, on the same machine, writing the same host value to data as follows does set the first two bytes to data[0] = 0xab and data[1] = 0xcd:
*((unsigned short *)&data[0]) = HTON16(host_value);
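And the workaround I mentioned, shifting the host value directly without HTON16, is just this; it also gives me data[0] = 0xab and data[1] = 0xcd:

data[0] = (host_value & 0xff00) >> 8; /* most significant byte of 0xabcd: 0xab */
data[1] = (host_value & 0xff);        /* least significant byte of 0xabcd: 0xcd */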
I would like to understand why these two cases work differently. Any help is appreciated.