(Last Mod: 27 November 2010 21:38:36)
The C Standard actually allows the compiler designer to choose from among three different representations for signed integers - two's complement, sign-and-magnitude, and ones' complement.
The C Standard allows the representation to contain unused padding bits. The only integer data type required to use all of its bits is the unsigned char data type.
The C Standard requires that a char (plain, signed, or unsigned) be exactly CHAR_BIT bits wide. This is a constant defined in limits.h. All other integer types must occupy an integer multiple of CHAR_BIT bits. The value of CHAR_BIT must be no less than eight, but may be more.
The C Standard makes no such requirement or assumption. Certain characters are required to be present in the character set, and a few other constraints are imposed. Beyond that, it is up to the compiler writer.
The C Standard requires that the operands of the bitwise operators have integer type - using a non-integer operand is not undefined behavior; it is a constraint violation, so the program will not compile.
We can do this - and the code will compile and run - but if our intent was to get at the internal representation of a floating-point value, we have completely defeated the purpose: the cast converts the value (as best it can), producing a new representation before the operator ever sees it.
This is true for the left-shift operator. The right-shift operator shifts in a zero for unsigned data types, and also for signed data types if the stored value is non-negative. If the value is negative, the behavior is implementation-defined.
This is up to the compiler designer: plain char may behave as either signed char or unsigned char, at their discretion.