Is there really ever a time when you need an integer type containing exactly N bits? There are C99 types that guarantee at least N bits. There are even C90 types that guarantee at least 8, 16 and 32 bits (the standard C integer types). Why not use one of those?
I never use C99 exact-width types in code… ever. Chances are that you shouldn’t either because:
Exact width integer types reduce portability
This is because:
1) Exact width integer types do not exist before C99
Sure, you could create an abstraction that detects whether the standard is older than C99 and introduces the types yourself, but then you would be encroaching on the POSIX namespace by defining your own integer types suffixed with “_t”. POSIX.1-2008 – The System Interfaces: 2.2.2 The Name Space
GCC also will not like you:
The names of all library types, macros, variables and functions that come from the ISO C standard are reserved unconditionally; your program may not redefine these names.
GNU libc manual: 1.3.3 Reserved Names
From my own experience using GCC on OS X, the fixed-width types are defined even when using --std=c90, meaning you’ll just get errors if you try to redefine them. Bummer.
2) Exact width integer types are not guaranteed to exist at all:
These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32 or 64 bits, it shall define the corresponding typedef names.
ISO/IEC 9899:1999 – 7.18.1.1 Exact-width integer types
Even in C99, the (u)intN_t types need not exist unless there is a native integer type of that width. You might argue that there are not many platforms which lack these types – but there are: DSPs, for example. If you start using these types, you limit the platforms on which your software can run – and are probably also developing bad habits.
Using exact width integer types could have a negative performance impact
If you need at least N bits and it does not matter if there are more, why restrict yourself to a type which could require additional overhead? If you are writing C99 code, use one of the (u)int_fastN_t types. Maybe you could even use a standard C integer type!
The endianness of exact width integer types is unspecified
I am not implying that the endianness is specified for other C types. I am just trying to make a point: you cannot even use these types for portable serialisation/de-serialisation without feral-octet-swapping-macro-garbage, as the underlying layout of the type is system dependent.
If you are interested in the conditions under which memcpy can be used to copy memory into a particular type, maybe you should check out the abstraction which is part of my digest program. It contains a heap of checks to ensure that memcpy is only used on systems where it is known that it will do the right thing. It tries to deal with potential padding, non-8-bit chars and endianness in a clean way that isn’t broken.
This article deliberately did not discuss the signed variants of these types…
1) Exact width integer types DO EXIST before C99. MSVC has had them for ages. __int8, __int16, __int32, __int64, etc.
2) Exact width integer types are not guaranteed to exist on ancient compilers on ancient systems. You should always use the newest standard; why write new code with old standards?
3? (Where the hell did your numbers go?!) The only way a performance impact could exist is if you’re on some obscure architecture which doesn’t use 8, 16, 32, 64 bits. I’ve never heard of one. Ever.
4?! This argument isn’t even an argument. All types (above 8 bits) are either big endian, little endian, or that weird middle ground that I forget and is no longer used. x86 is little endian. ARM is little endian (by default).
— Final statement removed by Nick as it related to my employer —
Thanks for the comments.