I’d like to comment on integer size and the default fallback type of integer literals, which is currently changing.
AFAIU the main argument for `sizeof(int) == sizeof(u32)` is that a program should work identically on 32-bit and 64-bit systems. That’s what folks love Java for. However, there’s a big downside to having `sizeof(int) == sizeof(u32)`: programs often fail to work correctly when they create arrays larger than 2G (or, for example, `mmap` files larger than 2 GB).
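To make the `mmap` case concrete, here is a minimal sketch (POSIX, error handling trimmed, hypothetical file name) of how storing a file size in a 32-bit `int` silently truncates it before the mapping is even attempted:

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Hypothetical data file of, say, 3 GiB.
    int fd = open("big.dat", O_RDONLY);
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) return 1;

    // BUG: st.st_size is a 64-bit off_t here, but a 32-bit int cannot
    // hold sizes above 2 GiB; the value is silently truncated.
    int badLen = static_cast<int>(st.st_size);

    // Correct: keep sizes in a pointer-sized type like size_t.
    size_t len = static_cast<size_t>(st.st_size);

    void* p = mmap(nullptr, len, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) return 1;

    std::printf("mapped %zu bytes (a 32-bit int would have said %d)\n",
                len, badLen);
    munmap(p, len);
    close(fd);
    return 0;
}
```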
I’ve seen two real-world examples of such problems.
The first is a C++ project at the company I work for. They use `int`s everywhere, and their programs silently corrupt data once the data size exceeds 2 GB, because of integer overflow. The program does work identically on 32-bit and 64-bit systems: it fails to load more than 2 GB of data correctly on both. That problem was hard to locate, and it is now very hard to fix, because the source tree is quite large and there are thousands of variable declarations.
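A minimal hypothetical sketch (not the actual project code) of the overflow pattern involved: an offset computed in 32-bit `int` arithmetic wraps once the data crosses 2 GiB, while the same computation in a 64-bit type stays correct.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Hypothetical layout: 4096 records of 1 MiB each, 4 GiB in total.
    const int recordSize  = 1 << 20;  // 1 MiB
    const int recordCount = 4096;

    for (int i = 0; i < recordCount; ++i) {
        // The buggy form is `int offset = i * recordSize;` — the product
        // overflows 32-bit int at i == 2048 (2^11 * 2^20 == 2^31), which
        // is undefined behavior. Reproduced here via a well-defined cast:
        int badOffset = static_cast<int>(int64_t{i} * recordSize);

        // Correct: do the arithmetic in a 64-bit (pointer-sized) type.
        int64_t offset = int64_t{i} * recordSize;

        if (badOffset != offset) {
            std::printf("record %d: int offset = %d, correct offset = %lld\n",
                        i, badOffset, static_cast<long long>(offset));
            return 0;
        }
    }
    return 0;
}
```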
The second is the lighttpd server. If I recall correctly, version 1.x could not process HTTP uploads larger than 2 GB. Fortunately, it didn’t crash; it simply rejected such uploads. Again, it worked identically on 32-bit and 64-bit systems.
Of course, the programmers in both situations were wrong and careless; they should have used `size_t`.
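For completeness, the fix for the offset sketch above is a one-liner: do the arithmetic in `size_t`, which is pointer-sized, so on a 64-bit system the product cannot overflow for any array that actually fits in memory.

```cpp
#include <cstddef>

// Offsets and sizes in size_t: pointer-sized on every platform,
// so index arithmetic tracks the address space, not a fixed 32 bits.
std::size_t recordOffset(std::size_t index, std::size_t recordSize) {
    return index * recordSize;
}
```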
My point is that the “default” integer should be safe from overflow when working with large arrays, and choosing the right default integer size can prevent this whole class of problems.
There’s a concern that 64-bit integers are slower than 32-bit ones. For me this argument is less important than correctness on 64-bit systems, because such straightforward performance problems are much easier to fix than incorrect behavior on large data sets.
And if the “default” integer is `sizeof(void*)` wide, then it should be called just `int` and `uint` (because `uptr` and `iptr` are a bit weird and unusual as names for a default), and integer literals should fall back to `int`.
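Sketched in C++ terms (the actual spelling in the language under discussion may differ), with `std::intptr_t`/`std::uintptr_t` standing in for the proposed pointer-sized `int`/`uint`:

```cpp
#include <cstdint>
#include <cstdio>

// Stand-ins for the proposed defaults: same width as a pointer.
using Int  = std::intptr_t;
using Uint = std::uintptr_t;

int main() {
    static_assert(sizeof(Int) == sizeof(void*),
                  "the default integer is pointer-sized");

    // On a 64-bit system this is 3 GiB and does not overflow;
    // a fixed 32-bit default int would wrap it to a negative value.
    Int threeGiB = Int{3} * 1024 * 1024 * 1024;
    std::printf("%lld\n", static_cast<long long>(threeGiB));
    return 0;
}
```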