Hacker News

You have two fundamental misunderstandings here.

First, you misunderstand what I - and the article - mean by detecting overflow. To put it another way: you have two integer values, A and B, and you want to know if A + B is representable in a given (signed) integer type. If neither A nor B is of the longest supported type, it's easy, but if A or B is of the longest supported (signed) integer type on the system, it's much harder. That's what this article is talking about.
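A minimal sketch of such a pre-check in portable C, assuming int64_t is the widest type available: the comparison is rearranged so the potentially overflowing addition is never actually performed.

```c
#include <stdint.h>
#include <stdbool.h>

/* Check whether a + b is representable as int64_t, without ever
   performing an addition that could overflow (which would be UB). */
static bool add_would_overflow(int64_t a, int64_t b)
{
    return (b > 0 && a > INT64_MAX - b) ||
           (b < 0 && a < INT64_MIN - b);
}
```

Both subtractions are safe by construction: INT64_MAX - b cannot overflow when b > 0, and INT64_MIN - b cannot when b < 0.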

Say, for example, you're doing the Pythonic approach of storing integers as either an integer or a pointer to a bigint. Well, if you have two non-bigints, you need to know if the result will fit in a native word or if you're going to have to make a bigint and do the slow path.

Second, detecting if an operation would cause undefined behavior is something you can indeed do in valid portable C code, for some operations at least. Among other things, that's exactly what checked access for an array does!



"or if you're going to have to make a bigint and do the slow path."

You can cheat, sort of: run your signed-int code on a 64-bit processor but pretend you're running on a 63-bit processor, and hit the slow path when you hit MAXINT for a 63-bit processor, not for a 64-bit one. You can always add two 63-bit signed ints on a processor with 64-bit signed registers.

Or maybe a simpler explanation: there do exist pairs of 64-bit signed ints whose sum can't be represented in a 64-bit signed register. But there do not exist any pairs of 32-bit signed ints that can't be added together and fit in a 64-bit register.
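A sketch of that trick, assuming int64_t registers and a hypothetical 63-bit "small int" range (the names here are made up for illustration): because two in-range values can never overflow the 64-bit addition, only the result needs checking.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical small-int range: pretend the machine is 63-bit. */
#define SMALL_MIN (-(INT64_C(1) << 62))
#define SMALL_MAX ((INT64_C(1) << 62) - 1)

static bool fits_small(int64_t v)
{
    return v >= SMALL_MIN && v <= SMALL_MAX;
}

/* Add two 63-bit values. The sum of two values in
   [SMALL_MIN, SMALL_MAX] always fits in int64_t, so the 64-bit
   addition itself is always defined; we only check the result. */
static int64_t add_small(int64_t a, int64_t b, bool *needs_bigint)
{
    int64_t sum = a + b;
    *needs_bigint = !fits_small(sum);
    return sum;
}
```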

It's possible to play benchmarking games to manufacture an unrealistic result that makes this look horrific. In reality, under "normal" workloads it'll work just fine, and imposing artificially low limits on a slightly wider processor will save you a lot of writing, debugging, and troubleshooting time.

"Well, yeah, this is a 64-bit proc, but to make life simpler we pretend it's 1 sign bit and 62 int bits, and anything bigger gets promoted to some arbitrary-precision software library."

Another trick is not to use signed ints, or at least to minimize their use, which is not as crazy as it may sound. If you can eliminate a whole class of arithmetic errors just by changing programming style...
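For example, unsigned overflow is well-defined in C (it wraps modulo 2^64), so the check collapses to a single post-hoc comparison; a sketch:

```c
#include <stdint.h>
#include <stdbool.h>

/* Unsigned overflow wraps -- defined behaviour -- so we can add
   first and detect the wrap afterwards with one comparison. */
static bool uadd_wrapped(uint64_t a, uint64_t b, uint64_t *out)
{
    *out = a + b;       /* wraps modulo 2^64 if too large */
    return *out < a;    /* wrapped iff the sum is smaller than a */
}
```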


Generally you'll be doing this anyway (since you'll be storing the value with a tag bit indicating whether it is a pointer).
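A sketch of one common tagging scheme (all names here are hypothetical): the low bit distinguishes a small integer from an aligned bigint pointer, and the integer payload lives in the upper 63 bits.

```c
#include <stdint.h>
#include <stdbool.h>

/* Tagged value: low bit 1 => small integer in the upper 63 bits,
   low bit 0 => pointer to a heap-allocated bigint (pointers to
   aligned objects always have a zero low bit). */
typedef uintptr_t value_t;

static bool    is_small(value_t v)    { return (v & 1) != 0; }
static value_t make_small(int64_t n)  { return ((uintptr_t)n << 1) | 1; }

/* Note: right-shifting a negative signed value is
   implementation-defined in C; in practice compilers emit an
   arithmetic shift, which is what this relies on. */
static int64_t small_value(value_t v) { return (int64_t)v >> 1; }
```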

However, it is... interesting... that such a low-level language has no simple construct for doing this, especially given how trivial it is in x86 assembly (add the numbers, then check the overflow flag).
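For what it's worth, GCC and Clang do expose that instruction sequence as a (non-standard) builtin; a sketch:

```c
#include <stdint.h>
#include <stdbool.h>

/* __builtin_add_overflow performs the addition, stores the
   (possibly wrapped) result, and returns whether it overflowed;
   on x86 it typically compiles to an add followed by a jo/seto. */
static bool checked_add(int64_t a, int64_t b, int64_t *out)
{
    return __builtin_add_overflow(a, b, out);
}
```

C23 later standardized this pattern as ckd_add in <stdckdint.h>.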

And saying "just don't use signed arithmetic" misses the point. If a "general-purpose" programming language cannot do something that is trivial in the next layer down, and that occurs as commonly as wanting to do something if there would be an overflow, I am inclined to blame the language, not the coder.


> especially seeing how trivial it is in x86 assembly (add the numbers, then check the overflow flag)

I'm no expert, especially not on modern x86 processors or the compilers that target them, but I suspect it isn't so trivial: scattering such instructions may affect branch-predictor tables and indirectly degrade performance in other parts of the code.


You require one branch regardless, and the trivial version is exactly one branch, i.e. it is already at the minimum number of branches.



