Hacker News

What's the fastest way to implement integer division in hardware?


I always assumed that CPUs used Newton's method, though that could be wrong. https://en.wikipedia.org/wiki/Division_algorithm#Newton%E2%8...

Edit: Yeah, that's only used for floating point. Looks like integer division is usually an algorithm called SRT: https://en.wikipedia.org/wiki/Division_algorithm#SRT_divisio...
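To illustrate the digit-by-digit idea behind SRT, here is a minimal radix-2 restoring-division sketch in Python (the function name is my own). Real SRT dividers refine this scheme with a redundant quotient-digit set and a small lookup table so each step only needs an approximate comparison, but the one-quotient-bit-per-step structure is the same.

```python
def restoring_divide(n, d, bits=32):
    """Radix-2 restoring division: one quotient bit per iteration.
    SRT hardware replaces the exact trial subtraction below with a
    quotient-digit lookup over a redundant digit set like {-1, 0, 1},
    but keeps the same digit-recurrence structure."""
    assert d != 0 and 0 <= n < (1 << bits)
    q, r = 0, 0
    for i in range(bits - 1, -1, -1):
        r = (r << 1) | ((n >> i) & 1)  # bring down the next dividend bit
        if r >= d:                     # trial subtraction succeeds
            r -= d
            q |= 1 << i                # set quotient bit i
    return q, r                        # quotient and remainder
```

For example, `restoring_divide(100, 7)` yields `(14, 2)`.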


There are multiple approaches in use by different implementors. Quadratically convergent iterations like Newton-Raphson and Goldschmidt, as well as digit-by-digit algorithms like SRT, are all represented in mainstream hardware designs and software libraries (for both floating point and integer).
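As a sketch of the quadratically convergent family applied to integers (my own illustrative code, not any particular CPU's design): compute a fixed-point approximation of the reciprocal with the Newton-Raphson iteration, multiply, then correct. The number of iterations grows with log of the bit width, versus one bit per step for digit-recurrence dividers.

```python
def newton_divide(n, d, bits=32):
    """Unsigned n // d via a fixed-point Newton-Raphson reciprocal.
    The iteration x <- 2x - d*x*x/2**F roughly squares the relative
    error each step, so only about log2(F) iterations are needed."""
    assert 0 < d < (1 << bits) and 0 <= n < (1 << bits)
    F = 2 * bits                        # fixed point: x approximates 2**F / d
    L = d.bit_length()                  # 2**(L-1) <= d < 2**L
    x = 3 << (F - L - 1)                # guess 1.5 * 2**(F-L): rel. error <= 1/2
    for _ in range(F.bit_length() + 1): # quadratic convergence
        x = 2 * x - ((d * x * x) >> F)
    q = (n * x) >> F                    # tentative quotient, slightly low at worst
    while n - q * d >= d:               # final correction step
        q += 1
    return q
```

For example, `newton_divide(100, 7)` yields `14`. The final correction loop mirrors what real designs do: the reciprocal is only approximate, so the quotient is adjusted by comparing the remainder against the divisor.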


It's actually a hard question to answer; first one needs to specify the width of the integers and whether they are signed.

Just one semi-random example: dividing two 1-bit unsigned integers is not going to need the same hardware as dividing two 1024-bit signed integers.
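Signedness, at least, is usually the cheap part: a common arrangement is one unsigned divider core plus a sign fixup. A hedged sketch of that pattern (my own function names; semantics chosen to truncate toward zero, matching C and most ISAs):

```python
def signed_divide(n, d, bits=32):
    """Signed division layered on an unsigned divider core: take
    absolute values, divide, then fix up the signs. Truncates toward
    zero, the convention used by C and most hardware ISAs."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    assert lo <= n <= hi and lo <= d <= hi and d != 0
    assert not (n == lo and d == -1), "overflow: quotient not representable"
    q = abs(n) // abs(d)               # stand-in for the unsigned divider core
    r = abs(n) - q * abs(d)
    if (n < 0) != (d < 0):             # quotient is negative iff signs differ
        q = -q
    if n < 0:                          # remainder takes the dividend's sign
        r = -r
    return q, r
```

For example, `signed_divide(-7, 2)` yields `(-3, -1)`. The overflow assert reflects the one genuinely awkward case: the most negative value divided by -1 has no representable quotient in two's complement.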



