Why does integer division round down in many scripting languages?

Ideally, we would like to have two operations div and mod satisfying, for every integer a and every integer b > 0:

  1. (a div b) * b + (a mod b) = a
  2. 0 <= (a mod b) < b
  3. (-a) div b = -(a div b)

This is, however, a mathematical impossibility. If all the above were true, we would have

1 div 2 = 0
1 mod 2 = 1

since this is the unique integer solution to (1) and (2). Hence, we would also have, by (3),

0 = -0 = -(1 div 2) = (-1) div 2

which, by (1), implies

-1 = ((-1) div 2) * 2 + ((-1) mod 2) = 0 * 2 + ((-1) mod 2) = (-1) mod 2

making (-1) mod 2 < 0 which contradicts (2).

Hence, we need to give up some property among (1), (2), and (3).

Some programming languages give up (3) and make div round down, i.e., towards negative infinity (Python and Ruby, for instance).
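
For example, here is a quick check of this behaviour in Python, runnable as-is; the assertions verify that properties (1) and (2) hold for the built-in // and %, while property (3) does not.

  # Python's // rounds towards negative infinity, and % matches it.
  a, b = -1, 2

  q, r = a // b, a % b
  print(q, r)                      # -1 1

  assert q * b + r == a            # (1) holds: (a div b) * b + (a mod b) = a
  assert 0 <= r < b                # (2) holds: 0 <= (a mod b) < b
  assert (-a) // b != -(a // b)    # (3) fails: 1 // 2 = 0, but -((-1) // 2) = 1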

In some (rare) cases the language offers multiple division operators. For instance, Haskell has div and mod, satisfying only (1) and (2), similarly to Python, and also quot and rem, satisfying only (1) and (3). The latter pair rounds division towards zero, at the price of returning negative remainders: for example, (-1) `quot` 2 = 0 and (-1) `rem` 2 = -1.

C# also gives up (2) and allows % to return a negative remainder; consistently, its integer division rounds towards zero. Java, Scala, Pascal, and C (since C99) adopt the same strategy.
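
To make the difference concrete, here is a minimal sketch that emulates this rounding-towards-zero behaviour in Python; the name trunc_divmod is purely illustrative (it is not a standard function), and it assumes integer inputs with b != 0.

  def trunc_divmod(a, b):
      # Divide the magnitudes, then restore the sign: this rounds the
      # quotient towards zero, mimicking / and % in C99, Java, and C#,
      # or quot and rem in Haskell.
      q = abs(a) // abs(b)
      if (a < 0) != (b < 0):
          q = -q
      r = a - q * b                # property (1) holds by construction
      return q, r

  print(trunc_divmod(-1, 2))       # (0, -1): (3) holds, (2) fails
  print(divmod(-1, 2))             # (-1, 1): Python's floor-division pair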
