Float vs. integer:
Historically, floating-point could be much slower than integer arithmetic. On modern computers, this is no longer really the case (it is somewhat slower on some platforms, but unless you write perfect code and optimize for every cycle, the difference will be swamped by the other inefficiencies in your code).
On somewhat limited processors, like those in high-end cell phones, floating-point may be somewhat slower than integer, but it’s generally within an order of magnitude (or better), so long as there is hardware floating-point available. It’s worth noting that this gap is closing pretty rapidly as cell phones are called on to run more and more general computing workloads.
On very limited processors (cheap cell phones and your toaster), there is generally no floating-point hardware, so floating-point operations need to be emulated in software. This is slow — a couple orders of magnitude slower than integer arithmetic.
As I said though, people are expecting their phones and other devices to behave more and more like “real computers”, and hardware designers are rapidly beefing up FPUs to meet that demand. Unless you’re chasing every last cycle, or you’re writing code for very limited CPUs that have little or no floating-point support, the performance distinction doesn’t matter to you.
Different size integer types:
Typically, CPUs are fastest at operating on integers of their native word size (with some caveats about 64-bit systems). 32-bit operations are often faster than 8- or 16-bit operations on modern CPUs, but this varies quite a bit between architectures. Also, remember that you can’t consider the speed of a CPU in isolation; it’s part of a complex system. Even if operating on 16-bit numbers is 2x slower than operating on 32-bit numbers, you can fit twice as much data into the cache hierarchy when you represent it with 16-bit numbers instead of 32-bit ones. If that makes the difference between having all your data come from cache instead of taking frequent cache misses, then the faster memory access will trump the slower operation of the CPU.
Vectorization tips the balance further in favor of narrower types (float and 8- and 16-bit integers): you can do more operations in a vector of the same width. However, good vector code is hard to write, so it’s not as though you get this benefit without a lot of careful work.
Why are there performance differences?
There are really only two factors that affect whether an operation is fast on a CPU: the circuit complexity of the operation, and user demand for the operation to be fast.
(Within reason) any operation can be made fast, if the chip designers are willing to throw enough transistors at the problem. But transistors cost money (or rather, using lots of transistors makes your chip larger, which means you get fewer chips per wafer and lower yields, which costs money), so chip designers have to balance how much complexity to use for which operations, and they do this based on (perceived) user demand. Roughly, you might think of breaking operations into four categories:
                      high demand          low demand
    high complexity   FP add, multiply     division
    low complexity    integer add,         popcount, hcf
                      boolean ops, shifts
- high-demand, low-complexity operations will be fast on nearly any CPU: they’re the low-hanging fruit, and confer maximum user benefit per transistor.
- high-demand, high-complexity operations will be fast on expensive CPUs (like those used in computers), because users are willing to pay for them. You’re probably not willing to pay an extra $3 for your toaster to have a fast FP multiply, however, so cheap CPUs will skimp on these instructions.
- low-demand, high-complexity operations will generally be slow on nearly all processors; there just isn’t enough benefit to justify the cost.
- low-demand, low-complexity operations will be fast if someone bothers to think about them, and non-existent otherwise.
- Agner Fog maintains a nice website with lots of discussion of low-level performance details (and has very scientific data collection methodology to back it up).
- The Intel® 64 and IA-32 Architectures Optimization Reference Manual (PDF download link is part way down the page) covers a lot of these issues as well, though it is focused on one specific family of architectures.