Enums are completely resolved at compile time (enum constants become integer literals, enum variables become plain integer variables), so there's no speed penalty in using them.
In general the average enumeration won't have an underlying type bigger than `int` (unless you put very big constants in it); in fact, §7.2 ¶ 5 says it explicitly:

> The underlying type of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration. It is implementation-defined which integral type is used as the underlying type for an enumeration except that the underlying type shall not be larger than `int` unless the value of an enumerator cannot fit in an `int` or `unsigned int`.
You should use enumerations where it's appropriate because they usually make the code easier to read and to maintain (have you ever tried to debug a program full of "magic numbers"?).
As for your results: probably your test methodology doesn't take into account the normal speed fluctuations you get when you run code on "normal" machines1; have you tried running the test many (100+) times and calculating the mean and standard deviation of your timings? The results should be compatible: the difference between the means shouldn't be bigger than 1 or 2 times the RSS2 of the two standard deviations (assuming, as usual, a Gaussian distribution for the fluctuations).
Another check you could do is to compare the generated assembly code (with g++ you can get it with the `-S` switch): if the two versions produce the same assembly, their speed is obviously the same.
- On "normal" PCs you get some nondeterministic fluctuations because of other tasks running, cache/RAM/VM state, …
- Root Sum Squared, the square root of the sum of the squared standard deviations.