What does the C++ compiler do to ensure that different but adjacent memory locations are safe to be used on different threads?

This is hardware-dependent. On the hardware I am familiar with, the C++ compiler doesn’t have to do anything special, because from the hardware’s perspective, accessing different bytes, even within the same cache line, is handled ‘transparently’. From the hardware’s point of view, this situation is not really different from

char a[2];
// or
char a, b;

In the cases above, we are talking about two adjacent objects, which are guaranteed to be independently accessible: under the C++11 memory model, distinct objects are distinct memory locations, so writing to one of them from one thread and to the other from another thread is not a data race.
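For illustration, here is a minimal sketch (my own example, not taken from the question) of two threads writing to adjacent char objects; this is well-defined behavior even though both objects will almost certainly share a cache line:

#include <thread>

char a, b;  // adjacent objects, very likely on the same cache line

int main() {
    std::thread t1([] { a = 1; });  // writes only to a
    std::thread t2([] { b = 2; });  // writes only to b
    t1.join();
    t2.join();
    // no data race: a and b are distinct memory locations
}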

However, I’ve put ‘transparently’ in quotes for a reason. When you really have a case like that, you could be suffering (performance-wise) from ‘false sharing’ – which happens when two (or more) threads access adjacent memory locations that happen to sit on the same cache line, so the line ends up being cached in several CPUs’ caches and is constantly invalidated as the threads write to it. In real life, care should be taken to prevent this from happening whenever possible.
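One common way to avoid false sharing is to align (and thereby pad) per-thread data so that each item gets its own cache line. The sketch below assumes a 64-byte cache line (a typical value; where available, C++17’s std::hardware_destructive_interference_size can be used instead), and the names are my own:

#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

constexpr std::size_t kCacheLine = 64;  // assumption: 64-byte cache lines on the target hardware

struct alignas(kCacheLine) PerThreadCounter {
    std::atomic<long> value{0};
    // alignas pads each counter out to its own cache line, so threads
    // incrementing different counters don't invalidate each other's caches
};

int main() {
    PerThreadCounter counters[4];  // one counter per worker thread
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([&counters, i] {
            for (int j = 0; j < 1000000; ++j)
                counters[i].value.fetch_add(1, std::memory_order_relaxed);
        });
    }
    for (auto& t : workers) t.join();
}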
