11

I've heard that data placed on the C++ stack is very likely to appear in the CPU cache.
http://www.gamedev.net/topic/564817-stack-and-cache-question-optimizing-sw-in-c/#entry4617168

"The stack is the most efficient place to store data because the same range of memory addresses is reused again and again."

Due to the implied order of operations, the most frequently accessed data on the stack is almost certainly always in the L1 cache.

Is this true?


I mean, is it really better to try to store frequently accessed data on the stack rather than on the heap?

tower120
  • I think you're going to have to be more specific if you want a better answer than "it depends on the size of your data, and the size of your cache". – Tomas Aschan May 20 '14 at 13:11
  • 1
    IMO it's a too big simplification. Nowadays there are much more variables to consider. Why a frequently accessed heap allocated memory block has less chances to be cached? BTW L1 cache is incredibly small, do you work with few KB of data? – Adriano Repetti May 20 '14 at 13:12
  • @Adriano hmm, I think in this case it's 100-500 KB from the stack. Well ok, not in L1, but at least somewhere (in cache)? :) – tower120 May 20 '14 at 13:17
  • 1
    But stack isn't such evil even if it's not always/all cached. It depends on your data (size and nature) and your code (algorithm and paths). If your "bottleneck" is a large matrix multiplication routine then jumps will be your enemy (and you won't notice any benefit if all your single thread application stack is kept in cache). To summarize: no, IMO that assertion is too naive and oversimplify everything (so you may waste time on wrong things). – Adriano Repetti May 20 '14 at 13:24
  • 1
    A decent rule of thumb. An object allocated on the stack is more likely to be in L1 cache than the same object allocated with new (). But that's all quite secondary. Slavish adherence to this rule will get you into trouble. There are other and more important reasons to put things on the stack, or to not put them on the stack, depending on the situation. – gnasher729 May 20 '14 at 13:48
  • The bottom line is that, unless special code constructs to avoid it are used (which will not generally be used by most compilers - it would require hand-written assembler code), anything that is accessed ends up in cache for at least some time. How long it stays there depends on how frequently it is accessed relative to everything else that is being accessed. – twalberg May 20 '14 at 14:09

1 Answer

10

How the C++ Standard is implemented is, well, an implementation detail: it varies from compiler to compiler, from platform to platform, etc.

Now, even though you could in theory use a split stack for C++, major implementations use a contiguous segment of memory (of varying size).

This contiguity and frequent reuse do indeed reap the benefits of caches; however, it is not a panacea either. In fact, you can also create artificial scenarios of cache thrashing: if your L1 cache is small (32 KB?) and 2-way associative, then you can easily craft a scenario that forces accesses to the L2 cache. Just use a 64 KB array on your stack (it's small enough not to blow the stack), and then repeatedly access data at offsets 0, 16 KB, 32 KB, and 48 KB in a loop: it should trigger lots of evictions and require fetches from the L2 cache.
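As a rough illustration (not from the original answer; the 32 KB / 2-way / 64-byte-line figures describe a hypothetical L1), a loop like the following strides through a stack array at 16 KB intervals, so all four hot addresses land in the same cache set and keep evicting each other:

```cpp
#include <cstddef>
#include <cstdint>

int main() {
    // 64 KB on the stack: small enough not to overflow a typical stack, but
    // its four 16 KB-spaced lines all map to the same L1 set (assuming a
    // 32 KB, 2-way cache with 64-byte lines: 256 sets * 64 B = 16 KB stride).
    alignas(64) char buffer[64 * 1024] = {};
    volatile char* p = buffer;  // volatile so the reads are not optimized away

    std::uint64_t sum = 0;
    for (int i = 0; i < 1'000'000; ++i) {
        sum += p[0 * 16 * 1024];  // four addresses, one set, only two ways:
        sum += p[1 * 16 * 1024];  // every iteration evicts lines and
        sum += p[2 * 16 * 1024];  // forces fetches from the L2 cache
        sum += p[3 * 16 * 1024];
    }
    return static_cast<int>(sum & 0xff);
}
```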

So it is not really that the stack itself is so cache-friendly, but rather that its usage is predictable and well-known. You could reap the same cache benefits with a custom-made allocator (though allocation would be slightly slower).
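For example, here is a minimal sketch of such a custom allocator (the Arena name and interface are illustrative, not something from the answer): a bump allocator that keeps handing out memory from the same small, contiguous buffer, so repeated allocations reuse the same cache lines much like the stack does.

```cpp
#include <cstddef>
#include <new>

class Arena {
public:
    explicit Arena(std::size_t capacity)
        : buffer_(new std::byte[capacity]), capacity_(capacity), offset_(0) {}
    ~Arena() { delete[] buffer_; }
    Arena(const Arena&) = delete;
    Arena& operator=(const Arena&) = delete;

    // Bump-allocate: just advance an offset inside one contiguous buffer.
    // Assumes `align` is a power of two.
    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > capacity_) throw std::bad_alloc{};
        offset_ = aligned + size;
        return buffer_ + aligned;
    }

    // "Pop" everything at once, like unwinding a stack frame,
    // so the same addresses (and cache lines) get reused.
    void reset() { offset_ = 0; }

private:
    std::byte* buffer_;
    std::size_t capacity_;
    std::size_t offset_;
};
```

Per frame (or per task) you would call allocate() for the hot objects and reset() when done, so the buffer stays hot in cache across iterations.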

On the other hand, there are other advantages and disadvantages to using the stack:

  • disadvantage: if you attempt to consume too much of it, you get a Stack Overflow.
  • disadvantage: if you overwrite an array on the stack, you might corrupt the stack itself, and it is a debugging nightmare (this is also what so-called Stack Smashing attacks exploit).
  • advantage: C++ has specific patterns (RAII, SBRM) that take advantage of the behavior of the stack. Deterministic "undo" actions are a joy to program with (a small sketch follows this list).
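To illustrate that last point (a minimal sketch; FileGuard and config.txt are hypothetical names, not from the answer): the clean-up runs deterministically when the stack frame unwinds, whether the function returns normally or throws.

```cpp
#include <cstdio>

// RAII / SBRM: the resource is tied to the lifetime of a stack object.
class FileGuard {
public:
    explicit FileGuard(const char* path) : file_(std::fopen(path, "r")) {}
    ~FileGuard() { if (file_) std::fclose(file_); }  // deterministic "undo"
    FileGuard(const FileGuard&) = delete;
    FileGuard& operator=(const FileGuard&) = delete;
    std::FILE* get() const { return file_; }
private:
    std::FILE* file_;
};

void readConfig() {
    FileGuard guard("config.txt");
    if (!guard.get()) return;  // early return: nothing leaks
    // ... parse the file; an exception here still runs ~FileGuard()
}
```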

So in the end I would be wary of deciding between stack and heap solely based on potential cache behavior.

Matthieu M.
  • +1 for _"disadvantage: if you overwrite an array on the stack, you might corrupt the stack itself, and it is a debugging nightmare..."_. For me it's 90% of the effort on 5% of the bugs (the pointer-related ones); now it's always the first thing I check... – Adriano Repetti May 20 '14 at 18:35
  • @Matthieu M. Can you take a look at this as well? http://stackoverflow.com/questions/30555623/how-many-bits-are-in-the-address-field-for-a-directly-mapped-cache That question is about caching as well. – committedandroider May 31 '15 at 16:49