I know that this is done automatically - the more frequently a piece of data is accessed, the closer to the processor it is stored. But can I somehow influence its placement with Java syntax? Volatile, the way I understand it, puts data in the level 3 cache or in RAM since it has to be visible to all threads - is that right?
-
Does this answer your question? [Java volatile keyword](https://stackoverflow.com/questions/33643800/java-volatile-keyword) – luk2302 Oct 06 '20 at 09:20
-
*"But can I somehow influence their placement with Java syntax?"* - no. That is not your decision (as a programmer) to make, in particular the Java spec does not know about these concepts. So some compiler or runtime may behave in one way, some other runtime might behave differently. – luk2302 Oct 06 '20 at 09:30
-
Deciding where to put a variable is the job of the JIT, not the developer, so you can't control it. And the specification never says a volatile variable is put in the L3 cache; it only gives you some guarantees about its read/write behavior. – haoyu wang Oct 14 '20 at 03:38
3 Answers
No, Java syntax does not allow direct access to the hardware. The Java Language and Virtual Machine Specifications are the contract governing how Java code is interpreted - and they are explicitly written to target a virtual machine instead of an actual one.
From Section 1.2:
> The Java Virtual Machine is the cornerstone of the Java platform. It is the component of the technology responsible for its hardware- and operating system-independence, the small size of its compiled code, and its ability to protect users from malicious programs.
>
> The Java Virtual Machine is an abstract computing machine. Like a real computing machine, it has an instruction set and manipulates various memory areas at run time. It is reasonably common to implement a programming language using a virtual machine; the best-known virtual machine may be the P-Code machine of UCSD Pascal.
There is no need for a Java VM to even have accessible registers or caches. From the point of view of the specs, a Turing Machine could very well implement a conformant Java VM.
-
Having no explicit language support or JVM specification is not the same as having no influence on what happens at the lower level. – pveentjer Oct 06 '20 at 10:01
-
@pveentjer - you can make reasonable guesses on what a JVM implementation will do, and even look it up in its source-code for a particular version; but the general contract is that JVM implementations can choose to do anything as long as they adhere to the spec. – tucuxi Oct 06 '20 at 13:44
-
You can't really look up this behavior in the 'source code'; you need to understand how the hardware actually works. – pveentjer Oct 06 '20 at 14:09
-
@pveentjer the [source code for a JVM](https://github.com/openjdk/jdk13u-dev/tree/master/src/hotspot/cpu) targets each supported architecture and describes exactly what will go where and when. Of course, those who wrote it had a deep understanding of the hardware. I do not claim that the code is simple - but you can, with enough time, figure out how registers are allocated for a particular JVM & architecture by looking at that code. – tucuxi Oct 06 '20 at 14:44
-
The 'registers' are not that important since they are architectural registers and not what actually happens inside the CPU (the architectural registers are renamed and assigned to physical registers by the processor's ROB). Everything regarding loading an object into the L1D by making sure the cache line is in the appropriate state is not part of the ISA; it is part of the micro-architecture. And that is what the OP is asking about. So just staring at the generated assembly isn't sufficient to understand what is actually happening. – pveentjer Oct 06 '20 at 15:02
Java works quite differently when it comes to optimisations. You, the developer, say what to do in your code. Then, at runtime, the just-in-time (JIT) compiler looks at what is actually going on and, if necessary, translates "slow" Java bytecode into highly optimized machine code.
In other words: the JIT decides what code is worth optimizing. That might include optimized data layout.
But as said: you as a developer have "no say" in this.
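If you are curious what the JIT decides in practice, HotSpot can at least show you its compilation decisions. Below is a minimal sketch (the class name `JitDemo` is made up for illustration); `-XX:+PrintCompilation` is a standard HotSpot flag that logs which methods get compiled as they become hot.

```java
// Run with: java -XX:+PrintCompilation JitDemo
// HotSpot prints the methods it compiles (and at which tier) while the loop gets hot.
public class JitDemo {

    static long sum(long[] values) {
        long total = 0;
        for (long v : values) {
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        long[] values = new long[1_000];
        for (int i = 0; i < values.length; i++) {
            values[i] = i;
        }
        long result = 0;
        // Call the method often enough for the JIT to consider it hot.
        for (int i = 0; i < 100_000; i++) {
            result += sum(values);
        }
        System.out.println(result);
    }
}
```

Even then, the log only tells you *that* a method was compiled, not where its data ends up in the cache hierarchy - that remains invisible to you.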
You can't control this behavior.
If the CPU reads a field of an object, the object is pulled into the L1d cache. This is independent of the field being volatile or not.
It doesn't matter if a field is accessed only once or many times; it will still end up in the L1d. The exception is a non-temporal load/store, but that behavior is not accessible from Java.
Volatile prevents reordering of instructions at both the compiler and the CPU/memory-subsystem level. On x86, you get the volatile read for free (acquire semantics) due to x86's TSO memory model. The volatile write is implemented by stopping the front end from executing loads until the store buffer has been drained. This prevents older stores from being reordered with newer loads to a different address.
For more information see: https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
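To make those guarantees concrete, here is a minimal sketch (class and field names are made up for illustration): `volatile` gives you visibility and ordering across threads, but says nothing about which cache level the data lives in.

```java
// 'volatile' provides visibility and ordering guarantees, not cache placement.
public class VolatileVisibility {

    private int payload;            // plain field
    private volatile boolean ready; // volatile flag

    void writer() {
        payload = 42;   // (1) plain store
        ready = true;   // (2) volatile store: (1) cannot be reordered after (2)
    }

    void reader() {
        if (ready) {                         // volatile load (acquire semantics)
            System.out.println(payload);     // guaranteed to see 42, never 0
        }
    }
}
```

Whether `payload` happens to sit in L1, L3, or RAM at the moment of the read is entirely up to the hardware and the JIT; the Java Memory Model only promises that a reader observing `ready == true` also observes the earlier write to `payload`.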
-
Once the just-in-time optimizer runs, it may very well remove the code entirely if it detects that it has no effects - thus, no L1d or any other allocation after dead code elimination. – tucuxi Oct 06 '20 at 14:55