Java - Just-In-Time (JIT) Compiler



The just-in-time (JIT) compiler is used internally by the JVM to translate the hot spots in the bytecode into machine-understandable code. Its main purpose is to perform heavy performance optimizations.

Java-compiled code is targeted at the JVM. The Java compiler, javac, compiles Java source code into bytecode. The JVM then interprets this bytecode and executes it on the underlying hardware. When some code is executed again and again, the JVM identifies it as a hotspot and compiles it further, using the JIT compiler, down to native machine code, reusing the compiled code whenever it is needed.

Let's first understand the difference between compiled and interpreted languages, and how Java takes the benefits of both approaches.

Compiled Vs. Interpreted Languages

Languages such as C, C++, and FORTRAN are compiled languages. Their code is delivered as binary code targeted at the underlying machine. This means that the high-level code is compiled into binary code at once by a static compiler written specifically for the underlying architecture. The binary that is produced will not run on any other architecture.

On the other hand, interpreted languages like Python and Perl can run on any machine that has a valid interpreter. The interpreter goes over the high-level code line by line, converting it into binary code.

Interpreted code is typically slower than compiled code. For example, consider a loop. An interpreter will translate the corresponding code for each iteration of the loop, whereas a compiler translates it only once. Further, since interpreters see only one line at a time, they are unable to perform any significant optimizations, such as changing the order of execution of statements, the way compilers can.

Example

We shall look into an example of such optimization below −

Adding two numbers stored in memory: Since accessing memory can consume multiple CPU cycles, a good compiler will issue instructions to fetch the data from memory and execute the addition only when the data is available. It will not wait and in the meantime, execute other instructions. On the other hand, no such optimization would be possible during interpretation since the interpreter is not aware of the entire code at any given time.

But then, interpreted languages can run on any machine that has a valid interpreter of that language.

Is Java Compiled or Interpreted?

Java tried to find a middle ground. Since the JVM sits in between the javac compiler and the underlying hardware, javac (or any other Java compiler) compiles the Java code into bytecode, which is understood by a platform-specific JVM. The JVM then compiles the bytecode into binary using JIT (just-in-time) compilation, as the code executes.

HotSpots

In a typical program, there’s only a small section of code that is executed frequently, and often, it is this code that affects the performance of the whole application significantly. Such sections of code are called HotSpots.

If some section of code is executed only once, then compiling it would be a waste of effort; it would be faster to interpret the bytecode instead. But if the section is hot and executed multiple times, the JVM compiles it instead. For example, if a method is called multiple times, the extra cycles taken to compile the code would be offset by the faster binary that is generated.

Further, the more the JVM runs a particular method or loop, the more information it gathers to make various optimizations, so that a faster binary is generated.
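A minimal sketch of a method becoming hot is shown below. The class and method names are ours, chosen for illustration; the loop simply calls a small method enough times to push it past the JIT's invocation threshold.

```java
public class HotSpotDemo {
   // A small method that becomes "hot" once it is called many times.
   static int square(int n) {
      return n * n;
   }

   public static void main(String[] args) {
      long total = 0;
      // Tens of thousands of calls make square() a hotspot, so the JVM
      // compiles it to native code instead of interpreting it each time.
      for (int i = 0; i < 100_000; i++) {
         total += square(i);
      }
      System.out.println("total = " + total);
   }
}
```

Running this with java -XX:+PrintCompilation HotSpotDemo prints a log line each time the JVM compiles a method, so you can watch square() get picked up by the JIT.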

Working of JIT Compiler

The JIT compiler helps improve the execution time of Java programs by compiling hotspot code to machine (native) code.

The JVM scans the complete code, identifies the hotspots (the code to be optimized by the JIT), and then invokes the JIT compiler at runtime, which in turn improves the efficiency of the program and makes it run faster.

As JIT compilation is a processor- and memory-intensive activity, it should be planned accordingly.

Compilation Levels

JVM supports five compilation levels −

  • Interpreter
  • C1 with full optimization (no profiling)
  • C1 with invocation and back-edge counters (light profiling)
  • C1 with full profiling
  • C2 (uses profiling data from the previous steps)

Use -Xint if you want to disable all JIT compilers and use only the interpreter.

Client Vs. Server JIT (Just-In-Time) Compiler

Use -client and -server to activate the respective modes. The client compiler (C1) starts compiling code sooner than the server compiler (C2). So, by the time C2 starts compiling, C1 will already have compiled sections of the code.
But while it waits, C2 profiles the code to learn more about it than C1 does. Hence, the time it waits is offset by the optimizations it can use to generate a much faster binary.

From the perspective of a user, the trade-off is between the startup time of the program and the time taken for the program to run. If startup time is at a premium, then C1 should be used. If the application is expected to run for a long time (typical of applications deployed on servers), it is better to use C2, as it generates much faster code that greatly offsets any extra startup time.

For programs such as IDEs (NetBeans, Eclipse) and other GUI programs, the startup time is critical. NetBeans might take a minute or longer to start. Hundreds of classes are compiled when programs such as NetBeans are started. In such cases, the C1 compiler is the best choice.

Note that there are two versions of C1 − 32-bit and 64-bit. C2 comes only in a 64-bit version.

Examples of JIT Compiler Optimizations

The following examples showcase JIT compiler optimizations:

Example of JIT optimization in case of objects

Let us consider the following code −

for (int i = 0; i <= 100; i++) {
   System.out.println(obj1.equals(obj2)); // two objects
}

If this code is interpreted, the interpreter would deduce the class of obj1 for each iteration. This is because each class in Java has an .equals() method, inherited from the Object class, which can be overridden. So even if obj1 is a String, the deduction will still be done for every iteration.

On the other hand, what would actually happen is that the JVM would notice that obj1 is of class String in every iteration, and hence it would generate code corresponding to the .equals() method of the String class directly. Thus, no lookups would be required, and the compiled code would execute faster.
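A minimal sketch of such a call site is shown below. The class and method names are ours; the point is that the .equals() call inside the loop only ever sees String receivers, which is exactly the situation where the JIT can bind it to String.equals() directly.

```java
public class MonomorphicDemo {
   // The call site inside the loop only ever sees String receivers,
   // so the JIT can devirtualize it to a direct call to String.equals().
   static int countMatches(String[] items, Object target) {
      int matches = 0;
      for (String item : items) {
         if (item.equals(target)) { // monomorphic call site
            matches++;
         }
      }
      return matches;
   }

   public static void main(String[] args) {
      String[] data = new String[100_000];
      java.util.Arrays.fill(data, "hot");
      System.out.println(countMatches(data, "hot"));
   }
}
```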

This kind of behavior is only possible when the JVM knows how the code behaves. Thus, it waits before compiling certain sections of the code.

Example of JIT optimization in case of primitive values

Below is another example −

int sum = 7;
for (int i = 0; i <= 100; i++) {
   sum += i;
}

An interpreter, for each iteration of the loop, fetches the value of 'sum' from memory, adds 'i' to it, and stores it back into memory. Memory access is an expensive operation and typically takes multiple CPU cycles. Since this code runs multiple times, it is a hotspot. The JIT will compile this code and make the following optimization.

A local copy of 'sum' would be stored in a register, specific to a particular thread. All the operations would be done to the value in the register and when the loop completes, the value would be written back to the memory.

What if other threads are accessing the variable as well? Since the updates are being done to a thread-local copy of the variable, other threads would see a stale value. Thread synchronization is needed in such cases. A very basic synchronization primitive is to declare 'sum' as volatile. Then, before accessing the variable, a thread flushes its local registers and fetches the value from memory; after accessing it, the value is immediately written back to memory.
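A minimal sketch of a volatile field in use is shown below. The class and field names are ours, and the join() is there only to make the demo deterministic; real code would have the reader poll 'done' instead.

```java
public class VolatileFlagDemo {
   // 'volatile' forces reads and writes of 'done' to go through main
   // memory, so a reader thread cannot cache a stale value in a register.
   static volatile boolean done = false;
   static int result = 0;

   public static void main(String[] args) {
      Thread writer = new Thread(() -> {
         result = 42; // plain write ...
         done = true; // ... made visible to readers by the volatile write
      });
      writer.start();
      try {
         writer.join(); // joined here only to keep the demo deterministic
      } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
      }
      System.out.println("done = " + done + ", result = " + result);
   }
}
```

The volatile write to 'done' also publishes the earlier plain write to 'result': a thread that sees done == true is guaranteed to see result == 42 as well.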

Optimizations Done by Just-In-Time (JIT) Compiler

Below are some general optimizations that are done by the JIT compilers −

  • Method inlining
  • Dead code elimination
  • Heuristics for optimizing call sites
  • Constant folding
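To illustrate the first item in the list, here is a sketch of a classic inlining candidate. The class and method names are ours; the tiny accessor below is the kind of method the JIT replaces with a direct field load, eliminating the call overhead inside the hot loop.

```java
public class InlineDemo {
   private final int value;

   InlineDemo(int value) {
      this.value = value;
   }

   // A tiny accessor like this is a classic inlining candidate: in a hot
   // loop, the JIT replaces the call with a direct load of 'value'.
   int getValue() {
      return value;
   }

   static long sumValues(InlineDemo[] items) {
      long sum = 0;
      for (InlineDemo item : items) {
         sum += item.getValue(); // inlined once the loop becomes hot
      }
      return sum;
   }

   public static void main(String[] args) {
      InlineDemo[] items = new InlineDemo[1000];
      for (int i = 0; i < items.length; i++) {
         items[i] = new InlineDemo(i);
      }
      System.out.println(sumValues(items));
   }
}
```

Running with java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo prints the JIT's inlining decisions, where you can see getValue() being inlined into sumValues().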