I’ve previously discussed JIT compilers, referencing them at [1], [2], [3], and [4]. Recently, I’ve been thinking about how JITs and runtimes behave.
It’s known that Java HotSpot and .NET Core (from version 3.0 onwards) utilize JIT compilation for frequently executed methods. This process involves applying runtime-specific optimizations and even replacing existing native code with enhanced versions through a technique called hot swapping. Given these dynamic optimizations, a question arises: is the optimized native code stored for reuse in subsequent executions?
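To make the "frequently executed methods" part concrete, here is a minimal sketch (the class and method names are mine, not from any real API). Running it with `java -XX:+PrintCompilation HotLoop` makes HotSpot log each method as it compiles it, and a tight loop like this crosses the invocation threshold almost immediately:

```java
// A hypothetical micro-example: run with `java -XX:+PrintCompilation HotLoop`
// and watch HotSpot report the compilation of `square` once it becomes hot.
public class HotLoop {
    // Called often enough to cross HotSpot's invocation/loop thresholds,
    // so the JIT compiles it from bytecode to native code at runtime.
    static long square(long n) {
        return n * n;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (long i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}
```

The interesting part is not the arithmetic but the `PrintCompilation` output: it shows compilation happening mid-run, at tier after tier, which is exactly the native code this post asks about persisting.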
To my knowledge, .NET doesn’t have this capability. Java, however, seems to achieve it: the Shared Classes Cache (a feature of the OpenJ9 JVM) allows classes and compiled code to be shared between JVMs and offers configuration options for disk persistence. Interestingly, a different perspective ([link redacted] - see original post) contradicts this notion, essentially denying the existence of shared classes. While that contrasting viewpoint seems logical, it doesn’t align with the documented reality of the Shared Classes Cache.
It’s important to remember that JIT output can vary between runs as it adapts to the current load pattern, potentially enabling more aggressive optimizations compared to pre-compiled code that needs to be universally applicable.
If the load pattern changes and an optimization becomes less effective or even detrimental, the JIT can de-optimize and explore alternative optimization strategies (refer to About the dynamic de-optimization of HotSpot).
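The kind of situation that triggers this can be sketched in a few lines (the `Shape`/`area` names are illustrative, not from any real API). While only one implementation has ever been seen at a call site, the JIT can speculate on the receiver type and inline the call; the moment a second implementation shows up, that speculation is invalid and the compiled code is discarded:

```java
// A sketch of the pattern behind HotSpot's de-optimization, observable
// with -XX:+PrintCompilation (look for methods "made not entrant").
// All names here are illustrative, not from any real API.
public class DeoptDemo {
    interface Shape { double area(); }
    static class Circle implements Shape {
        public double area() { return Math.PI; } // unit circle
    }
    static class Square implements Shape {
        public double area() { return 1.0; }     // unit square
    }

    // The virtual call `s.area()` is the call site the JIT profiles.
    static double total(Shape[] shapes) {
        double t = 0;
        for (Shape s : shapes) t += s.area();
        return t;
    }

    public static void main(String[] args) {
        Shape[] shapes = new Shape[100_000];
        java.util.Arrays.fill(shapes, new Circle());
        // Phase 1: only Circle is ever seen, so the JIT may speculate that
        // s.area() always dispatches to Circle.area() and inline it.
        double warm = total(shapes);

        // Phase 2: a Square appears; the speculation is now wrong, so the
        // optimized code is de-optimized and later recompiled with a more
        // general dispatch.
        shapes[0] = new Square();
        double mixed = total(shapes);
        System.out.println(Math.round(warm) + " " + Math.round(mixed));
    }
}
```

This also illustrates why persisted native code is awkward: the speculative version is only valid under assumptions recorded at runtime, which a fresh process may not share.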
While storing and reusing compiled code could sometimes improve performance, the overhead of managing cached code and matching each code section to the optimizations that are still valid is likely deemed too expensive. It comes down to a trade-off between file-system operations on one side and the typically fast compilation of small portions of code on the other.
Applying this logic to the concept of a Class Cache, one might speculate that it primarily stores the initial native code generated by the JIT without extensive optimizations. The dynamically optimized code, which undergoes hot swapping based on runtime data, might not be cached. Instead, the Class Cache could store profiling data to compare against future executions, determining if they operate under similar conditions.
If this assumption about the Class Cache is accurate, it raises a question: why not use AOT compilation instead of a Class Cache, and then rely on the JIT at runtime for further optimizing hot methods? The pros and cons of AOT versus JIT are extensively discussed in various resources, like this one. An intriguing point raised in these discussions, which I hadn’t previously considered, is that JIT compilation focuses only on code that is actually executed, while AOT compilation covers the entire application. For large applications whose usage patterns center on a small subset of functionality, AOT compilation could waste significant compilation time.
Shifting to the realm of JavaScript, V8 employs two JIT compilers: a non-optimizing one and an optimizing one. Frequently executed (hot) code undergoes a second JIT compilation with the optimizing compiler. It’s plausible that the optimizing JIT runs multiple times for “very hot” code, potentially leading to additional hot swapping. Regardless, I’ve learned from here that V8 saves compiled code to disk to expedite subsequent executions. My understanding is that this applies only to code generated by the non-optimizing JIT.
Before concluding, it’s worth noting the intriguing approach adopted by modern Android (specifically, ART since Android 7), which combines an interpreter, JIT, and AOT compilation. As the documentation explains:
“ART uses ahead-of-time (AOT) compilation, and starting in Android 7.0 (Nougat or N), it uses a hybrid combination of AOT, just-in-time (JIT) compilation, and profile-guided compilation. The combination of all these compilation modes is configurable and will be discussed in this section. As an example, Pixel devices are configured with the following compilation flow:
An application is initially installed without any AOT compilation. The first few times the application runs, it will be interpreted, and methods frequently executed will be JIT compiled. When the device is idle and charging, a compilation daemon runs to AOT-compile frequently used code based on a profile generated during the first runs. The next restart of an application will use the profile-guided code and avoid doing JIT compilation at runtime for methods already compiled. Methods that get JIT-compiled during the new runs will be added to the profile, which will then be picked up by the compilation daemon.”
The background AOT compilation targeting only frequently used code (instead of the entire application) is a noteworthy aspect. It’s unclear whether the “profile generated” solely contains information about frequently executed methods or includes broader profiling data that the AOT compiler can leverage for optimization. It’s possible that Android’s JIT supports hot swapping, which could allow the AOT-compiled code to be replaced with JIT-generated code tailored to specific execution characteristics. This speculation stems from encountering information about de-optimizations in this context.