Why is it hard to beat an AOT compiler with a JIT compiler (in terms of application performance)?

There’s a definite trade-off between JIT and AOT (ahead-of-time) compilation.

As you stated, JIT has access to run-time information that can aid in optimization. This includes data about the machine it’s executing on, enabling platform-specific native optimization. However, JIT also carries the overhead of translating bytecode to native instructions while the program runs.
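To make that first point concrete, here is a minimal sketch (the class names are invented for illustration) of the kind of call site where a profile-guided JIT has an edge: the call is virtual in the bytecode, but if the runtime profile shows only one receiver type ever reaches it, the JIT can speculatively devirtualize and inline the call behind a cheap type check. An AOT compiler would have to prove the receiver type statically or keep the indirect call.

    // Sketch only; names are made up for the example.
    interface Shape {
        double area();
    }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    public class Hot {
        // s.area() is a virtual call in the bytecode. If the runtime profile
        // shows only Circle flowing through this loop, the JIT can
        // devirtualize and inline it; an AOT compiler without whole-program
        // type information must keep the dispatch.
        static double totalArea(Shape[] shapes) {
            double sum = 0.0;
            for (Shape s : shapes) {
                sum += s.area();
            }
            return sum;
        }

        public static void main(String[] args) {
            Shape[] shapes = new Shape[1_000_000];
            for (int i = 0; i < shapes.length; i++) {
                shapes[i] = new Circle(i % 10);
            }
            System.out.println(totalArea(shapes));
        }
    }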

This overhead often becomes apparent in applications where a fast start-up or near real-time responses are necessary. JIT is also not as effective if the machine does not have sufficient resources for advanced optimization, or if the nature of the code is such that it cannot be “aggressively optimized.”
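A rough way to see that warm-up cost for yourself (this is not a proper benchmark; a harness such as JMH would be needed for real measurements) is to time the same method repeatedly. Early rounds typically run in the interpreter or at a low compilation tier and are noticeably slower than later rounds, once the JIT has compiled the hot code.

    // Rough illustration of JIT warm-up, not a rigorous benchmark.
    public class Warmup {
        static long work(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += (long) i * i % 7;
            }
            return sum;
        }

        public static void main(String[] args) {
            for (int round = 0; round < 10; round++) {
                long start = System.nanoTime();
                long result = work(5_000_000);
                long elapsed = System.nanoTime() - start;
                // The first rounds are usually the slowest: the method is
                // still being interpreted and profiled before compilation.
                System.out.printf("round %d: %d ms (result %d)%n",
                        round, elapsed / 1_000_000, result);
            }
        }
    }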

For example, taken from the article you linked:

> … what should we improve in the absence of clear performance bottlenecks? As you may have guessed, the same problem exists for profile-guided JIT compilers. Instead of a few hot spots to be aggressively optimized, there are plenty of “warm spots” that are left intact.

AOT compilers can also spend as much time optimizing as they like, whereas JIT compilation is bound by time requirements (to maintain responsiveness) and the resources of the client machine. For this reason, AOT compilers can perform complex optimizations that would be too costly during JIT compilation.
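As a loose illustration of that budget difference, consider a simple, easily optimizable reduction like the one below. An offline compiler can afford to run its heavyweight passes (unrolling, vectorization, and so on) over every such loop in the program, because compile time is paid once, ahead of time; a JIT has to amortize the same work against the running application, so it generally reserves the expensive passes for the loops its profiler has flagged as hot and leaves the rest interpreted or lightly compiled.

    // Illustration only: a simple reduction that rewards aggressive
    // loop optimization. An AOT compiler can optimize every loop like this
    // offline; a JIT typically spends that effort only on the hot ones.
    public class Reduce {
        static long dot(int[] a, int[] b) {
            long sum = 0;
            for (int i = 0; i < a.length; i++) {
                sum += (long) a[i] * b[i];
            }
            return sum;
        }

        public static void main(String[] args) {
            int n = 1 << 20;
            int[] a = new int[n];
            int[] b = new int[n];
            for (int i = 0; i < n; i++) { a[i] = i; b[i] = i + 1; }
            System.out.println(dot(a, b));
        }
    }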

Also see this SO question: JIT compiler vs offline compilers
