> Graal can only create native binaries targeting the system on which it is running. That means that if you want to create binaries for {Linux,Windows,Mac}x{Intel,ARM}, you need 6 different machines in order to build the 6 binaries and somehow aggregate them together for publication or deployment. This is not a blocker, but can definitely be inconvenient vs. some other toolchains which allow you to build native binaries for all targets on a single machine.
To me this is a huge downside. One of the reasons I like Go so much is that I can build binaries for any platform I want to support on my Mac or with Docker.
GCJ could do that, as well as JET I believe, but both are long dead.
The history of native compilation for Java I find fascinating: nothing looks like a normal, boring programming language as much as Java does, yet somehow it resists being treated as a normal, boring programming language, and needs its own galaxy of special idioms and tools, as if it were some Smalltalk or Lisp.
Because Java was not designed for it. It's retrofitted in because of trends.
There's actually, imho, no need for native compilation for Java. Why isn't shipping the runtime with the code a more common practice? It's how IntelliJ or Eclipse worked. People today ship Electron apps, for God's sake!
If you _need_ native, for one reason or another and it's non-negotiable, then choose a natively compiled language.
> Why isn't shipping the runtime with the code a more common practice?
That's what I actually do for our product. The JRE is shipped with it as well as startup scripts and a service-registration bridge for Windows.
I guess the native compile thing is more for "Java on the Cloud", which honestly is not where Java shines. I say this as a long-term Java developer. Java is great for monoliths; on-premises customers are super happy. Java on the cloud seems like a waste of resources to me.
> Why isn't shipping the runtime with the code a more common practice?
It really shouldn't be. It bloats everything. Containers are bigger. Software is bigger. There's no reason to do this, since very few apps use the entire surface area of a runtime (Python, Node, Java, etc).
Instead, a native GraalVM binary can embed all these things, including only the portions the application needs to run. Thus it behaves like a dynamic piece of software during development, but like a static, high-performance native binary with a low vulnerability surface in production.
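As a rough sketch of what that looks like in practice (assuming a GraalVM JDK with the native-image tool on the PATH, and a hypothetical app.jar):

    # AOT-compile the application into a self-contained binary
    # (-o names the output; supported in recent GraalVM releases)
    native-image -jar app.jar -o app

    # Runs with no JVM installed; only reachable code was compiled in
    ./app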
> That's what I actually do for our product. The JRE is shipped with it as well as startup scripts
Please, as a user, I implore you to look into native targets. It's not as hard as it seems anymore. Some things really do need jlink.
> I guess the native compile thing is more for "Java on the Cloud"
Not so. GraalVM is great for embedded development. It's great for desktop development or even shared native library development.
> Java on the cloud seems like a waste of resources to me.
There is no difference now between C++ "in the cloud" and Java "in the cloud" except Java remains memory safe by default.
> Please, as a user, I implore you to look into native targets. It's not as hard as it seems anymore. Some things really do need jlink.
You are not my target. It's enterprise software: I'm given a Windows Server VM and the rights to install our software, and that's it. Most IT admins don't even want to bother installing, let alone supporting, JRE updates. We have tons of customers and this experience is uniform.
> It's great for desktop development
For desktop I'd totally use jlink. Really the only place GraalVM's Native Image seems like a plausible fit to me is CLI tools and Java microservices, where fast startup and low memory consumption actually make sense. Edit: but honestly, at this point, for microservices I'd probably go with Go.
While I'm not a fan of the cloud, in the case of Java: package it in Docker or whatever deployable along with the JRE (or even the JDK) and call it a day.
Compiling to native makes sense only for cases that need very fast startup.
As for wasted resources: there are cases where the servers have to be deployed in specific jurisdictions (compliance), and it's easier to use AWS than to manage tons of small sites. Still, I don't see any need for native compilation.
> Compiling to native makes sense only for cases that need very fast startup.
It also makes sense to reduce container size and vulnerability surface, and to reduce warmup time to peak JIT performance.
> Still, I don't see any need for native compilation.
Sure, there isn't a "need," per se, just like there is no "need" for JIT. This is how technology evolves. Things get better. Java's startup and time-to-peak performance are now comparable to native languages.
> and to reduce warmup time to peak JIT performance.
This actually works the opposite way: JIT compilation is profile-guided and uses the actual input for best results. There are many optimizations for polymorphic call sites, e.g. a direct call plus a guard, bimorphic (two-target) inlining, inline caches, and v-table calls if all else fails. Without profile-guided optimization such decisions are unlikely to happen.
In other words, for peak runtime performance, JIT can surpass ahead-of-time/native compilation.
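A minimal illustration of the kind of decision being described (Shape/Circle are hypothetical names): at the call site below, a JIT that has only ever observed one receiver type can inline the method behind a cheap type guard, while a profile-free AOT compiler generally has to keep the virtual dispatch, unless static analysis can prove there is a single implementation.

    interface Shape { double area(); }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    class Totals {
        // If profiling shows every element is a Circle, the JIT can emit
        // roughly: "if (s is a Circle) run inlined Circle.area(), else
        // deoptimize" -- turning a virtual call into straight-line code.
        static double total(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) sum += s.area();
            return sum;
        }
    }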
> reduce container size
I would be exceptionally hard-pressed to find a single case where that would matter (aside from specific client deployments). Server-side and the like, I wouldn't care about an extra 100 MB or so of deployable. E.g. if I wanted a really small footprint, I'd go and do it all in pure C with an absolute minimum of external library support.
> startup times
I already mentioned it: for anything that doesn't need sub-0.5 s startup, Java does quite OK. There is a lot to be said about the use of DI frameworks with blatant classpath scans, but that's not necessarily Java's fault.
My understanding is that native compilation doesn't ship a JIT, but runs the compiled code that was produced at build time. However, if your code is so "dynamic" (i.e., has lots of deeply nested inheritance) that procedure calls are slowed by many levels of indirection, then a JVM runtime will JIT a better version of the native code while running, using runtime information (such as jumping directly to the correct virtual function instead of going through the indirection). The AOT-compiled version does not have access to this runtime information, so you can't get this sort of optimization.
However, for "simple" code (like arithmetic, or manipulation of bits, arrays etc), i think it's pretty much the same.
> There is a lot to be said about the use of DI frameworks with blatant classpath scans, but that's not necessarily Java's fault.
I used to support Atlassian's products on the intranet before we migrated (almost) everything to their cloud: Jira, Bitbucket, Bamboo, Confluence. I think Jira and Bitbucket were hands down the slowest of the bunch to boot, and from what I could tell it was exactly due to classpath scanning and God knows what else. I could never believe that a 16-CPU, 32 GB KVM needed from 3 to sometimes 5 minutes just to boot Jira. To me this was insane.
> Why isn't shipping the runtime with the code a more common practice?
Well, Java has jlink which is meant to do exactly that (ship the parts of the JVM you actually need, which is easy to figure out as there are tools to do that), and that's the recommended approach today for shipping anything Java.
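For the curious, a minimal sketch of that workflow with the standard JDK tools (app.jar and the module list are hypothetical):

    # Ask jdeps which JDK modules the app actually touches
    jdeps --print-module-deps app.jar
    # e.g. prints: java.base,java.logging,java.sql

    # Build a trimmed runtime image containing only those modules
    jlink --add-modules java.base,java.logging,java.sql \
          --strip-debug --no-header-files --no-man-pages \
          --output my-runtime

    # Ship my-runtime/ next to the app and launch with its bundled java
    my-runtime/bin/java -jar app.jar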
> There's actually, imho, no need for native compilation for Java
Sure, that is true, but with native compilation available, Java can do even more things. There are other reasons, too: startup time, quicker time to peak performance, and better interfacing with FFM and JNI, to name a few.
> If you _need_ native, for one reason or another and it's non-negotiable, then choose a natively compiled language.
You could do that, sure. But then you would need to worry about all the things that come with that. For example, memory safety -- GraalVM preserves memory safety guarantees with native binaries. Even binaries which themselves ship JIT features.
"Why isn't shipping the runtime with the code a more common practise?"
I'd say it is a pretty common thing. I can't tell you why it isn't the norm nowadays, but it used to be that Sun and Oracle prohibited modifications to the JRE in their license. So you had to ship the whole bloated thing or nothing to be compliant, even though it was technically not too hard to strip the JRE down to only the things your app needed.
Actually, one of the few production deployments of GCJ used to be Red Hat AOT-compiling Eclipse.
It wasn't retrofitted; rather, it was only available behind a paywall. That is what kept companies like Excelsior JET in business.
OpenJ9 from IBM traces its roots back to Java AOT compilation for embedded development; it used to be called WebSphere Real Time.
PTC and Aicas are two companies specialized in real-time JVM implementations for embedded development, with military and factory automation as their main customer targets; they have also supported AOT compilation for decades.
Was it actually any good? My only experience with it was Debian using it as the default Java interpreter, which meant that any attempt to execute Java without first removing it resulted in untold amounts of suffering.
My understanding is that its main merit was to remove the need to have the non-free Sun, then Oracle, JDK installed. Apparently its technical merits were not sufficient to maintain it in a post-OpenJDK world.
Both ecosystems complement each other. Some stuff the Java ecosystem does much better than .NET, like multiple implementations with various kinds of GC and JIT implementations, wider support for hardware deployments, tooling like Graal, industry standards, IDE implementations, a mobile OS, ...
Other things the .NET ecosystem does better: support for value types, low-level programming, SIMD, desktop development, game engines.
To this day they keep copying features from each other, and if neither C# nor Java is to one's taste, there are still several other options on each platform.
Hence why I am comfortably at home using them both, complemented by JS/TS for the front end, and C++ for fiddling with their runtimes or plugging into native libraries.
Most of C# 1.0, for starters, given that it was born from the lawsuit and the J++ extensions: J/Direct, COM interop, events, and Windows Foundation Classes.
More recently, default interface methods; and apparently the ongoing discriminated-union design seems to be settling on an approach similar to Scala's/Java's. I still have to spend some time delving into it.
I'm not delving now into the CLR versus the various JVM implementations regarding GC, JIT, PGO, tiered compilation, escape analysis, ...
While I agree, I think most use cases aren't about releasing end-user software that needs to run on many architectures and OSes, but rather about building enterprise software for one specific combination, usually x86 and Linux, to deploy on a server or containerized in a hyperscaler.
But to contradict myself, I actually use Windows, WSL, and Oracle Cloud to build a Java desktop application for Windows x86, Linux x86, and Linux ARM64. Of course I'd prefer it if I could just use my main machine and OS to build all of those, like Go can.
Also, I don't believe this is fully true. GraalVM depends on the underlying C compiler to generate native code, so if properly configured with target triples and flags (just like any cross-compiling setup) it should work.
GraalVM reuses the normal Java class-loading logic, which is platform-dependent. So e.g. on Windows it will load some Windows-specific File subclasses and the like, so this is more of an architectural problem to a certain degree.
(It really does loadClass everything reachable from the code to be compiled.)
Well, it's also not true. But aside from that, plenty of software builds in its native environment. Cross-compiling is straight up hard for many toolchains.
> this is a huge downside.
Also, the compiler can only do static analysis and code generation. I'd much rather have a HotSpot VM analyze the running code and identify optimizations on the fly.
It's not either/or. Compiling a Native Image binary gives you the JIT power of the JVM, but with faster warmup time.
Peak performance is not as good as the JVM's in many cases, but how long does your process actually spend in that space? Native Image can be optimal for many cases.
Unused RAM is wasted. If you have plenty available, then doing useless memory releases (including dropping/destroying objects with RAII, which can end up doing a lot of work when a data structure contains other objects that have to be dropped) will just hurt your throughput. Nonetheless, Java has a pretty trivial single flag to set an appropriate heap size if you are not happy with the default.
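For reference, that single flag, plus a container-friendly variant that also exists in modern JDKs (app.jar is a hypothetical name):

    # Cap the heap at 512 MB instead of the default
    # (commonly about 1/4 of physical RAM)
    java -Xmx512m -jar app.jar

    # Or size the heap relative to available memory,
    # which tends to behave better inside containers
    java -XX:MaxRAMPercentage=75.0 -jar app.jar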
--
You are thinking of AOT-compiled languages, but does that AOT language support arbitrary class loading to expand functionality, or live observability with almost zero overhead?
--
So what do you do if a statically linked library of yours has a vulnerability?
What's your point? This is what the regular JVM/HotSpot has been doing for ages (and GraalVM, too). Whereas native-image does AOT, with optional profile-guided optimization to identify hotspots on the fly.
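To be precise, the profile-guided part is a separate training run rather than truly on-the-fly; a sketch of that workflow (PGO requires Oracle GraalVM; app.jar is a hypothetical name):

    # 1. Build an instrumented binary
    native-image --pgo-instrument -jar app.jar -o app-instr

    # 2. Exercise it with a representative workload;
    #    it writes default.iprof on exit
    ./app-instr

    # 3. Rebuild, feeding the collected profile back in
    native-image --pgo=default.iprof -jar app.jar -o app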
Apparently I left out a line break. It should read:
> this is a huge downside.
The compiler can only do static analysis and code generation. I'd much rather have a HotSpot VM analyze the running code and identify optimizations on the fly.
So I believe we're in agreement that AOT does more limited optimizations compared to the HotSpot JIT.