
"Java application servers are dead" and since there's no alternative to Java application servers here's a solution I cooked up myself.

Like I mentioned last time, I appreciate an overview of modern Java practices, but boy howdy, I can't discern if this is clever trolling or a cheap way to make me read Part 3.



> "Java application servers are dead" and since there's no alternative to Java application servers here's a solution I cooked up myself.

The alternatives are servlet containers, like Jetty or Tomcat. Jetty, for example, is very popular and really good, and can be easily embedded inside an app for single-JAR deployments, which is an awesome way to deploy an app, btw.
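Jetty's own API differs, but the embedded-server pattern described above can be sketched with just the JDK's built-in `com.sun.net.httpserver` (the class name and handler here are made up for illustration): the server starts inside your own `main()`, so the whole app ships and runs as a single `java -jar` artifact.

```java
import com.sun.net.httpserver.HttpServer;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class EmbeddedServerDemo {

    // Start a server inside this process, make one request against it,
    // shut it down, and return the response body.
    static String demo() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            URL url = new URL("http://127.0.0.1:"
                    + server.getAddress().getPort() + "/");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
                return in.readLine();
            }
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        // The whole app, server included, ships as one artifact: java -jar app.jar
        System.out.println(demo());  // prints "hello"
    }
}
```

With embedded Jetty you would swap the JDK server for `org.eclipse.jetty.server.Server` plus your servlet handlers, but the deployment story is the same: one process, one artifact.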


Fair enough, it's a matter of terminology then, I guess?

Generally I think of Tomcat as being as much of an application server as something like JBoss is.

Granted, JBoss/Wildfly has a lot more enterprise-y features, but they both 'serve' web 'applications.'


I was talking about app/servlet containers (Tomcat, JBoss, WebSphere, etc.) vs. embedded, single-app servers (Jetty, embedded Tomcat, and Dropwizard, which is essentially Jetty + Jersey + added goodies).


Dropwizard is a framework that delegates to Jetty the responsibility of serving requests by default, but you can host a Dropwizard app on whatever server you like.

In Java terminology, "application server" has actually come to mean servers capable of the full Java EE stack, which really means EJB and JMS. Jetty is not capable of those, and from what I know, neither is Tomcat. Both are targeting first and foremost the Servlet API, which is pretty light and arguably good (well, at least since the latest Servlet 3.1, which finally adds asynchronous reading of requests).

Servlets are the piece of Java EE that I actually like; for everything else, there are third-party libraries and frameworks. That said, I've been working lately with the Play framework, which comes with its own server. Deployment on top of servlet containers is not officially supported, but that was primarily because Play is a fully async framework and Servlets < 3.1 were inadequate for that, so things might change.

I also prefer embedding and the deployment of fat JARs to WARs. It makes things easier: a WAR implies that you need a management interface and a configured instance of your production server, while a JAR implies that everything comes bundled in, configuration and all, and you only have to copy and execute it directly. Plus, with embedding you have more fine-grained control. When I was using an embedded Jetty, for example, I fine-tuned the shit out of its thread and connection pools, all from code.
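The commenter tuned Jetty's pools (in Jetty that would be `QueuedThreadPool` and connector settings); as a JDK-only illustration of the same "configure the pools from code" idea, here is the built-in `HttpServer` wired to a hand-tuned `java.util.concurrent` pool. The pool sizes are invented numbers, not recommendations.

```java
import com.sun.net.httpserver.HttpServer;

import java.net.InetSocketAddress;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TunedPoolDemo {

    // The request pool is configured entirely in code -- no server.xml,
    // no management console. Numbers are hypothetical; tune for your load.
    static ThreadPoolExecutor tunedPool() {
        return new ThreadPoolExecutor(
                8,                               // core threads kept alive
                200,                             // hard cap on worker threads
                60, TimeUnit.SECONDS,            // idle worker timeout
                new ArrayBlockingQueue<>(256));  // bounded backlog, not an unbounded queue
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = tunedPool();
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.setExecutor(pool);  // must be set before start()
        server.start();
        System.out.println("max workers: " + pool.getMaximumPoolSize());
        server.stop(0);
        pool.shutdown();
    }
}
```

The point stands for any embedded server: everything a container would hide behind XML or an admin UI is an ordinary object you can construct and inspect.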


Thinking Java EE is primarily about EJB and JMS is pretty old school.

These days CDI is the centrepiece of Java EE, along with services like Interceptors and Bean Validation. JPA, JSF and JAX-RS also play pretty important roles, which are all things you don't find by default in Tomcat.

Actually, nearly every time I see people using Tomcat they add many of the things mentioned above. You might as well start with TomEE then and work from there.


CDI, Interceptors, Bean Validation, JPA, JSF - all of them suck.

JAX-RS is the only OK-ish thing in the list you mentioned; unfortunately, its design ignored asynchronicity, which was shortsighted (I heard that got fixed in 2.0, but apparently they also added junk).


I'd not go so far as to say it's trolling; perhaps overly dramatic hyperbole. As a Java developer, I do see a shift away from packaging up WARs and deploying them to application servers. See Dropwizard or Spring Boot for examples of frameworks that prefer to run fat JARs that serve HTTP requests via embedded containers.


Why is the Java community so enamored with creating Java-only solutions when general-purpose solutions work just as well (and often better)?

There's no need to bundle everything up into an executable jar or war. It's much easier to write a Chef recipe to copy all the right files to the right places. And it's even easier to create a Docker container with the exploded war in its correct place. Neither of those solutions relies on the clunky put-everything-inside-a-jar approach and the resulting need to write your own ClassLoader.


Because a Java-only solution to a complicated problem is normally simpler.

A few reasons contribute to that. One of them is the exhaustive pure-Java library ecosystem, which means it's very unlikely that you'll run into the intricacies of native library access on your host system. Another is excellent tooling, which means that when all your processes are Java processes, lots of operations-related tasks (process management & monitoring, log rotation, etc.) get significantly easier.

I wrote more on that a few weeks ago: http://www.mikhanov.com/2014/03/31/a-love-letter-to-java-363


Indeed, pure Java crypto libraries (Bouncy Castle) were really nice to have a few weeks ago. When other languages/frameworks are wrapping or calling OpenSSL, it's a really nice feeling to have the common crypto algorithms implemented in your language/platform. Same goes for image encoding/decoding, database access, lots of examples.

Plus, there's the benefit of (essentially) standard builds, project layout, dependency management, and deployment strategies (jar or war). A java developer can walk into a new job and be productive right away.


Interesting: a lot of people who are really into CI point to the Java ecosystem as the best example of how to do it. People make the artifact of other languages' build pipelines a .deb or something similar, to have basically the same artifact as a .jar/.war.


Java - "run anywhere" (supposedly)

Docker - "The Linux Container Engine"

Think you've got your answer there.


> Java - "run anywhere" (supposedly)

Sure, it's a nice benefit, but is it actually used? In 2014, how many of us are doing backend/service development in java and not deploying to Linux machines? There are certainly still backend/service java shops out there that aren't deploying to linux machines, but I wager that most are.

Regardless of those numbers though, why do the linux shops actually care about being able to deploy to non-linux machines if they never do in practice? Python can run just about anywhere these days, but python shops that deploy exclusively on Linux don't care about maintaining deployment compatibility with systems that they don't actually use.


> In 2014, how many of us are doing backend/service development in java and not deploying to Linux machines?

Among silicon valley webstartups? Not many. But there are still a lot of Windows-based installs at large megacorps and small offices all over. For a lot of them, switching away is not an option, so you might want to support them.


Among silicon valley webstartups, I would be surprised to hear much about java service/backend work at all. My experience is primarily with the megacorps.

If you are developing backend/service shit for other companies to use, then that is one thing, but I am thinking about in-house development. Most megacorps aren't software vendors; if they are developing software it is for themselves.

Regardless though, some megacorps deploying to windows doesn't explain the practices of java shops that don't deploy to windows. It seems like portability just for the sake of portability, without any real purpose but with real added hassle.


You would be? Because I hear daily about services written on the JVM.


GNU/Linux != UNIX, and there is still a lot of UNIX in the enterprise.

In the enterprise it is also common to develop on Windows and deploy across Windows and UNIX systems.

Finally, there are quite a few banks, finance and insurance institutions running Java on mainframes.


I've written telephony apps that deployed to Linux, Unix and Windows. I've written an OpenGL / Swing app on Windows and deployed it for Mac. This kind of stuff (a) does happen and (b) is valuable.


I don't understand how you believe that I was implying that nobody develops cross platform software.

I am talking about one specific kind of development: Service/backend development done in-house in a corporation that is not a software vendor. If you're doing Swing/OpenGL work, you're not doing the sort of work that I am talking about.

In my experience, while this sort of development may deploy to different *nixes at different companies, within any particular company it tends to always deploy on only one type of system. Most "mega-corps" are not software vendors; when they develop software, they are developing it for themselves. This means that they control the entire stack, making portability fairly pointless.

When development in these organizations is done in other languages without such a "portability culture", not a second of thought is given to portability. The second a service is written in Java though, everyone starts jumping through hoops for something that they will never use.


The portability of Java helps you avoid the situation where your sysadmin tells you that he just upgraded a production server because the previous version of your distro reached its EOL, and now your product does not start because the libfoobar.so that you depend on is not included in the distro any more, and he can't build it from source because the distro's default compiler also switched from gcc to clang.

Portability is important, even when you supposedly know your deployment platform.


Docker is an option if you're deploying on Linux, which is what the majority of server deployments use these days. Chef works on the rest of the Unixes, including OS X. If you're deploying Java on a Windows server, you're doing it wrong.

And the days of needing to use different development and production OSs are over...if you develop in Windows and deploy on some Unix variant, you should be using a VM.

I'm still not buying ClassLoader contortions as being in any way justified to support the single deployable artifact requirement. It's just not necessary and can lead to many subtle and hard-to-track-down issues as well as introducing unnecessary performance overhead.


> Docker is an option if you're deploying on Linux, which is what the majority of server deployments use these days. Chef works on the rest of the Unixes, including OS X. If you're deploying Java on a Windows server, you're doing it wrong.

What about OS/400, OS/390, MVS, etc?

Believe it or not, the world is a lot bigger than *NIX and Windows. Go track down a mid-sized manufacturing company in the midwest or the southeast, for example, and I'd almost bet you money they have apps running on an AS/400 (iSeries), or an S/38 or S/36 or something, if not a mainframe.


> If you're deploying Java on a Windows server, you're doing it wrong.

You're assuming you have control over what the customer's servers are.


Capsule has exactly zero class-loader contortions (that's why I'm not too keen on One-Jar).


ClassLoader contortions are one of the cleaner ways to handle the problem, but they're not the only way.

You could:

- Create a fat jar, which necessitates bytecode transformations to prevent namespace collisions if you don't want to be sloppy. JarJar did this kind of thing many years ago, but it seems hard to trust a process like that.

- Create what is essentially a self-extracting archive. This looks like the path Capsule took. This introduces unnecessary state and increases startup time. This is what servlet containers do, so reinventing that wheel seems like a particularly foolish decision. Not to mention that it appears that Capsule adds the additional step of pulling in all dependencies from an external server...slow deployment and an additional point of failure!

I still believe that some sort of VM image or container makes the most sense. Short of that, any system that supports sshd can be configured with Chef. The "solutions" I see coming out of the Java camp all seem like hacks by comparison.
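For reference, the class-loader route being debated in this subthread boils down to one JDK-only move: put code/resources in a jar and pull them in through a `URLClassLoader` at runtime. The jar in this sketch is a throwaway built on the fly, purely for illustration.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

public class NestedJarDemo {

    // Build a throwaway jar containing one resource, then read it back
    // through a URLClassLoader -- the core mechanism behind fat-jar launchers.
    static String demo() throws Exception {
        Path jar = Files.createTempFile("dep", ".jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new JarEntry("greeting.txt"));
            out.write("hello from dependency".getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
        }
        // Parent null: resolve the resource from our jar only, not the app classpath.
        try (URLClassLoader loader = new URLClassLoader(
                     new URL[]{jar.toUri().toURL()}, null);
             BufferedReader in = new BufferedReader(new InputStreamReader(
                     loader.getResourceAsStream("greeting.txt"), StandardCharsets.UTF_8))) {
            return in.readLine();
        } finally {
            Files.deleteIfExists(jar);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());  // prints "hello from dependency"
    }
}
```

The "contortions" start when the dependency jars are nested inside another jar, since `URLClassLoader` can't read jar-in-jar directly; that limitation is what One-Jar's custom loader and Capsule's extraction step each work around in their own way.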


The next release will include the option to resolve (potentially download) all dependencies without launching the app, if someone finds that useful. If dependencies are resolved, or are embedded, startup time is increased by 100ms or so.


It still feels like a pre-automation mindset...this is a step that should happen during CI, not application startup or even deployment.

The delay is going to be a lot more than 100ms if you use something like this when scaling elastically. In that situation, you need to go from automatically provisioned to running and accepting load in as short a timeframe as possible. Fully-baked machine images or Docker containers work for this use case, your solution doesn't.

Also, making your Maven repository a dependency of your production environment just seems like a bad idea. It creates an additional attack vector from a security standpoint and an additional component that can fail from a reliability standpoint.


Copying a file takes the same time regardless of whether it's Capsule or Chef doing the copying. Once Capsule has cached the file, you can cut a VM image, and no copying will occur later.

> making your Maven repository a dependency of your production environment

It becomes a dependency of your deployment process, which probably already depends on a Maven repo. And a Maven repo can be just a NAS drive, which is often reliable enough for use in production.


1. Because the Java ecosystem usually gives you an API to everything: you can configure things to work exactly as you want. Everything is programmable and easily hackable.

2. Because everything is standardized, which is a lot easier than stitching together lots of pieces, each of which has to be deployed, upgraded, configured, monitored and managed differently.

3. Because some Java libraries are used to do very sensitive stuff (performance-wise, correctness-wise, or security-wise), Java libraries on average tend to be of very high quality, at least relative to almost everything else (well, except maybe the Java browser plugin, but we're talking server-side here).

4. With Capsule there is no need to use a custom class loader, it's not any clunkier than Docker, and it's cross platform. It's just a cross-platform Java executable, which is stateless and requires no installation.


I agree with using tools from other languages for the job. In our company, we chose Python's Fabric and wrote some deploy scripts with it.

It basically downloads the new version, updates the database by running Liquibase, updates some configuration files that are in git, and finally updates the application WAR on Tomcat.

Not so bad for a single-file script.


The Play framework as well (with no JSP dependency).


I skimmed through Part 3 and it's basically saying that app servers are hard to deploy, maintain and dev against and you should be using Spring Boot or Dropwizard instead to create applications with the services you require.


I used Spring Boot the other day. It was really easy! Up until things just didn't work for no apparent reason, and neither the error messages nor documentation gave me any clues as to how to fix it.

App servers have historically been hard to deploy, maintain and dev against. There has been a huge amount of progress in the last few years. Based on my experience, Java EE / Wildfly is actually easier to develop with than Spring Boot.


App servers like TomEE, JBoss and GlassFish are not hard to install at all, and certainly not to dev against. What's the author smoking?

These servers are just unzip == fully installed things. We have them checked in and just deploy them automatically whenever there's a need for an update, pretty much as every other library out there.

I wonder: are we missing something, or do people just wrongly think there's "something" difficult, when there's no need at all for things to be difficult?


Where did you find part 3?


Ah my apologies, it's not actually part 3 it's the link he referenced when he talked about part 3: http://www.slideshare.net/ewolff/java-application-servers-ar...


Part 3 will be released toward the end of the week.


I'm guessing tomcat.



