I would imagine that Go has the advantage of being significantly faster than Racket. Although if people still love to do things in Ruby, maybe performance is less of a concern in web development.
> Although if people still love to do things in Ruby, maybe performance is less of a concern in web development.
As a user of web sites (and a web developer), I find it disappointing and frustrating to use web sites designed by people who clearly have not made performance a priority. So just as a tangential point: if you are one of those web developers who loves doing things on slower platforms, believing that it's just fine to disregard performance, please think of your users! We are the ones who suffer! 250ms here, 500ms there, sometimes over 1,000ms before the server responds with the first byte! It adds up to an annoying user experience.
Among popular sites, those that are slow tend to really irk me, and I seek out alternatives. For example, there is this very popular source code social network site...
User-perceived response time has little to do with the choice of language on the server. Access to DBs, lack of proper caching strategies, CDNs, and minification of CSS and Javascript play a much larger role in the big picture. Only when you've extracted all the juice from those lower-hanging fruits should you start to blame the language or framework used to develop a website, IMHO.
I'm not talking about a client-side performance issue. Those are tangential matters. We should all endeavor to comply with best practices for delivery of static assets to improve client-side performance including minification, strategic concatenation, use of content delivery networks, caching strategies, and so on.
Let's put that aside, however.
What I am talking about is a real request to a non-cacheable dynamic page or service endpoint. It's the frustration of "Waiting for [servername]..." status bar messages. It's the frustration of looking at the Network panel in Chrome or Firebug and realizing that it's just spinning, waiting for the first byte of the response. It might be a server-templated HTML page. It might be an API call fetching some JSON. But the sorry state even in 2013 is that a disturbingly large number of sites will spend hundreds of milliseconds, and in some cases full seconds, to respond to dynamic requests that ultimately deliver quite trivial response payloads.
I for one want developers to stop sweeping these failures under the rug by deflecting server-side performance problems with client-side cover.
I believe that in most cases those milliseconds amount to network and database/storage latency, not the site code's performance.
So it's just insignificant (performance-wise) whether you write the site in carefully optimized C or in some Ruby magic that performs, say, 100x slower. 1ns vs. 100ns is insignificant when it's your database that responds in 300ms.
You couldn't be more wrong. PHP and Ruby page generation times can easily exceed 1000ms if the programmer isn't careful. Nanoseconds is just rubbish. Even the fastest template engines typically take on the order of a few ms.
Each application is its own unique snowflake, admittedly, but I encourage web developers to profile their applications to understand where server time is actually consumed. Furthermore, I implore you to realize that time spent in the ORM is not the same as time waiting for the database. If your profiler conflates ORM machinations as database time, you are not seeing the correct data.
Modern databases are extremely efficient at fetching data from well-indexed tables. They are in many cases not the bottleneck although conventional wisdom would have you believe otherwise.
In our experience, slower platforms consume an impressively large amount of time doing things unrelated to the actual database queries. Under load, the database server can be nearly idle while the application server is falling over itself in its slow ORM, its template engine, or helper functions. Meanwhile, faster platforms mean faster frameworks, faster ORMs, and faster templates. These platforms can easily saturate the Gigabit Ethernet connection to their database server without the database server hiccuping.
The site's code absolutely is part of those milliseconds. Oftentimes it can even be the bulk of them, particularly in cases of PHP noobs writing the code.
That discussion keeps popping up from time to time. I do prefer better performance, but most of the time that's not even dependent on the language but on the developer. I've seen super fast websites done in PHP and super slow ones done in Java.
Web application performance will have its exceptional cases. But in general, I find that Java web applications, when modestly well architected, will respond in the low tens of milliseconds--and in many cases single digits--to the vast majority of their routes. A PHP developer would need to be nearly superhuman to accomplish the same level of performance.
Perhaps the two go hand in hand. I say evidence suggests that is not only plausible but perhaps common.
Something is indeed wrong. We tolerate servers responding to trivial requests in hundreds of milliseconds (sometimes even greater than 1,000 milliseconds!).
Here is an off-the-cuff--not cherry-picked--screen grab from Firebug [1] showing 342ms of waiting for a popular web site's server to respond to a relatively trivial request: listing summary information about approximately 20 entities that are presumably stored as rows in a database table. This type of delay is my chief complaint about a site that I am otherwise very fond of. Until this first response is delivered, the user is just plain waiting. Static assets should also arrive quickly, but I am more forgiving of them arriving gradually to decorate the page.
Yes, it's certainly possible the database server is the origin of this pain. And web developers should profile their applications to gain insight. Unfortunately, as I mentioned above, many profilers incorrectly conflate ORM or platform database-driver time with genuine database time. Given what we have seen from frameworks and platforms making database queries and processing responses through ORMs, I know that MySQL (and other data platforms) on decent hardware can respond to tens of thousands of such trivial queries per second. Meanwhile, some application stacks degrade into reduced responsiveness and a worsened user experience well before the database's CPU and disk register more than a blip.
Well, five times out of ten Go is at least 2X faster (that's significant enough for me). In any case, the difference probably won't be noticeable, since the bottleneck for most websites is the database side.