I would really like to use Windows Server, especially with the upcoming Nano version, but it is well over 50% more expensive on Azure. A D4 instance (8-core Haswell Xeon, 28GB RAM, 400GB local SSD) costs $458 with Oracle Linux and $833!!! with Windows. It just does not make any sense for me to choose Windows.
At this point, I wonder why Microsoft even bothers innovating on Windows Server when it is going to get chosen only by orgs who need to run MSFT software or C#. Those guys don't care about nano or containers or whatever.
For orgs that run on MSFT, the subscription prices are actually quite good versus what they are paying for other enterprise software like Oracle, SAP and so on.
Also as an OS geek, I do happen to like Windows and see UNIX hegemony as killing OS research.
Windows isn't exactly known for OS innovation! A whole bunch of things were promised for Longhorn and then never delivered. Are you thinking of one of the many Microsoft Research projects that hasn't quite made it into use?
The sheer permanence of the Windows API makes it hard for Windows to innovate.
> asynchronous IO architecture only beaten by Solaris asynchronous IO
Wait, what? Event ports are nice, but they're still inherently limited by the UNIX "readiness"-oriented I/O interface. And they don't provide a unified file/socket asynchronous I/O interface, either. (No UNIX does.)
VMS got I/O right: I/O request packets and a completion-oriented I/O interface. That, combined with multiple IRQ levels, is the key to Windows' superior virtual memory management, its unified file system cache management, and actual asynchronous file I/O.
Strip everything else away from NT and UNIX and it comes down to buffer-based I/O versus packet-based I/O (IRPs), APCs instead of signals, and multiple IRQ levels (not just hi/lo).
Side note: AIX copied the NT I/O completion port API verbatim. (Except they ignore the concurrency parameter, which speaks volumes about impedance mismatch between multiple threads and the traditional process-based UNIX IPC model.)
Another tidbit I find interesting: David Cutler was 47 when Bill Gates called him in 1988. He had been working on OS development in a lead capacity since 1975, and subsequently championed the NT kernel that Gates "bet the company on". The core NT APIs that were established in the late 80s and early 90s have stayed the same because they solved the problem correctly the first time 'round.
Linux, on the other hand, was started by Linus in the early nineties (when he was in his 20s) "just for fun". He implemented enough syscalls to get bash to work and went from there. At the end of the day, it was still a UNIX-like operating system.
Cutler knew UNIX inside out and technically knew where and why it was deficient; VMS and thus NT benefited from that.
At the end of the day, it's all about the I/O, and NT has been dominating that since inception.
That is the impression I've had ever since Solaris introduced asynchronous IO, but I never used it in anger.
I wasn't aware that AIX also had them.
As a side note to that, when we used to develop for AIX, it used the same programming model for shared libraries as Windows does, not what is nowadays common across UNIXes.
So AIX also had import libraries and export symbol descriptions.
The NT kernel has some cool features. One of them is environment subsystems: there is a POSIX environment subsystem, a Win32 subsystem, and an OS/2 subsystem, so it's not limited to the Win32 API.
Most of these are no longer maintained, but the capability is still there.
The POSIX subsystem was one of the worst offenders for the "checkbox marketing" abuse that I've ever seen. I am hard pressed to think of anybody who successfully used it for anything because it came with so many caveats right out of the box. Microsoft implemented the absolute minimum they needed to check the box and nothing more.
I don't know about the OS/2 subsystem, but I wouldn't be surprised if it was only sufficient to run Lotus Notes or something back when that was a relevant business need.
In what way do these "environment subsystems" differ from a user-level library? Linux has a Win32 subsystem, it's called "Wine", and POSIX subsystems called "glibc" and "musl".
Seriously, what makes these "subsystems" different than "libraries", other than they are shipped with the OS?
There have, in fact, been multiple fork() implementations for Windows without kernel support. They weren't very efficient, but IIRC it was only in version 1.7 (at about 15 years of age) that the Cygwin fork() implementation started to use kernel support.
And that still does not answer my question: In what way is the "NT kernel subsystem" different from a user mode library? Either the underlying kernel supports the features you need, or it doesn't. From my perspective, there's an NT kernel (with the so-called Native API) and a few libraries that make it look like Win32, severely limited POSIX, or at one point in time OS/2.
I think creshal meant it would run better on Linux than it did before. C# on Linux has been possible for a while now with Mono, but Microsoft is giving it an even bigger push.
By making Windows on Azure significantly more expensive than other technologies they create a negative incentive to develop new software to run on Windows. With time, this will hurt their own relevance. In this example, it's $375 per server by which Windows development would need to be cheaper than *nix. If development costs the same, a *nix-based solution could put that $375 toward a higher-capacity compute node.
As pjmlp mentioned below, bigger orgs have better Microsoft licensing deals - volume licenses and such - and smaller ones can apply for BizSpark or whatever to get lower prices.
But yeah, your point still stands that they should simplify licensing for people who are neither a big org nor a small startup, especially in the wake of cloud and Linux.
> Those guys don't care about nano or containers or whatever
That's ridiculous. Just because someone picks a different tech stack does not mean that they do not care about the advantages that all that fancy new stuff gives.
What I'm caring less and less about is MSFT. The feeling is mutual - they really don't give a fuck about you if you're doing anything but using Azure.
Just use ConEmu (or get off Windows) if you need a terminal. It's still far more fully-featured than the Console Host will ever be.
The stagnation in cmd.exe (and even PowerShell) appears to be a casualty of Microsoft assuming that point-and-click would take over the world completely, which clearly hasn't happened in development. I still shake my head at the ops nightmare that is RDPing into hosted Windows Servers.
As one of the sibling posts here notes, there's really no reason to use Windows Server for anything, unless you have legacy applications to support. They're going to have to do a lot, lot more to make this a competitor.
> I still shake my head at the ops nightmare that is RDPing into hosted Windows Servers.
Can you elaborate? I have all of my boxes saved in the MSFT RDP manager doodad (before which I used RoyalTS or something). When I want to get on one I double-click, it logs me in, and there it is - a fully featured interface from which I can do whatever admin I need, opening up the relevant tools etc. using the point-and-click you dislike so much.
Probably that logging in snip-snap, one-server-at-a-time, and click-clicking on various boxes in the (admittedly convenient) GUI leads to a bunch of unique snowflake[0] servers that are prone to drift. If each server is doing one thing, that's obviously ok, but if you have 1000 web servers then treating them as immutable (aka phoenixes[1]) is conventional devops wisdom[2].
The PowerShell terminal is a little better than the normal CMD shell, but there's some weird stuff there... Why are they still using block selection for copy-paste? That has never been what I wanted.
It was quite reasonable 25-odd years ago - which, until recently, was probably about when anybody last touched the code...
(To make line-based copying and pasting work I would imagine they had to do some quite invasive surgery at some point... the idea that the window is backed by a screen buffer that's a fixed-size grid of (attribute,character) pairs is exposed quite openly in the API.)
I realize the interface to the Windows API is UTF-16, but the program interface to the console (stdout) is not - it's based on 8-bit characters, generally code page 437. Converting from UTF-8 to UTF-16 is trivial, as long as you get the full byte sequence that makes up a character.