Windows 10 Console Host Enhancements (nivot.org)
65 points by blinkingled on Feb 9, 2016 | hide | past | favorite | 51 comments


I really would want to use Windows Server, especially with the upcoming Nano version, but it is a full 50% more expensive on Azure. A D4 instance (8-core Haswell Xeon, 28GB RAM, 400GB local SSD) costs $458 with Oracle Linux and $833 (!) with Windows. It just does not make any sense for me to choose Windows.

At this point, I wonder why Microsoft even bothers innovating on Windows Server when it is going to get chosen only by orgs who need to run MSFT software or C#. Those guys don't care about nano or containers or whatever.


Or any other applications that only run on Windows - which in the enterprise world is a lot of applications, not just from Microsoft.


Still, forcing people to pay such a significant premium isn't exactly how you get people to use your product.

And I don't know about cloud pricing for Windows machines, but the price difference isn't nearly as crippling on regular VPS/physical hosting.


I was surprised to see how many Java applications run on Windows + MSSQL.


For orgs that run on MSFT, the subscription prices are actually quite good versus what they are paying for other enterprise software like Oracle, SAP and so on.

Also as an OS geek, I do happen to like Windows and see UNIX hegemony as killing OS research.


Windows isn't exactly known for OS innovation! A whole bunch of things were promised for Longhorn and then never delivered. Are you thinking of one of the many Microsoft Research projects that hasn't quite made it into use?

The sheer permanence of the Windows API makes it hard for Windows to innovate.


It is just not known by those that don't invest in it.

Just off the top of my head:

- pushing C away and allowing C++ on kernel space

- OO ABI besides a procedural one

- object based API even with Win16/32

- asynchronous IO architecture only beaten by Solaris asynchronous IO

- hybrid OS architecture pushing device drivers into user space

- the only shell that can somehow replicate the REPL experience of Xerox PARC systems

- pushing for memory safe systems programming, via C++/CX, .NET Native, static analysis and efforts like GSL at CppCon 2015

- device driver verification via a theorem prover

- integration of container model for mainstream users

- every OS object has security credentials


> asynchronous IO architecture only beaten by Solaris asynchronous IO

Wait, what? Event ports are nice, but they're still inherently limited by the UNIX "readiness"-oriented I/O interface. And they don't provide a unified file/socket asynchronous I/O interface, either. (No UNIX does.)

VMS got I/O right: I/O request packets and a completion-oriented I/O interface. This, combined with multiple IRQ levels, is the key element behind Windows' superior virtual memory management, combined file system cache management and actual asynchronous file I/O.

Strip everything else away from NT and UNIX and it comes down to buffer-based I/O versus packet-based I/O (Irps), APCs instead of signals, and multiple IRQ levels (not just hi/lo).
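The readiness-versus-completion distinction above can be made concrete. A minimal Python sketch of the UNIX "readiness" model (using the standard `selectors` module, purely as an illustration, not anything from NT or VMS): the kernel only tells you a descriptor *can* be read, and you then issue the read yourself, whereas in a completion model (NT IOCP, VMS QIO) you hand the kernel a buffer up front and are notified once the transfer has already finished.

```python
# Readiness-oriented I/O, the model the comment above calls "getta byte":
# step 1, wait until the kernel says the fd is readable;
# step 2, perform the actual read yourself.
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
sel.register(a, selectors.EVENT_READ)

b.send(b"hello")                   # make `a` readable
events = sel.select(timeout=1)     # step 1: readiness notification
data = b""
for key, _ in events:
    data = key.fileobj.recv(4096)  # step 2: the read is still our job
print(data)
```

In a completion-oriented design, the `recv` would have been submitted together with the wait, and the notification would carry the filled buffer; that is the structural difference being argued here, independent of which kernel does it better.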

Side note: AIX copied the NT I/O completion port API verbatim. (Except they ignore the concurrency parameter, which speaks volumes about impedance mismatch between multiple threads and the traditional process-based UNIX IPC model.)

Fun quote from Wikipedia -- David Cutler on the UNIX I/O model: "getta byte getta byte getta a byte byte byte". (https://en.wikipedia.org/wiki/Dave_Cutler)

Another tidbit I find interesting: David Cutler was 47 when Bill Gates called him in 1988. He had been working on OS development in a lead capacity since 1975, and subsequently championed the NT kernel that Gates "bet the company on". The core NT APIs that were established in the late 80s and early 90s have stayed the same because they solved the problem correctly the first time 'round.

Linux, on the other hand, was started by Linus in the early nineties (when he was in his 20s) "just for fun". He implemented enough syscalls to get bash to work and went from there. At the end of the day, it was still a UNIX-like operating system.

Cutler knew UNIX inside out and technically knew where and why it was deficient; VMS and thus NT benefited from that.

At the end of the day, it's all about the I/O, and NT has been dominating that since inception.


That was my impression from when Solaris introduced asynchronous IO, but I never used it in anger.

I wasn't aware that AIX also had it.

As a side note, when we used to develop for AIX, it used the same programming model for shared libraries as Windows does, not what is nowadays common across UNIXes.

So AIX also had import libraries and export symbol descriptions.


The NT kernel has some cool features. One of them is environment subsystems: there is a POSIX environment subsystem, a Win32 subsystem, and an OS/2 subsystem, so it is not limited to the Win32 API.

Most of these are no longer maintained, but the capability to add them is still there.


The POSIX subsystem was one of the worst offenders for the "checkbox marketing" abuse that I've ever seen. I am hard pressed to think of anybody who successfully used it for anything because it came with so many caveats right out of the box. Microsoft implemented the absolute minimum they needed to check the box and nothing more.

I don't know about the OS/2 subsystem, but I wouldn't be surprised if it was only sufficient to run Lotus Notes or something back when that was a relevant business need.


In what way do these "environment subsystems" differ from a user-level library? Linux has a Win32 subsystem, it's called "Wine", and POSIX subsystems called "glibc" and "musl".

Seriously, what makes these "subsystems" different than "libraries", other than they are shipped with the OS?


You cannot implement features like fork() without kernel level support.

This is just one example.


There have, in fact, been multiple fork() implementations for Windows without kernel support. They weren't very efficient, but IIRC, only in version 1.7 (at about 15 years of age) did the Cygwin fork() implementation start to use kernel support.
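To see why user-space fork() emulation is painful, here is a minimal Python sketch of what real fork() semantics guarantee: the child gets a copy of the parent's entire address space, and mutations on either side are invisible to the other. (This is illustrative only; it uses `os.fork`, which delegates to the kernel, precisely the support Cygwin historically had to fake by snapshotting and replaying the parent's memory in a fresh process.)

```python
# fork() copies the whole address space; the child's mutation of
# `status` must not be visible to the parent, and vice versa.
import os

status = {"value": "parent"}
r, w = os.pipe()
pid = os.fork()
if pid == 0:                        # child: operates on its own copy
    status["value"] = "child"
    os.write(w, status["value"].encode())
    os._exit(0)
os.waitpid(pid, 0)                  # parent: wait, then read child's report
child_view = os.read(r, 64).decode()
print(status["value"], child_view)
```

Reproducing that copy semantics without kernel help means walking and duplicating every writable page, open handle, and thread state by hand, which is why the user-space implementations were slow.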

And that still does not answer my question: In what way is the "NT kernel subsystem" different from a user mode library? Either the underlying kernel supports the features you need, or it doesn't. From my perspective, there's an NT kernel (with the so-called Native API) and a few libraries that make it look like Win32 or severely limited POSIX or at one point in time OS2.


But that is the whole point: efficiency.

They are blessed libraries of a sort; you cannot do everything they are able to do with plain user-space code.

I don't remember all the technical details from "Inside Windows Kernel" book series to discuss it further, but I can have a look.


> when it is going to get chosen only by orgs who need to run MSFT software or C#

And MSFT is positioning C# to run better on Linux too…


Define "run better". They're enabling it, but I fail to see why it should run better.


I think creshal meant run better (on Linux) than it did before (on Linux). C# on linux has been possible for a while now with Mono, but Microsoft is giving it an even bigger push.


Yes. Mono was (and is) a rather poor substitute for the real deal.


Fair enough, I didn't think of it from that point of view.


By making Windows on Azure significantly more expensive than other technologies they create a negative incentive to develop new software to run on Windows. Over time, this will hurt their own relevance. In this example, Windows development would need to be $375 per server cheaper than *nix development just to break even. If development costs the same, a *nix-based solution could put that $375 toward a higher-capacity compute node.


Let's not forget all the craziness that is the "client access license": http://blogs.technet.com/b/volume-licensing/archive/2014/03/...


If you're deploying a website, that's not an issue.


Correct, but maybe one day someone might want to write or use some software that's not a web app?


As pjmlp mentioned below, bigger orgs have better Microsoft licensing deals - volume licenses and such - and smaller ones can apply for BizSpark or whatever to get lower prices.

But yeah, your point still stands that they should simplify the licensing for people who are neither a big org nor a small startup. Especially so in the wake of cloud and Linux.


> Those guys don't care about nano or containers or whatever

That's ridiculous. Just because someone picks a different tech stack does not mean that they do not care about the advantages that all that fancy new stuff gives.


"Those guys don't care about nano or containers or whatever."

The biggest take up of containers is actually large companies.


Actually, I really DO care about containers.

What I'm caring less and less about is MSFT. The feeling is mutual - they just really don't give a fuck about you if you're doing anything but using Azure.

I'm finding that I need them less and less.


So great, they re-introduced the features they took out 20 years ago. Wake me up when they properly support UTF-8.

Too little, too late.


Just use ConEmu (or get off Windows) if you need a terminal. It's still far more fully-featured than the Console Host will ever be.

The stagnation in cmd.exe (and even PowerShell) appears to be a casualty of Microsoft assuming that point-and-click would take over the world completely, which clearly hasn't happened in development. I still shake my head at the ops nightmare that is RDPing into hosted Windows Servers.

As one of the sibling posts here notes, there's really no reason to use Windows Server for anything, unless you have legacy applications to support. They're going to have to do a lot, lot more to make this a competitor.


> I still shake my head at the ops nightmare that is RDPing into hosted Windows Servers.

Can you elaborate? I have all of my boxes saved in the MSFT RDP manager doodad (before which I used RoyalTS or something). When I want to get on one, I double click and it logs me in, and there it is - a fully featured interface from which I can do whatever admin I need, opening up the relevant tools etc. using the point-and-click you dislike so much.


Probably that logging in snip-snap, one-server-at-a-time, and click-clicking on various boxes in the (admittedly convenient) GUI leads to a bunch of unique snowflake[0] servers that are prone to drift. If each server is doing one thing, that's obviously ok, but if you have 1000 web servers then treating them as immutable (aka phoenixes[1]) is conventional devops wisdom[2].

[0] https://www.ibm.com/developerworks/community/blogs/devops/en...

[1] https://www.thoughtworks.com/insights/blog/moving-to-phoenix...

[2] https://news.ycombinator.com/item?id=4226099


If you're doing that, I would recommend PowerShell DSC.


It's odd though that they lean so heavily on PowerShell now, and still don't have a good first-party terminal to run it in.


The PowerShell terminal is a little better than the normal CMD shell, but there's some weird stuff there... Why are they still using block selection for copy-paste? Never has that been what I wanted.


A Slashdot comment (on this link) provided good context on why the current console sucks:

http://forums.somethingawful.com/showthread.php?threadid=367...

That's from a ReactOS developer.


Whoa, that was a surprisingly interesting thread. (Not so much the console part, but the other questions he answered.)


It was quite reasonable 25-odd years ago - which, until recently, was probably about when anybody last touched the code...

(To make line-based copying and pasting work I would imagine they had to do some quite invasive surgery at some point... the idea that the window is backed by a screen buffer that's a fixed-size grid of (attribute,character) pairs is exposed quite openly in the API.)


That was fixed by conhost improvements in Windows 10. Selection is line based now.


What do you think of Powershell ISE?


They didn't support VT-100, just a subset of ANSI.


I'm not sure what that statement means. Do you mean they only added ANSI colors and not VT100 sequences?


Unlikely to happen, they went UTF-16 long ago.


I realize the interface to the Windows API is UTF-16, but the program interface to the console (stdout) is not - it's based on 8 bit characters, generally code page 437. Converting from UTF-8 to UTF-16 is trivial, as long as you get the full byte sequence that makes up a character.
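The "trivial conversion" claim is easy to demonstrate. A minimal Python sketch (illustrative only): once you have the complete byte sequence for each code point from an 8-bit UTF-8 stream, re-encoding it as the UTF-16 that the Windows API expects is purely mechanical.

```python
# UTF-8 in from an 8-bit stream, UTF-16 out toward the wide-char API.
# The only precondition is having each code point's full byte sequence.
utf8_bytes = "héllo ☃".encode("utf-8")       # 8-bit console-style input
text = utf8_bytes.decode("utf-8")            # reassemble the code points
utf16_bytes = text.encode("utf-16-le")       # what the Windows API wants
assert utf16_bytes.decode("utf-16-le") == text
print(len(utf8_bytes), len(utf16_bytes))
```

The hard part on the console is not this conversion; it is that the classic stdout path hands programs bytes through a code-page-based 8-bit stream in the first place.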


Does this [1] work for the console?

[1] http://stackoverflow.com/questions/388490/unicode-characters...


Have you read all the comments below the answer about how broken 65001 is?


There is ReadConsoleW/WriteConsoleW which have full UTF-16 support already.


Having to use a completely different abstraction instead of stdout is enough to just make people give up on outputting Unicode on Windows.


Or when I can create a nul.txt file.


Bitvise has had a free SSH client that puts full terminal emulation into a cmd window for years. It's very well done.

But I don't like the cmd environment for ssh. Unfortunately, SecureCRT spoiled me, but it is too expensive these days.


Excellent! I couldn't get ansicon to work on W8, so it's great to see ANSI is built in in W10.



