Hacker News | errordeveloper's comments

Userspace networking has been around for quite a few years, yet it doesn't seem to have taken off, despite all the great benchmarks... I think the main blocker is that you cannot make apps use it without hacks. It looks like Snap requires a custom library, which probably doesn't even have to implement BSD sockets, since that's something you don't have to care about at Google.

Some years ago I recall looking into this, and it looked like the kernel had limitations at higher Ethernet speeds (it must have been 50GbE at the time), which came down to the single-threaded nature of the TCP/IP stack in the kernel. So that's been one of the drivers for the experiments in userspace. Looks like that might no longer be the case [1], but if someone has more specific info I'd be quite curious to hear it.

[1]: https://www.phoronix.com/scan.php?page=news_item&px=Broadcom...


There have been a lot of advances, to the point that the Linux kernel can handle ~100Gbps TCP streams via the native socket API [0].

[0] https://netdevconf.info/0x14/session.html?talk-the-path-to-t...


Not everyone had the same experience.

There are a few minor UX flaws that make it frustrating to use, e.g. having to set the Docker host, poor shared filesystem performance, and broken networking in enterprise desktop environments (just to name a few of the top issues).

Also, a lot of folks end up running both a Docker for Mac VM and a minikube VM; why should they have to run two VMs?

Additionally, minikube is completely different from production-grade deployments: it runs everything as a single binary, which means a rewrite of the main function for etcd and all the control plane components. That also makes basic performance issues in the control plane hard to debug (there is one large process and you don't know what is wrong), and there is no way to use your favourite network add-on.

Additionally, minikube is based on the legacy Docker libmachine, which is not really maintained anymore.


Certain things are not possible. However, we try to match functionality as much as possible with localkube (and soon kubeadm).

Shared folders, especially using 9p and/or cross-platform, have been an issue; I personally also experience this in the Minishift fork, and this is likely the performance issue you meant.

But back to an earlier question I posted, have you filed the issues you had in the issue tracker? https://github.com/kubernetes/minikube/issues

Yes, the docker/machine code is an issue. For this, libmachine has mostly been moved in-repo, and we are working on abstracting and even replacing it.


It will be possible to run all your add-ons in the local setup, including networking. Multi-node is essential for some use cases, but arguably not critical for most people; it is coming in the future.


At Weaveworks, we have built a tool called Flux [1]. It is able to relate manifests in a git repo to images in a container registry. It has a CLI client (for use in CI scripts or from a developer's workstation), as well as an API server, an in-cluster component, and a GUI (part of Weave Cloud [2]).

Flux is OSS [3], and we use it to deploy our commercial product, Weave Cloud, itself, which runs on Kubernetes.

1: https://www.weave.works/continuous-delivery-weave-flux

2: https://cloud.weave.works

3: https://github.com/weaveworks/flux


Yeah, the point is that you shouldn't need to buy any hardware even for development, which is the biggest win to me!


But having the hardware is vital. You have to test your design a lot. You're still going to need Vivado (which isn't cheap), and you'll need instance time to test the design on the real hardware with real workloads, along with any synthesizable test benches you want to run on the hardware.

The pricing structure of the development AMI is going to be meaningful here, because it clearly includes some kind of Vivado license. It might not be as cheap as you expect, and you need to spend a lot of time with the synthesis tool to learn it. The F1 machines themselves are certainly not going to be cheap at all.

If you want to learn FPGA development, you can get a board for less than $50 USD one-time cost and a fully open source toolchain for it -- check my sibling comments in this thread. Hell, if you really want, you can get a mid-range Xilinx Artix FPGA with a Vivado Design Edition voucher, and a board supporting all the features, for like $160, which is closer to what AWS is offering, and will still probably be quite cheap as a flat cost, if you're serious about learning what the tools can offer: http://store.digilentinc.com/basys-3-artix-7-fpga-trainer-bo... -- it supports almost all of the same basic device/Vivado features as the Virtex UltraScale, so "upgrading" to the Real Deal should be fine, once you're comfortable with the tools.


I expect a Vivado license to come with all my development cards.

P.S. The few orders I had were SoC Zynq boards in the $400-1000 range.


Right, if you know how to set it up or have infrastructure in place... But most developers don't have time to read 600+ pages on how to run a BIND server, or they simply have an ops team that is very conservative about what goes into DNS land.


just buy it in? Dyn has a REST API.

failing that, BIND is not difficult to learn. If you have an artificial divide between your devs and you, then you have a far bigger issue. If you can't convince them of the merit of using DNS then there really is no hope.

the whole point of DNS is that you can delegate subdomains, so you can neatly isolate zones from each other
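For illustration, delegating a subdomain in a BIND zone file is just a couple of records (hypothetical names, addresses from the RFC 5737 documentation range):

```
; in the zone file for example.com: hand dev.example.com
; over to a nameserver the dev team controls
dev.example.com.      IN NS  ns1.dev.example.com.
ns1.dev.example.com.  IN A   192.0.2.53   ; glue record for the delegated NS
```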

Plus, saying something looks hard is a terrible justification for not trying it. I know BIND isn't trendy, but it works and is simple. Failing that, there are at least 3 companies out there with REST APIs and 100% uptime SLAs.

prototype all the things!


Well, adding a library for SRV record support is not simple... If you use Weave [1], it gives a unique IP address to every service instance, plus a DNS record, all with zero configuration. That means you can just stick any service on whatever default port it has, and it's all good. You also get round-robin load balancing through DNS for free.

[1]: http://weave.works/net


The performance penalties seem pretty significant at least as of a few months ago. Has that changed?


It's really great to hear this is all happening now!


Hello, author here. Do let me know if you have any questions!


For example, here is the guide on using Weave for Kubernetes on the Azure cloud: https://github.com/GoogleCloudPlatform/kubernetes/blob/maste...

