That's at least the third time I've seen that article posted here! It's not exceptional or some special kind of wizardry; even NetworkManager does that now.
If you have an old setup, you can get matching speeds with dhcpcd and the following options added to the bottom of /etc/dhcpcd.conf on Debian:
ipv6rs
#ipv6ra_own
#ipv4only
noipv4ll
noarp
The last two options are the ones that help with speed. The rest are for my IPv6 setup; I can't remember if the Macs get a DHCPv6 lease that fast.
Since it's bragging time: my Debian+systemd ThinkPad X60 running coreboot resumes from suspend in about one second, boots its kernel in 0.7 s, and brings up userspace in 0.5 s (add another second for X thanks to SNA, and another for LXDE/conky/etc.).
What's the boot time on a Mac again?
EDIT: Linux laptops resuming properly is still impressive to me, as I remember a time not so long ago when you had to apply kernel patches, tweak drivers, or at least run a few rmmods, and even then a proper resume could take several seconds while the drivers reinitialized.
Linux has come a long way since then. What might have been surprising before is now taken for granted (as it should be!).
EDIT2: I see downvotes. I guess some fanboy is quite sad that Linux can do the same or better.
You do realize you're partaking in unwarranted Linux adoration, yes? Quite literal situation of pot calling the kettle black.
If this were truly about the merits and capabilities of Linux as a kernel and whichever userspace utilities you slap on, there would be no self-importance in your comments. However, it's more about showing off your accomplishments and trying to make yourself feel and seem superior.
I applaud you on your boot time. Sadly, it's not really applicable anywhere beyond the confines of your workstation.
Systemd is great and all for desktops, but I like keeping that complexity and invasiveness away from my servers.
You despise "fanboys" and "adoration," but you're "proud" of an OS that (I'm guessing) you had nothing to do with the development of?
This, right here, is the problem. People choosing to attach their personal and tribal identity to an operating system. It doesn't matter what operating system it is, or how good it is. It's no more noble than doing the same thing with a football team or a comic book character.
You see, it's a very small patch - yet it took some work. There's no magic, just effort.
And for those who think that's not replicable beyond the confines of my workstation: I intend to release, very soon, a Debian-based distribution with "proper" systemd support that gets me from GRUB to dwm in less than 2 seconds instead of 3, kernel and systemd time included, on the X60 (I had some inspiration tonight!).
It's a work in progress and could be improved on the non-tablet version, but it's still something that's possible right now on 8-year-old hardware. That's what I call hacking.
For me, free software is not a tribal identity. It's very practical, with great benefits far beyond fast boot times.
Things change because people work on them, and yes they're usually proud of their work, sorry about that. Pride is part of hacking.
I must say I've been quite surprised by the general tone of this thread, including the ad hominem attacks ("Seems he is working for Intel on CoreOS") on the OP, who was trying to explain things. Many people don't post content; they just criticize.
That's good that you're contributing to free software. I like free software, you understand. But your sneering at "fanboys," your acting like someone's choice of operating system is a moral stance rather than a matter of convenience and personal preference, still says that you are not being as rational as you think you are. Only fanboys accuse people of being fanboys. Your choice to use free software may not be a tribal identity, but your choice to mock people who don't absolutely is.
> It certainly is when you pick a free software/open source OS over a proprietary one.
A whole lot of people have convinced themselves that it is, at least. I'm not quite ready to accept that using the wrong operating system makes you a bad person. I'm kind of amazed that I even have to say that.
It's throwing around annoying editorials like "unwarranted" whenever something doesn't apply to what you're excited about, while not expecting others to do the same.
You're the kind of person who feels they're fighting the good fight and the whole world is against them, proud of having this unpopular opinion that nobody agrees with.
I simply can't get Ubuntu 14.04 to hibernate and resume reliably. 1 out of 10 times it's dead on resume. The culprit seems to be fglrx (as usual), but no decent solution other than to give up on AMD and go back to Nvidia (not gonna happen).
We've come a long way since the first Slackware, but Linux on the desktop is still a hot mess, particularly in terms of window managers. I sincerely wish we had fewer options, and some of the "it just works" of the Macs.
That neither fglrx nor the proprietary Nvidia driver works well with the rest of Linux is hardly a secret. The fact that there is a special tainting mechanism so you won't get support for that configuration should be regarded as a big warning sign.
Linux, just like OS X or Windows, needs supported hardware to run. For the past eight years or so, that has meant Intel. It can be confusing that Linux "supports" nearly every piece of hardware under the sun, when that is sometimes only true of select versions.
Run a supported configuration and it'll "just work". My bog standard Thinkpad has suspended many times a day for the past five years and I haven't even once had a problem with it.
> Intel is a joke in terms of GPUs, and absent in terms of desktop GPUs.
That's not really a useful thing to say. Intel GPUs started out as useful only for office and programming work (still an overwhelming majority of use cases). Now it's good enough for pretty much anything that's not among the most performance demanding.
If you need a workstation with that extra performance, you probably have an application that only supports a particular configuration anyway (Nvidia on Red Hat, for example, for a popular CAD/CAM tool), in which case the question isn't really relevant anyhow.
What remains is primarily consumer gaming, for which you're better off sticking with Windows, by far the dominant platform for it. I'm told Steam makes a difference here, so perhaps that's worth checking out, but I'm not in the industry so I have no idea.
> What you call a "supported configuration", I call "you're lucky... For now". Wait until your next distro or hardware upgrade.
You might be tempted to think so, but that's precisely why Linux is such a workhorse on well-supported hardware.
For another popular operating system, you pretty much have no guarantees that whatever drivers you require for third party support will be available in the future.
Since well-supported drivers on Linux are by definition open source, you are guaranteed (either explicitly by your support provider, or implicitly by the way development works) that they will be supported for the foreseeable future. That's why I can use a ten-year-old printer on Linux, for example, which lost its Windows support two generations ago. Or a ten-port serial board that's probably just as old.
If I was using a Mac and a poorly supported closed source third-party device driver was causing problems, I would be more apt to blame the driver's creator than Apple. I also would not assume that my issues meant OS X was a "hot mess".
Personally, I posted this because Linux can now DHCP far faster, with both systemd-networkd and NetworkManager, and I thought people might find it interesting to read about some of the techniques used to speed up DHCP.
You wrote this in a previous thread as well, but never responded to my comment then. I have no idea why you are hyping up networkd's DHCP client so much, given that it was established that a) its functionality is a small subset of dhcpcd and dhclient, b) dhcpcd could reach comparable speeds provided additional steps unsupported by networkd like ARP checking are disabled and c) networkd DHCP violates and/or sidesteps RFC guidelines.
I've used dhcpcd, and I've seen many other reports from systems using it. It's an improvement over previous DHCP clients, but I'd love to see benchmarks demonstrating its ability to obtain an address in a vaguely timely manner. I have in fact seen piles of benchmarks for networkd, and I look forward to seeing similar benchmarks for the same code in NetworkManager.
In the previous article, I was particularly pleased that NetworkManager is finally starting to integrate good library code rather than spawning off programs and attempting to manage them. That's half the problem with dhcpcd: it really doesn't matter how fast it is, if it doesn't actually integrate well with a broader network management system. (And on most systems I've used, NetworkManager uses and depends on dhclient instead.) I hope to see more where that came from, and it's a toss-up whether networkd or NM will integrate good library-based wireless support first.
As for RFC guidelines, as far as I know the most notable one is one that every sane DHCP client violates, namely the massive timeouts, delays, and backoffs, all written for scenarios in which a thousand systems all come up at the same time and try to get an address over a network comprised largely of tin cans and string. Those guidelines matter very little on modern networks, compared to the cumulative wait time of millions of users on millions of systems.
Nope, I have nothing to do with CoreOS. I work on Chrome OS, and I wish Chrome OS used systemd; I don't enjoy fighting with upstart. In any case, I don't speak for anyone but myself, which really ought to go without saying.
My Chromebook-turned-Xubuntu system is already back online before the screen turns back on after opening the lid. All of the Macs we use in the studio are much slower than that.
So (in this particular arena) Linux can, by Googling around and digging into conf files, match the speed that a Mac has out of the box? That's nice, but it's not exactly impressive. Why isn't it configured that way to start with?
(My personal machines are a Mac laptop, a Windows desktop, and a Linux server. I have no dog in this fight. I kind of don't understand why people think they do have a dog in this fight.)
In my pretty extensive tests, a long initial lease time is almost always the DHCP server's fault. I didn't see really any time wasted in DHCP REQUEST in my tests across many devices and operating systems. I think setting your DHCP server to authoritative will usually cause it to do a DHCP NACK in response to a bogus DHCP REQUEST, which prevents any delay there as observed in the article.
I measured lease time when connecting a client as "time from DHCP DISCOVER to DHCP ACK", which I found effectively represents the time between a device connecting to WiFi and being available on the network with an IP address.
With stock dnsmasq, the shortest "new connection" lease time I observed on an uncongested network was around five seconds. Almost all of the time was spent waiting for the DHCP server to respond.
With a modified dnsmasq, the shortest time to lease I observed was 20 ms. That was also the longest time to lease on every platform I tested except iOS, which spends about a second trying to ICMP ping the address before taking it. On iOS devices, the time to lease was almost always exactly 1.02 s (1 s + 20 ms) the first time, and about 20 ms on subsequent attempts with the same device.
You can do the same tests yourself by sitting on a wireless network with Wireshark and connecting with a device that has had the network forcibly "forgotten". Mark the first DHCP DISCOVER packet you see, and set the timestamps to relative. Filter the traffic to just show the MAC address of the router and connecting device, then look for the DHCP ACK from the router.
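To make the measurement concrete, here's a small illustrative Python sketch (not from the article; the packet timestamps below are made up) that computes lease time as defined above, "time from DHCP DISCOVER to DHCP ACK", from relative capture timestamps:

```python
# Illustrative only: compute DHCP lease-acquisition time from a list of
# (relative_timestamp_seconds, dhcp_message_type) tuples, e.g. as read off
# a Wireshark capture with relative timestamps enabled.
def lease_time(packets):
    # First DISCOVER marks the start of the exchange.
    discover = next(t for t, m in packets if m == "DISCOVER")
    # First ACK at or after the DISCOVER marks the client being usable.
    ack = next(t for t, m in packets if m == "ACK" and t >= discover)
    return ack - discover

# Hypothetical first-connection capture from an iOS device, where roughly a
# second is spent probing the offered address before the lease completes.
capture = [
    (0.000, "DISCOVER"),
    (0.004, "OFFER"),
    (0.006, "REQUEST"),
    (1.020, "ACK"),
]
print(lease_time(capture))  # 1.02
```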
It's still a problem with the latest version of dnsmasq. Look for `if ((count < max) && !option_bool(OPT_NO_PING) && icmp_ping(addr))` in `src/dhcp.c`. Most of your lease acquire time is spent in `icmp_ping()` which actually just sends ARP requests for 3+ seconds then gives up. It's actually not even very reliable when someone is using the IP already, because many devices won't respond to ICMP.
There's an option to disable it (OPT_NO_PING) which does improve lease performance with pretty much no downside (as ICMP is a flawed way to check if an address is in use anyway).
-5, --no-ping (IPv4 only)
By default, the DHCP server will attempt to ensure
that an address is not in use before allocating it
to a host. It does this by sending an ICMP
echo request (aka "ping") to the address in question.
If it gets a reply, then the address must already be
in use, and another is tried. This flag disables
this check. Use with caution.
So, if you're configuring dnsmasq, disabling the ping check (and marking the server authoritative, as mentioned above) will give you some nice performance gains.
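A minimal dnsmasq.conf fragment for this, based on the options discussed above (check your dnsmasq version's man page before relying on it):

```
# Skip the ICMP "ping" check before handing out a lease (same as -5/--no-ping).
no-ping
# NAK bogus DHCPREQUESTs immediately instead of staying silent.
# Only safe if dnsmasq is the sole DHCP server on this network.
dhcp-authoritative
```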
This has not been my experience; after resuming my macbook, it takes up to 30 seconds for it to reconnect to my home network. It is much faster if, after resuming, I disable and re-enable WiFi.
There's probably something wrong with the way I have my home network set up, but I haven't been bothered enough to dig any deeper.
Mine is slower than reported too. My circa-2010 MacBook Pro takes 10-20 seconds on my home network, whereas the 2014 MacBook Pro I'm using right now takes about one second on the same network. I had suspected it was just the newer hardware being better, but this story is from 2011 and they were already seeing one-second connections back then, so I'm not sure what does it.
I notice that when I return from work, it often takes 10-20 seconds as you say, but when I have been at home for a few days, it seems to take only a second. Maybe it's related to how long you've been using the current network or something?
Same experience here with a MacBook Pro circa 2012. Often seems slow to re-establish network connection after sleep. I've seen this on a variety of networks. Really just an annoyance, but better management of this on my Mac would be nice.
It'd be nicer if that list of known networks were, say, searchable, sortable, and perhaps even, who knows, taller, so I could see more than a handful at a time. It seems like it's written for people who connect to 4 networks and never move around.
My MacBook Pro (late 2013) consistently fails to connect to the wifi on its first attempt and never connects in under 20 seconds. It was a similar story on my previous MacBook Air. Note this is consistent across different laptops, different networks, and different versions of the OS.
I'd rather my Mac were more reliable about getting on the net -- or if failures to do so were easier to fix and to understand.
Sometimes it takes fussing around (e.g., rebooting the cable modem, telling the Mac to try again to connect) to get on the net, and this time spent fussing around never seems to lead to an insight that would help the next time I need to fuss around.
It is a small thing, but this is one aspect of "client computing" where I would probably prefer the Linux way of doing things in which it is more tedious to do simple things, but when something goes wrong, it is easier to understand the problem.
Yeah, just yesterday I went to a local cafe and they'd changed the wireless password. I had to search online for a way to change the current password for a remembered wireless network! Turns out there's no way really, you have to run this wireless wizard thing and input the password that way. What?!
Yeah, I've never had problems with just changing it via the keychain. Also, manually connecting after it fails to connect causes it to prompt me for the password.
I did forget the network once, but no go. I also thought the wifi passwords would be accessible through the network prefs. I'll check the keychain next time.
This might be true for connecting to a wired network, but like others stated in this thread, Macs can take a surprisingly long time to connect to a wireless network.
If you want to see a client connect to Wifi very fast, check out the electricImp [0] module. I've never seen mine take longer than two seconds to go from power-on to it being registered with the server (EC2 instance).
Don't get it. Unless the wifi hardware fails to initialize, I have never found Android to be a slow connector. Are we so stuffed with stimulants these days that we can't handle those few seconds?
Before it was a marketing term, "Hacker" meant someone who tried to understand things at a deep level, and pushed that understanding into new techniques, uses, and better performance.