Agree wholeheartedly. One thing that is hard to measure but very relevant: the slower the system becomes, the more developer time it costs, which is exactly the expensive thing you were trying to avoid in the first place. At my old job we ran a Spring server on a super performant cloud, but run locally it took more than 3 minutes to start up. And no, hot swap does not always work. It sucks.
I have to say, refusing (or being unable, or too lazy) to learn is the most terrible thing that can happen to a developer. I'm always excited about new things, even when I have to work hard to learn them, even when they start out with poor quality. I believe these are what make developers happy.
I feel you are concentrating on a very small and negative part of the post in a very disingenuous manner.
I read the article more as: "I have found throughout the years I've become a decent programmer in iOS and Android, I'd rather be a good or excellent iOS developer."
The end just briefly describes why he decided to go with iOS instead of Android, but that's about it.
You're clearly still young and haven't yet burned out on learning new technologies that (a) are just poorly thought out rehashes of existing technologies, with everything renamed to sound new, and (b) go obsolete within a few months, flushing your precious time investment down the toilet with them as they go.
Eh, I think it is a natural process in one's professional life, though. Especially with something like phones, where the complexity has exploded within each platform. The number of ways to deploy code on iOS alone (ObjC, Swift, Cordova, React Native, etc.) has grown rapidly, and the size of the system APIs has increased massively. Some form of specialization is likely inevitable. There was a time when the web only had webmasters, and they did everything from devops to backend to frontend. Gradually it became handy for a lot of people to focus their professional career on one area they were particularly good at, especially to reach the absolute peak of their potential.
By all means people should try out things, and keep up with things, but for a lot of professional reasons it makes sense to have a core competency. Of course it also makes sense to move that as needed.
I have little patience for learning the same thing over and over but only slightly different. It's just a big waste of time. There's no enjoyment or happiness down that path.
Learning totally different and interesting stuff, that's a whole different story.
OP is not too lazy to learn; he has simply concluded that learning Android development is a poor investment. He would rather learn Swift instead, which I think is correct.
That's a loaded question, and I'm only qualified to answer from my perspective and experience. My biggest gripe with it has always been that it is alpha-quality software, even today, that has a central role in an otherwise mature OS ecosystem. It has been widely adopted (some would say forced or tricked into adoption by a few distros) and therefore all the major Linux distributions are now running at an alpha level while its creators try to figure out exactly what they want it to be. That was the state of Linux in the late 90s, a state that it overcame during the 2000s, but now it's regressing again.
First it was "just an init to replace SysV", something I could get behind, and back in 2012 or so I was actually excited about it. Then it started growing, replacing individual components of GNU/Linux with a monolithic mega-app that has more in common with Windows NT based OSes than with anything UNIX-like. Gone is the philosophy of "do one thing and do it well", replaced with "do everything no matter the quality of the results".
I've always been a Slackware user since I started messing with Linux in the late 90s, and these days I find it getting faster and better while mainstream Linux distros slow down and grow more and more bugs. One of my benchmark systems for observing the growing bloat of modern OSes is an Atom based netbook from around 2010. It shipped with Windows 7 Starter, which it ran acceptably but not great.
Recently I tested Windows 10, Slackware 14.2, Ubuntu 14.04, Ubuntu 16.04, Debian unstable, OpenBSD, and Elementary OS Loki on it. Slackware was the fastest OS on it by a wide margin, followed by OpenBSD, then Debian, Ubuntu 14.04, Elementary, Windows, and Ubuntu 16.04 dead last. Guess which of those (not counting Windows) do not have systemd? Yep, Slackware and OpenBSD. Maybe it's a coincidence, but given how Ubuntu 16.04 on my modern workstation gets progressively slower with each systemd update, whereas Slackware on the same machine continues to chug along with no issues, that's telling.
All of that said, systemd was and maybe still is a good idea, if only they can stop trying to reinvent the wheel and instead fix the spokes they broke along the way. I can't say I'm happy about eroding the UNIX philosophy from Linux, but if systemd is the future of Linux then it damn well needs to be a stable future.
The irony of eroding the UNIX philosophy from Linux is that most real UNIX systems, meaning AIX, HP-UX, Solaris, NeXTSTEP (cough, macOS), Tru64, and so on, do have something similar to systemd.
Sometimes shouting "UNIX philosophy" in GNU/Linux forums reminds me of emigrants that keep traditions of their home countries alive that are long out of fashion back home.
The sarcastic irony is that Solaris engineers implemented a fully functional systemd(8) long before systemd(8) existed, by designing and implementing SMF, which went on to break world records for startup and shutdown speed on what is now an ancient AMD Opteron system (I think it was either a v20z or a v40z). I wanted to include a reference to the Slashdot article from the time, but try as I might, I can't find it any more.
AIX, HP-UX, Solaris and NeXTStep were not written by the original authors of Unix and its philosophy.
Linux has always been closer to the philosophy than many of these, actually. So much so that it has imported concepts from the successor of Unix, Plan9. Linux's procfs which exposes sysinfo as files within the filesystem is a concept taken from Plan9, which was the OS people like Ken Thompson and Rob Pike envisioned as the future of OSes and replacement for Unix.
Those "traditions" you're speaking of not only are not outdated, but they never were fully realized to their ideal outside of Plan9, which attempted to make everything accessible through file APIs.
The suckless crowds are not about reproducing the original Unix. They are about carrying the torch of that philosophy, and the original unix was just the beginning, not an end in itself.
Here's an example of software from suckless that follows Plan9:
http://tools.suckless.org/ii/
Actually the only thing I find positive about Plan9 is that it gave birth to Inferno and Limbo, both of which don't have much to do with UNIX philosophy.
Those who worship Plan 9 as the pinnacle of UNIX culture should be aware of what its authors think about UNIX.
"I didn't use Unix at all, really, from about 1990 until 2002, when I joined Google. (I worked entirely on Plan 9, which I still believe does a pretty good job of solving those fundamental problems.) I was surprised when I came back to Unix how many of even the little things that were annoying in 1990 continue to annoy today. In 1975, when the argument vector had to live in a 512-byte-block, the 6th Edition system would often complain, 'arg list too long'. But today, when machines have gigabytes of memory, I still see that silly message far too often. The argument list is now limited somewhere north of 100K on the Linux machines I use at work, but come on people, dynamic memory allocation is a done deal!
I started keeping a list of these annoyances but it got too long and depressing so I just learned to live with them again. We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy. "
Ubuntu 16.04 is not meant for lightweight machines - for example, the Unity desktop assumes you have 3D acceleration (which sucks for using in a VM). It's not systemd that makes your atom netbook slow (well, assuming you're using Unity...)
Re: systemd itself, I couldn't care less about the bells and whistles, but every time I go back to fiddle with a sysv init script, I yearn for either upstart or systemd...
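For contrast, here's roughly the boilerplate difference being alluded to: a sysv init script needs its own start/stop/status plumbing (often a ~100-line case statement), while a unit file just declares the service. The daemon name and paths here are hypothetical:

```ini
# Hypothetical /etc/systemd/system/mydaemon.service, replacing a
# hand-written /etc/init.d/mydaemon case-statement script.
[Unit]
Description=My example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The init system then handles PID tracking, restarts, and dependency ordering itself, which is the part a sysv script has to reimplement by hand.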
Nope, Xfce on all the Linux distros on that machine, except for Elementary. I was surprised to find that Elementary's Pantheon was faster than Xfce on Ubuntu 16.04.
Besides, it wasn't a test of DE performance alone, it was a combination of factors including boot time, script run time, video encode/decode, build from source time, and so on. Yes, DE performance was also a metric, and for fun I did load Unity on both 14.04 and 16.04 just to see what would happen. If I were basing it on DE performance alone and used the default DE for each distro, both Ubuntu versions would be the slowest by far.
Also, 3D acceleration was not an issue, the Intel video hardware in that machine is fully accelerated in Linux and OpenBSD.
> We have been working hard to turn systemd into the most viable set of components to build operating systems, appliances and devices from, and make it the best choice for servers, for desktops and for embedded environments alike. I think we have a really convincing set of features now, but we are actively working on making it even better.
I'm pretty sure I saw other posts, but my googlefu is a bit weak.
So, the expansion was in the plan from nearly the beginning (for good or ill)
Thanks for that. When I had first heard of it back in 2012, it was right after getting my first Raspberry Pi, and a friend had suggested trying to port systemd to it to improve boot speed. At that time, all I was able to find out about systemd was that it was a faster init. There was nothing I saw back then about the authors wanting to replace all of GNU with it. It was several months later, after the update to systemd broke my Arch installation, that I started reading about how it's growing too fast and rather than focus on code quality and stability, the authors were rushing to make it this huge replacement for GNU.
Since then I've followed its progress, and while my overall impression remains slightly negative, I'm hoping it improves to the point that it is stable and mature enough for daily use. Until then, I happily run Slackware for serious work and Windows 10 for games.
When Linux came along in the mid-90s, most commercial Unixes had left behind the Unix philosophy, with their own integrated, object oriented desktop environments and sophisticated administration tools. Only Xenix, the engine that powered many an auto shop's rinky-dink five-user database setup, stuck with the model of text terminals and CLI administration with simple tools.
Of course Linux took off, and it sort of reset everything back to stone knives and bearskins. But systemd itself is modelled on Solaris SMF, which is world-class industrial grade service management for large server deployments.
Appeals to the "Unix Philosophy" are the province of reactionary greybeards. Unix philosophy means nothing in the modern era.
For CLI stuff (compiling, file operations etc) it's the time command, for video decode/encode it's built into ffmpeg, and for graphical stuff it's mostly subjective. There's honestly not a ton of difference on most of the CLI stuff since the hardware is the same, but it is measurable. As for the DE, let's just say that Xfce under Slackware and OpenBSD is quick and peppy while Xfce under Debian-based distros is anything but. Ubuntu seemed to be the slowest for that test, and Elementary's Pantheon desktop is a mixed bag. I have considered running the Phoronix test suite for a more accurate result.
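A minimal sketch of the kind of timing runs described; the workload here is a stand-in, not the actual test set:

```shell
# `time` reports real/user/sys for any CLI workload (build, file ops, etc.).
# Here a dd run stands in for a file-operation benchmark.
time dd if=/dev/zero of=/tmp/bench.img bs=1M count=8 2>/dev/null

# ffmpeg has its own timer for decode/encode runs:
#   ffmpeg -benchmark -i input.mkv -f null -
ls -l /tmp/bench.img
```

Comparing the `real` figure for the same command across installs on identical hardware is what makes the per-distro differences measurable.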
Also note that I did have to tweak OpenBSD a little to get it on par with Slackware on the desktop, though the stock install is still faster than the more "modern" Linuxen for most tasks.
And for those who wonder why I do all of this: It's a hobby. It's more fun than watching TV on my off days, and it keeps me up to date on the latest goings-on in the OS world.
>There's honestly not a ton of difference on most of the CLI stuff since the hardware is the same, but it is measurable.
This is what I was after. I can't imagine ffmpeg running slower just because of systemd or unity. But yeah, if you're running on a 2010 netbook I wouldn't be surprised if it ran better under Xfce.
I am also an Ubuntu LTS user, but more a developer than a system administrator.
I migrated from 14.04 LTS to 16.04 recently. I am using a NAS drive. After my do-release-upgrade -d, the internet was not working anymore because of a systemd circular-dependency problem. I had to learn how to create systemd configuration files to describe remote filesystem mounts. It was not easy to find documentation on systemd.
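For anyone hitting the same wall, a remote-filesystem mount unit looks roughly like this; the server, export path, and mount point are examples, not taken from the post (note the unit file name must encode the mount point, e.g. mnt-nas.mount for /mnt/nas):

```ini
# Hypothetical /etc/systemd/system/mnt-nas.mount
[Unit]
Description=NAS share
After=network-online.target
Wants=network-online.target

[Mount]
What=nas.local:/export/data
Where=/mnt/nas
Type=nfs
Options=_netdev,rw

[Install]
WantedBy=multi-user.target
```

Ordering it after network-online.target (rather than network.target) is what usually avoids the network-not-up-yet circularity with network mounts.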
When my computer enters sleep mode, I can wake it with a press of Enter. The next time it enters sleep mode, I cannot wake it up anymore.
My system used to boot in high resolution. Now it uses huge fonts that make the boot messages impossible to read (25 lines on a 23" screen!). I still do not know how to fix it.
It may not be only the fault of systemd, but migration from 14.04 LTS to 16.04 LTS was a very bad experience for me.
Ubuntu upgrades almost always suck, but the upgrade from 14.04 to 16.04 was the worst I ever saw. Nothing worked, my system was broken beyond rescue. Pulseaudio all over again.
> gets progressively slower with each systemd update
Windows 10 will not be left behind! I mean, ahead!
Microsoft recently pushed out the Anniversary Update, which made at least my Win10 laptop noticeably slower waking up (an extra 10 or 15 seconds) and generally more sluggish here and there.
(How convenient, 400 million PCs need an upgrade now. Mwahahaha.)
static linux isn't really a reaction to systemd. What it is a reaction to is exemplified both by what the blurb on its web page spends most of its time on and by its very name: dynamic linking.
"Executing statically linked executables is much faster" ...
"Statically linked executables are portable" ...
"Statically linked executables use less disk space" ...
"Statically linked executables consume less memory" -- http://wayback.archive.org/web/20090525150626/http://blog.ga...
> I refuse to believe that disk space is less as it can leverage other libraries in the deps list to load at run time and other can use it too.
If I remember correctly, the argument goes something like this: modern compilers, i.e. something as recent as the Plan 9 toolchain or a GCC version from this millennium, usually compile in only the necessary code with static linking, not whole libraries. With dynamic linking, you always have to load the whole library into memory, which supposedly pays off only with heavily used libraries such as libc (e.g. think about how many libraries used by Firefox/Chromium are also used by other programs).
So the hope is (combined with a general striving for small programs) that, since text pages are shared between processes and statically linked programs include only the strictly necessary code, you end up with a smaller memory footprint.
(I'm not sure whether you save disk space, but I don't think that would be a problem nowadays. Heck, look at go binaries.)
And I guess the linker could do more whole-program optimization on a statically linked program, since all the code is available.
> For static executable the same dependent library will have to linked to all binaries. Maintenance is a pain in the neck.
Generally you would want to have a proper build system. In the case of StaLi, they have one global git repository (/.git). An update is simply "git pull && make install".
I don't know if this process is slower or faster than binary updates, but if they strive for small programs/binaries, then I guess it doesn't matter as much.
Source-based distributions, such as Gentoo, have the advantage that you don't have to wait for someone to publish an upgraded binary, you can compile it yourself, instead. This might give you a slight edge for security vulnerabilities.
> you always have to load the whole library into memory
Not really. You do have to mmap it, but it can be demand-paged (executables are handled this way on most modern systems, which is why compressed executables are usually a bad idea). IIRC, what saves time is mostly not having to do the actual linking part where the references are resolved. This can be precomputed and stashed in the binary (an optimization well-known to Gentoo+KDE users), but that confuses some package managers, breaks some uses of dlopen()/dlsym(), and has issues with ASLR.
I was being partially ironic. It seems that the common assumption is still that you (statically) link in the whole library. Then, of course, binaries get really huge. But when you link in only what's necessary, the overhead is probably relatively small (when was the last time you used all of libc?).
The other thing is (which you can see in this thread as well), people seem to think that you can do things only the way we are doing them now, without ever questioning whether these things are still appropriate and how they originally came into existence. ("There has to be dynamic linking", "we have to use virtual memory", "there have to be at least 5 levels of caches", etc.)
To my knowledge, all the reasons regarding saving space, security, and maintenance were all made up after the fact (and aren't necessarily true, even (or especially) with modern implementations). Originally, dynamic linking was intended for swapping in code at runtime (was it Multics or OS/360?), which you can't do anymore today.
Furthermore, dynamic linking (as it is done today) is really complex. In contrast, static linking is much simpler (=> fewer bugs/security holes). I think we should reconsider if the overhead is worth it or not (do you really care whether your binaries make up 100MB or 200MB on your 1TB HDD?).
For embedded devices: yes, space does matter, but you probably don't run a full fledged Ubuntu desktop on you IoT device, anyway. You use different approaches (e.g. busybox, buildroot, etc.).
Because people like to complain more than they like to actually build a usable alternative.
Edit: here's a great example from one of the links in the other comment:
suckless complaining about "sysv removed" in systemd. Link takes you to this changelog entry:
"The support for SysV and LSB init scripts has been removed from the systemd daemon itself. Instead, it is now implemented as a generator that creates native systemd units from these scripts when needed. This enables us to remove a substantial amount of legacy code from PID 1, following the fact that many distributions only ship a very small number of LSB/SysV init scripts nowadays."
So, code was removed from the init daemon itself and moved into a standalone utility that does one specific job.
Systemd is now both being blamed for bloating init, and for splitting functionality out into a separate tool that does one thing.
> Because people like to complain more than they like to actually build a usable alternative.
More like people have had perfectly usable alternatives but now the hivemind is more or less forcing something else onto them. I don't need to build a new init system, I have one that works, thank you. Please don't give me systemd.
Which is why other platforms started moving from cron to init years ago?
OS X:
> Note: Although it is still supported, cron is not a recommended solution. It has been deprecated in favor of launchd. [1]
Solaris:
> cron has had a long reign as the arbiter of scheduled system tasks on Unix systems. However, it has some critical flaws that make its use somewhat fraught. [...] cron also lacks validation, error handling, dependency management, and a host of other features. [...] The Periodic Restarter is a delegated restarter, at svc:/system/svc/periodic-restarter:default, that allows the creation of SMF services that represent scheduled or periodic tasks. [2]
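On the launchd side, the replacement for a crontab line is a declarative plist; the label, script path, and interval below are hypothetical examples:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Hypothetical periodic job replacing an hourly crontab entry -->
  <key>Label</key><string>com.example.cleanup</string>
  <key>ProgramArguments</key>
  <array><string>/usr/local/bin/cleanup.sh</string></array>
  <key>StartInterval</key><integer>3600</integer>
</dict>
</plist>
```

Unlike a crontab entry, launchd validates the job definition at load time and can track and restart the process, which is the kind of error handling and dependency management the Solaris quote says cron lacks.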
I take the attribution in the first link (references to "Führerbunker" and "Führer") to mean that the author is comparing Lennart Poettering to Hitler. That's not funny, it's just very, very inappropriate.
Hating systemd is like hating Hillary Clinton at this point. It's well past time to suck it up and make peace with your next init system/President because the only viable alternative(s) are far worse.
Eh, arguably from ecosystem effects, other people's votes count a lot too. I don't think I'd want to be the sole user of best init system in the world!
Right now what I'm objecting to with systemd is that this system replaces syslog, has been created and driven by the enterprise linux distro, with full-time experienced linux devs, and has been released and used in production for years...
... and still doesn't have functional centralised logging ability. People have to use dirty hacks to make it work. This is my current headache.
The fact that it "replaces" syslog (you can still have it forward to syslog if you insist) is one of the best parts. After getting used to journald I have no desire to ever go back to dealing with syslog.
> ... and still doesn't have functional centralised logging ability. People have to use dirty hacks to make it work. This is my current headache.
What are those "dirty hacks"? You can trivially use logstash or similar or you can forward log entries to a remote syslog-compatible endpoint. Incidentally the same that people usually do with syslog.
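For reference, the usual non-hack route looks something like this; the remote hostname is an example:

```ini
# /etc/systemd/journald.conf: hand entries to the local syslog socket
[Journal]
ForwardToSyslog=yes
```

From there a local rsyslog ships everything to the central host (e.g. a `*.* @@loghost.example.com:514` rule for TCP forwarding), or you skip syslog entirely and export structured entries with `journalctl -o json` into logstash or similar.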
Come on in, the water's fine here! Honestly, I use FreeBSD/OpenBSD for everything I need and anytime I have to deal with some linux monstrosity it's like taking a day trip from Toronto to Detroit.
I cannot even begin to describe how silly your comment is. Since when were politicians even comparable to programs? Do we "elect" a init system, as one nation united under Torvalds?
I'm hoping I was just trolled by an HN-flavored Markov Chain.
> Since when were politicians even comparable to programs?
Since people learned the power of the metaphor.
> Do we "elect" a init system
For some distros? Sure. By its nature, Linux, GNU and the open source software that goes into the ecosystem allows people to create new distributions, or choose one of the many that exist. This choice is, in some small way, like a vote. If systemd was really that bad, enough people would work around it to make its adoption much more problematic.
If you want more than that, some distributions literally vote on features like this, and have voted specifically on systemd[1].
> I cannot even begin to describe how silly your comment is. ... I'm hoping I was just trolled by an HN-flavored Markov Chain.
I do retract my complaint about comparing politicians to programs. In its place, I complain about the process of electing a President being different from voting on an init system.
The most important point here is that distributions vote on which init system they elect. We are not all electing one init system to rule them all, across Linux. Distributions are nation-states of varying size that follow similar but sometimes incompatible rules, all derived from the same core tenets and program. So we're electing governors from the same political parties, more or less.
I think telling people to suck it up and just accept systemd as their one true init system is just silly. Regardless about how you feel about Clinton, there are always reasons to use something else.
If you need a barebones system, or something for experimentation, or something that is hardened at the price of flexibility, that is an applicable choice, and one you can make from the comfort of your own home. You can't fork the US or an individual state in the same way you can download a different distro to your Raspberry Pi.
And I could go on and on. But you're right; I suppose I could, in the end, begin to describe how silly that comment was. Even if the explanation ended up being really unwieldy and not my best writing. It might not have been wholly constructive either, but we're generally all here to have a good time.
My point is, it's a silly, leaky metaphor. And telling people to suck it up and use an actually useful tool in the comments for a distribution that's written as an elitist hobby project is similarly silly. These people aren't picketing your Debian or Arch systemd parties. They're just doing their own dang thing.
All metaphors and similes are leaky. The point is to focus on the ways it works and doesn't work, because each has the possibility to expand your thinking on a topic. The original comparison could have only worked in a singular facet, yet that would still make it a valid, correct and possibly useful simile. Here you've expanded on some ways the two things are different, which is also generally the point of using an analogy, in that it promotes that thinking as well.
> You can't fork the US or an individual state in the same way you can download a different distro to your Raspberry Pi.
Well, you can (in that you can fork the rules and structures), it's just that finding the resources (people and location) to make use of this new government is hard, because we are currently resource constrained. In the past, when land was plentiful, this happened. It happened to some extent with the Pilgrims (although it was mostly a separation from the prior church, not the government, though I don't doubt it was also viewed as a partial separation from the government due to the distances involved). If we start colonizing Mars at some point, I'm pretty sure there will be some more separatist movements and forking of governments.
Another way to look at this is that you can fork the government right now, you just can't supersede the rights of the current government you are part of. To follow the resource and forking metaphor, you can virtualize governments to your heart's content, but in cases where your rules conflict with the host government, you can emulate the result but you can't enforce it. That is, Ring 0 doesn't care what you think you can do, the rules are the rules.
Yeah, I'm aware, and actually thought of that while writing the comment, and specifically chose metaphor. I think it still worked better to use metaphor because I think that's the more common way to relate the items in question, and being the more abstract of the two, metaphors obviously allow for similes.
In the Linux ecosystem, generally you use whatever the majority supports, or if you use an alternative you assume responsibility for supporting it yourself. Since the majority of distros, and soon the majority of upstream, are supporting systemd, what do you think is going to be used by most commercial Linux deployments?
That's not true. Unlike presidential elections, we don't all have to make the same choice. openrc, runit, s6, nosh, bsdinit... there are plenty of choices that are better.
Because it's a Windows monolithic approach to startup, shutdown and dependency management, as well as being a poor copy of Solaris' service management facility, smf(5).
I think it's a bit unfair to compare Swift with the HotSpot JDK. Try Android? (Don't get me wrong, I'm a big fan of Java; I just want to see a real-world comparison.)