
> Whether you love or hate Microsoft for $6.99/mo you can get Office, (decently private) email, cloud storage, and Skype. Hate Skype? Don't blame you but from there you can get a phone number that you can give out and keep your personal number just for family/emergencies.

Microsoft compromises the security and privacy of all of their online services, including Skype, Outlook.com, and Hotmail: https://www.theguardian.com/world/2013/jul/11/microsoft-nsa-...

A much better alternative is to give your money to independent telephony providers that run on, and support, Free Software: https://jmp.chat/


> coming to the conclusion that slowing development time was worthy if the resulting product was performant

What I don't get is where these claims of faster development time come from. Toolkits like Qt make it way faster and easier to build UIs than trying to use HTML/CSS garbage.


Snap-on, Knipex, Milwaukee, Bosch


> usenet was actually usable

When did it stop? comp.lang.* is still pretty active.


Snippets here and there keep trucking, but I think it's fair to say Usenet in its entirety is not what it used to be.



Here is a good example of ESR being racist: https://twitter.com/tqbf/status/816449724127608833

Here is an example of ESR being misogynist: http://esr.ibiblio.org/?p=6907


I recall thinking it sounded like BS at the time, but the "misogynist" link seems prescient in hindsight.


This is trolling by people who do not understand the GPLv2, and has happened before. Someone had the "brilliant idea" of "retroactively revoking" the GPLv2 a decade ago (2008): http://www.groklaw.net/article.php?story=2006062204552163

You cannot do it. The only way the right to distribute GPLv2 software gets revoked is when a specific party distributes a GPLv2-licensed work in violation of the GPLv2's distribution terms.


> If the code is written in a distributed fashion from the start then it can be designed so it's one python/node/R process per core.

That's a really glib dismissal of how hard the problem is. Python and node have pretty terrible support for building distributed systems. With Python, in practice most systems end up based on Celery, with huge long-running tasks. This configuration basically boils down to using Celery, and whatever queueing system it is running on, as a mainframe-style job control system.

The "shell scripts / xargs -P" mentioned by chubot is a better solution that is much easier to write, more efficient, requires no configuration, and has far fewer failure modes. That is because Unix shell scripting is really a job control language for running Unix processes in parallel and setting up I/O redirection between them.
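For contrast, here is roughly what the same "run N jobs in parallel, one per core" pattern looks like in Python's stdlib, which supports the point: it's noticeably more ceremony than `xargs -P`. A minimal sketch; the file names and the `crunch` function are made up for illustration.

```python
# Rough Python-stdlib analog of `xargs -P $(nproc)`: one worker process
# per core, fed from a list of inputs. Names here are illustrative only.
import os
from concurrent.futures import ProcessPoolExecutor

def crunch(path):
    # Stand-in for the real per-item work (parse, transform, upload, ...).
    return (path, len(path))

if __name__ == "__main__":
    items = ["a.csv", "bb.csv", "ccc.csv"]
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        for path, result in pool.map(crunch, items):
            print(path, result)
```

Note the `__main__` guard is mandatory on platforms that spawn rather than fork -- one of the extra failure modes the shell version simply doesn't have.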


Am I correct in assuming Elixir/Erlang does a much better job at this compared to Node/Python/etc., putting aside (what I understand to be) the rather big problem of their relative weakness for computation?


Erlang can be a good fit -- the concurrency primitives allow for execution on multiple cores, and the in-memory database (ets) scales pretty well. Large (approaching 1TB per node) mnesia databases require a good bit of operational know-how, and willingness to patch things, so try to work up to it. Mnesia is Erlang's optionally persistent, optionally distributed database layer that's included in the OTP distribution. It's got features for consensus if you want that, but I've almost always run it in 'dirty' mode, and used other layers to arrange so that all the application-level writes for a key are sent to a single process, which then writes to mnesia -- this establishes an ordering on the updates and (mostly) eliminates the need for consensus.
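A hedged Python analog of that single-writer pattern (in Erlang this would be one process per shard; here threads and queues stand in, and all names are made up): route every write for a given key to exactly one worker, so updates to that key are applied in a well-defined order without any consensus protocol.

```python
# Single-writer-per-key sketch: hash each key to one worker's inbox;
# only that worker ever touches its shard, so per-key writes are ordered.
import queue, threading

N_WORKERS = 4
inboxes = [queue.Queue() for _ in range(N_WORKERS)]
store = [dict() for _ in range(N_WORKERS)]  # one shard per worker, never shared

def worker(i):
    while True:
        key, value = inboxes[i].get()
        if key is None:           # sentinel: shut down
            return
        store[i][key] = value     # only worker i ever writes shard i

def write(key, value):
    inboxes[hash(key) % N_WORKERS].put((key, value))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
for t in threads:
    t.start()

write("users:42", {"name": "Ada"})
write("users:42", {"name": "Ada Lovelace"})  # same key, same queue: ordered
for q in inboxes:
    q.put((None, None))
for t in threads:
    t.join()
```

Because both writes for "users:42" land in the same FIFO inbox, the second one always wins -- no locks, no consensus.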


I believe that with native functions ("NIFs") written in Rust, plus some work on the NIF interface (to make it harder to take down the whole BEAM VM on errors), you can get more of the best of both worlds today than you used to. As you say, Erlang itself is rather slow wrt compute.


Thankfully it's not an issue for me. Elixir/Erlang is pretty much perfect for most of my use-cases :). But I foresee a few projects where NIFs or perhaps using Elixir to 'orchestrate' NumPy stuff might be useful. Most of my work would remain on the Elixir side though.


Yes. Erlang is built around distributed message sending and serialization. Python does not have any such things; even some libraries like Celery punt on it by having you configure different messaging mechanisms ("backends" like RabbitMQ, Redis, etc.) and different serialization mechanisms (because built-in pickle sucks for different uses in different ways). Node.js does not come with distributed message sending and serialization either.
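A quick look at the built-in serialization in question. pickle round-trips plain data fine, but it is Python-only, protocol/version-sensitive, unsafe to run on data from untrusted peers, and chokes on things like lambdas and sockets -- some of the ways it "sucks for different uses in different ways".

```python
import pickle

# Plain data round-trips fine:
msg = {"task": "resize", "args": [1, 2, 3]}
wire = pickle.dumps(msg)
assert pickle.loads(wire) == msg

# Function objects generally don't pickle by value:
try:
    pickle.dumps(lambda x: x)
except Exception as exc:
    print("not picklable:", type(exc).__name__)
```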


> With Python, in practice most systems end up based on Celery, with huge long-running tasks.

Oh dear... Yeah, that's a terrible distributed system. Interestingly, all the distributed systems I've worked on with Python haven't had Celery as any kind of core component. It's just poorly suited for the job, as it is more of a task queue. A task queue is really not a good spine for a distributed system.

There are a lot of python distributed systems built around in memory stores, like pycos or dask, or built around existing distributed systems like Akka, Cassandra, even Redis.


> There are a lot of python distributed systems built around in memory stores, like pycos or dask, or built around existing distributed systems like Akka, Cassandra, even Redis.

Cassandra and Redis just mean that you have a database-backed application. How do you schedule Python jobs? Either you build your own scheduler that pulls things out of the database, or you use an existing scheduler. I once worked on a Python system that scheduled tasks using Celery, used Redis for some synchronization flags, and Cassandra for the shared store (also for the main database). Building a custom scheduler for that system would have been a waste of time.
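A minimal sketch of the "build your own scheduler that pulls things out of the database" option. The heap stands in for a Redis sorted set or a Cassandra table; all names here are made up for illustration.

```python
import heapq, time

jobs = []  # (run_at, job_id, payload); a real system stores these durably

def schedule(job_id, payload, delay=0.0):
    heapq.heappush(jobs, (time.time() + delay, job_id, payload))

def run_due(now=None):
    now = time.time() if now is None else now
    started = []
    while jobs and jobs[0][0] <= now:
        _, job_id, payload = heapq.heappop(jobs)
        started.append(job_id)  # a real worker would launch a process here
    return started

schedule("resize-42", {"src": "a.png"}, delay=-1)  # already due
schedule("nightly-report", {}, delay=3600)         # due in an hour
print(run_due())
```

Even this toy shows why reusing an existing scheduler is attractive: durability, retries, and worker dispatch are all still missing here.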


> Cassandra and Redis just mean that you have a database-backed application.

Oh there's a lot more to it than that. CRDT's... for example.
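To make the CRDT point concrete, here is the textbook grow-only counter (G-Counter) -- not something from the thread, just the standard introductory construction: each node increments only its own slot, and merge takes an element-wise max, so replicas converge no matter what order merges happen in.

```python
class GCounter:
    """Grow-only counter CRDT: commutative, associative, idempotent merge."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> count contributed by that node

    def incr(self, n=1):
        # A node only ever increments its own slot.
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        # Element-wise max; applying the same merge twice changes nothing.
        for node, c in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), c)

    def value(self):
        return sum(self.counts.values())
```

After `a.merge(b)` and `b.merge(a)`, both replicas report the same total without any coordination.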


Well, Celery uses RabbitMQ and typically Redis underneath: RabbitMQ to pass messages, and Redis to store results.

You can scale up web servers to handle more requests, which then use Celery to offload jobs to different clusters.


Yeah, but fundamentally Celery is a task queue. You don't build a distributed system around that.


I think the intention was that if you're gently coerced into working with a single thread, like with Node, then you're also coerced into writing your code in a way that's independent from other threads. In theory, it's easier to reason about parallel work when you start from this point -- I've certainly noticed this effect before.

I don't think any reasonable developer would dismiss concurrency/parallelism as easy problems.


> Cryptography isn't perfect; someone could always guess your private key.

Cryptography is a branch of mathematics, and cryptographic systems can be formally proved to have certain properties, such as being unable to derive the private key from the content of the encrypted message. That the private key can be guessed is a trivial observation, and a bad argument for dismissing formal proofs. ASLR is a hack on a hack that does not tell you anything about the formal properties of the system.


> Cryptography is a branch of mathematics, and cryptographic systems can be formally proved to have certain properties, such as being unable to derive the private key from the content of the encrypted message.

A small correction: All those proofs (if they exist) are relative to complexity-theoretical conjectures that are (ideally) widely believed to be true, but open. The only system that I am aware of where an "absolute" security proof exists is OTP, but this is hardly suitable to use in practice.
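For reference, the OTP construction itself is tiny -- the impracticality is entirely in the key handling (a truly random key as long as the message, used once, shared out of band), not the code. A sketch in Python:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Key must be uniformly random, exactly message-length, and never reused.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))
ct = otp_encrypt(msg, key)
assert otp_decrypt(ct, key) == msg
```

The "absolute" security claim is that for any ciphertext, every plaintext of the same length is equally likely -- which holds only as long as the key-handling rules above are never violated.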


The author is Philip Greenspun, who in the 1980s worked with the people that created all of the things you listed: https://en.wikipedia.org/wiki/Philip_Greenspun

There is nothing myopic about his perspective.


It's fair to mention that he is well known, though in fact I'm one of the old guard that remembers when he had a higher profile.

But, as with a new Paul Graham essay, surely we can critique the blog post on its merits instead of falling back on an assessment based on some kind of appeal to authority/"expertise by association". Philip Greenspun doesn't need to be treated with kid gloves as if he were the pope.

John Ousterhout made comments that touch on some similar (though not identical) distinctions in programming practices. That was years ago, and he was then a much more credible figure in software than Greenspun. All the same, his essay was heavily criticised. That's what serious intellectual discussion should involve.

https://en.wikipedia.org/wiki/Ousterhout%27s_dichotomy

http://www.tcl.tk/doc/scripting.html


If you care about this, the best thing to do is to start using IPv6, and insisting on IPv6 support in any interactions with service providers (SaaSS, hosting, data providers, etc.) that you have.


All my sites support IPv6. My router is horribly old and doesn't. But, once it kicks the bucket, I'll be sure to get an IPv6-compatible one.

