Hacker News | throwbsidbdk's comments

Or just pull my old Ubuntu box out of the closet. No webcam, no mic, no problem.

Still, this is hilarious, because only the truly desperate will put up with this crap; everyone else will just find jobs somewhere else.


I can see why they might do this: anyone who can land a job with similar benefits somewhere else probably doesn't even apply, leaving them with the worst to choose from. Their hiring climate gets even tougher because they "can't hire enough good engineers," and the cycle repeats.


Whatever happened to progressive JPEG? You get graceful degradation for free just by encoding your images that way.


I dunno, man. The longer I use typed languages, the more I hate untyped ones, if only for the IDE help alone.

All the newer static languages use type inference anyway, so it's not that common to need to specify types explicitly.

Besides, if you want to turn off the type system, you can just cast to object, or use "any" in the case of TypeScript.
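A quick illustration of both points as plain TypeScript (variable names are mine, just for the example):

```typescript
// Inference: no annotations needed, everything is still checked.
const n = 3;            // inferred as number
const xs = [n, 4, 5];   // inferred as number[]

// Escape hatch: 'any' opts a value out of checking entirely.
let loose: any = "hello";
loose = 42;                    // fine: 'any' accepts anything
const prop = loose.noSuchProp; // also compiles; undefined at runtime

console.log(n, xs.length, loose, prop);
```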

JavaScript's untyped insanity is also the main reason it's around 10x slower than C# or Java.

Again, JS is the familiar example, but untyped "numbers" are HELL if you're trying to do any real math or implement a mathematical algorithm. I decided against porting one of my libs to JS for that reason alone.
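For the record, here's the kind of thing that bites (runnable as TypeScript or plain JS; the examples are mine):

```typescript
// Every JS number is an IEEE-754 double, so:

// 1. Classic float rounding: 0.1 and 0.2 aren't exact in binary.
console.log(0.1 + 0.2 === 0.3); // false

// 2. No real integers: past 2^53, integer arithmetic silently loses precision.
console.log(Number.MAX_SAFE_INTEGER);                   // 9007199254740991, i.e. 2^53 - 1
console.log(9007199254740992 + 1 === 9007199254740992); // true: the +1 vanishes
```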

A lot to lose and nothing to gain with dynamic typing.


People vote with their feet, and while I'm really sympathetic to everything you're saying about static typing, it's hard to argue with a reality that keeps producing a lot of really popular dynamically typed languages. JS is exhibit #1, but there are also Python, Ruby, Clojure, and on and on.

Take a look at how fast JavaScript mutates. I'd argue that rate of mutation is one of the things a dynamically typed language makes possible. Java doesn't mutate like that, despite having many of the same ecosystem pressures, such as the pressure to "be good at everything" because it's so widely used.

That mutation often comes with unneeded parts and kludges, but the rate of mutation and change in the JS ecosystem has also been a huge positive. Still, TANSTAAFL and all that, I'm certainly not saying it's all good -- but I think you overstate it considerably when you say "A lot to lose and nothing to gain with dynamic typing"


I'm in the opposite camp. It feels like JS is mutating quickly to try to fix the ridiculous problems with the language, because people are forced to use it.

You don't see such sweeping changes in other languages that have been around much longer and are used more widely than JS.

JavaScript just got a package manager; the module system was hacked on a few years ago and still isn't standardized; the build tools change every other year and still largely suck; it still doesn't support multithreading; the object system is super wonky; and it doesn't have distinct float/integer types, which seems to break math constantly. None of these is an issue in any other language I can think of; these features ship in version 1.0.

I expect developers to jump ship immediately when a replacement without these core problems reaches critical mass.

Google has been looking into this for a while, and Dart was designed with the assumption that JavaScript fundamentally sucks and can't be fixed.

This has happened before... Look at what happened to Perl when Python started to gain steam. It used to be pretty much the #1 way to write websites; now you would be a fool to use Perl over Python for almost anything.


I see the proliferation of JavaScript tools as a massive failure of the vanilla js standard library and ECMA. Most of the stuff out there just rebuilds basic language features that would be standard in other languages.

We wouldn't need Node, npm, TypeScript, Angular, React, webpack, or most of the bazillions of libraries out there if vanilla JS had reasonable defaults.

It's causing horrid fragmentation that probably won't go away until JS is replaced, maybe 5 years down the road.

The best we can hope for is a C++-style solution: a new language that's basically a massive preprocessor on top of the original and fixes as much as possible. My hope is in TypeScript for now.


I think node is the funniest example. "Yay we can run js on the server now! This is the future!"

But wait, what other language can't you run on the server? I can't think of a popular language in existence before JS that took so damn long just to get a server-side story. It took JS ten years! Even Rust has a web server, and it just hit 1.0. Oh, and Node is still single-threaded, something horrific for servers that usually have 16+ cores.


You could run JavaScript on the server back in 1996.

Netscape Enterprise Server supported JavaScript as a server-side language, similar to VBScript or PHP. They called it LiveWire.


Complete lack of understanding of Node


Man, so much this. Hope WebAssembly is good!


Fun factoid most have forgotten: regex is Perl. The beginnings are elsewhere, but regex as we know it was designed as part of the language, and the engine was pulled out and reused when people found how useful it was.

Perl's regex engine is still far ahead of the regex features in more modern languages, supporting, among other things, code execution within capture groups. If I remember right, the Perl regex engine is actually Turing complete.


I know you're not trying to say that regular expressions were created as part of Perl, but I think you're giving a bit too much credit to it[1] regarding regexes.

The PCRE library is indeed used all over. And Perl was, I think, the first first-class scripting language that integrated regexes so closely to control structures and other language features in a way that feels truly natural.

There are still a lot of tools out there that use other regex libraries. I don't have it in front of me, but there's a lovely chart in the book _Mastering Regular Expressions_[2] that breaks out regular expression library use by tool. But, generally, I think the diversity of regex libraries actually causes problems for adoption these days, because people who are tempted to use them (and thus learn more) tend to run into other tools where the things they've learned mysteriously don't work anymore, which scares them off.

Anyway, regular expressions in the wild go back to Unix v.4, which included Ken Thompson's grep.

[1] Perl deserves a ton of credit it doesn't get in general, including credit for giving the world PCREs.

[2] In general, if you work with regexes a lot and don't own this book, you're doing yourself a disservice. It is one of my top-10 technical books, not just for density of actionable information, but also for its general excellence.


Have a look at grammars in Perl 6 and the new regexen. Light-years ahead of anything else. Perl 6 also does numeric division properly and, if I'm not mistaken, eliminates NPEs, so what's not to like?


What you describe happened far earlier. As far as I understand it, regexps were originally a part of ed (having been derived from QED), the original Unix text editor. Its “g” command with a “p” flag, or “g/re/p”, for globally searching for a regexp and printing the matching lines, was later found so useful that it was implemented into a separate utility, “grep”. Many Unix utilities started using regular expressions from then on, including Perl.


Maybe I was a bit too excited about the Perl part. Perl perfected regex, and the Perl regex engine was integrated into other languages until it became a normal language feature.

Regex as we know it is largely a result of the adoption of Perl and the flexibility of its regex engine.


Thus the PCRE regex library, Perl-Compatible Regular Expressions, for instance.


Note that this library is by Philip Hazel and did not originate in the Perl source code, but in Exim.

Regex facilities for text processing were first implemented by Ken Thompson, long before Perl.

On the topic of implementations, this is important: https://swtch.com/~rsc/regexp/regexp1.html


PCRE is a nice library. I read (on its site, IIRC) that it is used for the regex support in Python and some other languages.

I once worked - as part of new product work in an enterprise company - on building the PCRE library as an object file on multiple Unixes from different vendors (like IBM AIX, HP-UX, Solaris, DEC Ultrix, etc.) and also on Windows (including on both 32-bit and 64-bit variants of some of those OSes), using the C compiler toolchain on each of those platforms. I was a bit surprised to see the amount of variation in the toolchain commands and flags (command-line options) across the tools on all those Unixes. But on further thought, knowing about the Unix wars [1] and configure [2], maybe I should not have been surprised.

[1] https://en.wikipedia.org/wiki/Unix_wars

[2] https://en.wikipedia.org/wiki/Configure_script


grep == global regular expression print, and it has its origins in ed, from pre-vi days, if my memory of passed-on lore is correct.


I have heard whispers of this, and the mention of common on-call hours seems to lend it some credibility.


Yes, seriously legacy software (still a lot of Perl in their production web code, including ancient Perl template engines) and the on-calls are truly bad. AWS is more modern, but still not entirely awesome from what I've heard.

On the plus side, hiring people out of there after a year is usually pretty easy.


What are the typical on-calls?


You are paged most nights, often more than once a night, during your rotation, and you are on rotation a lot.


The failure mode doesn't matter much in practice. You need to track how many inserts you've done on both for practical reasons, so with either you'll have a set cutoff before a rebuild. Cuckoo filters are more likely to be used in place of counting Bloom filters than vanilla ones.

You could always avoid duplicate inserts in a cuckoo filter by checking contains before calling insert again. A modified insert-only-once routine would have only a small performance penalty. You can't use counting or deletion while doing this, though, so it's a trade-off. The same trade-off happens with counting Bloom filters, but they are much less space efficient.

Practically, the use case for cuckoo filters over Bloom filters probably lies in bidirectional communication: partner nodes can keep each other's state updated without needing to exchange a ton of data. Think distributed caches. Two data nodes exchange cuckoo filters of each other's data initially. As things fall out of their caches, they can tell each other to delete those items from each other's filters by sending only the bucket index and fingerprint, which is probably much smaller than the piece of data it represented originally. Since each data node independently knows what was in its cache, there's no risk of false deletions. You can't really use Bloom filters for this, because you can't delete.
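A toy sketch of both ideas, the insert-once guard and deletion by (bucket index, fingerprint), in TypeScript. This is my own minimal illustration with made-up names, not a production filter; real implementations (partial-key cuckoo hashing) add eviction/relocation and proper hashing:

```typescript
type Fingerprint = number;

// Toy cuckoo filter: 2 candidate buckets per item, 4 slots per bucket.
class TinyCuckooFilter {
  private buckets: Fingerprint[][];

  constructor(private numBuckets = 64, private slotsPerBucket = 4) {
    this.buckets = Array.from({ length: numBuckets }, () => []);
  }

  // FNV-1a, just to keep the sketch dependency-free.
  private hash(s: string): number {
    let h = 0x811c9dc5;
    for (let i = 0; i < s.length; i++) {
      h = ((h ^ s.charCodeAt(i)) * 0x01000193) >>> 0;
    }
    return h;
  }

  private fingerprint(item: string): Fingerprint {
    return (this.hash("fp:" + item) % 255) + 1; // 1..255, never 0
  }

  // The second bucket is derivable from (first bucket, fingerprint) alone,
  // which is what makes deletion by "bucket index + fingerprint" possible
  // without ever sending the original item.
  private indices(item: string, fp: Fingerprint): [number, number] {
    const i1 = this.hash(item) % this.numBuckets;
    const i2 = (i1 ^ this.hash(String(fp))) % this.numBuckets;
    return [i1, i2];
  }

  insert(item: string): boolean {
    const fp = this.fingerprint(item);
    for (const i of this.indices(item, fp)) {
      if (this.buckets[i].length < this.slotsPerBucket) {
        this.buckets[i].push(fp);
        return true;
      }
    }
    return false; // a real filter would evict and relocate here
  }

  contains(item: string): boolean {
    const fp = this.fingerprint(item);
    return this.indices(item, fp).some((i) => this.buckets[i].includes(fp));
  }

  // The "insert only once" routine from the comment above: one extra
  // lookup per insert, and no stacked duplicate fingerprints.
  insertOnce(item: string): boolean {
    return this.contains(item) || this.insert(item);
  }

  // What a partner node would call after receiving only (bucket, fp)
  // over the wire -- far smaller than the cached item itself.
  deleteFingerprint(bucket: number, fp: Fingerprint): boolean {
    const pos = this.buckets[bucket].indexOf(fp);
    if (pos < 0) return false;
    this.buckets[bucket].splice(pos, 1);
    return true;
  }

  delete(item: string): boolean {
    const fp = this.fingerprint(item);
    return this.indices(item, fp).some((i) => this.deleteFingerprint(i, fp));
  }
}
```

With insert-once in effect, a delete fully removes the item, which is what makes the cache-sync protocol safe.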


What makes these so easy to detect that they're this secretive about it? There have to be obvious clues in the TCP/IP stack. 4G modems are opaque and proprietary, so it's unlikely the fear of discovery lies there.

If I had to guess, they're probably detectable from TCP/IP, easily, in user land.

How? Just thinking about it, fragmented packets could be a possibility. If fragments are sent in the wrong order, you need to reassemble them to find the proper destination. That requires keeping a fragment state table on the device doing the transparent forwarding. I've seen many transparent proxies that just drop these packets instead.
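Sketching that guess in TypeScript: a transparent forwarder that sees fragments out of order has no choice but to buffer them in a state table, because only the offset-0 fragment carries the transport header with the destination port. Names and structure here are mine, purely illustrative (gap checks omitted):

```typescript
interface Fragment {
  flowId: string;  // (src IP, dst IP, IP ID) in a real stack
  offset: number;  // byte offset within the original datagram
  more: boolean;   // IPv4 "more fragments" flag
  data: number[];
}

// The per-flow state table the comment is talking about.
const pending = new Map<string, Fragment[]>();

function onFragment(f: Fragment): number[] | null {
  const frags = pending.get(f.flowId) ?? [];
  frags.push(f);
  pending.set(f.flowId, frags);

  const sorted = [...frags].sort((a, b) => a.offset - b.offset);
  const haveFirst = sorted[0].offset === 0;     // has the transport header
  const haveLast = sorted.some((x) => !x.more); // saw the final fragment
  if (!haveFirst || !haveLast) return null;     // must keep buffering

  pending.delete(f.flowId);
  return sorted.flatMap((x) => x.data);         // reassembled payload
}
```

A proxy that wants to stay stateless can't do this, which is why simply dropping out-of-order fragments is the common shortcut.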


The net effect is much larger than the smallish overall savings make it appear. The request is much smaller, and this has a surprisingly large effect because modern consumer connections are asymmetrical, by as much as 20:1. Those requests take 5-20 times as long per byte to send over the wire as the responses take to receive. Shrinking them this much decreases total request -> response time a lot more than it looks.
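Rough arithmetic with made-up but plausible numbers (20 Mbps down / 1 Mbps up, a 1500-byte header-heavy request shrunk to 300 bytes):

```typescript
const upstreamBps = 1_000_000 / 8; // 1 Mbps uplink, in bytes per second

const sendMs = (bytes: number) => (bytes / upstreamBps) * 1000;

console.log(sendMs(1500)); // about 12 ms just to get the request out
console.log(sendMs(300));  // about 2.4 ms after shrinking it 5x
```

On the 20 Mbps downlink the same bytes would take 1/20th the time, which is why the upstream request leg dominates the round trip.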

