Hacker News | fractalcat's comments

The article is referring to ear candling[0], an alternative-medicine practice espoused by naturopaths and other such quacks. It's sadly quite common.

[0] https://en.wikipedia.org/wiki/Ear_candling


I've seen one of those things recently, and have been wondering about its mechanism: how does it suck the earwax (which ranges in consistency from viscous oil to hardened wax) out of the ear canal? I'll be dismayed if you're telling me it doesn't.


It doesn't. It's one of those things that sounds good enough in theory that people tried it, and of course someone is willing to sell the materials/service if there's money to be made. But yeah, it's along the lines of those "cleanse" drinks that make you crap out what they say are "toxins" that had built up in your intestines but is really just the congealed fiber from the drink (which is mostly just a laxative). In this case, it's candle wax on the cloth that people claim is ear wax that has somehow been drawn out.


They don't do anything. The "earwax" is literally candle wax.


What?? I can't believe this is really happening, it's the funniest thing I've read this week.


Last time I was in a role which involved on-call rotation:

> expected duties (only answer on-call, do other work, etc)

Only answer pages. My employer did shifts a bit differently from most companies - only six hours per shift, no fixed schedule (decided a week in advance) and only outside of work hours (pages during work hours were handled by whichever sysadmins were on duty), which worked quite well to avoid burning out sysadmins. On-call shifts were paid, and shortages of volunteers were rare.

I'd expect to spend maybe fifteen minutes per shift fixing things, on average (this is in managed hosting, so a page could be any of our customers' services).

> how deep does your on-call dive into code to fix, vs triaging, patching, and creating follow up work to fix permanently?

In my case (sysadmin for a managed hosting company) the code involved was often not under our control; the standard practice was to escalate to the customer if the cause of the outage was a bug in the application. The usual process when suspecting a bug was to track it down if possible (the codebases were usually unfamiliar, so this wasn't always the case), work around it as best we could (e.g., temporarily disable a buggy search indexer which was leaking memory, et cetera), and then get in touch with the customer (by email if the workaround was expected to last until work hours, by phone if not). Occasionally I'd fix the bug in-place and send the customer the patch, but this was technically outside scope.

> priority order of work (on-call tickets vs regular work vs existing on-call tickets vs special queue of simple tasks)

The only priorities were resolving the pages at hand and arranging followup where needed (usually raising a ticket to be followed up during work hours).

> what happens when the current on-call can't complete in time?

Generally the on-call sysadmin would resolve whichever pages they had acknowledged; in the event of an extended outage the acking sysadmin was expected to brief and hand over to the person on the next shift.

> how do you manage for other teams' risk? (ie their api goes down, you can't satisfy your customers)

In practice, we could escalate to anyone in the company for a serious outage we were unable to handle ourselves. This was pretty rare at a small ops-heavy company, but everyone had access to everyone else's cell phone number, and an outage-inducing bug was usually sufficient cause to wake someone up if it couldn't be worked around.


I've run barebones hypervisors in production for years; KVM is head and shoulders above everything else I've tried. libvirt and virsh provide a reasonably nice interface if you don't need web-based tooling, and of course config management makes everything a lot easier to maintain. I haven't used salt, but if it has mature tooling for what you need, KVM would be a no-brainer for me.
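To make that concrete, here is a minimal sketch of a libvirt domain definition; the guest name, disk path, and sizes are invented for illustration:

```xml
<!-- example-guest.xml: a minimal KVM guest (all values illustrative) -->
<domain type='kvm'>
  <name>example-guest</name>
  <memory unit='GiB'>2</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/example-guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
```

Registered and booted with `virsh define example-guest.xml` followed by `virsh start example-guest`; templating files like this is exactly where config management earns its keep.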


oVirt is a web-based management interface built on top of libvirt. There is also virt-manager, a desktop GUI for libvirt.

ganeti is an alternative to libvirt.


I'm not sure what the author thinks 'corollary' means here. (Something to do with correlation? Not the definition I'm used to, anyway.)


A proposition that is logically derived from one already proved, usually immediately before.


Loading this webpage I downloaded ~12 KB. For comparison, I went to the NYT homepage and downloaded ~1.2 MB. For people with shitty downlinks like you and me, the absence of exorbitant quantities of JavaScript and images makes a lot of difference - a lot more than time-to-first-byte.


It's not just small; the HTML is also dead simple to render.


Or for a bigger offender: I noticed a couple of days ago that Rally (https://www.rallydev.com/) loaded over 5 MB of JavaScript from 3 scripts... each one over 600 KB, and each loaded 3+ times! Each script seemed to contain its own copy of ExtJS, and each returned a 200 on every load, never a 304 :( .
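For reference, the 304 mechanism being missed here is easy to sketch (Python, with made-up payloads): a server honoring `If-None-Match` revalidation answers with an empty 304 instead of re-shipping the script.

```python
def respond(current_etag, request_headers, body):
    """Simulate a server handling a (possibly conditional) GET.

    If the client revalidates with If-None-Match and the ETag still
    matches, the server answers 304 Not Modified with an empty body
    instead of re-sending the payload.
    """
    if request_headers.get("If-None-Match") == current_etag:
        return 304, b""   # client's cached copy is still valid
    return 200, body      # full response; client downloads everything

# First visit: no cached validator, so the full payload comes down.
status, payload = respond('"v1"', {}, b"...600 KB of ExtJS...")

# Revisit: the browser presents the cached ETag and gets a tiny 304.
status2, payload2 = respond('"v1"', {"If-None-Match": '"v1"'},
                            b"...600 KB of ExtJS...")
```

A correctly configured server would have turned each of those repeat 600 KB transfers into a handful of header bytes.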


That 1.2 MB gets cached. In fact, that can make the site faster on return visits: once the JS is downloaded, only JSON/partials are fetched.

The fairer comparison is the backend, i.e., when tasked to render the same page, which backend would respond faster.


I have a fast FIOS connection, and the New York Times webpage still takes around 6 seconds to fully load according to the Chrome Network panel. In that time it makes 226 requests and downloads 168 KB. This is after several page refreshes, so I'm fully taking advantage of browser caching. Simply put, there's way more images, fonts, and network callbacks on that site than on HN. It's way more heavyweight. HN, by contrast, downloads all of the content in only SIX requests, at 14.8 KB total, and in under a second.

It's the number of requests that's killing the NYT site. HN is very simple and old school, and doesn't do a single thing that isn't explicitly necessary to render exactly the content you see on the page, which is presented cleanly and without frills.


That 1.2 MB gets partly cached on the desktop. Mobile clients cache very little. Safari on my iPad is currently storing only 6.1 MB of website data, and I use it every day on a variety of websites.

Client-side caching is nothing but a lie bad front-end developers tell themselves to justify producing bad websites.



I don't think that works. It's not remotely browsable or searchable. It would be quite challenging to put these scrapes up, anyway. They're regular wget crawls with a regular directory/file structure, the problem is that there's so much material and so many files that it can be almost impossible to find what you are looking for... (Plus you need to rewrite links into relative links to make everything render properly.)
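The link-rewriting step mentioned above (what wget's `--convert-links` automates) boils down to something like this sketch; the URLs are illustrative:

```python
import posixpath
from urllib.parse import urlparse

def to_relative(page_url: str, target_url: str) -> str:
    """Rewrite an absolute same-host link as a path relative to the
    page it appears on, so a mirrored copy renders from the local
    directory tree instead of reaching back to the origin server."""
    page, target = urlparse(page_url), urlparse(target_url)
    if page.netloc != target.netloc:
        return target_url  # leave cross-host links untouched
    page_dir = posixpath.dirname(page.path)
    return posixpath.relpath(target.path, start=page_dir)

# A link from /a/b/index.html to /a/c/page.html becomes ../c/page.html
rewritten = to_relative("http://example.org/a/b/index.html",
                        "http://example.org/a/c/page.html")
```

A real mirroring tool also has to rewrite links inside CSS, handle query strings, and pick filesystem-safe names, which is where much of wget's and HTTrack's complexity lives.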


Hmm. Now I'm thinking that I might end up using your idea (scraping the dark web) and using something like HTTrack[0] to get exactly that: structure.

[0] https://en.wikipedia.org/wiki/HTTrack


I once tried using HTTrack, but I found it was doing too much magic under the hood and was hard to work with. As dumb as wget is (that blacklist bug is over 12 years old now!), it is at least understandable.


Thanks for saving me the headache :)



No, for the reason outlined by Aaronson in [0] - I don't see anything in the update which addresses this.

[0] http://www.scottaaronson.com/blog/?p=2212


Interesting. I'm inclined to agree, but I pause when I read that Scott Aaronson still seems to hold the line on quantum computing not doing anything too useful for NP-complete problems or a useful subset of them. The weight of opinion seems against him on quantum, though he remains empirically correct for the foreseeable future.


"The weight of opinion seems against him on quantum though he remains empirically correct for the foreseeable future."

Then you're misjudging the weights "for" and "against" him by putting way too much weight on people who don't know what they're talking about and way too little on those who do. It is likely that your "for" group are still operating under the idea that "quantum" works by "trying all possible answers then returning the correct one". This is empirically, mathematically-provably wrong. Anyone operating under this idea deserves for you to weight them at zero.

Despite the fact we still can't build a very big quantum computer, we actually do know quite a bit about what they can do and not do. And as Scott Aaronson points out very frequently, if in fact they prove either able to do something our current theories say they can not or unable to do something that our current theories say they can, either way that will be very interesting, precisely because it will imply that there is something wrong with quantum mechanics, which for all its "woo woo" reputation is one of the most solid math-to-reality theories we have ever had in the history of mankind.

Scott Aaronson isn't "holding" the line... he and his fellow-travelers are drawing the line.

Edit: I'm also unsure on why you think Aaronson believes quantum "won't work"... he's on the optimist side that quantum computers can be made practical. If you mean that he doesn't think "quantum" can solve NP-completeness, well, of course he doesn't... he understands the mathematical proof that it doesn't, so that's hardly surprising.

Edit edit: A positive followup to this negative message: Consider reading Quantum Computing Since Democritus [1], or if you don't want to spend the dough, read through the class notes that turned into that book [2].

[1]: http://www.amazon.com/Quantum-Computing-since-Democritus-Aar...

[2]: http://www.scottaaronson.com/democritus/ , see "Lecture Notes" section.


>If you mean that he doesn't think "quantum" can solve NP-completeness, well, of course he doesn't... he understands the mathematical proof that it doesn't, so that's hardly surprising.

There's no proof that BQP doesn't contain NP (and no proof that it does, either).

Scott thinks it doesn't, but doesn't have a proof (if he did, he'd also have a proof of P ≠ NP; and the fact that a claim implies a resolution of P versus NP is a great proxy for "it's not easy and hasn't been solved yet").
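Spelled out, that parenthetical is just the known containment of classical computation in quantum computation at work:

```latex
% Quantum circuits can efficiently simulate classical ones:
\mathsf{P} \subseteq \mathsf{BQP}
% So a proof that BQP excludes the NP-complete problems would give
\mathsf{NP} \not\subseteq \mathsf{BQP}
  \;\Longrightarrow\; \mathsf{NP} \not\subseteq \mathsf{P}
  \;\Longrightarrow\; \mathsf{P} \neq \mathsf{NP}
```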


Thanks. I'll have a read.


Yeah, exactly right. Not all opinions deserve to be weighted equally.


Quantum computers can't solve NP-complete problems any better than classical computers, as far as we know. You can at best hope for polynomial speedups. That is the scientific consensus; I don't know what weight of opinion you're talking about.


He seems to be misunderstanding Scott as claiming that no quantum computers will beat classical ones (even for factoring and the like), whereas Scott only makes the claim for NP-complete problems.


Yes. That is right. I misunderstood Scott. He makes sense. BQP covers factorisation, which is different from NP-completeness.

That said, it is unclear how far adiabatic approaches will take us, but it seems there are limits there too with respect to spectral-gap issues.


> Scott Aaronson still seems to hold the line on quantum computing not doing anything too useful for NP-complete or a useful subset. The weight of opinion seems against him on quantum

Where are you getting this opinion from? My experience has been exactly the opposite. Could you back it up by referencing examples of this "weight of opinion"?


There's no such thing as a "useful subset" of NP-complete problems. If you solve one of them, you've automatically solved all of them, because they are all reducible to each other.


There could turn out to be an algorithm that is tractable in practice for some NP-complete problem, but when you push another problem through the reduction, the instances can expand enough that tractability is lost. There are also other ways in which different NP-complete problems are meaningfully different. See https://cstheory.stackexchange.com/questions/24879/easier-an... for some discussion. Look also at the first page of http://people.orie.cornell.edu/shmoys/or630/notes-06/lecture...
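As a concrete instance of "reducible to each other" where the instance happens not to expand at all, here is the classic Vertex Cover ↔ Independent Set reduction (brute-force checkers, purely illustrative):

```python
from itertools import combinations

def has_independent_set(vertices, edges, k):
    """Brute force: does the graph have an independent set of size k?"""
    return any(
        all((u, v) not in edges and (v, u) not in edges
            for u, v in combinations(subset, 2))
        for subset in combinations(vertices, k)
    )

def has_vertex_cover(vertices, edges, k):
    """Reduction: C is a vertex cover of size k exactly when
    V \\ C is an independent set of size |V| - k."""
    return has_independent_set(vertices, edges, len(vertices) - k)

# Triangle graph: the minimum vertex cover has size 2.
V, E = [1, 2, 3], {(1, 2), (2, 3), (1, 3)}
```

Solving either problem in polynomial time would solve the other; the subtlety is that other reductions can blow up the instance or move it outside the cases a heuristic handles well, so "easy in practice" need not transfer.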


I didn't realize this was ever in question - of course you can't trust Tor exit nodes not to snoop on your traffic. You can't trust your ISP or friendly local intelligence agency not to snoop on your traffic either; this is why end-to-end authentication and encryption is a useful thing. (This isn't meant as a criticism of Chloe's research; it's certainly valuable data.)


Sometimes I wish there were a very basic how-it-works presentation of the internet (maybe a Kickstarter?).

The less magical the internet seems, the easier it'll be for the public to get behind issues of internet security and privacy at the policy level.



Exactly. I'd actually argue that Tor exit nodes are, on average, more likely to be untrustworthy than a standard ISP connection: the incentives are there for people to run them precisely to capture the kind of traffic users want to keep secret, and a Tor exit node plus a root CA certificate is a great model for government-level attackers to hoover up data that is likely to be sensitive.


To analyze whether ISPs are more likely to be malicious than Tor exit nodes, you need to enumerate the possible attacks and determine which are more likely.

An ISP employee knows who is on either side of a connection and can pick and choose targets very selectively. As gatekeepers they can also be influenced by outsiders to target specific users. However, they are likely to get caught if they carry out noticeable attacks, and they risk their job if it's unsanctioned, or the company's reputation if it is sanctioned.

A Tor exit operator cannot see who is making the connection, but they are slightly less likely to get caught if they do try to attack users. If caught, they only lose the reputation of that node's IP address.

Third are the backbone networks, where, unlike at the ISP level, government-level attackers have great incentives to collect whole nations' or continents' worth of data. The risk of being found out is almost zero, and even then they can still deny it.

All in all, I would summarize it this way: ISPs pose the greater risk of active attacks by both criminal and government-level actors, backbone networks of passive attacks by government-level actors, and Tor exit nodes of passive attacks by criminal actors. To protect against all three, you have to use end-to-end encryption as the primary security technique; adding Tor then helps against metadata attacks.


Heck, my cellular provider was tracking the HTTP connections of their customers by default to sell profiles to marketing companies. (You could opt out, but I believe the fine print was something along the lines of 'we won't sell your information anymore but we will still collect it for later'). Other Internet providers have offered a cheaper plan to opt-in to traffic snooping for marketing profile building/selling. Tor exit nodes and my residential ISPs are on a similar level of distrust for me.

I've since started using a 'whole premises VPN' (all traffic is routed through an encrypted tunnel to a VPS) - I have more confidence in my VPS provider than I do in my residential ISPs. At least the VPS company probably won't use my connection data for marketing profiles.


Yes, this seems to be a case of users not understanding how Tor works and malicious exit node owners taking advantage.


Indeed. But this is also a problem with how Tor is advertised and presented in the media, imho.

A false sense of security is worse than no security at all.


To be fair, at least the Tor Project itself makes a rather serious effort to be upfront about its own limitations.

For example, when you use the (recommended) Tor Browser Bundle, the start page shows the following heads-up:

"Tor is NOT all you need to browse anonymously! You may need to change some of your browsing habits to ensure your identity stays safe."

As well as a link to https://www.torproject.org/download/download.html.en#warning.

That same warning is also present on the main download page: https://www.torproject.org/download/download-easy.html.en


Tor also has extensive documentation about the threat model they protect against, and the limitations of that model.

If there were one thing I could change about security discussions, it would be this: you can't talk about security in the abstract, only security relative to some threat or foe.

I think a lot of the conversation would change if we could get people to start talking about security that way.


The modern use of the word was spawned by a book by Thích Nhất Hạnh[0] called The Miracle of Mindfulness. It was then codified into clinical practice by (among others) Marsha Linehan, who incorporated the book's ideas (in combination with behavioural therapy) as dialectical behavioural therapy, which was the first evidence-based treatment for borderline personality disorder[1]. Since then it's been a prominent feature of popular psychology (and real psychology, though in a more restricted context).

[0]: https://en.wikipedia.org/wiki/Th%C3%ADch_Nh%E1%BA%A5t_H%E1%B...

[1]: Linehan M., Cognitive-Behavioral Treatment of Borderline Personality Disorder

