Hacker News
DoS attacks that took down big game sites abused Web’s time-synch protocol (arstechnica.com)
43 points by RougeFemme on Jan 9, 2014 | 34 comments


As someone who was considering Black Lotus as a potential service for DDoS protection, I found they had a number of failures during this whole episode that make it hard to take them seriously.

First, and most importantly, many sites hosted on their service, including their own web page, suffered many hours of downtime, and to this day they have not tweeted or given any public statement about the incident. A lot of Googling turned up some private responses to complaining customers, with the weak justification that it was their upstream provider's problem and not theirs. The fact that they now estimate the attack at 28 Gbit/s makes me trust their service even less, given that much larger DDoSes are flying around.

If anyone has any other services that are competitive, other than Cloudflare, I'd be interested in hearing about them out of curiosity. It seems there should be more than one company that can do this well.


defense.net and Prolexic come to mind before Black Lotus or CloudFlare. Black Lotus and CloudFlare are bottom-of-the-barrel options compared to Defense.Net and Prolexic.

Keep in mind that Prolexic was recently acquired by Akamai, so prices will skyrocket soon.


It only works for certain kinds of sites, but if it's a site you can host on third-party hosting (as opposed to needing DDOS protection of your own network or VPS), Nearlyfreespeech.net has a pretty good track record with mitigating DDOS on sites they host. They also have a strong ethos of not blaming the customer for "provoking" the attack, or terminating them for the inconvenience.


That's how Black Lotus does business. It's like the time when they were caught spamming and were made to look like complete fools on WHT.


Easy DoS to block upstream, thankfully. Time seems like the sort of thing that a colo should provide to its tenants. GPS time is likely good enough for 99% of sites, and the final 1%, who really require super-reliable time and need to be concerned about GPS spoofing/outages, can install a local atomic clock in their cabinet.


How would you block it? Any packet with the source port of NTP should be dropped?

No. ISPs should not get into the business of default block rules. This gets into all sorts of problems. At what point does a block become unreasonable? What if a tenant needs multiple sites in sync -- they must run NTP between them (or PTP) in order to determine their drift from one another.

If you really want upstream blocking of traffic, the best thing to look toward is BGP Flowspec (RFC 5575).


"How would you block it? "

The same way any upstream provider blocks a DDoS: by characterizing the DDoS flow, scrubbing, and dropping any packet that matches the pattern.
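The "characterize and drop" idea can be sketched in a few lines. This is a minimal illustration of the logic, not any vendor's scrubbing product; the `Packet` shape and the signature thresholds are assumptions chosen for the NTP-reflection case (large UDP packets arriving *from* source port 123):

```python
# Sketch of scrubbing: learn a signature for the attack flow,
# then drop anything that matches it. Illustrative only.
from dataclasses import dataclass

@dataclass
class Packet:
    proto: str       # "udp" or "tcp"
    src_port: int
    size: int        # bytes on the wire

def make_scrubber(proto: str, src_port: int, min_size: int):
    """Return a predicate: True = forward the packet, False = drop it."""
    def keep(pkt: Packet) -> bool:
        matches = (pkt.proto == proto
                   and pkt.src_port == src_port
                   and pkt.size >= min_size)
        return not matches
    return keep

# NTP amplification floods arrive as large UDP packets from port 123.
scrub = make_scrubber(proto="udp", src_port=123, min_size=468)

attack = Packet("udp", 123, 482)   # monlist-style reply fragment
legit  = Packet("udp", 123, 76)    # ordinary small NTP response

print(scrub(attack))  # False -> dropped
print(scrub(legit))   # True  -> forwarded
```

The catch, as the rest of the thread notes, is that legitimate NTP responses near the size threshold can get caught in the same net, which is why some downstream NTP drops are expected during scrubbing.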

The idea here is that during a DDoS, when you might see a few more NTP drops than normal, none of the local users should be impacted if they have a quality local time source.

"What if a tenant needs multiple sites in sync"

That's why you have a quality time source. By definition, it is "correct" to within any tolerance that anyone but CERN physics experiments cares about, and they don't use NTP anyway.

See: http://static.googleusercontent.com/media/research.google.co... for an example of using synchronized time globally, without reliance on running NTP between sites.

In particular, check out section 3, "TrueTime"


It's trivial to add a UDP policer for NTP traffic to your NTP hosts. Also, how many people are exposing NTP externally on the same IPs they're running regular content (http/https) on? If you're allowing UDP/123 to your web servers, you need to fire yourself.

I guess what I'm trying to say is, you should already have this ACL'd on your own.
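A UDP policer of the kind described is essentially a token bucket: traffic within the configured rate passes, excess is dropped. Here is a minimal sketch of that mechanism; the 64 kbit/s rate and 8 kB burst are made-up illustrative numbers, not a recommendation:

```python
# Token-bucket policer: refill tokens at the configured rate,
# spend them per packet, drop when the bucket is empty.
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.capacity = burst_bytes      # maximum burst
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, size_bytes: int, now: float) -> bool:
        # Refill according to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False

# Police NTP to 64 kbit/s with an 8 kB burst, then offer it a flood:
policer = TokenBucket(rate_bps=64_000, burst_bytes=8_000)
passed = sum(policer.allow(468, now=0.001 * i) for i in range(1000))
print(passed)   # only a small fraction of the 1000 packets gets through
```

As the follow-up comment points out, this protects others from your hosts being abused as reflectors; it does nothing for a flood already filling your own circuit.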


A UDP policer doesn't help in a DDoS, because you need to block the traffic before it gets onto your circuit. By the time you start inspecting traffic, it's too late: the traffic has already crossed your (presumably limited-size) circuit.

The NTP DDoS doesn't require that the web server (or, indeed, any server) at the target be listening on UDP/123 in order to cripple it. You just need to use up its circuit capacity.

What I was trying (and failing) to suggest is that when the upstream starts scrubbing out the NTP DDoS, there is a reasonable chance that downstream customers will see more NTP drops than they normally would during the event. One way customers who care greatly about NTP could mitigate that would be to have multiple stratum 0 sources locally, i.e. a GPS antenna and an atomic clock. I then took it a step further and suggested that this would be a great service for the colo to provide: they sit downstream of any packet scrubbing, so they could offer a reliable time source during a DDoS involving rogue NTP traffic, when some legitimate NTP packets might be dropped by the upstream ISP scrubbing the attack off the circuit.


I was suggesting ways to prevent being part of the DDoS, not protect yourself from the DDoS. In other words, lock down your hosts so that they don't participate in the attack.


No major providers that I'm aware of support Flowspec. It's also Juniper-only currently.


Correct. nLayer used to but stopped because Juniper's implementation is buggy.


Less buggy, more "Could be broken by idiotic Cloudflare automation engineers."


For smaller webops shops, blocking ntp might not be possible.

For larger ones, having local stratum 0 time sources is an option. GPS gives you very precise, relativistically corrected time for extremely low cost. (Bias: /me == frmr Trimble employee.)


Obviously such rules wouldn't automatically be in place; they'd have to explicitly be requested. And yes, it is a reasonable request to ask your upstream to add a few ACL lines.


Really? How many transit providers today will enter custom ACLs for customers? Almost none of them will do this. The only time they will is when it's disrupting the rest of their network and other customers are complaining.

Level3 was the last one I know that would put in an upstream ACL.


There is no other way to prevent a DDOS than to contact your upstream and say, "I'm getting 20 Gigabits of shit traffic that I'm not going to pay for. Make it go away."

Other than buying ports larger than any potential DDoS, that is the only way you can defend against a flood of traffic.


There are services like Prolexic and Defense.net that you advertise your routes to, then they send you clean traffic.

Providers won't care if you asked for the traffic or not. You'll get billed. They won't put in an ACL. Not today. 5 years ago, maybe. Today, no chance. They'll want to sell you their own DDoS mitigation services (at hefty fees).

TWTC (and others) will sell you a "clean pipe" option which includes mitigation, and you only pay for what passes through mitigation. It's more expensive than regular transit, but it's an option if you want to keep things simple.

ACLs are not an option today.


We (Weebly) had 18 Gbps of UDP/123 (NTP) traffic sent our way on New Year's Eve -- definitely one of the larger attacks we've seen recently.


This sounds like what brought down the GnuPG main site. Sounds like an Anonymous(TM)-type thing.


Why are ISPs letting spoofed traffic leave their network? I don't work at that level, but I thought this was a solved problem.


I imagine you're suggesting most ISPs implement BCP38? A lot of organizations do implement BCP38, but its overhead can actually be substantial, especially once you get far above the DSLAM / CMTS.

Not only is it not free, it is not straightforward. In some more complex network architectures, you don't even know what the sources of downstream packets might be (case: downstream has two transits, and they don't want to bring ingress traffic via transit B, because transit B has ridiculous pricing).


Its value is way higher than its overhead, but the overhead is paid by the guy who essentially doesn't benefit from it.


I agree there's a payer/benefit mismatch, since filtering your own egress only protects other people's networks. But I would've thought it would still end up being incentivized somehow through things like peering or transit contracts. I don't have a good insight into how such contracts work, so I could be way off. My assumption was that in at least some such situations the other party would care if you're feeding them bogus stuff through the link, even if you don't care, so they would be motivated to require that you do egress filtering on your end as a condition of getting the link.


1. Because it costs time and money.

2. It doesn't help yourself. So for many companies the cost can't be justified to the shareholders.


Because it's profitable? There are providers that allow this because it gets them a lot of sales.


So both DNS and NTP can be abused in amplification DoS attacks. The obvious question is: which other protocols can potentially be exploited the same way?


Any protocol that rides in UDP and that will reply to an unsolicited n byte packet with a >n byte response. (But if the amplification factor is close to 1, then you might as well have your botnet just attack the target directly.)

Devices that speak SNMP and use "public" for their community could be used in an amplification attack. I don't know what the amplification factor would be, though, and I imagine there are loads more Internet-facing DNS and NTP servers than Internet-facing SNMP agents configured with a "public" community...
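The amplification factor mentioned above is just response size over request size. A quick sketch with hypothetical byte counts (the specific sizes are made up for illustration, not measured protocol values):

```python
# Amplification factor: bytes sent back per byte the attacker spends.
# A small query that elicits a much larger reply is what makes a
# protocol attractive for reflection attacks.
def amplification(request_bytes: int, response_bytes: int) -> float:
    return response_bytes / request_bytes

# A hypothetical 64-byte query drawing a 3 kB reply:
print(amplification(64, 3_072))  # 48.0

# Near 1.0 there is no gain from reflecting, as noted above: the
# botnet might as well hit the target directly.
print(amplification(64, 70))
```

The spoofed source address does the rest of the work: the reflector sends its oversized reply to the victim, not back to the attacker.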


SNMP is already being abused for this


SNMP has a history of being horribly insecure, because it was assumed people wouldn't put their management hardware on public IPs. It should only be deployed on trusted networks. It's about as secure as non-Kerberized, unencrypted telnet.


The most common attacks we have seen come from DNS, SNMP, and chargen. You can get up to a 50 KB response from a 64-byte request, but this is highly variable.

SNMP is more prevalent than you might think; it is present and unsecured in many firewalls, routers, and printers.


It's funny you should mention this.

So I used to be the lead sysadmin for the largest profit-center dept at one of the most well known universities. I left due to pressure to degrade security of the network for credit card processing and cash registers. It wasn't a protest as much as wanting to keep some appearance of integrity.

On the plus side, the institution managed to deploy departmental firewalls and take Joe Sixpack's and Sally Sue's business desktops and servers off public IPs. If you wonder how many seconds it takes to infect an unpatched Windows desktop, it's on the order of 60 s ± 25%. Yes, the desktop staff intentionally tried a few times just for kicks.

It's a cracker's paradise: fast links, and there are like 2 semi-security people. But their duties and tasks are so diffuse, it's almost impossible to catch anything except boxes.

Their organizational resistance even pushed away a couple of brilliant people you might have run into at HOPE|CCC|BH. Instituting a security officer, and actually listening to them and letting them reach out to teach and influence, are two different things.


The Web != The Internet.

NTP is not "the Web's time protocol".

Really fed up with tech journalism these days, which assumes that nobody has quite finished the first few chapters of the "Networking for Dummies" book. Massive gap in the market for informed, quality tech journalism.


The Internet is not the Web, and NTP has nothing to do with the latter.



