Hacker News | beevek's comments

Today's news from NS1 on NetBox: NetBox Cloud - https://techcrunch.com/2021/08/24/ns1-brings-open-source-net...


Is there a way for teams with production Docker deployments to easily experiment with this kind of scanning on their own infra to understand their own situation? Maybe worth writing up a quick description of how operators can do something like that.


Absolutely. Docker and Quay.io both offer scanning for repositories they host, there are open source options like vuls and clair that are a bit more work to set up, and we have a free plan for up to 5 hosts and for open source projects and schools.

Happy to help if you need a hand.


read part 1? ;)


We are using Grafana with OpenTSDB and loving it -- really looking forward to some of the 2.0 features and about to buy a bunch more TVs for big shiny Grafana dashboards in our NOC. Way to go guys!


they are essentially the same thing -- both are "proprietary" names for the same feature, which is behind-the-scenes recursive CNAME chain lookups by the authoritative nameserver, to return A records directly.


http://blog.cloudflare.com/introducing-cname-flattening-rfc-... is a reasonable explanation. fundamentally a CNAME says "when you get queries for this name, go look at this other name instead". among other things, doing a CNAME at the zone apex means resolvers can't then find your NS, MX, or other records at the apex, which is problematic.
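The behind-the-scenes flattening described above can be sketched in a few lines. This is a toy illustration with a made-up in-memory zone, not any vendor's implementation: the authoritative server chases the ALIAS/CNAME chain itself and returns A records directly, so the apex never has to carry an actual CNAME.

```python
# Toy sketch of ALIAS/"CNAME flattening": the authoritative server follows
# the CNAME chain itself and hands resolvers A records directly.
# All zone data and names here are invented for illustration.

RECORDS = {
    ("example.com.", "ALIAS"): ["lb.cdn-provider.net."],
    ("lb.cdn-provider.net.", "CNAME"): ["edge-7.cdn-provider.net."],
    ("edge-7.cdn-provider.net.", "A"): ["203.0.113.10", "203.0.113.11"],
}

def flatten(name: str, max_hops: int = 8) -> list[str]:
    """Follow ALIAS/CNAME links until A records are found."""
    for _ in range(max_hops):
        if (name, "A") in RECORDS:
            return RECORDS[(name, "A")]
        for rtype in ("ALIAS", "CNAME"):
            if (name, rtype) in RECORDS:
                name = RECORDS[(name, rtype)][0]
                break
        else:
            return []  # dangling chain: nothing to return
    raise RuntimeError("chain too long (possible CNAME loop)")

print(flatten("example.com."))  # A records served at the apex
```

The resolver querying `example.com.` only ever sees the final A records, which is why the apex can keep its SOA/NS/MX records intact.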


hi. i did not mean to spam or kick dnsimple, we know them and they are a great company and service. we are actively receiving inbound queries about this from folks asking for help, so thought it made sense to chime in publicly here. but you're right, i should have kept it on topic to the discussion at hand instead of offering anything else up.

you're not wrong: in this industry you never kick your competitors when they're down; everyone is subject to the same constraints, attacks, and complications. that wasn't my intention, and i said so in the post.


dsl (1402 days old, 4664 karma) - beevek (123 days old, 4 karma). beevek you just lost our business.


That seems a little extreme; perhaps what he did was in poor taste, but to use karma and account age as a barometer for your business decisions seems crazy (or as an arbiter in an internet catfight).


dns is less easily distributed when fancy features like ALIAS (which dnsimple is widely known for) are in the mix. and wide distribution isn't enough to win vs truly volumetric attacks. it takes a lot of ports and compute to absorb 100Gbps+ attacks which are not uncommon against major providers.


DNSimple is widely known for the ALIAS pseudo-"record" because they invented it[1].

Small wonder that proprietary syntactic sugar leaves you at the mercy of select vendors?

As for volumetric attacks: your point is correct, but it's irrelevant if you're using multiple vendors and a single, specific vendor is the target, as appears to be the case here. Your other authoritative servers would be unaffected.

[1] http://support.dnsimple.com/articles/alias-record/, or http://webcache.googleusercontent.com/search?q=cache:ST1BABj...


good luck finding any major online property or infrastructure that isn't making use of some kind of proprietary syntactical dns sugar. it doesn't mean you can't span providers, but it does mean it takes a lot more work to do so.

anyway, you're not wrong, the best approach to mitigate this kind of thing is to leverage multiple dns networks. but doing so is not easy unless the application is still using dns like it was in 1995, and that is increasingly rarely the case.


Using a WWW subdomain with CNAMEs accomplishes effectively the same thing as using ALIAS on an apex domain name, and doesn't rely on anything out-of-spec or proprietary, making it easier to serve redundantly. (Did you ever wonder why google.com and facebook.com redirect to www?)

(Or is there more to ALIAS than that, which wasn't on the page in GP? Happy to be corrected if so)
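To make the contrast concrete, here is an illustrative zone-file view (all names are placeholders). An apex CNAME would collide with the SOA and NS records that must live at `example.com.` per RFC 1034, which is why ALIAS/ANAME providers synthesize an A record there instead, while `www` can carry a plain, portable CNAME:

```
; example.com zone (illustrative names only)
example.com.      IN  SOA   ns1.example.net. hostmaster.example.com. ( ... )
example.com.      IN  NS    ns1.example.net.
example.com.      IN  A     203.0.113.10          ; synthesized by ALIAS/ANAME
www.example.com.  IN  CNAME cdn.provider-x.net.   ; standard CNAME, portable
```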


you're correct about ALIAS (although practically, it doesn't matter: people are going to use the apex whether it's proper or not at this point). i'm more referring to other complex usually-proprietary capabilities of big dns providers, especially traffic routing features. routing semantics are generally not translatable across providers, and if you're using dns based routing (as most cdns, major web properties, etc are) then doing multi-network dns gets a lot harder. if you're amazon, you write and maintain a bunch of code to span providers. if you're not, the barrier to multi-network is high if you're doing more than static dns.


Yeah, that's a fair point. I'm not sure of a good fix for that, either.


You're right, but people want to get fancy with hosting at the apex (domain.com), even though it kills important functionality (CNAMEs), forcing the adoption of hacks (ALIAS and ANAME records).


Most of the time we don't actually operate switches of our own, and where we do, our setup is simple enough that we don't do a ton of automation around the config. What we do automate end-to-end is our BGP: our prefixes are actually announced from servers (using exabgp), not routers -- and our config there is managed by ansible, plus some real-time automation around community strings.
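A hedged sketch of the pattern described above: exabgp can run a helper process and read announce/withdraw commands from that helper's stdout. The prefixes and community values below are placeholders, not anyone's real config, and the helper logic is invented for illustration.

```python
# Sketch of a helper that emits exabgp API commands on stdout.
# In practice exabgp runs this as a configured process and acts on each line,
# e.g. when a health check decides this server should (or shouldn't)
# attract traffic for a prefix. Prefixes/communities are placeholders.

def announce(prefix: str, next_hop: str = "self", communities=()) -> str:
    cmd = f"announce route {prefix} next-hop {next_hop}"
    if communities:
        cmd += " community [" + " ".join(communities) + "]"
    return cmd

def withdraw(prefix: str) -> str:
    return f"withdraw route {prefix}"

# These lines would go to stdout for exabgp to pick up:
print(announce("203.0.113.0/24", communities=["64512:100"]))
print(withdraw("203.0.113.0/24"))
```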


awesome, appreciate the quick response.


A simple example might help. Imagine you have a node in California and a node in NY, and a user in NJ. Let's assume for now that geographic proximity is actually a good arbiter for performance. If we make a bad routing decision and send your user to the CA node, we're adding a lot of overhead to their session with your application: every round trip in the TCP session (e.g., every HTTP request) takes a long time. Even if we spit out a DNS answer really, really fast, if it's the wrong answer, the user has a bad experience.

It's actually even worse than that, because the user isn't the one directly doing the DNS query: there's an intermediary DNS resolver, which will cache the response. If the "wrong" response gets cached, then every user of that resolver will get that wrong response until the cache expires. So not only have we made the original requester's experience bad, but we've negatively impacted every other user of your application that's leveraging the same resolver.
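The arithmetic above can be sketched with made-up numbers: a fast DNS answer that routes the NJ user to the wrong coast costs far more over the session than a slower-but-correct one, and a cached wrong answer multiplies that across everyone behind the same resolver. All RTTs, request counts, and user counts below are invented for illustration.

```python
# Back-of-the-envelope illustration (all numbers invented):
# a fast-but-wrong DNS answer vs a slower-but-right one.

RTT_MS = {"nyc": 10, "california": 80}  # NJ user's round trip to each node

def session_cost_ms(node: str, requests: int, dns_ms: float) -> float:
    """One user's total time: DNS lookup + one round trip per request."""
    return dns_ms + requests * RTT_MS[node]

# 20-request session: 1 ms DNS but wrong node, vs 30 ms DNS and right node.
wrong = session_cost_ms("california", 20, dns_ms=1)   # fast answer, bad node
right = session_cost_ms("nyc", 20, dns_ms=30)         # slower answer, good node
print(wrong, right)  # the "fast" answer is far more expensive overall

# Cache amplification: every user behind the same resolver inherits the
# wrong answer until the TTL expires.
users_behind_resolver = 500
wasted_ms = users_behind_resolver * (wrong - right)
print(wasted_ms)
```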

Time to first byte is a combination of a lot of things, and if you're in more than one datacenter, it doesn't much matter how fast you spit out a DNS response if you're giving the wrong one and impacting the rest of the session going forward.

Doesn't mean you shouldn't expect the best of both worlds: sending the user to the "best" endpoint, fast.

On the filter chain question: the typical/canonical approach to any kind of decision making in a DNS system is to add some new proprietary record type, like a geo record or a health checking record. That's kind of the natural thing to do in DNS at first glance.

But if you want to get any kind of complex routing behavior, you're going to need an awful lot of different record types implementing those different behaviors, and what you end up with behind the scenes is what I often call a spaghetti of different DNS records all pointing at each other in some kind of big decision tree -- maybe a geo record, pointing at a bunch of health checking records, pointing at a bunch of CNAMEs, pointing at a bunch of A records. This quickly becomes unmanageable, and every new kind of routing you want to do results in another layer in the decision tree, all to resolve a single hostname.

The Filter Chain is a way to collapse all that down to something much more manageable and performant by bringing all the context into one place and thinking of routing as a collection of simple actions that you're taking on some input data (answers you could give, and details about those answers).
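The idea above can be sketched in a few lines. Everything here is invented for illustration (the filter names, the answer metadata, the fallback behavior) -- the point is only the shape: routing as a pipeline of small functions that each narrow a list of candidate answers, instead of a tree of special record types.

```python
# Sketch of a filter-chain style resolver (all names/data invented):
# each filter takes the candidate answers plus query context and returns
# a narrowed list; the chain replaces a decision tree of record types.

ANSWERS = [
    {"ip": "203.0.113.10", "region": "us-east", "healthy": True},
    {"ip": "203.0.113.20", "region": "us-west", "healthy": True},
    {"ip": "203.0.113.30", "region": "us-east", "healthy": False},
]

def up(candidates, query):
    """Drop answers that failed health checks."""
    return [a for a in candidates if a["healthy"]]

def geo(candidates, query):
    """Prefer answers in the querier's region; fall back rather than fail."""
    same = [a for a in candidates if a["region"] == query["region"]]
    return same or candidates

def first(candidates, query):
    """Return a single best answer."""
    return candidates[:1]

CHAIN = [up, geo, first]

def resolve(query, candidates=ANSWERS):
    for f in CHAIN:
        candidates = f(candidates, query)
    return [a["ip"] for a in candidates]

print(resolve({"region": "us-east"}))  # ['203.0.113.10']
```

Adding a new kind of routing means appending one filter to the chain, rather than adding another layer of records to the tree.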

