
There are no alternatives; those that did exist back in the day either went out of business or couldn't sustain a pay-as-you-go model.

Not everybody needs Cloudflare, but those who need it and aren't major enterprises have no other option.


Bunny.net? It doesn't have nearly the same feature set as Cloudflare, but the essentials are there and you can easily pay as you go with a credit card.

Their WAF isn't there yet. The moment it can build the expressions you can build with CF (and gives you as much visibility into your traffic as CF does), it might be a solid option, assuming they have the compute/network capacity.

Lots of people who think they need Cloudflare don't. What are you using it for?

L7 DDoS protection and global routing + CDN. There isn't a single paygo provider that can handle the capacity CF can, especially not at this price range (mitigated attacks distributed across approximately 50-90k IPs, adding up to about 300-700k rps).

We tried Stackpath, Imperva (Incapsula back in the day), etc., but they were either too expensive or went out of business.


> especially not at this price range

pay peanuts, get monkeys


Those are different products. BIC prevents requests such as those with empty UAs or corrupted HTTP headers from passing CF without a challenge.

Turnstile/Challenges per se don't rely on the UA at all.


Cloudflare is actually pretty upfront about which browsers they support. You can find the whole list right in their developer docs. This isn't some secret they're trying to hide from website owners or users - it's right here: https://developers.cloudflare.com/waf/reference/cloudflare-c... - My guess is that there has been no response because none of the browsers you listed is supported.

Think about it this way: when a framework (which many modern websites use) or a CAPTCHA/challenge doesn't support an older or less common browser, it's not because someone's sitting there trying to keep people out. It's more likely they are trying to balance the maintenance costs and the hassle of supporting every other platform out there (browsers, in this case). At what point is a browser relevant? 1 user? 2 users? 100? Can you blame a company that accommodates probably >99% of the traffic they usually see? I don't think so, but that's just me.

In the end, site owners can always look at their specific situation and decide how they want to handle it - stick with the default security settings or open things up through firewall rules. It's really up to them to figure out what works best for their users.


They do not support major browsers. They support "major browsers in default configuration without any extensions" (which is of course a ridiculous proposition), forcing people either to abandon any privacy/security-preserving measures they use, or to abandon the websites covered by CF.

I use up-to-date Firefox, and was blocked from using our company GitLab for months on end simply because I had disabled some useless new web API in about:config long before CF started silently requiring it, without any feature testing or meaningful error message for the user. Just a redirect loop. The GitLab support forum was completely useless for this, just blaming the user.

So we dropped GitLab at the company and went with basic git-over-HTTPS hosting + cgit, rather than pay a company that will happily block us via some user-hostile intermediary without any resolution. I only figured out what was "wrong" (lack of feature testing for the web APIs CF uses, and lack of meaningful error feedback to the user) after the move.


Although I sometimes have problems with Cloudflare, it does not seem to affect GitHub or GitLab for me, although they have other problems, which I have been able to work around.

Some things that I have found helpful when working with GitLab are adding ".patch" to the end of commit URLs, and changing "blob" to "raw" in file URLs. (This works on GitHub as well.) It is also possible to use the API, and sometimes the data can be found within the HTML the server sends you without needing any additional requests (this seems to work more reliably on GitHub than on GitLab, though).
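Those two URL tricks are just string transforms; here is a small sketch (the helper names `to_patch` and `to_raw` are made up for illustration, and the path layout assumes the usual GitHub/GitLab URL structure):

```python
def to_patch(commit_url: str) -> str:
    """Append .patch to a commit URL to get a plain-text patch (GitHub/GitLab)."""
    return commit_url.rstrip("/") + ".patch"

def to_raw(blob_url: str) -> str:
    """Swap the first 'blob' path segment for 'raw' to get the raw file contents."""
    return blob_url.replace("/blob/", "/raw/", 1)

print(to_patch("https://gitlab.com/group/project/-/commit/abc123"))
# https://gitlab.com/group/project/-/commit/abc123.patch
print(to_raw("https://gitlab.com/group/project/-/blob/main/README.md"))
# https://gitlab.com/group/project/-/raw/main/README.md
```

Handy for scripting downloads with curl without touching the API at all.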

You could also clone the repository onto your own computer in order to see the files (and then use the git command line to push any changes you make back to the server), but that does not include the issue tracker etc., and you might not want all of the files anyway if the repository has a lot of them.


I think this is the same issue as is being discussed here: https://gitlab.com/gitlab-org/gitlab/-/issues/421396

It sometimes blocks me on fairly major browsers, such as Google Chrome (but on an older Ubuntu).


I think they protect only the login page.


Not exactly. They say:

"Challenges are not supported by Microsoft Internet Explorer."

Nowhere is it mentioned that internet access will be denied to visitors not using "major" browsers, as defined by Cloudflare presumably. That wouldn't sound too legal, honestly.

Below that: "Visitors must enable JavaScript and cookies on their browser to be able to pass any type of challenge."

These conditions are met.


> If your visitors are using an up-to-date version of a major browser, they will receive the challenge correctly.

I'm unsure what part of this isn't clear: major browsers, as long as they are up to date, are supported and should always pass challenges. Pale Moon isn't a major browser, and neither are the other browsers mentioned in the thread.

> Nowhere is it mentioned that internet access will be denied to visitors not using "major" browsers

Challenge pages are what your browser is struggling to pass. You aren't seeing a block page or an outright denied connection; rather, the challenge isn't passing because whatever update CF has made has clearly broken compatibility with Pale Moon, and I seriously doubt this was on purpose. As for those annoying challenge pages, they aren't meant to be used 24/7, since they are genuinely annoying. If you are seeing challenge pages more often than you would on Chrome, it's likely that the site owner is actively flagging your session to be challenged; they can undo this by adjusting their firewall rules.

If a site owner decides to enable challenge pages for every visitor, you should shift the blame to the site owner's lack of interest in properly tuning their firewall.


So... no new browsers should ever be created? Or only by people with enough money to get Cloudflare onboard from the start? Nothing new will ever become major if it's denied access to half the web.


You can create a new browser. There are plenty of modern new browsers that aren't considered major and work just fine because they run on top of recent releases of Chromium.

There are actually hundreds of smaller Chromium forks that add small features, such as built-in adblock, and have no issues with either Cloudflare or other captchas.


I think it's pretty clear this is about browser engines. If your view holds, then Servo (currently the #3 story on the front page) will never make it.


Fair enough, but... if Cloudflare's challenge bugs out, who is going to fix it? Aren't they responsible for their own critical tools?

Because in the end, the result is connection denial. I don't want to connect to Cloudflare, I want to connect to the website.

I read that part. They still do not indicate what may happen, or what their responsibility is - if any - for visitors with non-major browsers.

Not claiming this is "on purpose" or a conspiracy, but if these legitimate protests keep getting ignored, then yes, it becomes discrimination. If they can't be bothered, they should clearly state that their tool is only compatible with X browsers. Who is to blame for "an incorrectly received challenge"? The website? The user who chooses a secure but "wrong" browser not on their whitelist?

Cloudflare is there for security, not "major browser approval". They have the resources to improve response times, provide better support, and deal with these incompatibility issues. But do they want to? Until now, they did.


I think the issue is that Cloudflare tends to be toggle-and-forget: it's very easy to use and it works for most people.

The problem with this setup is that it sacrifices both security (because it needs to keep false positives to a minimum, even if that means allowing some known bots) and user experience (because situations like yours will occur from time to time). When you enable a challenge page on CF, it works as-is and you have no say over it; the most you can do is skip the page for the browsers producing false positives.

If CF gave site owners a clearer view of what they are blocking and let them choose which rules to enforce (within the challenge page), it would be much easier to simply say that the customer running CF doesn't want you visiting their page, or doesn't care about a few false positives.


So you're saying that which browsers are supported on the Internet should be determined by a single, for-profit company? That's a very interesting and shortsighted take.

I love how so many of these apologists talk about stuff like "maintenance costs", as though it's impossible to write code that's clean and works consistently across platforms / browsers. "Oh, no! Who'll think of the profits?!?"

If you had any technical knowledge, you'd know that "maintenance costs" are only a thing when you code shittily or intentionally target specific cases. A well-written, cross-browser, cross-platform CAPTCHA shouldn't have so many browser-specific edge cases that it needs constant "maintenance".

In other words, imagine you're arguing that a web page with a picture doesn't load on a browser because nobody bothered to test with that browser. Now imagine you're making the case for that browser being so obscure that nobody would expend the time and money. Instead, why aren't you pondering why any web site with a picture wouldn't be general enough to just work? What does that say about your agenda, and about the fact that you want to make excuses for this huge, striving-to-be-a-monopoly, for-profit company?


I think it's pretty clear you have never worked on fraud protection or bot detection; otherwise you'd understand the struggles of supporting many environments with a single solution. You already have an opinion on this, and from the way your messages are written, it doesn't seem like any rational argument will change your mind.

This is the internet, and everybody is a field expert the moment they want to win an argument. Best of luck with that.


Indeed. Software can be written like math. 1 + 1 = 2 holds true now and for all time.


BunnyCDN's DDoS protection is made to protect their servers and their customers; it's not meant to serve as a shield for your service against attacks.

This is a common misconception with many providers: they have DDoS protection to ensure that an attack against them won't cause your website/service to become unavailable. However, if an attack targets your service specifically, it most likely won't be filtered by their system.


It usually does cover volumetric attacks, since those usually bring everything down with them.

As to layer 7 or other types of attacks, it’s a tough call. You need specialized services. Cloudflare does great for its price. It’s not like the big cloud providers reliably solve this problem either.


Banking sites and anybody who suffers from any sort of attack, whether it's scraping, DDoS, bots, bruteforcing...

Does everybody get those attacks? Probably not. However, Cloudflare centralizes attack data into a single IP reputation database, so if at some point a certain node was abused on some site that uses Cloudflare, anybody who is routed through that node will have a poor experience browsing CF sites.

This approach of centralizing IP reputation has its own flaws and benefits. Tor nodes aren't inherently given a bad reputation; it just happens that if 90 people are using the tool for all the good things, 2 assholes can abuse the IPs and have them blacklisted on almost any website, whether it's behind Cloudflare, Imperva, Akamai, PX, you name it. Cloudflare is the best-known name, but there are tons of other E2E/B2B providers that don't show up as often.
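The shared-reputation effect described above can be sketched as a toy model (everything here - the class, thresholds, and scoring - is hypothetical, not how any vendor actually implements it): abuse reported against an IP on one site degrades its score for every site consulting the same database.

```python
from collections import defaultdict

class ReputationDB:
    """Toy centralized IP reputation store: abuse anywhere lowers the
    score everywhere, which is why a couple of abusive users on a Tor
    exit can get that node challenged on every site behind one provider."""

    def __init__(self, threshold: float = 0.5):
        self.scores = defaultdict(lambda: 1.0)  # 1.0 = clean, 0.0 = toxic
        self.threshold = threshold

    def report_abuse(self, ip: str, severity: float = 0.3) -> None:
        # Any customer site can push the score down for everyone.
        self.scores[ip] = max(0.0, self.scores[ip] - severity)

    def should_challenge(self, ip: str) -> bool:
        return self.scores[ip] < self.threshold

db = ReputationDB()
db.report_abuse("198.51.100.7")             # abused on site A...
db.report_abuse("198.51.100.7")             # ...and again
print(db.should_challenge("198.51.100.7"))  # True - challenged on site B too
print(db.should_challenge("203.0.113.9"))   # False - untouched IP stays clean
```

The design trade-off is exactly the one in the comment: one database amortizes detection across all customers, but there is no per-site appeal path for an IP that was poisoned elsewhere.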


We report each DDoS attack our company receives to a special department our police force has; your country likely has something similar, and I guess it doesn't hurt to reach out to them.

From my experience they will get back to you quickly (usually in under 1-2 hours), and they can try to help out if you are still under attack or need some consultation.

Will we ever get compensated for the engineering time wasted stopping these attacks? Probably not, but if the police ever find the attackers and have extra logs from companies that reported issues, it likely aggravates the case.


You're right, I guess I'm still thinking of a few experiences I had way back when the Internet was still young and contacting them was a waste of time: they couldn't understand you, nor did they have the time to try. It's true they now have many more resources and experts in their departments and, as you say, may at least give some good advice on what to do during the panic stage to try to mitigate it. Providing them with logs and proof would have been a good idea too.

Oh my, the attack caused so much wasted time and stress that it's still haunting me and the team, especially the thought that it may not stop there and the attacker(s) may just be waiting for the next chance to hit us. In the days after the attack, the first thing I did after waking up was check the servers to see that everything was safe. And our roadmap was severely affected too, as we reprioritized many security features we had in the backlog.

Thank you so much.


Things are significantly better now. I can't comment on how good the aid is while you are under attack, since we always had a team ready to handle DDoS; however, their follow-up has always been fast.

Regarding security features: if you are on a cloud such as GCP, AWS, or Azure, things are complicated, since you can't easily route the traffic elsewhere (you can have BGP connections to DDoS mitigation inside GRE/L2TP tunnels only when attacks occur, and it would be cheap to rent on a monthly/yearly basis). Voxility is an example that comes to mind, and they are very affordable in general terms.

HTTP or HTTPS attacks are easier to handle with Cloudflare; however, there are other interesting solutions such as Stackpath.


We were under a DDoS attack about a month ago too, but were lucky that it didn't manage to affect our business. With that in mind, we took it as a (precious) learning experience - how often do you get the chance to learn about DDoS defence first-hand?

I realize we were lucky that the attacker didn't find any of the soft spots (or at least none that hurt us). We do prioritize security though, always.

I hope all goes well for you and that in time this is just another learning experience. Maybe next time you'll smile when an attack is thwarted because of what you've all learned.


We get attacked several times a month and rely on Cloudflare and Corero to mitigate attacks: Cloudflare handles HTTP/S attacks and Corero handles network-level attacks.

Both require tweaking and are far from being one-click setup tools (despite some marketing attempts to make them seem that way); however, if you can manage them, they are very powerful and considerably cheaper than other alternatives.


Thank you, I didn't know about Corero; I will check them out. CF we use, and as you said, it's a tool. Plenty of ways it could be better, but it's still the best (in a moderate price range) that we know of.


This is what hCaptcha is currently doing: they switch the image category every 24-72 hours. How useful is it? Not very. Modern ML models such as MobileNet, ResNet, or YOLO require only a few hundred images to become accurate enough to solve those captchas.

You don't need a few million samples; with 500-700 images per category you are more than ready to solve current captchas.


By the way, hCaptcha has an accessibility page where you can sign up and never solve an hCaptcha again.

Here is the link: https://dashboard.hcaptcha.com/signup?type=accessibility

*edited typo


I tried it; it doesn't work.


You need to enable cookies.


Yep, the cost of keeping the model up to date would be negligible compared to the hosting bill.


Ever since ML reached the "general public", developing models against hearing- or vision-based CAPTCHAs has become trivial.

Sure, you have to emulate or simulate the client-side JS challenges, but when bots are running real browsers in the background, you can only do so much.

I wonder what the future of captchas, if any, will look like.


It's identity, which is why Google shows "Your computer or network may be sending automated queries" message on recaptcha if you trigger too many heuristic and IP reputation signals to be classified as a bot. That's why, for Google, you get to carry around your reputation in the form of your Google Account, and for Cloudflare, they have private access tokens[0] (which might be the only reason you don't get blocked by every CF site on iCloud Private Relay), and otherwise Cloudflare's big ambition is "human attestation" via WebAuthn credentials[1,2].

0: https://blog.cloudflare.com/eliminating-captchas-on-iphones-...

1: https://cloudflarechallenge.com/

2: https://blog.cloudflare.com/introducing-cryptographic-attest...


However, that's not a solution but a patch.

Google accounts give you a good score and tend to deliver easy captchas when dealing with reCAPTCHA; for this very reason, Google accounts are constantly being bought and sold.

People have tried similar tactics in the past. SMS and phone verification have failed because the return on investment is far greater than the price barrier they add to acquiring those "virtual identities".

iPhones might work, but for how long? If you guarantee that an iPhone won't get captchas, it becomes a good investment to buy many old (or new) ones and sell token access to skip any captcha.

Many farms already run thousands of phones scrolling through YouTube videos to inflate views, likes, and other stats for videos/channels.

The same "logic" applies to yubikeys and similar auth hardware; attackers can exploit it similarly.

Companies will tell you that they have abuse policies and actively fight abuse/bot farms, but again, they are not solving the problem; they are patching it with tape.

reCAPTCHA was very useful for a while - it did genuinely stop bots reasonably well - but none of the "newer" versions seem as effective as the older ones used to be. Progress stopped after v2.


...which really sucks when you try to use any of those sites via Tor (no cookies, "bad" IP) or from a place with a shared external IP (public access points).

Open Google... captcha... every page has a 5-second Cloudflare interstitial before the page itself opens.

Bots have the time; they can wait and do other stuff in the meantime. We humans, though, get bothered by it.


I've also wondered about the more speculative future of CAPTCHAs - e.g., how to prove you are human as ML gets better and better. It would be fun to add to the near-future sci-fi I'm sometimes writing. I'd imagine CAPTCHAs could move towards social proofs ("Carl is asking you to verify he is human - are you sure?"), doing things in the physical world ("Go outside and make <this gesture> to the Google satellite"), or answering increasingly difficult world-reasoning questions, the kind GPT (so far) struggles with.


You are not getting attacked from Cloudflare with TCP attacks. Somebody is spoofing the IP header, making it seem like Cloudflare is DDoSing you.

The only way for somebody to DDoS from Cloudflare would be using Workers; however, this isn't practical, as Workers have a very limited IP range.


The reason people do this, by the way, is that if you're hosting via CF it's common to whitelist their IPs and block everything else. Spoofing lets their SYN flood bypass that.
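That allowlist pattern can be sketched like this; note the CIDR ranges below are a hardcoded sample (Cloudflare publishes the authoritative list at cloudflare.com/ips), the rule strings are illustrative iptables syntax rather than a hardened config, and the weakness is exactly the one described above: a SYN with a spoofed source address inside these ranges matches the ACCEPT rule.

```python
# Sample ranges only -- the real list is published by Cloudflare and changes.
CF_RANGES = ["173.245.48.0/20", "103.21.244.0/22", "104.16.0.0/13"]

def allowlist_rules(port: int = 443) -> list[str]:
    """Build iptables-style rule strings: accept traffic from the
    Cloudflare ranges on the given port, then drop everything else.
    Matching is on the (spoofable) source IP in the packet header."""
    rules = [
        f"iptables -A INPUT -p tcp --dport {port} -s {cidr} -j ACCEPT"
        for cidr in CF_RANGES
    ]
    rules.append(f"iptables -A INPUT -p tcp --dport {port} -j DROP")
    return rules

for rule in allowlist_rules():
    print(rule)
```

This is why firewall-level allowlisting alone doesn't stop spoofed SYN floods: the filter never learns whether the handshake completes, so stateless spoofed packets with a "trusted" source address still consume resources before being dropped or answered.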


I run a fairly popular service and have received DDoS attacks from Cloudflare's IP range (~20 Gbps). I can confirm they respond to SYN+ACK with an ACK to complete the TCP handshake. From some investigating, it seems to be a botnet using Cloudflare WARP (their VPN service).


Why are you assuming amplification attacks aren't a thing?

I think you're probably right about the spoofing, but it comes off a little dismissive, given that the possibility of a site that queries other sites being tricked into doing something it shouldn't is always going to be in the realm of possibility.


Adding raw TCP is a big deal; it skips the entire existing security stack, which focuses on HTTP/S. There are Spectrum and Transit to provide network-level protection, but... only a few can afford those.

Does this mean that TCP Workers would be exposed to network-level attacks, or would they use Transit/Spectrum? If they turn out to be protected, I'd say there would be little to no reason to use Spectrum, unless the pricing turns out to be atrocious for long-lived connections (which is kind of the point of having TCP Workers in the first place).

I hope I did not come across as rude; I'm genuinely curious about the plan behind all of this.

Edit: I pointed out there would be no use for Spectrum since one could "easily" build a reverse proxy with a TCP Worker.


The exact details aren't all nailed down, but I'd expect that for incoming connections Workers would integrate directly with Spectrum. I don't know what that might mean for pricing, but I imagine we'd find a solution where cost doesn't block people from building cool things.

