Hacker News

You could have afforded VPS hosting in the beginning (starting at $20 per month for decent specs), and switched to dedicated hosting once you outgrew the VPS (from $100 per month). AWS was never the cheapest for any given level of performance; you've always paid a huge premium for rarely needed flexibility.


I would dispute the "rarely needed flexibility" part - our traffic is very spiky, and we scale from 3 to 11 or even 13 front-end instances between our lowest and peak usage. If I had to maintain enough hardware for a peak that lasts 10% of the day, it would be more costly.
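As a back-of-the-envelope sketch of that claim (the instance counts come from the comment above; the hourly rate is an invented placeholder, not a real AWS price):

```python
# Rough cost sketch: autoscaling vs. provisioning for peak 24/7.
# The $0.10/hour rate is an illustrative placeholder.
HOURLY_RATE = 0.10

def daily_instance_hours(base, peak, peak_fraction):
    """Instance-hours per day when extra capacity runs only during peaks."""
    extra = peak - base
    return base * 24 + extra * 24 * peak_fraction

autoscaled = daily_instance_hours(base=3, peak=13, peak_fraction=0.10)
always_peak = 13 * 24

print(f"autoscaled: {autoscaled:.0f} instance-hours/day "
      f"(${autoscaled * HOURLY_RATE:.2f})")
print(f"peak 24/7:  {always_peak} instance-hours/day "
      f"(${always_peak * HOURLY_RATE:.2f})")
print(f"savings: {1 - autoscaled / always_peak:.0%}")
```

With those numbers, paying peak rates around the clock costs roughly three times what the autoscaled profile does - which is the gap the per-hour cloud premium has to stay under to be worth it.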

Then there are the ancillary benefits - I can lose a server and another takes its place, attached properly to the stack within 60 seconds. I can take advantage of extra compute power when I need it. I don't have to run nginx or haproxy for load balancing (and I don't have to manage the load balancer servers), and spinning up an identical stack in Europe or Asia is a single command line statement away.

Also I'm not a server admin by trade.

So I realize there are scenarios where bare metal could be cheaper, but the opportunity and admin costs need to be factored as well.


Traffic that spiky is extremely unusual. But you don't need to maintain all that hardware capacity for 10% of the day.

There are a number of colo providers that also provide managed hosting and cloud services (even if you for some reason couldn't deal with the latency of simply tying EC2 instances into a colocated or managed hosting stack via a tunnel).

In fact, combine the two and you can cut the hosting costs even more than with colo services alone, since you don't need to plan for a low duty cycle on the hardware - you can run things at 90%+ load (or wherever your sweet spot turns out to be) under normal circumstances and fire up cloud instances for the spikes.
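To make that concrete, here's a hedged sketch of the hybrid approach - every price and capacity figure below is an invented placeholder, not a quote from any provider:

```python
import math

# Hypothetical monthly costs - all numbers are illustrative placeholders.
CLOUD_INSTANCE_MONTHLY = 100.0  # one cloud instance for a full month
COLO_SERVER_MONTHLY = 120.0     # one colo server (amortised hardware + rack)
COLO_CAPACITY = 3               # cloud-instance-equivalents one colo box handles

def all_cloud_cost(base, peak, peak_fraction):
    # Base instances run all month; the extra ones only during spikes.
    return (base + (peak - base) * peak_fraction) * CLOUD_INSTANCE_MONTHLY

def hybrid_cost(base, peak, peak_fraction):
    # Colo boxes run hot and cover the steady base load;
    # spikes burst out to cloud instances.
    colo_servers = math.ceil(base / COLO_CAPACITY)
    burst = (peak - base) * peak_fraction * CLOUD_INSTANCE_MONTHLY
    return colo_servers * COLO_SERVER_MONTHLY + burst

print(f"all cloud: ${all_cloud_cost(3, 13, 0.10):.2f}/month")
print(f"hybrid:    ${hybrid_cost(3, 13, 0.10):.2f}/month")
```

The exact ratio depends entirely on the real prices and on how hot you're willing to run the colo hardware, but the structure of the saving - fixed-cost boxes near full utilisation, metered cloud only for the spikes - is the point.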

That approach handles the loss of servers just fine as well, further cutting the cost of a colo/managed hosting setup because you can plan for less redundancy.

Personally, I've yet to work on any setup where the admin costs of an AWS setup have been enough lower than colo or managed hosting to eat much into the premium you're paying. You have 90% of the sysadmin hassles anyway, and you're left with far less scope for tuning the environment to what works best for your workload.

Most of the setups I've worked on come out somewhere between 1/3 and 1/2 of the cost of running the same setup purely on AWS. Sometimes the gap is smaller if you have really spiky traffic, or lots of background processing or one-off jobs where you can e.g. use spot instances extensively.

I do understand people wanting to pay to not have to think about these things, though. But you're almost certainly paying a steep premium for it, even with your traffic spikes.


The idea of straddling two separate data centers seems far more complex, both cost-wise and time-wise, than simply going with AWS and using their flavor of elasticity. Given that his hosting costs are half a percent of his yearly revenue, "premium" really seems like the wrong word here.


It may seem more complex, but it really isn't, and it typically ends up so much cheaper than EC2 it's not even funny.

And you don't need to straddle data centres as most data centre operators these days have their own cloud offerings - often cheaper than EC2.

I don't know his margins, so maybe it isn't worth it for him specifically, but I know plenty of businesses with small enough margins that the opportunity to halve a half-a-percent-of-revenue cost like that could easily add 10% to profits.
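The margin arithmetic behind that is straightforward - the 2.5% net margin below is an assumed figure purely for illustration:

```python
# If hosting is 0.5% of revenue and net margin is 2.5% of revenue,
# halving the hosting bill lifts profit by 10%. All figures assumed.
revenue = 1_000_000
hosting = 0.005 * revenue   # $5,000/year hosting
profit = 0.025 * revenue    # $25,000 net profit (assumed 2.5% margin)

savings = hosting / 2       # halve the hosting cost
new_profit = profit + savings

print(f"profit lift: {new_profit / profit - 1:.0%}")  # prints "profit lift: 10%"
```

The thinner the margin, the bigger the relative impact of the same absolute saving.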


I wasn't referring specifically to the physical aspect with something like colocation, although that's another potential facet of complexity. The complexity I was referring to is literally having two distinct environments interoperate seamlessly.

You now have one environment trying to talk to a database in another location, for example. So, some requests are artificially slower than others. You could mitigate that with caching, and probably already do, but now your cache is fragmented across two, or more, environments. On and on and on and on.
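The penalty compounds with the number of queries per request. A hedged sketch - the 1 ms and 80 ms round-trip times are typical-order assumptions, not measurements from any particular setup:

```python
# Per-request latency when N sequential DB queries cross data centers.
LOCAL_RTT_MS = 1.0    # same-facility database round trip (assumed)
REMOTE_RTT_MS = 80.0  # cross-region database round trip (assumed)

def request_latency_ms(queries, rtt_ms, base_ms=20.0):
    """App-side work plus sequential database round trips."""
    return base_ms + queries * rtt_ms

for queries in (5, 20):
    local = request_latency_ms(queries, LOCAL_RTT_MS)
    remote = request_latency_ms(queries, REMOTE_RTT_MS)
    print(f"{queries} queries: local {local:.0f} ms vs remote {remote:.0f} ms")
```

A page that issues 20 sequential queries goes from tens of milliseconds to well over a second once the database sits in another region - which is exactly why the split pushes you toward caching, and toward the fragmented caches described above.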

Configuration, security, duplication of resources that can't be easily shared. These aren't unsolvable problems, but they're relatively more complex than sticking everything in a single environment.

And yeah, the money aspect could easily be worked in either of our favor. It really depends on the specific situation.


What providers do you recommend for a situation such as this one? Thanks!


I mostly do colos, but I've used iWeb and SoftLayer in the past. And pretty much any data centre operator has their own cloud offering today.



