
Recently came across a project on the HN front page that was developed on GitHub in a public repo. https://github.com/steveyegge/gastown/graphs/contributors shows 2,000 commits over 20 days, +497K/-360K lines.

I'm not affiliated with Claude or the project linked.


Anthropic must be loving this.

> Gas Town is also expensive as hell. You won’t like Gas Town if you ever have to think, even for a moment, about where money comes from. I had to get my second Claude Code account, finally; they don’t let you siphon unlimited dollars from a single account, so you need multiple emails and siphons, it’s all very silly. My calculations show that now that Gas Town has finally achieved liftoff, I will need a third Claude Code account by the end of next week. It is a cash guzzler.

https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...


The author has written an evangelical book about vibe coding.

https://www.amazon.com/Vibe-Coding-Building-Production-Grade...

He also has some other agent-coordination software. https://github.com/steveyegge/vc

Don't know whether it's helpful, or what the difference is.


Not sure if anyone else noticed, but the first commit on that repository was just about 3 weeks ago. https://github.com/steveyegge/gastown/commit/4c782bc59de8cba...

Has to be close to the record for shortest time from first commit to HN front page.


"New generation server parts significantly reduce operating costs and without a huge price rise, they will also drop TCO more than a little"

Looking at some outside evaluations of their capex, the new parts don't seem to represent a significant drop in TCO, since CPU cost is only a tiny part of operating a datacenter.

http://continuance.co/revisiting-capex/


Hence _T_otal _C_ost of _O_wnership. Running a data center is all about getting direct power consumption (servers) and indirect power consumption (cooling) down.


I've heard the assumption that energy costs outweigh CPU costs in TCO, but after some rough guesstimation I'm not so sure.

1 MWh of electricity is enough to power a CPU running 114 W on average for a year. A 24-core (48-thread) Xeon has a 165 W TDP, meaning that even such a beast is likely to use less than that on average (most applications don't have 100% CPU load 100% of the time, and virtualizing up to that load causes serious issues when several clients happen to peak simultaneously). In essence, 1 MWh/year should run pretty much any single CPU, and even if you're doing something weird to actually hit that 100% load, we're not far off.
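
A quick sanity check of that figure (a minimal Python sketch; the 1 MWh budget and the 165 W TDP are taken straight from the comment):

    # How much average draw does 1 MWh per year support?
    HOURS_PER_YEAR = 24 * 365  # 8,760 hours

    energy_wh = 1_000_000      # 1 MWh expressed in watt-hours
    avg_power_w = energy_wh / HOURS_PER_YEAR
    print(f"1 MWh/year sustains an average draw of {avg_power_w:.0f} W")
    # -> ~114 W, comfortably under a 165 W TDP part that rarely runs flat out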

1 MWh of electricity costs around $40 at wholesale prices (https://www.eia.gov/electricity/monthly/update/wholesale_mar...). Even if Google pays $100, it looks to me as if straightforward server CPU energy usage is unlikely to be a huge issue.

Of course, there are power supply losses and cooling overheads, but even if those raise the power consumption by a factor of 5 (doubtful), a several-thousand-dollar server CPU is still going to outweigh energy costs for many years.

Furthermore, energy costs are trending downwards and CPUs are still getting more efficient; it's thus a somewhat risky investment to buy a more expensive (but more efficient) chip if that investment needs years to pay off.

So, it looks to me as though plain old CPU costs are certainly far from irrelevant; they may indeed exceed energy costs over the chip's lifetime if the energy is acquired at close to wholesale prices and cooling is efficient (say 3x overhead?).
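
Putting those numbers together (a rough sketch, not a definitive model; the $3,000 CPU price is an illustrative stand-in for "several-thousand dollar", and the rate/overhead combinations are the ones discussed above):

    # Break-even estimate: how many years of energy spend it takes to
    # match the purchase price of a server CPU, under the assumptions
    # from the comment above.
    cpu_price_usd = 3000          # assumed "several-thousand dollar" CPU
    energy_mwh_per_year = 1.0     # ~114 W average draw, per the earlier check

    for price_per_mwh in (40, 100):   # wholesale rate vs. a pessimistic rate
        for overhead in (3, 5):       # cooling + power-supply loss multiplier
            annual_cost = energy_mwh_per_year * price_per_mwh * overhead
            years = cpu_price_usd / annual_cost
            print(f"${price_per_mwh}/MWh at {overhead}x overhead: "
                  f"${annual_cost:.0f}/yr, {years:.1f} years to match the CPU price")

Even in the most pessimistic corner ($100/MWh, 5x overhead) the energy bill takes about six years to reach the CPU's purchase price, which is what makes the "energy dwarfs hardware" assumption look shaky here.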


“Good design, when it’s done well, becomes invisible. It’s only when it’s done poorly that we notice it.” – Jared Spool

