And that's just my Computer Shopper-born Pavlovian response to articles about new, more powerful CPUs. I have no idea what I would do with eight fifteen-core CPUs and six terabytes of RAM other than write checks to Alabama Power and think of ways to mention it in casual conversation. I mean, a 911 Turbo? At least I could use it to pick up hamburger buns.
At a deeper level, I always wonder: what [besides mining Bitcoin] would other people hack up with a monstrous amount of computing power, on the order of a data center in a container?
It depends on what you are doing. For any programmer working on a bigger project, where compilation time is greater than, let's say, 2 seconds, 4x more power is superb.
If your code compiles in ~16 seconds, reducing that to ~4 would add comfort.
If your code compiles in 60 seconds, reducing that to 15 would help you keep your concentration.
If your code compiles in 60 minutes, reducing that to 15 minutes would actually make working on it feasible (not that it's impossible otherwise, but it's really not pleasant).
That's assuming the build is sufficiently parallelizable. If you only have a single thread, an ordinary Intel desktop CPU is still faster than any Xeon: https://www.cpubenchmark.net/singleThread.html
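This is just Amdahl's law: the serial fraction of a build caps the speedup no matter how many cores you add. A minimal sketch (the fractions below are illustrative, not measurements of any real build):

```python
# Amdahl's law: overall speedup from running the parallel fraction
# p of a job on n cores. The serial part (1 - p) never gets faster.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# How much do 4 vs. 64 cores buy you at different parallel fractions?
for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 4 cores -> {speedup(p, 4):.1f}x, "
          f"64 cores -> {speedup(p, 64):.1f}x")
```

Even at p = 0.5, no number of cores can get you past 2x, which is why a fast single-thread desktop chip can beat a many-core Xeon on poorly parallelized builds.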
I thought about 4-core vs. newer 8-core desktop CPUs when upgrading, but decided that I more often work on fewer than 4 cores (and with 1 you get the full "Turbo" clock). The only multi-threaded program I run is PyCharm, which can use up all those cores when recalculating its static type checking of Python code (which it seems to do quite excessively).
And luckily, code compilation tends to be one of those things that can actually be effectively parallelized.
Another thing useful for a programmer is being able to benchmark things - with a multicore machine you can tie off one core (or multiple cores!) exclusively for a benchmark, which can really help reproducibility.
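One way to do this tying-off from inside the benchmark itself is CPU affinity. A Linux-only sketch using the standard library (pinning only keeps the scheduler from migrating *this* process; truly reserving a core from everything else needs kernel options like `isolcpus`, which this doesn't do):

```python
import os

# Linux-only: pin the current process to a single core so the
# scheduler doesn't migrate the benchmark between cores mid-run,
# which would perturb caches and timings.
if hasattr(os, "sched_setaffinity"):
    core = min(os.sched_getaffinity(0))  # first core we're allowed to use
    os.sched_setaffinity(0, {core})      # pid 0 = the current process
    print("pinned to core", core)
else:
    print("sched_setaffinity not available on this OS")
```

On other platforms you'd reach for `taskset`/`numactl` equivalents or OS-specific APIs instead.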
Render 3D graphics really fast? That's about all I do that would begin to leverage that sort of power. Typically the two taxing things I do are playing games and doing 3D graphics/animation in Cinema 4D and After Effects. Games aren't built to make use of that many threads/cores or that much RAM, but for graphics work it would be like having my own little render farm in one workstation.
Well, actually making games requires that much power. I work at a games studio, run an 8-core (16-thread) Xeon + 64 GB of RAM + a 1 TB SSD, and everything I do daily is just sloooow. Running the game in debug mode uses up all my RAM, compilation times are 20-40 minutes for the whole project even with distributed build systems, and that 1 TB SSD doesn't help much when it's nearly full all the time. If we could have our workstations upgraded to something like 16-core Xeons + 256 GB of RAM, it would be a godsend.
> At a deeper level, I always wonder: what [besides mining Bitcoin] would other people hack up with a monstrous amount of computing power, on the order of a data center in a container?
I wouldn't call it "hacked up", but large-area surveillance radar systems like AEGIS and JSTARS use very high-density systems like this, because the size, power, and cooling available for signal and display processing are severely limited. The same can be said for systems like the mobile Doppler radars the National Weather Service and various universities use for studying severe weather.
You can build Android/Cyanogen or Chrome from scratch -- it easily uses dozens of cores and takes hours on a slow machine. A lot of other build systems won't make full use of the machine because they don't parallelize. IIRC, the OpenWrt build doesn't parallelize.
Another idea: you can also do some big linear algebra problems. I'm doing this at work now, and it's kind of interesting that a single cloud machine (32 cores, 128 GB of RAM) is like a supercomputer from a decade or so ago. I guess that's obvious, but once you start using the whole machine it really sinks in.
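Dense linear algebra is one of the few everyday workloads that scales to dozens of cores with no extra effort, because NumPy hands the work to a multithreaded BLAS (OpenBLAS, MKL, etc.). A small illustration, assuming NumPy is installed; the matrix size is arbitrary:

```python
import time
import numpy as np

# A large dense matrix multiply. NumPy delegates this to BLAS,
# which will typically spread it across every core on the machine.
n = 1000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
print(f"{n}x{n} matmul took {time.perf_counter() - start:.2f}s")
```

Watching a 32-core machine light up on all cores from three lines of Python is a decent way to make "supercomputer from a decade ago" concrete.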
I actually bought a big Dell machine to do secure builds... thinking of getting something like this though :)
Along with the other answers here, things like distributed-computing projects tend to be embarrassingly parallel. I was running distributed.net OGR for a while, and that runs quite a bit better on many modest cores. Almost all my other programs bottleneck on disk/RAM speed or RAM size. The OGR project in particular is also far more efficient on CPUs than on GPUs, since it is an entirely integer workload.