These specs look enormously cheaper than doing it with Dell servers. The last quote I had for a bog-standard Dell server was $50k, and only if bought in the next few days. The prices are going up weekly.
These are "unsupported" configurations. Nvidia/AMD discourage running multiple gaming/workstation cards and encourage customers to buy $500K SXM/OAM servers.
But how will I make ad-supported YouTube videos about how I automated my life with OpenClaw using a $10M boutique AI server, to make a few thousand in ad revenue while burning tens of thousands per month on API costs?
DGX Spark is a fantastic option at this price point. You get 128GB of VRAM, which is extremely difficult to find for this money. It's also a fairly fast GPU. And stupidly fast networking: 200 Gbps, or 400 Gbps Mellanox if you find coin for another one.
I’m not very well versed in this domain, but I think it’s not going to be “VRAM” (GDDR) memory, but rather “unified memory”, which is essentially RAM (some flavour of DDR5, I assume). These two types of memory have vastly different bandwidths.
I’m pretty curious to see any benchmarks on inference on VRAM vs UM.
I’m using VRAM as shorthand for “memory the AI chip can use”, which I think is fairly common shorthand these days. For the Spark it is unified, and has lower bandwidth than almost any modern GPU (about 300 GB/s, which is comparable to an RTX 3060).
So LLM inference is relatively slow because of that bandwidth, but you can load much bigger, smarter models than you could on any consumer GPU.
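To see why bandwidth dominates, here's a back-of-envelope sketch: single-stream LLM decoding has to stream essentially all the active weights from memory for every token, so tokens/sec is roughly bandwidth divided by model size. The ~300 GB/s figure is from above; the 40 GB model size (a ~4-bit 70B quant) and the 1000 GB/s discrete-GPU figure are illustrative assumptions.

```python
# Ceiling estimate for memory-bound decoding:
# every generated token requires reading all model weights once.

def rough_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    """Upper-bound tokens/sec = memory bandwidth / bytes streamed per token."""
    return bandwidth_gb_s / model_gb

# Spark-class unified memory (~300 GB/s) vs a fast discrete GPU (~1000 GB/s),
# both running an illustrative 40 GB quantized model.
spark = rough_tokens_per_sec(300, 40)   # ~7.5 tok/s
dgpu = rough_tokens_per_sec(1000, 40)   # ~25 tok/s, but 40 GB doesn't fit
                                        # in any single consumer card's VRAM
print(f"Spark: {spark:.1f} tok/s, discrete GPU: {dgpu:.1f} tok/s")
```

So the Spark is slow per token, but it's the only thing in this price range that can hold the model at all.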
Meh. DGX is Arm and CUDA. Strix is x86 and ROCm. CUDA has better support than ROCm, and x86 has better support than Arm.
Nowadays I find most things work fine on Arm. Sometimes something needs to be built from source which is genuinely annoying. But moving from CUDA to ROCm is often more like a rewrite than a recompile.
> But moving from CUDA to ROCm is often more like a rewrite than a recompile.
Isn't everyone* in this segment just using PyTorch for training, or wrappers like Ollama/vllm/llama.cpp for inference? None have a strict dependency on Cuda. PyTorch's AMD backend is solid (for supported platforms, and Strix Halo is supported).
* enthusiasts whose budget is in the $5k range. If you're vendor-locked to CUDA, Mac Mini and Strix Halo are immediately ruled out.
Almost everything starts as PyTorch (or maybe JAX). But the inference engines all use hand-tuned CUDA kernels - at least the good ones do. You have to do that to optimize things.
I'm certain inference engines don't use hand-tuned CUDA on Radeon or Mac Mini chips. My statement holds: those engines have no strict dependency on CUDA, or they'd be Nvidia-only.
> Windows touches more people’s lives than almost any technology on Earth.
Thankfully Ballmer failed and this isn’t even close to true. I, like a lot of highly technical professionals, have been Windows sober for many years now.
Not OP, but it is probably either "Average Hold Time" or "Average Handle Time". I suppose the usage here indicates some call-center metric that management expected to stay in a certain range, but the new tool skewed it in a different direction.
Exactly. Consider the current conflict in Iran. They have thousands of drones that cost $50k each. The US's only real defense against one of these drones is to fire a million-dollar missile at it. That asymmetry can win or lose a war.
This design is pretty clearly optimized for weaponry. E.g. the foldable fins - necessary if you want to keep a magazine of these things stored compactly before firing. Totally unnecessary for funsies.
What nonviolent application are you imagining for a gps-guided rocket that is launched by pulling a gun trigger on a hand held mount?
Launching model rockets with a controlled landing (less likelihood of property damage or fires). Learning about the components. Folding fins make it easier to transport without snapping one (hopefully). Trigger vs button launch isn't that big of a deal, although it might have better safety options compared to standard model rocket launch buttons.
> A launcher for a climbing rope or grappling hook. Have you ever tried getting a rope up over a branch on a very tall tree?
You might look into arborist's throw line launchers and line guns. They come in slingshot and pneumatic varieties. With a little (mostly fun) practice, they can be pretty accurate and reach limbs over 100' up.
The article is actually IMHO overly conservative. This kind of migration is not a theoretical risk, but well established. BPA is a small molecule, not covalently bound to the plastic. It absolutely goes into the skin. Heat, water, and acidity (sweat is slightly acidic) all accelerate the absorption.
Plus, absorption through the skin is worse than oral: when you eat it, your liver breaks a lot of it down (first-pass metabolism), but when it goes in through the skin it bypasses all that.
Good point. And diving in, I realize my fear was mostly unfounded. Compared to typical background exposure (what we can infer people are getting through other sources by looking at their urine), this is insignificant, except for the very worst headphones. The headline is unsurprisingly alarmist, because by their own data 68% have acceptably low BPA. But the very few with the worst amounts drag the arithmetic average up to something scary-sounding.
To estimate how much gets into the body,
https://oehha.ca.gov/sites/default/files/media/downloads/crn...
is a good reference. Interpolating from that, a typical pair of in-ear buds works out to something like a fraction of a nanogram per kg bodyweight per day, versus 30–130 ng/kg/day from background. So totally negligible. Even in the worst case - highest measured concentration, over-ear headphones (much more contact area), and a hot sweaty workout - you're looking at maybe 5 ng/kg/day: still in the range of dietary background, but not good.
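For the curious, the dose arithmetic is just migration rate times contact area divided by bodyweight. The specific inputs below are illustrative round numbers I've assumed to reproduce the two endpoints above, not measured values from the study.

```python
# Sketch of the dermal dose estimate. All input figures are assumed /
# rounded for illustration, not taken from the linked reference.

def daily_dose_ng_per_kg(migration_ng_cm2_day: float,
                         contact_area_cm2: float,
                         bodyweight_kg: float) -> float:
    """Dermal BPA dose in ng per kg bodyweight per day."""
    return migration_ng_cm2_day * contact_area_cm2 / bodyweight_kg

# Typical in-ear buds: tiny contact patch, low migration -> sub-0.1 ng/kg/day.
typical = daily_dose_ng_per_kg(0.5, 4, 70)
# Worst case: high-BPA over-ears, large sweaty contact area.
worst = daily_dose_ng_per_kg(7, 50, 70)
print(typical, worst)  # ~0.03 and 5.0
```

Either way it's dwarfed by the 30–130 ng/kg/day dietary background.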
It truly boggles the mind how bad public schools can be. When you read something like this or see things in person, it's genuinely difficult to imagine how a system staffed with well-intentioned humans with brains could do this. And yet. It happens.
For liberals this is a good reminder of why conservatives don’t trust government.