Hacker News

Intel is close. Good history with software.

If they started shipping GPUs with more RAM, I think they'd be in a strong position. The traditional disruption is to eat the low-end and move up.

Silly as it may sound, a Battlemage where one can just plug in DIMMs, with some high total RAM limit, would be the ultimate card for developers who just want to test and debug LLMs locally.



> Silly as it may sound, a Battlemage where one can just plug in DIMMs, with some high total RAM limit, would be the ultimate card for developers who just want to test and debug LLMs locally.

Reminds me of this old satire video: https://www.youtube.com/watch?v=s13iFPSyKdQ


Intel is run by fools. I don't see them coming back. They just don't have the willingness to compete and offer products with USPs. Intel today is just MBAs and the cheapest outsourced labor the MBAs can find.


Plug in DIMMs just doesn’t make sense. Too much compromise on performance. And that difference is just going to get bigger as packaging technology continues to improve. RAM wants to have a connection with as many traces as possible, with the shortest possible length, to the processor.

Making some kind of super-niche card for the small fraction of developers who might want the option to upgrade RAM just doesn't make financial sense. It'll end up being more expensive for everyone compared to just buying the card with the amount of RAM you need soldered on.

I mean, check the Framework Desktop. If it doesn’t even make sense for Framework, who can justify a bit of extra cost for modularity, it doesn’t make sense for any company.


I should also mention: In terms of financial sense, one lesson learned in the engineering industry is that _giving away expensive things_, even at high cost, to engineers makes financial sense. If an engineer puts what you gave away in a product, you might sell a million more units.

If Intel could give away a Battlemage to every PyTorch developer, every game developer, and every deep learning developer, even tossing in a check for free cash, in return for it becoming their primary video card, the ROI would be astronomical.

If Intel gave _me_ a Battlemage, and I actually used it (I wouldn't; I have an NVidia), the expected ROI would likely be >$1M. For myself alone.

The key gap between AMD's $170B market cap and NVidia's $2.92T market cap is software and ecosystem.

Making a card specifically for developers makes a ton of sense, since what those developers develop will then have many, many orders of magnitude more __users__.


That makes sense.

Until you do the math.

128GB of the slowest possible DDR4 in 8 DIMMs is still faster, in aggregate, than 16GB of the fastest DDR5 in one DIMM: 12.8 GB/s × 8 = 102.4 GB/s > 74 GB/s.
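The arithmetic above can be sketched as a quick back-of-envelope check. The speed grades are assumptions on my part (DDR4-1600 at the slow end, DDR5-9200 at the fast end, both on a 64-bit DIMM bus); plug in other transfer rates as you like.

```python
def dimm_bandwidth_gbps(bus_bits, mega_transfers):
    """Peak bandwidth of one DIMM in GB/s: bus width * transfer rate / 8 bits per byte."""
    return bus_bits * mega_transfers * 1e6 / 8 / 1e9

ddr4_slow = dimm_bandwidth_gbps(64, 1600)   # 12.8 GB/s per DDR4-1600 DIMM
ddr5_fast = dimm_bandwidth_gbps(64, 9200)   # ~73.6 GB/s per DDR5-9200 DIMM (assumed top bin)

print(f"8x DDR4-1600: {ddr4_slow * 8:.1f} GB/s")   # aggregate of eight slow DIMMs
print(f"1x DDR5-9200: {ddr5_fast:.1f} GB/s")       # one fast DIMM
```

Eight slow channels come out to 102.4 GB/s against roughly 73.6 GB/s for the single fast DIMM, which is where the numbers in the comment come from.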

It requires many more traces, but you still get more throughput overall with more, cheaper memory further away.

The packaging costs on the GPU do go up for the extra traces.

More critically, no one would design it like that. There's a memory hierarchy. You would likely have a small amount of high-speed soldered-on RAM next to the GPU, and more, slower RAM slightly further out.



