Hacker News

Anyone have experience with this? I'm in the market for an AMD laptop capable of running standard scikit-learn/PyTorch/etc., but these libraries seem optimized for NVIDIA cards. I'm curious about the outlook for these trending AMD cards.


Generally, if you want to do ML tasks, you want NVIDIA. They put in the work early to build the tooling, so now the default assumption is that you're on NVIDIA hardware. It is possible to do some stuff on AMD cards, but you'll be on the cutting edge for that platform, re-solving problems that were already solved on the NVIDIA side.


Yes, that was my conclusion after some research on this. There is an open issue on GitHub for AMD support in PyTorch, and it looks like something works on Arch Linux, but it really sounds like support is still in the hacking stage and far from production-ready.

https://github.com/pytorch/pytorch/issues/10657


Support for PyTorch on ROCm is fairly good. I have been building it from the dev branch all year without much trouble. There has been ROCm CI for even longer, and my patch to print ROCm system information in bug reports was merged last week.

If you know where to look (it's public but unannounced), you can see that nightly wheels have been built for the last few days. So I would expect that sometime between now and the Developer Day in November, we'll see ROCm appear on PyTorch's "get started" page.
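If you do get hold of one of those wheels, here's a quick way to check which accelerator backend a PyTorch build was compiled against (a sketch; `torch.version.hip` is set on ROCm builds and `None` on CUDA builds, and the ROCm backend reuses the `"cuda"` device name for compatibility):

```python
import torch

# ROCm (HIP) builds expose a HIP version string; CUDA builds leave it None.
# Note: on ROCm the device type is still called "cuda", so
# torch.cuda.is_available() is also the check for a usable AMD GPU.
if torch.version.hip is not None:
    print(f"ROCm build, HIP version {torch.version.hip}")
elif torch.version.cuda is not None:
    print(f"CUDA build, CUDA version {torch.version.cuda}")
else:
    print("CPU-only build")

print("GPU available:", torch.cuda.is_available())
```

This also explains why code written for CUDA often runs unmodified on ROCm: tensors are still moved with `.to("cuda")`, since HIP masquerades as the CUDA device type.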


I'm almost positive that if you're talking about AMD GPUs, you're going to be out of luck. For deep learning especially, NVIDIA is really the only serious option.



