
Is PyTorch actually faster than TensorFlow, given TensorFlow's static C++ compilation?


I've tried XLA and observed no measurable benefit for most models, a small benefit for some, and worse performance for others. That was a few months ago, so maybe things have changed since then. It won't be a huge difference no matter what they do, because the vast majority of time is spent in cuDNN and gemm/gemv kernels anyway.
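For anyone who wants to try this themselves: in modern TensorFlow 2.x you can opt a single function into XLA with `jit_compile=True` (this API postdates the thread, so it's a sketch of the current way to run the experiment, not what the commenter used):

```python
import tensorflow as tf

# jit_compile=True asks XLA to compile this function into a fused kernel.
# Whether that helps depends on the model; a matmul-dominated step like
# this one is already spending its time in gemm, so there's little to fuse.
@tf.function(jit_compile=True)
def dense_step(x, w):
    return tf.nn.relu(tf.matmul(x, w))

x = tf.ones((2, 2))
w = tf.ones((2, 2))
print(dense_step(x, w).numpy().tolist())  # [[2.0, 2.0], [2.0, 2.0]]
```

Benchmarking with and without the decorator (e.g. via `timeit`) is the quickest way to see whether your particular model falls in the "no benefit" or "small benefit" bucket.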


What I'm actually interested in is AMD support, w.r.t. GPU performance.


There's currently an open PR on GitHub for AMD GPU support: https://github.com/pytorch/pytorch/pull/2365
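For context: PyTorch's AMD support ended up shipping through the ROCm/HIP backend, which reuses the `torch.cuda` API surface, so AMD GPUs show up through the same calls as NVIDIA ones. A sketch of telling the builds apart using `torch.version.hip` / `torch.version.cuda` (these attributes exist in current PyTorch, though they postdate this PR):

```python
import torch

def backend_summary():
    # ROCm builds set torch.version.hip; CUDA builds set torch.version.cuda;
    # CPU-only builds leave both as None.
    if torch.version.hip is not None:
        return "rocm"
    if torch.version.cuda is not None:
        return "cuda"
    return "cpu-only"

print(backend_summary())
```

On a ROCm build, `torch.cuda.is_available()` returns True for AMD GPUs as well, which is why most CUDA-targeting PyTorch code runs unchanged.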



