
Not exceptional or impressive when you're comparing a 14nm GPU against a 7nm one. (Yes, it's fanless.)


A 1060 is 200mm² of pure GPU at ~100W; an A12X's GPU is ~26mm² (approximate) running at ~10W.

Of course it's impressive. Even if that was somehow all down to the better node, their next generation SoCs will still use a newer node than NVIDIA's next generation GPUs.


You're comparing the 1060's entire die size (including NVEnc, the memory controller, the PCIe controller, and so on) against the die size of the A12X's GPU core alone. And I expect the 1060 is optimized for performance while the A12X is optimized for power (and temperature).

There's no reason NVIDIA should stay a process node behind Apple forever, since both use TSMC (and Samsung) and process improvement is slowing down.


Cropping to the inner GPU portion of the die still gives ~140mm², comfortably over 5x the size of Apple's GPU.
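A quick sanity check of the ratios, using only the approximate figures quoted in this thread (the 140mm² "inner part" estimate, the ~26mm² A12X GPU block, and the ~100W vs ~10W power figures; all are rough approximations, not measured values):

```python
# Back-of-envelope check of the die-size and power ratios
# quoted in the thread; every number here is an approximation.
gtx1060_gpu_area_mm2 = 140   # inner GPU portion of the ~200mm² die
a12x_gpu_area_mm2 = 26       # estimated A12X GPU block area
gtx1060_power_w = 100
a12x_gpu_power_w = 10

area_ratio = gtx1060_gpu_area_mm2 / a12x_gpu_area_mm2
power_ratio = gtx1060_power_w / a12x_gpu_power_w
print(f"area ratio:  ~{area_ratio:.1f}x")   # ~5.4x
print(f"power ratio: ~{power_ratio:.0f}x")  # ~10x
```

So even after excluding the non-GPU blocks, the 1060's GPU area is roughly 5.4x larger and its power budget roughly 10x higher.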

I don't really get your argument. Apple customers buying Apple Silicon Macs—which, again, will probably have a GPU over three times as fast as the A12X—aren't going to let hypotheticals detract from their powerful and power-efficient GPUs. ‘But NVIDIA didn't optimize for efficiency’ and ‘but NVIDIA hypothetically could have used a newer node than they did’ don't count for squat.



