
(If you want a Mac,) Apple silicon has the advantage of unified memory, and with llama.cpp you can run those models locally and quickly. I’d say start with the largest model you want to run and load it in llama.cpp, which will report how much memory it needs at load time. Then buy a Mac with at least that much memory that you can afford. If you have more budget, prioritize more memory, because you may want to run larger models later.
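As a rough sanity check before buying, you can estimate the footprint from the parameter count and quantization. This is just a back-of-the-envelope sketch: the 70B size, ~4.5 bits/weight, and 4 GB of overhead below are illustrative assumptions, and llama.cpp's load-time log is the real source of truth.

  # Rough estimate of memory needed for a quantized GGUF model.
  # All numbers here are assumptions for illustration, not fixed values.
  params_b = 70          # model size in billions of parameters (example)
  bits_per_weight = 4.5  # roughly a Q4-class quant; varies by quant type
  weights_gb = params_b * bits_per_weight / 8  # billions of params * bytes/weight -> GB
  overhead_gb = 4        # KV cache + compute buffers; grows with context length
  print(f"~{weights_gb + overhead_gb:.0f} GB of unified memory needed")

For a 70B model at ~4.5 bits/weight that works out to roughly 43 GB, so you'd want a 64 GB Mac rather than a 48 GB one to leave headroom for the OS and longer contexts.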

If not a Mac, follow the other advice here and get an Nvidia GPU. In terms of software ecosystem, Nvidia >> Apple >> AMD > Intel. (I think I got the ordering right, but the magnitude of the differences is subjective.)


