
That doesn't disprove OP's comment; OpenAI is, in my opinion, making it untenable for a regular Joe to build a PC capable of running a local LLM. It's an attack on all our wallets.


Why do you need an LLM running locally so badly that the inflated RAM prices count as an attack on your wallet? One can always opt not to play this losing game.

I remember when the crypto miners rented a plane to deliver their precious GPUs.


Some models are genuinely useful; whisper.cpp comes to mind, for creating subtitles for, say, family videos or a lecture you attended, without sending your data to an untrusted or unreliable company.
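
As a rough sketch, assuming you've built whisper.cpp locally and downloaded one of the ggml models (the binary name, model path, and input file below are placeholders for whatever your setup uses), generating an .srt file can be as simple as shelling out to the CLI:

    # Sketch: call a locally built whisper.cpp binary to produce subtitles.
    # Paths are assumptions about a typical build; adjust to your own setup.
    import subprocess

    AUDIO = "lecture.wav"                    # 16 kHz mono WAV input
    MODEL = "models/ggml-base.en.bin"        # downloaded ggml model file
    WHISPER = "./build/bin/whisper-cli"      # older builds name this ./main

    # --output-srt writes an .srt file next to the input audio
    subprocess.run(
        [WHISPER, "-m", MODEL, "-f", AUDIO, "--output-srt"],
        check=True,
    )

Everything stays on your machine; the only costs are the disk space for the model and some CPU time.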



