Hacker News

This is a good idea. What setup do you use and how does the model perform?



AMD Ryzen AI Max+ 395 w/ 128GB. Qwen3-coder-next running from llama.cpp; opencode seems to have the best depth. It can get to around 80k context before pooping out.
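For anyone wanting to replicate this: a setup like that is typically served with llama.cpp's built-in OpenAI-compatible server, something along these lines (the model filename and flag values below are illustrative assumptions, not the commenter's exact invocation):

```shell
# Sketch: serve a local coder GGUF with llama.cpp's llama-server.
# Model path, context size, and layer count are assumptions.
llama-server \
  -m qwen3-coder.gguf \   # hypothetical local GGUF file
  -c 81920 \              # ~80k-token context window
  -ngl 99 \               # offload all layers to the GPU
  --port 8080             # point your coding agent at http://localhost:8080
```

A tool like opencode can then be configured to use that local endpoint instead of a hosted API.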



