
Or maybe models that are much more task-focused? Like models that are trained on just math & coding?


Isn't that what the mixture-of-experts trick that all the big players use is? A bunch of smaller, tightly focused models?


Not exactly. MoE uses a small router network to select a subset of experts (feed-forward sub-networks) per token at each MoE layer, rather than whole specialized models. That makes inference faster, since only a few experts run per token, but every expert still has to be loaded, so it requires the same amount of RAM.
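
To make the compute-vs-memory point concrete, here's a minimal sketch of a top-k MoE layer in PyTorch. The dimensions, expert count, and routing scheme are illustrative assumptions, not any particular model's implementation:

    # Minimal sketch of a mixture-of-experts layer with top-k routing.
    # Hypothetical sizes; only the selected experts do compute per token,
    # but all experts' weights must stay resident in memory.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MoELayer(nn.Module):
        def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
            super().__init__()
            # Each expert is a full feed-forward block; all of them occupy RAM.
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            )
            self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
            self.top_k = top_k

        def forward(self, x):                      # x: (tokens, d_model)
            scores = self.router(x)                # (tokens, n_experts)
            weights, idx = scores.topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            # Route each token only through its top_k experts (the speedup),
            # even though every expert's parameters are loaded (the memory cost).
            for k in range(self.top_k):
                for e in range(len(self.experts)):
                    mask = idx[:, k] == e
                    if mask.any():
                        out[mask] += weights[mask, k:k+1] * self.experts[e](x[mask])
            return out

So per-token FLOPs scale with top_k, but parameter memory scales with n_experts.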




