
Almost all Chinese models are open-weight research models.

My theory is that these models are meant to be relatively easy for researchers to run and tweak, and mainly serve to demonstrate the effectiveness of new training and inference techniques, as well as the strength of the AI labs that created them.

They are not designed to be state-of-the-art commercial models.

By choosing bigger model sizes, running more training epochs, and drilling the models a bit harder on benchmark questions, I'm sure the Chinese labs could close the gap, but that would delay these models and make them more expensive and harder to run, without showing any tangible research benefit.

Also, my 2c: I was perfectly happy with Sonnet 3.7 a year ago; if the Chinese labs have a model that's really as good as that (not just one that benchmarks as well), I'd definitely like to try it.



Arguably, the new Minimax M2.1 and GLM-4.7 are drastically above Sonnet 3.7 in capabilities.


Could you share some impressions from using them? How do they feel compared to OAI models or Claude?


Minimax has been great for super-high-speed web/JS/TS-related work. In my experience it compares to Claude Sonnet, and at times gets stuff similar to Opus. Design-wise it produces some of the most beautiful AI-generated pages I've seen.

GLM-4.7 feels like a mix of Sonnet 4.5 and GPT-5 (the first version, not the later ones). It has deep, deep knowledge, but it's often just not as good in execution.

They're very cheap to try out, so you should see how your mileage varies.
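Both are available through OpenAI-compatible APIs (at least last time I checked), so trying them is a few lines of Python. A minimal sketch, where the base_url and model id are placeholders you'd swap for the values in each provider's docs:

  # Minimal sketch: assumes an OpenAI-compatible endpoint.
  # The base_url and model id are placeholders; check the provider docs.
  from openai import OpenAI

  client = OpenAI(
      base_url="https://api.example-provider.com/v1",  # placeholder
      api_key="YOUR_API_KEY",
  )
  resp = client.chat.completions.create(
      model="glm-4.7",  # placeholder model id
      messages=[{"role": "user", "content": "Write a debounce helper in TypeScript."}],
  )
  print(resp.choices[0].message.content)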

Of course, for the hardest possible tasks, the kind that even GPT 5.2 only approaches, they're not up to scratch. And for the hard-ish tasks, in C++ for example, that Opus 4.5 tackles, Minimax feels closer, but just doesn't "grok" the problem space well enough.



