Hacker News | steveharing1's comments

nice work!

I really appreciate the open source community for moving this fast.

Yes. I've used it a lot. It's very simple and good.

You can try Open WebUI. It's genuinely useful for running open models locally with a clean interface.

Yep, pair Open WebUI for general chats with OpenCode for software-specific tasks and it feels close to Claude Desktop and Claude Code.

Even calling it a roll of the dice is an assumption. Can you point to anything you found to be a mistake?

You expect people to read every single excretion, which can be generated faster than I can read, just to find the rare gem that might exist?

The problem is that in the past it took many times more effort and hours to write something than it took to read it. That served two purposes:

1. Lazy people just looking for an audience were effectively gatekept from drowning the world with their every vapid thought.

2. Because supply was many times slower than consumption, it was viable to give most articles a chance: the author could not drown me in a deluge even if they wanted to.

Having the criterion now that the author should spend at least as much effort creating the piece as they expect the reader to expend reading it is a damn useful bar: instead of reading 1000 AI articles just to find the one good one, I can simply read 10 human-authored articles and be certain that 9 of them have something worthwhile.


No, because I'm not going to spend a bunch of my time fact-checking obvious AI slop.

Then don't complain.


Thanks for letting me know

Yeah, no doubt the Qwen 3.6 open weights are far stronger.

Why no doubt?

No comparison with competitor models other than the previous Granite version strongly implies that it does not compete well with other comparable models. At least, that is the most reasonable assumption until data comes out to the contrary.

Qwen 3.6 is effectively a pocket-sized frontier model. It's really surprising, to me anyway.

Because Qwen 3.6 punches way above its weight. Granite 8B is impressive, but Qwen still wins on raw capability, especially for coding.

You just asserted the same thing again. Why do you say this is the case?

Having tried it.

Qwen is really good.

Also, generally, it makes sense. 8B models are generally not very good^.

That this 8B model is decent is impressive, but that it could perform on par with a good model four times its size is a daydream.

^ - To be polite. Small models + tool use for coding agents are almost universally ass. Proof: my personal experience. I've tried many of them.


It's not that surprising that an 8B dense model would compete with a 35B-A3B MoE model.

The geometric mean rule of thumb for MoE models is that the intelligence level of an MoE model with T total parameters and A active parameters is roughly equivalent to that of a dense model with sqrt(A*T) parameters. For Qwen3.6-35B-A3B, that equivalent size is 10.24B, within spitting distance of an 8B model. Good training can make up the 28% difference in size.
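The arithmetic above can be sketched in a few lines (the helper name is my own; parameter counts are in billions):

```python
import math

def dense_equivalent_b(total_b: float, active_b: float) -> float:
    """Geometric-mean rule of thumb: an MoE model with T total and A active
    parameters is roughly as capable as a dense model of sqrt(A * T) params."""
    return math.sqrt(total_b * active_b)

# Qwen3.6-35B-A3B: 35B total parameters, 3B active per token
eq = dense_equivalent_b(35, 3)
print(f"~{eq:.2f}B dense-equivalent")       # ~10.25B, close to an 8B dense model
print(f"size gap vs 8B: {eq / 8 - 1:.0%}")  # ~28%
```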


So it’s just like, your opinion, man?

edit: It was a play on The Big Lebowski, folks.


College SAT scores do not tell you how the dev applying for your open back end systems engineering job is going to do once they're in your workplace harness.

Nor do class standings, nor hackerrank and the like.

What will tell you is asking them to fix a thing in your codebase. Once you ask an LLM to do that a dozen times, I'd argue it's no longer "just your opinion, man"; it's a context-engineered performance x applicability assessment.

And it is very predictive.

But it's also why someone doing well at job A isn't necessarily going to be great at B, or bad at A doesn't mean will necessarily be bad at B.

I've often felt we should normalize a sort of mutual try-before-you-buy period where the job seeker and company can spend a series of days together without harming the seeker's existing employment, to derisk the mutual learning. ESPECIALLY to derisk the career change for the applicant, who only gets one timeline to manage, as opposed to the company, which considers the applicant fungible.

But back to the LLM: yeah, the only valid opinion on whether it works for you is not a benchmark, it's an informed opinion from using it in anger.


> So it’s just like, your opinion, man?

Yes.

That is how you empirically evaluate tools: not by reading stupid benchmarks, but by actually using the tools, for hours and hours, doing real work.

Did you try using it? For hours? Do you use Qwen?

How about you tell us about your experience with the great 8B models you use daily. What coding agent harness do you have them hooked up to? What context size can you get before they lose track of what's happening? Do you swap between models for different coding tasks?

Or have you not actually tried any of this stuff yourself?


Work pays for Copilot, so I use Copilot. I will never spend a penny of my own money on this stuff. If it is free, I'll use it.

I'll never use any free open source anything from China, ever, so fuck no, I haven't used Qwen.


the (dead) internet is full of opinions exactly like this

You tried Qwen3.6 and you think it is not good?

I do not have a high opinion of any AI model.

Qwen scores above Sonnet in coding benchmarks. It runs locally. In personal use it's really good. Anecdotally, others have used it to vibe code or do agentic coding successfully. Not toy problems. Not a toy model.

Qwen3.6 raises the bar for models of its size. There really isn't a comparison in my opinion.


Maybe you could tell him what you want instead of making him guess.

Way above its weight.

Nanobanana for scale.

I think GitHub is at a point where it's too hard to ignore, just like Google, even though we might not like what they are doing now. But we were the ones who made them this big.

Let's see when the community comes up with more quantized versions. Waiting for Unsloth's version.

It was released around a week ago, but Xiaomi open sourced it now, and it's MIT licensed and available on Hugging Face.
