Hacker News | jmorgan's comments

That's not good, sorry. I work on Ollama - shoot me an email ([email protected]) and we can help debug

It's available (with tool parsing, etc.): https://ollama.com/library/glm-4.7-flash, but it requires 0.14.3, which is in pre-release (and available on Ollama's GitHub repo).


Thanks, I stand corrected.


The gpt-oss weights on Ollama are native mxfp4 (the same weights provided by OpenAI). No additional quantization is applied, so let me know if you're seeing any strange results with Ollama.

Most gpt-oss GGUF files online have parts of their weights quantized to q8_0, and we've seen folks get some strange results from these models. If you're importing one of these into Ollama to run, the output quality may decrease.
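If you want to check what a downloaded GGUF actually contains before importing it, a rough sketch like this works (it assumes the gguf Python package that ships with the llama.cpp repo; "model.gguf" is a placeholder path):

    # Sketch: count the quantization type of each tensor in a GGUF file,
    # to see whether parts were re-quantized (e.g. to q8_0) rather than
    # kept in the original mxfp4 format.
    # Assumes the gguf-py package from llama.cpp: pip install gguf
    from collections import Counter

    from gguf import GGUFReader

    reader = GGUFReader("model.gguf")  # placeholder path to your downloaded GGUF

    counts = Counter(t.tensor_type.name for t in reader.tensors)
    for quant_type, n in counts.most_common():
        print(f"{quant_type}: {n} tensors")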


We did consider building functionality into Ollama that would go fetch search results and website contents using a headless browser or similar. However, we had concerns about result quality, and about IP blocking if Ollama instances started producing crawler-like traffic. A hosted API felt like the fastest path to getting results into users' context windows, but we are still exploring the local option. Ideally you'd be able to stay fully local if you want to (even when using capabilities like search).
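To give a sense of what that local path could look like, here's a minimal sketch of fetching a page's rendered text with a headless browser (it uses Playwright purely as an illustration, not anything Ollama ships; the URL is a placeholder):

    # Sketch: fetch a page's rendered text locally with a headless browser.
    # Assumes: pip install playwright && playwright install chromium
    # Illustration of the "fully local" approach only, not Ollama's implementation.
    from playwright.sync_api import sync_playwright

    def fetch_page_text(url: str) -> str:
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.goto(url, wait_until="networkidle")
            text = page.inner_text("body")  # rendered text, ready for a context window
            browser.close()
            return text

    print(fetch_page_text("https://example.com")[:500])  # placeholder URL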


Amazing work. This model feels really good at one-off tasks like summarization and autocomplete. I really love that you released a quantization-aware trained (QAT) version on launch day as well, making it even smaller!
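For instance, a one-off summarization call through the Ollama Python client looks roughly like this (the model tag and input text are placeholders; use whichever Gemma 3 tag you pulled):

    # Sketch: one-off summarization with a small local model via the ollama client.
    # Assumes: pip install ollama, and that the model tag below has been pulled.
    import ollama

    article = "..."  # placeholder: the text you want summarized

    response = ollama.chat(
        model="gemma3:270m",  # placeholder tag; swap in the variant you pulled
        messages=[
            {"role": "user", "content": f"Summarize this in two sentences:\n\n{article}"},
        ],
    )
    print(response.message.content)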


Thank you Jeffrey, and we're thrilled that you folks at Ollama partner with us and the open model ecosystem.

I personally was so excited to run ollama pull gemma3:270b on my personal laptop just a couple of hours ago to get this model on my devices as well!


> gemma3:270b

I think you mean gemma3:270m - it's Dos Comas, not Tres Comas


Maybe it's 270m after Hooli's SOTA compression algorithm gets ahold of it


Ah yes, thank you. Even I still instinctively type B


It should open ollama.com/connect – sorry about that. Feel free to message me at [email protected] if you keep seeing issues.


Sorry about this. Re-downloading Ollama should fix the error


Thanks for the reply and speedy patch, Jeffrey. Seems to be working now, except my 4060 Ti can't hang since it doesn't have enough VRAM.


Working on adding tool calling support to Magistral in Ollama. It requires a tokenizer change and also uses a new tool calling format. Excited to see the results of combining thinking + tool calling!
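Once that lands, a call from the Python client would presumably look like the usual tool-calling flow (a hedged sketch: the "magistral" tag, the think flag, and the get_weather tool are all placeholders, not a confirmed API for this model):

    # Sketch: thinking + tool calling through the ollama Python client.
    # The model tag, the think toggle, and the example tool are assumptions
    # for illustration, not the shipped Magistral integration.
    import ollama

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = ollama.chat(
        model="magistral",  # placeholder tag
        messages=[{"role": "user", "content": "Should I bring an umbrella in Paris today?"}],
        tools=tools,
        think=True,  # assumes the client/model expose a thinking toggle
    )

    print(response.message.thinking)    # the model's reasoning, if returned
    print(response.message.tool_calls)  # any tool calls the model decided to make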


This is a great point. apt-get would definitely be a better install and upgrade experience (that's what I would want too). Tailscale does this amazingly well: https://tailscale.com/download/linux

The main issue for the maintainer team would be the work of hosting and maintaining all the package repos for apt, yum, etc., and making sure we handle the case where NVIDIA/AMD drivers aren't installed (quite common on cloud VMs). Mostly a matter of time and putting in the work.

For now, every release of Ollama includes a minimal archive with the ollama binary and required dynamic libraries: https://github.com/ollama/ollama/blob/main/docs/linux.md#man.... But we could definitely do better.

