Hacker News | chazeon's comments

Most CA households may be, but obviously not everywhere

My understanding of the safety concern around Linux is mainly security, namely full disk encryption. Full disk encryption prevents an attacker from unplugging your hard drive and reading the data directly.

While it is possible, it is definitely not easy to set up right, particularly if you want hibernation to disk working properly. There are specific requirements on the disk layout: use LVM, set up the TPM, set the bootloader parameters, configure hibernation and wake... if any step is wrong you have to use a boot drive to rescue the system, and it is very hard to fix if you didn't use LVM in the first place. For example, Arch Linux's archinstall won't set up this whole suite for you.

This is really necessary if you are going to take the computer outside, where it might get lost or stolen. You definitely don't want other people reading its contents after it is lost. I think this kind of security is already the default on Windows / macOS / iOS / Android, but on Linux it is still so hard.


Well, that’s a very misleading framing. If US immigration policy weren’t this hostile to populous countries, more Chinese would want to stay.


When are people going to drop the assumption that immigration is good at all costs?

We need a well-managed set of immigration policies or countries WILL take advantage of the US. These are our military rivals, and we sell our most advanced math, physics, and engineering seats to the highest bidder. It’s a self-destructive disaster, and it’s not just on us to treat people better.

Look at the rate of Indian asylum seekers in Canada for the most extreme case. It happens anywhere you extend naivety and boundless goodwill.


Nvidia's proprietary driver seems to deliver 4K@120Hz just fine.


Well, the seemingly cheap option comes with significantly degraded performance, particularly for agentic use. Have you tried replacing Claude Code with a locally deployed model, say on a 4090 or 5090? I have. It is not usable.


DeepSeek and Kimi both have great agentic performance.

When used with Crush/OpenCode they are close to Claude's performance.

Nothing that runs on a 4090 would compete, but DeepSeek on OpenRouter is still 25x cheaper than Claude.


> DeepSeek on OpenRouter is still 25x cheaper than Claude

Is it? Or only when you don’t factor in Claude's cached context? I’ve consistently found it pointless to use open models because the price of the good ones is so close to Claude's cached-context pricing that I don’t need them.
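To make the "factor in cached context" point concrete, here's a back-of-the-envelope blended-cost calculation. The prices and cache hit rate below are my own ballpark assumptions, not quotes from either vendor's pricing page:

```python
def blended_input_cost(fresh_price, cached_price, cache_hit_rate):
    """Blended cost per million input tokens, given the fraction of
    input tokens served from the prompt cache. Illustrative only."""
    return cache_hit_rate * cached_price + (1 - cache_hit_rate) * fresh_price

# Assumed ballpark prices (USD per M input tokens), not official quotes:
# Claude Sonnet: ~$3.00 fresh, ~$0.30 cache read; DeepSeek: ~$0.14 fresh.
claude = blended_input_cost(3.00, 0.30, cache_hit_rate=0.9)  # ~$0.57/M
deepseek = 0.14
print(f"effective ratio: {claude / deepseek:.1f}x")
```

Under those assumptions a long agentic session with a hot cache lands around 4x cheaper on DeepSeek, not 25x; the headline ratio only holds for uncached input.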


DeepSeek's own API also has cached context, although the tokens/s were much lower than Claude's when I tried it. But for background agents the price difference makes it absolutely worth it.


Yes, if you try using Kilo Code/Cline via OpenRouter, the cost will be much lower using DeepSeek/Kimi vs Claude Sonnet 4.5.


Well, those are also extremely VRAM-limited cards that wouldn't be able to run anything in the ~70B parameter space. (Can you even run 30B?)

Things get a lot easier at lower quantization, which lets you fit a higher parameter count, and there are a lot of people whose AI jobs are "extract sentiment from text" or "bin into one of these 5 categories", where that's probably fine.
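The VRAM question above is easy to estimate on the back of an envelope: weights take (params × bits / 8) bytes, plus some headroom for KV cache and activations. The 20% overhead factor is my own rough assumption, not a measured figure:

```python
def vram_gb(params_b, bits_per_weight, overhead=1.2):
    """Rough VRAM estimate: weight storage for a params_b-billion-parameter
    model at the given quantization, with a 20% fudge factor for KV cache
    and activations. A sketch, not a profiler; real usage depends on
    context length and runtime."""
    return params_b * 1e9 * bits_per_weight / 8 * overhead / 1e9

print(vram_gb(70, 4))  # ~42 GB: a 70B model won't fit a 24 GB 4090 even at 4-bit
print(vram_gb(30, 4))  # ~18 GB: a 30B model at 4-bit just squeezes in
print(vram_gb(7, 8))   # ~8.4 GB: 7B at 8-bit fits comfortably
```

This is why the "can you run 30B?" line above is roughly where a 24 GB consumer card tops out at 4-bit quantization.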


Strictly speaking, you have not deployed any model on a 5090 because a 5090 card has never been produced.

And without specifying your quantization level, it's hard to know what you mean by "not usable".

Anyway, if you really wanted to try cheap distilled/quantized models locally, you would be using used V100 Teslas, not 4-year-old single-chip gaming GPUs.



You can just buy a 5090 now for $3k. Have you confused it with something else?


they took the already ridiculous v3.1 terminus model, added this new deepseek sparse attention thing, and suddenly it’s doing 128k context at basically half the inference cost of the old version with no measurable drop in reasoning or multilingual quality. like, imo gold medal level math and code, 100+ languages, all while sipping tokens at 14 cents per million input. that’s stupid cheap.

the rl recipe they used this time also seems way more stable. no more endless repetition loops or random language switching you sometimes got with the earlier open models. it just works.

what really got me is how fast the community moved. vllm support landed the same day, a huggingface space was up in hours, and people are already fine-tuning it for agent stuff and long document reasoning. i’ve been playing with it locally and the speed jump on long prompts is night and day. feels like the gap to the closed frontier models just shrank again. anyone else tried it yet?
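for anyone unfamiliar with why sparse attention halves long-context cost: instead of every query scoring every key (quadratic in context length), each query only attends to a small subset. here's a toy top-k illustration in plain python — this is just the general idea, not DeepSeek's actual DSA mechanism:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(q, keys, values, topk=None):
    """One query vector attending over keys/values. With topk set, only
    the topk best-scoring keys get nonzero weight (toy sparse attention);
    topk=None is ordinary dense attention."""
    d = len(q)
    scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in keys]
    idx = list(range(len(keys)))
    if topk is not None and topk < len(keys):
        idx = sorted(idx, key=lambda i: scores[i], reverse=True)[:topk]
    w = softmax([scores[i] for i in idx])
    dim = len(values[0])
    return [sum(w[j] * values[i][v] for j, i in enumerate(idx))
            for v in range(dim)]
```

with topk fixed at some constant, per-query work stops growing with context length, which is where the long-prompt speedup comes from (the hard part in practice is picking the subset cheaply without scoring everything first — that's the clever bit in real implementations).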


Seems images on the GitHub web UI are also not showing.


The one thing I wish it had is a 3.5mm audio jack. Both the Xbox and Sony's DualSense controllers have one. But Sony doesn't support audio via Bluetooth, and while the Xbox one works with a USB adapter, its build is not as good as Sony's; Sony doesn't offer a USB adapter at all. Given the Steam Controller already uses a USB puck, it should be able to support this.


Gemini is the only model that can consistently provide solutions to theoretical physics problems and output them as a LaTeX document.


you can just buy a nanokvm


Aliexpress has them

