Hacker News | saltwounds's comments

I haven't come across this technique before. How'd you uncover it? I wonder how it'll hold up in Claude Code over long conversations.

I was using SudoLang to craft prompts and having the AI modify them. The more it modified them, the more they looked like math equations to me, so I decided to skip straight to math equations and tried about 200 different constants and equations in my tests to arrive at that three-line prompt. There are many variations on it; details are in my git repository.

https://github.com/michaelwhitford/nucleus


I like the idea and expiration dates. Got any example pages using either theme?

Glad you like the idea! I just put together a couple of quick examples to show how the themes look:

default: https://mdto.page/1E/ILeVn
resume: https://mdto.page/1E/Cxhnf


Hmm. The explanation seems short enough to have been written by hand easily. But I suppose the natural style of AI output has the upside that it demonstrates the Markdown rendering well.

Please use `prefers-color-scheme` throughout.

Got it
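For reference, a minimal sketch of that media query (the colors and the `color-scheme` hint are illustrative, not from the site's actual stylesheet):

```css
/* Let the browser know both schemes are supported. */
:root { color-scheme: light dark; }

/* Light defaults. */
body { background: #ffffff; color: #111111; }

/* Honor the user's OS-level dark-mode preference. */
@media (prefers-color-scheme: dark) {
  body { background: #121212; color: #e0e0e0; }
}
```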

Even with AC, air gets trapped between the mattress, the person, and the covers. Different materials help circulate air and keep things cooler.


True, and then you don’t have to use as much electricity either


I connect local models to MCPs with LM Studio and I'm blown away by how good they are. But the issues creep in when you hit longer contexts, like you said.


OpenAI's and Anthropic's real moat is hardware. For local LLMs, context length and hardware performance are the limiting factors. Qwen3 4B with a 32,768-token context window is great, until it starts filling up and performance drops quickly.

I use local models when possible. MCPs work well, but the large context they inject makes switching to an online provider a no-brainer.
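To put numbers on that context injection: a rough sketch of how much of a 32,768-token window tool definitions can eat before the conversation even starts. The ~4 characters-per-token heuristic and the example tool schemas below are assumptions for illustration, not measurements of any real MCP server:

```python
# Sketch: estimate how much of a local model's context window
# MCP tool definitions consume on every request.
import json

CONTEXT_WINDOW = 32_768   # e.g. Qwen3 4B as configured above
CHARS_PER_TOKEN = 4       # rough heuristic for English/JSON text

def estimate_tokens(text: str) -> int:
    """Crude token estimate; real tokenizers vary by model."""
    return len(text) // CHARS_PER_TOKEN

# Hypothetical MCP tool schemas injected into the system prompt.
tool_schemas = [
    {"name": "read_file", "description": "Read a file from disk.",
     "parameters": {"path": {"type": "string"}}},
    {"name": "search_web", "description": "Search the web for a query.",
     "parameters": {"query": {"type": "string"}}},
]

injected = json.dumps(tool_schemas)
used = estimate_tokens(injected)
print(f"~{used} tokens of {CONTEXT_WINDOW} "
      f"({100 * used / CONTEXT_WINDOW:.2f}%) spent on tool definitions")
```

Two toy tools barely register, but real MCP servers often ship dozens of verbose schemas, and that overhead is paid on every turn of the conversation.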

