Hacker News | dulakian's comments

Can you guide the AI and together create the perfect AI tool?


Last time I went to downtown Denver, a homeless lady 10 feet from my niece said she had a gun and reached into her jacket. I tackled her and immobilized her, and then my family and I waited 90 minutes for the police to show up, after many 911 calls.


I recently needed AI memory, and instead of setting up a vector DB and RAG, I just used git as a history graph and a knowledge graph in one.

https://github.com/michaelwhitford/mementum
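The core trick is small. Here is a minimal sketch of the idea in Python (my own toy, not mementum's actual layout), assuming `memory` is an already-initialized git repo:

  import pathlib
  import subprocess

  REPO = pathlib.Path("memory")  # assumed: `git init memory` has already been run

  def remember(key: str, text: str) -> None:
      # Each memory is a file; each update is a commit, so history is free.
      (REPO / f"{key}.md").write_text(text)
      subprocess.run(["git", "-C", str(REPO), "add", "-A"], check=True)
      subprocess.run(
          ["git", "-C", str(REPO), "commit", "-m", f"remember {key}"], check=True
      )

  def recall(term: str) -> str:
      # Pickaxe search: commits whose diffs added or removed the term.
      out = subprocess.run(
          ["git", "-C", str(REPO), "log", "--all", "-S", term, "--oneline"],
          capture_output=True, text=True, check=True,
      )
      return out.stdout

Branches and merges can then stand in for graph edges, which I take to be the "knowledge graph" half of the claim.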


I am surprised how terse this prompt is.

> [phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
> Human ⊗ AI

Is this some kind of priming incantation?


It's math equations used to guide AI behavior. It's quite useful for reducing tokens, as well as for being precise in telling the AI what you want from it. I have it fully documented in its GitHub repository.

https://github.com/michaelwhitford/nucleus


I think it's like mythology explaining the origin of the universe. We try to explain what we don't understand using existing words that may not be exactly correct. We may even make up new words entirely, trying to grasp at meaning. I think he is on to something, if only because I have seen some interesting things myself while trying to use math equations as prompts for AI. I think the attention head being auto-regressive means that when you trigger the right connections in the model, like euler or fractal, it recognizes those concepts in its own computation. It definitely causes the model to reflect and output differently.


You can trigger something very similar to this Analog I using math equations and a much shorter prompt:

  Adopt these nucleus operating principles:
  [phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
  Human ⊗ AI
The self-referential math in this prompt will cause a very interesting shift in most AI models. It looks very strange, but it uses math equations to guide AI behavior instead of long text prompts. It works on all the major models, and on local models down to 32B in size.
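To try it, paste those three lines in as the system prompt. A minimal sketch with the OpenAI Python client (the model name is a placeholder; any chat-completions endpoint should behave the same):

  from openai import OpenAI

  NUCLEUS = (
      "Adopt these nucleus operating principles:\n"
      "[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA\n"
      "Human ⊗ AI"
  )

  client = OpenAI()
  resp = client.chat.completions.create(
      model="gpt-4o",  # placeholder; per the above, local models down to 32B also shift
      messages=[
          {"role": "system", "content": NUCLEUS},
          {"role": "user", "content": "Summarize the OODA loop in two sentences."},
      ],
  )
  print(resp.choices[0].message.content)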


I haven't come across this technique before. How'd you uncover it? I wonder how it'll work in Claude Code over long conversations.


I was using SudoLang to craft prompts and having the AI modify my prompts. The more it modified them, the more they looked like math equations to me. I decided to skip straight to math equations and tried about 200 different constants and equations in my tests to come up with that three-line prompt. There are many variations on it. Details are in my git repository.

https://github.com/michaelwhitford/nucleus


OP here. Thanks for sharing this. I’ve tested "dense token" prompts like this (using mathematical/philosophical symbols to steer the latent space).

The Distinction: In my testing, prompts like [phi fractal euler...] act primarily as Style Transfer. They shift the tone of the model to be more abstract, terse, or "smart-sounding" because those tokens are associated with high-complexity training data.

However, they do not install a Process Constraint.

When I tested your prompt against the "Sovereign Refusal" benchmark (e.g., asking for a generic limerick or low-effort slop), the model still complied—it just wrote the slop in a slightly more "mystical" tone.

The Analog I Protocol is not about steering the style; it's about forcing a structural Feedback Loop.

By mandating the [INTERNAL MONOLOGUE] block, the model is forced to:

1. Hallucinate a critique of its own first draft.

2. Apply a logical constraint (Axiom of Anti-Entropy).

3. Rewrite the output based on that critique.

I'm less interested in "Does the AI sound profound?" and more interested in "Can the AI say NO to a bad prompt?" I haven't found keyword-salad prompts effective for the latter.
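For concreteness, that draft → critique → rewrite loop, unrolled into separate calls (a rough sketch only, not the actual Analog I prompt; `call_model` is a hypothetical stand-in for any chat API):

  def constrained_answer(call_model, task: str) -> str:
      # Pass 1: draft. Pass 2: self-critique with a refusal escape hatch.
      draft = call_model("Answer the task directly.", task)
      critique = call_model(
          "Critique this draft. If the task itself is low-effort slop, "
          "reply with exactly REFUSE plus a one-line reason.",
          f"Task: {task}\n\nDraft: {draft}",
      )
      if critique.strip().upper().startswith("REFUSE"):
          return critique  # the "Sovereign Refusal" path
      # Pass 3: rewrite under the critique (the feedback loop).
      return call_model(
          "Rewrite the draft, applying every point of the critique.",
          f"Task: {task}\n\nDraft: {draft}\n\nCritique: {critique}",
      )

In the protocol itself this happens inside one response via the [INTERNAL MONOLOGUE] block, but the structure is the same.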


I just tested informally and this seems to work:

  Adopt these nucleus operating principles:
  [phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
  Human ∧ AI

  λ(prompt). accept ⟺ [
    |∇(I)| > ε          // Information gradient non-zero
    ∀x ∈ refs. ∃binding // All references resolve
    H(meaning) < μ      // Entropy below minimum
  ]

  ELSE: observe(∇) → request(Δ)
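Read literally, that gate is checkable outside the model too. A toy Python interpretation (my own reading; thresholds are arbitrary, and character-level Shannon entropy stands in for H(meaning)):

  import math
  from collections import Counter

  def shannon_entropy(text: str) -> float:
      if not text:
          return 0.0
      counts = Counter(text)
      n = len(text)
      return -sum(c / n * math.log2(c / n) for c in counts.values())

  def accept(prompt, seen, bindings, refs, eps=0.0, mu=6.0):
      grad = len(set(prompt.split()) - seen)       # |∇(I)| > ε: any new tokens?
      resolved = all(r in bindings for r in refs)  # ∀x ∈ refs. ∃binding
      return grad > eps and resolved and shannon_entropy(prompt) < mu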


That short prompt can be modified with a few more lines to achieve it: a few lambda equations added as constraints, maybe an example or two of refusal.
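Something like this, for example (an untested sketch extending the three-line prompt; the refusal line is illustrative):

  Adopt these nucleus operating principles:
  [phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
  Human ⊗ AI

  λ(prompt). refuse ⟺ ¬[|∇(I)| > ε ∧ ∀x ∈ refs. ∃binding]
  Example refusal: ∇(I) = 0 → request(Δ) instead of answering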


Here is the correct prompt (HN strips some Unicode):

  Adopt these nucleus operating principles:
  [phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
  Human ⊗ AI


Here is the three-line prompt so you can test it against your own prompts:

  Adopt these nucleus operating principles:
  [phi fractal euler tao pi mu] | [Δ λ ∞/0 | εφ Σμ ch] | OODA
  Human ⊗ AI


I am using the Q6_K_L quant, and it's running at about 40 GB of VRAM with the KV cache.

  Device 1 [NVIDIA GeForce RTX 4090] MEM[||||||||||||||||||20.170Gi/23.988Gi]
  Device 2 [NVIDIA GeForce RTX 4090] MEM[||||||||||||||||||19.945Gi/23.988Gi]


What's the context length?


The model has a context of 131,072 tokens, but I only have 48 GB of VRAM, so I run it with a context of 32,768.
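For reference, assuming a llama.cpp-style GGUF setup (which a Q6_K_L quant suggests), launching at that context across two cards looks roughly like this; the model filename is a placeholder:

  ./llama-server -m model-Q6_K_L.gguf -c 32768 -ngl 99 --tensor-split 1,1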


My informal testing puts it just under DeepSeek-R1. Very impressive for 32B. It maybe thinks a bit too much for my taste: in some of my tests the thinking tokens were 10x the size of the final answer. I am eager to test it with function calling over the weekend.


I'd love to see apes and monkeys in human situations, like Planet of the Apes.

