That's very kind of you - I tried making a dead-simple repro just now with Node 20, and it seemed to run without the problem. I'll try reproducing it in a bit with my original use case of imagemagick and see if the issue still exists.
I wrote a similar x86-16 assembler in < 512 B of x86-16 assembly, and this seems much more difficult <https://github.com/kvakil/0asm/>. I found many of the same tricks helpful: gadgets and hashes. One trick I don't see in sectorc, which shaved quite a bit off of 0asm, was self-modifying code, which 0asm uses to "change" into the second pass of the assembler. (I wrote up some other techniques here: <https://kvakil.me/posts/asmkoan.html>.)
I considered self-modifying code, but somehow I kept finding more ways to squeeze bytes out. I’m half convinced that you could condense it by another 50-ish bytes and add operator precedence or even local vars. But, frankly, I was ready to switch my attention to a new project.
In addition, even if a normal number were used, it's far simpler to describe the data with a single number alone: for example, a binary encoding of the data (perhaps using a prefix-free code). Using a normal number plus two "positions" is just more complicated.
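To make the "single number" point concrete, here's a minimal sketch of one well-known prefix-free code, Elias gamma coding (my choice of code, not something from the comment): each value is self-delimiting, so a whole sequence can be packed into one bit string, i.e. one number, with no separate "positions" needed.

```javascript
// Elias gamma code: for n >= 1, emit (bitlength - 1) zeros, then n in binary.
// The unary zero-run tells the decoder how many bits to read next, which is
// what makes the code prefix-free.
function gammaEncode(n) {
  const bits = n.toString(2);                    // binary representation of n
  return "0".repeat(bits.length - 1) + bits;     // unary length prefix + value
}

// Decode a concatenation of gamma codes back into the original numbers.
function gammaDecode(stream) {
  const out = [];
  let i = 0;
  while (i < stream.length) {
    let zeros = 0;
    while (stream[i] === "0") { zeros++; i++; }  // count the unary prefix
    out.push(parseInt(stream.slice(i, i + zeros + 1), 2));
    i += zeros + 1;
  }
  return out;
}
```

For example, `gammaEncode(5)` is `"00101"`, and `gammaDecode(gammaEncode(5) + gammaEncode(1))` recovers `[5, 1]` from the single concatenated string.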
Anecdote: for me the actor model has been the most understandable and useful concurrency primitive I've used. Pi-calculus, which was inspired by the actor model, is similarly elegant.
jsfuck is hardly obfuscation: remove the first 828 bytes (for "eval(") and the last 3 bytes (for ")()"), and then execute the remaining string, and that gives you the original source code.
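The unwrapping described above can be sketched in a few lines. This assumes the standard jsfuck wrapper where the leading 828 characters encode "eval(" and the trailing 3 are ")()"; the hedged part is that evaluating what remains yields the original source as a plain string.

```javascript
// Strip the jsfuck "eval(" prefix (828 chars) and the ")()" suffix (3 chars),
// leaving the inner jsfuck expression that evaluates to the original source.
function unwrapJsfuck(code) {
  return code.slice(828, -3);
}

// Recovering the source would then be e.g.:  eval(unwrapJsfuck(code))
```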
You would need to be able to dynamically find that 828; I think it would be entirely trivial to have a jsfuck2 that produces a non-deterministic "eval(" structure of arbitrary length.
> Maksymilian Piskorowski found that if you happen to have a spare eight 9s, you can compute 𝑒 = (9/9 + 9^(-9^9))^(9^(9^9)), which is accurate to a little over 369 million decimal places.
Sure, because 9/9 = 1 and if you take x = 9^9^9, you get back (1 + x^(-1))^x, i.e. the first formula. It's cute, but I don't know if you could call it a "discovery".
From the article: "As an extra bonus, the generated proofs tend to be shorter than the ground truth proofs collected in CoqGym."
This feels a little misleading; the paper itself says the phenomenon "... suggests that theorems with longer proofs are much more challenging for the model." It'd be more interesting to see how the automated proof lengths compare to the manual lengths for the same theorems (although I'd expect that comparison to be biased downward, for the same reason).