This is really nice to know. I remember trying to compile pandoc to Wasm after finding out that GHC had Wasm support, hitting all kinds of problems, and then realising that there was no real way to post an issue to Haskell's GitLab repo without being pre-approved.
I guess now with LLMs, this makes more sense than ever, but it was a frustrating experience.
I found Geoffrey Hinton's hypothesis about LLMs interesting in this regard. They have to compress the world's knowledge into a few billion parameters, much denser than the human brain, so they have to be very good at analogies in order to achieve that compression.
I feel this has causality reversed. I'd say they are good at analogies because they have to compress well, which they do by encoding relationships in stupidly high-dimensional space.
Analogies could then fall naturally out of this. It might really still be just the simple (yet profound) "King - Man + Woman = Queen" style vector math.
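That vector arithmetic is easy to demo with toy embeddings. The 4-d vectors below are hand-built so the relation holds by construction (real embeddings are learned, and the effect there is noisier):

```python
import numpy as np

# Toy 4-d "embeddings", hand-constructed so the royalty/gender directions
# are explicit; real word vectors are learned, not built like this.
vecs = {
    "king":   np.array([1.0, 1.0, 0.0, 0.0]),  # royal + male
    "queen":  np.array([1.0, 0.0, 1.0, 0.0]),  # royal + female
    "man":    np.array([0.0, 1.0, 0.0, 1.0]),  # male + person
    "woman":  np.array([0.0, 0.0, 1.0, 1.0]),  # female + person
    "person": np.array([0.0, 0.0, 0.0, 1.0]),  # distractor
}

def nearest(target, exclude):
    # Return the stored word closest to `target` by cosine similarity,
    # skipping the words used to form the query.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], target))

analogy = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(analogy, exclude={"king", "man", "woman"}))  # -> queen
```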
This is explained in more detail in the book "Human Being: reclaim 12 vital skills we’re losing to technology", which I think I found on HN a few months ago.
The first chapter goes into human navigation and gives this exact suggestion, locking the map to north-up, as a way to regain some of our lost navigational skills.
This seems really nice, and looks like something I have been wanting to exist for some time. I will definitely play with it when I have some time.
I know this is a personal project and you maybe didn't want to make it public, but I think the README.md would benefit from a section about the actual product. I clicked on it wanting to learn more, but had no time to test it for now.
Thanks for the feedback. I did update the README and included all the features,
and there is also https://talimio.com, which I think shows the features in a better way visually.
I have been looking for the same thing, either from Meta's SAM 3 [1] model or from things like the OP's.
There has been some research specifically in this area with what appear to be classic ML models [2], but it's unclear to me whether it can generalize to dances it has not been trained on.
As soon as I found out that this model launched, I tried giving it a problem that I have been trying to code in Lean4 (showing that quicksort preserves multiplicity). All the other frontier models I tried failed.
I used the pro version and it started out well (as they all did), but it couldn't prove it. The interesting part is that it typoed the name of a tactic, spelling it "abjel" instead of "abel", even though it correctly named the concept. I didn't expect the model to make this kind of error, because they all seem so good at programming lately, and none of the other models did, although they made some other naming errors.
I am sure I can get it to solve the problem with good context engineering, but it's interesting to see how they struggle with less-represented programming languages by themselves.
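For reference, the statement I was after looks roughly like this in Lean 4. This is only a sketch, not a working development: `partial` and `sorry` stand in for the termination and correctness proofs, and the `quicksort` definition is my own naive one:

```lean
-- A naive quicksort on Nat lists; `partial` sidesteps the termination proof,
-- which a real development would discharge via a filter-length lemma.
partial def quicksort : List Nat → List Nat
  | [] => []
  | x :: xs =>
      quicksort (xs.filter (· < x)) ++ x :: quicksort (xs.filter (· ≥ x))

-- Multiplicity preservation: every value occurs equally often
-- before and after sorting. Proof elided.
theorem quicksort_count (l : List Nat) (a : Nat) :
    (quicksort l).count a = l.count a := by
  sorry
```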
It depends on whether you define "rare" in terms of language variety or human variety, obviously. In terms of languages, it is a relatively rare phoneme. It occurs more often as an allophone of other phonemes, but in that case the speakers may not be able to distinguish it and will struggle to reproduce it in "unusual" environments.