
This is really nice to know. I remember trying to compile pandoc to Wasm after finding out that GHC had Wasm support, hitting all kinds of problems, and then realising that there was no real way to post an issue to Haskell's GitLab repo without being pre-approved.

I guess now with LLMs, this makes more sense than ever, but it was a frustrating experience.


> no real way to post an issue to Haskell's GitLab repo without being pre-approved.

This is so on brand for Haskell people, I love it.


Some were already that, and even more so, for other reasons: the cathedral model, as described in "The Cathedral and the Bazaar".

I come to YCombinator specifically because, for some reason, some of the very brightest minds are here.

I found Geoffrey Hinton's hypothesis about LLMs interesting in this regard: they have to compress world knowledge into a few billion parameters, much denser than the human brain, so they have to be very good at analogies in order to achieve that compression.


I feel this has causality reversed. I'd say they are good at analogies because they have to compress well, which they do by encoding relationships in stupidly high-dimensional space.

Analogies could then sort of fall naturally out of this. It might really still be just the simple (yet profound) "King - Man + Woman = Queen" style vector math.
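
The classic toy version of that vector math is easy to sketch; the embeddings below are made up purely for illustration (real word vectors are learned from data and have hundreds or thousands of dimensions):

```python
import numpy as np

# Toy 3-dimensional "embeddings"; the values are invented for illustration.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.2, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# king - man + woman should land closest to queen if "royalty" and "gender"
# are (roughly) linear directions in the space.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
best = max((w for w in embeddings if w != "king"),
           key=lambda w: cosine(embeddings[w], target))
print(best)  # prints "queen" with these toy vectors
```

Whether anything that clean survives at LLM scale is exactly the open question, but it's the simplest picture of relationships-as-directions.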


That's essentially the manifold hypothesis of machine learning, right?


This is explained in more detail in the book "Human Being: reclaim 12 vital skills we’re losing to technology", which I think I found on HN a few months ago.

The first chapter goes into human navigation, and it gives this exact suggestion, locking north up, as a way to regain some of the lost navigational skills.


This seems really nice, and looks like something I have been wanting to exist for some time. I will definitely play with it when I have some time.

I know this is a personal project and you maybe didn't want to make it public, but I think the README.md would benefit from a section about the actual product. I clicked on it wanting to learn more, but with no time to test it for now.


Thanks for the feedback. I did update the README and included all the features, and there is also https://talimio.com, which I think shows the features in a better way visually.


Didn't see the website at first. Thank you!


That sounds awesome. But I have two questions: what are the problems with httpx? And was pycurl not enough for what you wanted to do?


httpx/httpcore are abandoned, and their connection pooling (among other things) is completely broken: https://github.com/encode/httpx/issues/3215. There are many PRs trying to fix various issues (https://github.com/encode/httpcore/pulls), but the maintainer(s) are not interested in fixing any of them.


pycurl doesn't support async, right?


I have been looking for the same thing, whether from Meta's SAM 3 [1] model or from things like the OP.

There has been some research specifically in this area with what appear to be classic ML models [2], but it's unclear to me whether it can generalize to dances it has not been trained on.

[1] https://ai.meta.com/blog/segment-anything-model-3/

[2] https://arxiv.org/html/2405.19727v1


For small edits, has anybody configured a leader-key scheme? Something like Doom Emacs has with space as a leader.

It seems to me to be the best possible configuration for Emacs on Android (on a phone) and I was wondering if I should invest time in such a solution.

strokes-mode.el would also be very nice, but apparently it doesn't have touchscreen support.


As soon as I found out that this model launched, I tried giving it a problem that I have been trying to code in Lean4 (showing that quicksort preserves multiplicity). All the other frontier models I tried failed.
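
Concretely, the statement I'm after is roughly the sketch below (`quicksort` here is just a placeholder for the actual definition, and the proof is the part left to the model):

```lean
-- Placeholder for whatever quicksort definition is actually in use.
variable (quicksort : List Nat → List Nat)

-- "Preserves multiplicity": every value occurs the same number of times
-- in the output as in the input. Proof omitted.
example (l : List Nat) (a : Nat) :
    (quicksort l).count a = l.count a := by
  sorry
```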

I used the pro version and it started out well (as they all did), but it couldn't prove it. The interesting part is that it typoed the name of a tactic, spelling it "abjel" instead of "abel", even though it correctly named the concept. I didn't expect the model to make this kind of error, because they all seem so good at programming lately, and none of the other models did, although they made some other naming errors.

I am sure I can get it to solve the problem with good context engineering, but it's interesting to see how they struggle with lesser represented programming languages by themselves.


Made me think of this SMBC comic [1], where there's a debate about whether occurring in English or Spanish, each with around a billion speakers, makes a phoneme rare or not.

[1] https://www.smbc-comics.com/comic/phonemes


It depends on whether you define "rare" in terms of language variety or human variety, obviously. In terms of languages, it is a relatively rare phoneme. It occurs more often as an allophone of other phonemes, but in that case the speakers may not be able to distinguish it and will struggle to reproduce it in "unusual" environments.

