Hacker News | cmrx64's comments

this is a pretty bad vfs. there are pure “cap manifest” approaches that don’t pull in decades of crufty semantics. don’t build systems that aren’t objectstore native in 2025 (since this work was initiated in december).

meh. i’ll keep using scrapinghub.


Can you give Spidra a try? I can also guide you through it.


https://hellas.ai is building out their category theoretic compiler and protocol for solving this issue


ZKML is a very exciting emerging field, but the math is nowhere near efficient enough to prove an inference result for an LLM yet. They are probably just trying to sell their crypto token.


RIP


this isn’t a theorem of network science, and is an easily avoided failure mode. plumtree? kad? aodv? you’re wrong :(


It's observed in practice.

Protocols that run over the internet don't count. I meant actual mesh routing.


aodv works great and is “actual mesh.” the ‘overlays’ scale in practice and are topology agnostic; the hierarchical bgp mesh underneath doesn’t alter the message or memory complexity, so we can talk about them as algorithms. there are 10k-node meshes in the real world that use batman, geography-constrained hub and spoke, etc. guifi has 37k nodes in its heterogeneous mesh with a batman fork, freifunk (originator of batman) around 40k.

edit to add: what is observed in practice is that gossip protocols can’t coordinate peers without centralizing. this is natural, an artifact of the logarithmics in the routing protocols. the appropriate thing to do is model routing as a revocable proof system, and information theory explains the centralizing dynamics (a problem I worked on in 2018-2019). https://eprint.iacr.org/2022/1478 proves the global lower bound is linear in route updates (naively quadratic if distributed), and the trick routing protocols add to the game is locality, which yields a logarithmic advantage that, multiplied across the entire network, is substantially subquadratic.
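a toy back-of-the-envelope sketch of the complexity gap described above (not from the linked paper; the function names and network sizes are mine, purely illustrative): n route updates naively flooded to every peer cost quadratic messages, while a per-update locality of O(log n) keeps the network-wide total substantially subquadratic.

```python
import math

def naive_flood_messages(n):
    # each of the n route updates is pushed to all n-1 other nodes:
    # total messages grow quadratically in network size
    return n * (n - 1)

def local_routing_messages(n):
    # with locality (DHT-style / hierarchical routing), each of the
    # n updates touches only ~log2(n) nodes: n log n messages overall
    return n * math.ceil(math.log2(n))

# guifi/freifunk-scale networks for comparison
for n in (1_000, 10_000, 40_000):
    print(n, naive_flood_messages(n), local_routing_messages(n))
```

at 40k nodes the flooded count is on the order of 1.6 billion messages versus 640 thousand with the logarithmic factor, which is the “substantially subquadratic” advantage in question.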


this feels a bit like a bombshell given the other recent works on emergent misalignment. how long have we been lying to models?


This is a deeply unsettling thought. I hope everyone can see this work. We truly have no idea how many resources have been wasted here.


What tasks are in your sights for evaluating the approach? Is there anything you are most excited/curious to see the approach demonstrate?


The first evaluation is actually on language, via what I call the Universal Language Manifold (ULM).

You can explore the ULM on r/LanguageManifold


Contrariwise, I was part of the troupe of people that daily picked up these bags along walking trails. One of the few benefits of living in the USA: covert prosocial behavior is extremely common.


smell isn’t integrated in the thalamus with other sensory streams, it does something else entirely


A decade ago I ran several “seven hour roguelikes”, https://web.archive.org/web/20160321153532/http://people.cla... is the documentation from the first one.

The first year I spent six hours writing one of the first ECS crates in Rust and then an hour turning it into a game. Lots of fun! You can search “7HRL” on github to find the historical participants not too ashamed to publicize their code at the end. A few dozen people enjoyed this.

