Probably because it's 'political' news — but I don't think this is political as such. The Pentagon is (or should be) an inherently non-political government agency. @dang curious on your thoughts here though.
Politics has always been a lame excuse. If there's an article shitting on California's politics, it doesn't get flagged and there's no talk of "no politics on HN", just tons of "California Bad" posts. If there's an article praising something political in Austin, TX? Same thing: no complaints about politics, and lots of posts about "California Bad". It's almost like the problem isn't politics, but the "politics" a certain segment of HN doesn't like.
It's completely intertwined with current United States politics and, I understand the hypocrisy of this statement as I'm commenting on it, doesn't really bring out meaningful discussions. The same arguments are made by both sides over and over again. I flagged it because, to me, the discussion is better suited somewhere else.
PLEASE explain "So you can extend x/x holomorphically to full C" to someone with only a BSc in math/cs; something about this thread is giving me an existential crisis right now.
- function extension means defining a function where it was previously undefined
- an <Adj> function extension is an extension that keeps (or gives) the Adj property
- an extended function is usually treated as the original if the extension is good enough. Real analysis starts by defining the real numbers and extending familiar functions onto them
- in this particular case we do not need C - even the continuous extension on R works and agrees with x/x = 1 at 0
- a holomorphic (analytic) extension makes the function infinitely differentiable at every point of C
- because of the nature of discontinuity you can’t extend the simple arccosh in any reasonable way on C without introducing multivalued or path-dependent functions
- this continuity makes x/x=1 a reasonable simplification for CAS imo but not for complex functions as in the OP
- many things with point singularities in R have more structure in C, but x/x is not one of them. Even 1/x is of a different nature.
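To make the first few bullets concrete, here's a tiny numeric sketch (my own toy example in Python, not from the thread): away from 0, x/x is identically 1, so assigning the value 1 at 0 is the unique continuous extension.

```python
def f(x):
    """x/x: defined everywhere except x = 0."""
    return x / x

def f_ext(x):
    """Continuous extension: agrees with f away from 0, fills the gap with 1."""
    return 1.0 if x == 0 else x / x

# Approaching 0 from either side, x/x is identically 1 ...
for x in (1e-1, 1e-8, -1e-8, -1e-300):
    assert f(x) == 1.0

# ... so f_ext(0) = 1 is the only value that keeps the function continuous.
print(f_ext(0.0))  # 1.0
```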
The “you do not divide by zero” rule that forces you to carry x != 0 around is more of a high-school construct than a real thing. Physicists ignore even more important stuff, and in the end their formulas work “just fine”.
Thank you, but, now I have 10 further “explain it to me” questions. (I never did analysis so this stuff is entirely over my head. I had one semester of algebraic structures. It was the hardest class I ever had in my life.)
Speed of code-writing was never the issue at Amazon or AWS. It was always wrong-headed strategic directions, out-to-lunch PMs, a dogshit testing environment, stakeholder soup, high turnover, bureaucracy, a pantheon of legacy systems, insane operational burdens, garbage tooling, and last but not least -- designing for inter-system failure modes, which, let's be real, AI has no chance of having context for -- and so on...
Imagine if the #1 problem of your woodworking shop is staff injuries, and the solution that management foists on you is higher RPM lathes.
The 20,000x speedup claim doesn't pass a basic sanity check.
The theoretical improvement of DMMSY over Dijkstra is O(log^{2/3} n) vs O(log n). For n = 1M, that's a ratio of ~2.7x. To get even a 10x improvement from asymptotics alone, you'd need n ≈ 2^{1000}, far beyond any graph that fits in memory (or in the observable universe, for that matter). The ~800ns figure for 1M nodes also seems physically implausible. Likely the benchmark is comparing an optimized hot path against a naive Dijkstra, or measuring something other than what it claims.
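The arithmetic is easy to check (a quick sketch; `speedup` here is just the ratio of the two log factors, ignoring constants and the shared m term):

```python
import math

def speedup(n):
    """Ratio of Dijkstra's log n factor to DMMSY's log^(2/3) n factor."""
    lg = math.log2(n)
    return lg / lg ** (2 / 3)

print(round(speedup(1_000_000), 1))  # 2.7  -- the entire asymptotic gain at n = 1M
print(round(speedup(2 ** 1000), 1))  # 10.0 -- a 10x gain needs log2(n) = 1000
```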
If you look carefully at the graph on the readme page, you'll see it compares Dijkstra's algorithm, a "DMMSY(res)", and a "DMMSY(opt)".
Presumably the claimed contribution is the optimized version, but note that whatever DMMSY(res) is, it should still be O(m log^{2/3} n), or whatever DMMSY's time complexity is supposed to be.
But DMMSY(res)'s run times follow Dijkstra closely in the graph.
The only conclusion is that something is off -- either the author measured the wrong thing, or he was able to optimize the implementation to the extent that the optimizations overpower any asymptotic gains. Or the implementation is wrong.
At any rate, as you mentioned, the difference between `m log n` and `m log^{2/3} n` is insignificant at n=1M (and TBH won't be significant at any practical n). If the results and the implementation are both correct, it just means the author found a way to reduce the constant factor of the implementation.
Note that we search graphs that don't fit in memory all the time, e.g. looking ahead at chess moves. We just calculate the graph on the fly. It's really not that out of line to search down 1000 branching binary decisions (in chess this is only a few moves ahead).
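A toy sketch of that idea (my own example, not chess-specific): the graph never exists in memory; successors are generated on demand, so memory scales with search depth rather than with the exponentially-sized graph.

```python
def successors(state):
    """Implicit graph: two binary decisions per node, computed on the fly."""
    return (state * 2, state * 2 + 1)

def count_leaves(state, depth):
    """Depth-limited DFS over the implicit graph; memory is O(depth)."""
    if depth == 0:
        return 1
    return sum(count_leaves(s, depth - 1) for s in successors(state))

print(count_leaves(1, 20))  # 1048576 leaves (2^20) without ever storing the graph
```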
A possible explanation for the difference is that the Dijkstra impl being tested is doing multiple heap allocations at every step, which doesn't seem like a fair comparison.
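For reference, a textbook binary-heap Dijkstra looks something like this (a Python sketch; the benchmarked repo's actual implementation is unknown to me). Even this clean version allocates a fresh tuple per relaxation, and a careless implementation can add several more allocations on top of that at every step:

```python
import heapq

def dijkstra(adj, src):
    """Textbook Dijkstra with lazy deletion. adj: node -> list of (neighbor, weight)."""
    dist = {src: 0}
    heap = [(0, src)]                    # one tuple allocation per heap entry
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale entry left by lazy deletion
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # fresh allocation on every relaxation
    return dist

adj = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra(adj, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}
```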
Secondly, I believe in the case of the "opt" version, the compiler might be optimising out the entire algorithm because it has no observable side effects.
if they're so bad they're good ... they're actually just good. probably because they capture something increasingly rare: the human and personal touch of an artist who's not straitjacketed by "safe mode" marketing, editorial norms, analytics, blah blah blah
Yes, they're amazingly good given they didn't have copies of the original posters, Internet access to get reference images, or even VCRs at home to play the movies themselves.
The clickbait title is about "Africa" and "bad", but it's specifically about Ghana and awesome.