
The article is a bit misleading. The kind of wearables he means are glucose-monitoring devices. This link is a bit better:

https://www.reuters.com/business/healthcare-pharmaceuticals/...

Granted, with such a device you still have to do something with the data for it to make you healthier, but I think this makes a lot more sense. Many people have type 2 diabetes, or are close to it, and have no idea. A continuous glucose monitor could very quickly be a real eye-opener, in a way a fitness tracker is not. I don't need no Fitbit to tell me I skipped my 5k again, but I do need a sensor to tell me that my blood glucose is too high.


A glucose monitor is not the best way to tell if you have type 2 diabetes. Blood glucose varies throughout the day, by a lot.

You judge diabetes by your A1c, which reflects your average blood glucose over the preceding months. That's part of a standard bloodwork panel, which you should have every year -- plenty of notice.


That isn't necessarily the wearable RFKJ means. The article also points out that nobody has measured health benefits from wearables, and that the surgeon general nominee has a company that makes wearables, a giant conflict of interest. None of this is surprising: RFKJ testified under oath in his confirmation hearings that he would keep the vaccine advisory committee in place, but then fired its members summarily. He's a known liar.


The article nowhere mentions individual compensation to descendants by Harvard. I suspect the bankruptcy comment reflects an institutional fear of financial liability tied to large-scale identification of descendants.


How is the institution 'liable' for the slaves owned by its founders? I'm confused both that people expect the University to pay significant reparations to the many descendants of slaves, and that it would be afraid of that expectation. Are preferential admission or special scholarships not enough?


Mathematics isn't monolithic—it depends heavily on the axioms you choose. Change the axioms, and the theorems change. ZFC, ZF¬C, intuitionistic logic, non-Euclidean geometry—each yields a different “math,” all internally consistent. So it’s not right to say math “just is” in some absolute sense. We’re not just discovering math; we’re exploring the consequences of chosen assumptions.

For instance:

Under Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC), every set can be well-ordered, but we also get the Banach–Tarski paradox.

Under ZF without Choice, much of analysis as we know it no longer holds (for instance, it is consistent that the reals are a countable union of countable sets).

In constructive mathematics, which avoids the law of the excluded middle, many classical theorems lose their usual formulations or proofs.

Non-Euclidean geometries arise from altering the parallel postulate. Within their own axioms, they are as internally consistent and "natural" as Euclidean geometry. Do non-intersecting lines exist in this universe? I've no idea.
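
To make the last point concrete, here is a quick sketch (my illustration, not part of the original claim): on a sphere the parallel postulate fails, and Girard's theorem says a triangle's angle sum is pi plus its area divided by R^2, so triangles no longer have 180 degrees.

  import math

  # Girard's theorem on a sphere of radius R: angle_sum = pi + area / R**2.
  # The octant triangle (two meridians 90 degrees apart, plus the equator)
  # covers 1/8 of the sphere's surface and has three right angles.
  R = 1.0
  octant_area = (4 * math.pi * R**2) / 8
  angle_sum = math.pi + octant_area / R**2
  print(math.degrees(angle_sum))  # 270.0, not the Euclidean 180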


This just steps one meta level higher. Yes, you can make your object of analysis the axioms and what they lead to, and proof theory, etc. But now you've just stepped back one level. What are the axioms that allow you to derive that "ZFC leads to the Banach–Tarski paradox"? Is this claim True and discovered, or is it in itself also simply dependent on some axioms and assumptions?

This is part of a broader meta-ization of culture. Philosophers have also become much more reluctant to make truth claims in the last century compared to centuries past. Everything they say is just "To a Hegelian, it is such and such. For Descartes, x, y, z." If you study theology, they don't teach with conviction that "Statement A" is true. They teach that Presbyterians believe X, while the Anglicans think Y, and the Catholics think it's an irrelevant distinction. Of course, when push comes to shove, you realize that they do have truth claims, and moral claims that are non-negotiable, but they are shy to come forward with them and explicitly only talk in this conditional, if-then way.

In fact, many would argue that math is not too far from theology. People who were obsessed with the limits of mathematics, like Gödel, were also highly interested in theology.

I guess physics is the closest to still making actual truth claims about reality, though it's also retreating to "we're just making useful mathematical models, we aren't saying that reality is this way or that way".


No, you are wrong. 90% of philosophy is bullshit that assigns a fake truth status depending on WHO said what. Meanwhile, math and science always put FACTS over personas.


About the axioms: not really. Axiom sets are mostly there as a shorthand to quickly describe the context we're talking about, but ultimately you could do away with them. E.g., if we let A be the set of axioms of some theory (set theory, number theory, etc.) and you have a mathematical statement of the form X => Y within that theory, you could just as well consider the statement "A ^ X => Y" in the pure formal system with no axioms at all. Then it is a purely logical question (essentially, whether X => Y is a theorem within theory A), and it is objectively true in a theory-independent way, unlike the bare "X => Y", which depends on the theory.
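
For reference, the logical fact being used here (my gloss, not the commenter's): a proof only ever invokes finitely many axioms, and the deduction theorem then folds those into the statement itself:

  T ⊢ φ   iff   ⊢ (A_1 ∧ ... ∧ A_k) → φ   for some finite {A_1, ..., A_k} ⊆ T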


The overarching point still stands: our formal systems are just models built to describe the patterns we observe. In that sense, math “just is.” The fact that some models aren’t compatible with others doesn’t undermine that—it just shows they’re incomplete or context-dependent views into a larger structure.


> all internally consistent

Well, we hope.


The M functions are MacMahon's partition functions (see the paper [1]). They were not known to relate to the sum of divisors. The M_a function counts partitions with exactly a distinct part sizes, weighting each partition by the product of its multiplicities.

[1]: https://arxiv.org/abs/2405.06451


M_1 is obviously just sigma. That's straight from the definition; you can't tell me that wasn't known.

As for the higher ones, I'm having trouble finding a proper citation saying that this was known earlier, but this math.stackexchange answer asserts that MacMahon himself worked some of this out: https://math.stackexchange.com/a/4922496/2884 No proper citation though, annoying.

When you say "this wasn't known", on what basis is that? It's very hard to be sure that something wasn't known unless you're an expert on that particular thing!


Sorry, but M_1 is simply the sum of divisors, and I don't think that was ever a mystery. Specializing the paper's notation for M_a to a=1, and writing it as Python with finite bounds for clarity...

  def M_1(n):
      return sum(
          m
          for m in range(1, n + 1)
          for s in range(1, n + 1)
          if m * s == n
      )
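
And a brute-force sketch of the general case, assuming the paper's definition that M_a(n) sums the products m_1*...*m_a over all ways of writing n = m_1*s_1 + ... + m_a*s_a with part sizes 0 < s_1 < ... < s_a and multiplicities m_i >= 1 (the function name and recursion here are mine, for illustration only):

  def M(a, n):
      # Sum of m_1*...*m_a over n = m_1*s_1 + ... + m_a*s_a,
      # with 0 < s_1 < ... < s_a and each m_i >= 1.
      def rec(parts_left, remaining, s_min):
          if parts_left == 0:
              return 1 if remaining == 0 else 0
          total = 0
          for s in range(s_min, remaining + 1):
              for m in range(1, remaining // s + 1):
                  total += m * rec(parts_left - 1, remaining - m * s, s + 1)
          return total
      return rec(a, n, 1)

  # Sanity check: M_1 should be the sum-of-divisors function.
  print([M(1, n) for n in range(1, 7)])  # [1, 3, 4, 7, 6, 12]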


That's assuming the administration will allow you to administer the exams onsite, which is increasingly not the case. Online students bring in more money.


> The claim of inevitability is crucial to technology hype cycles, from the railroad to television to AI.

Well. You know. We still have plenty of railroads, and television has had a pretty good run too. So if those are the models to compare AI to, then I have bad news about how much of a 'hype cycle' AI is going to be.


The article itself lists as successful, even breakthrough, applications of AI: protein folding, weather forecasting, and drug discovery.


That is some cool jazz


No kidding. First AIs were passing the bar exam—now they’re writing it.

I don't really see the scandal though. I pretty much _only_ use AI for tasks that I could do myself.


Some of the Gemini stuff is almost at airport-novel level. I'm surprised. Everything is going so fast.

The odd thing is that with technical stuff, I'm continually rewriting the LLM's output to be clearer and less verbose, while the fiction is almost the opposite: not literary enough.

