Hacker News | adonovan's comments

It's not an illegal monopoly to be the sole entity capable of a technique. The problems come from manipulating the market to prevent competition.

No one implied it was illegal, only that it was ballsy (or perhaps foolish) of him to openly declare it. It's not a great PR move, in that the word tends to have a negative connotation for consumers.

The subject of this story is a single proton that you would definitely feel if it hit you: https://www.fourmilab.ch/documents/OhMyGodParticle/

I don't think that is the case. The kinetic energy of these super-energetic particles is often compared to a tennis ball. But that energy isn't released all at once: even if one did interact with you, the interaction creates a particle shower that carries most of the energy away. I don't think we could feel one of our atoms getting violently ripped apart.

There’s Anatoli Bugorski [1] who accidentally put his head into the path of a high energy proton beam.

The injury was nothing like being hit by a tennis ball.

> He reportedly saw a flash "brighter than a thousand suns" but did not feel any pain.

He’s still alive today, age 83.

[1] https://en.wikipedia.org/wiki/Anatoli_Bugorski


Oh my god, I never read this; that's so cool.

Also weird phrasing: "a staggering 1.8 degrees" begs the reader to think of it as a large number (which in fact it is, as you point out), yet their intent, ironically, seems to be to diminish it.

I felt like that’s more like a rhetorical device for shorthand-saying “one might expect a ten or twenty degree difference based on modern marketing”, and I’m annoyed the article didn’t say that because it’s a pretty good point delivered rather poorly.

A 20° swing in body temp would render you dead…

Yep! That's what makes marketing against the imaginary foil of death so impactful: the alternative, "if not for our technical fabric, you'd have to fluctuate between zero and six layers of fabric based on exertion, humidity, inclement weather, and personal thermal comfort", is a lot less manipulative than "wear our fabric or die before the peak". Sure, it's true that you have to wear something or die (unless you're a statistical anomaly, anyways), but marketing based on glove weight doesn't cause as many sales as marketing based on frostbite.

Yeah, for "real" mountaineering, weight's a concern, but not as much as "I don't want my limbs to freeze off".

For my use cases (backpacking/bikepacking), it's all about the weight. But I tend not to camp when it drops below 40°F (well, I do, but I have a travel trailer for that).


One might expect to be dead if following modern marketing guidelines.

It would be hilarious if they did find a 10 degree difference. “Old gear keeps you chilly but fine. Modern gear straight up kills you!”

Because a machine wrote it, not a human.

I’m sometimes useless at recognizing AI writing, so if this is that, email the mods and ask them to flag it off the site. (Explaining why you view it as AI writing will save a round or two of reply.) I’m all for what the twins are doing, but AI writing should be purged here.

Thanks, that's a bug. We should never inline a function that directly calls recover. I've filed https://go.dev/issue/78193.

I’ve been an overt AI hater but have found very recently that, though I still hate a great many things about AI, it has become useful for coding.

In 10m Gemini correctly diagnosed and then fixed a bug in a fairly subtle body of code that I was expecting to have to spend a couple hours working on.

I spent much of the past week using Gemini to build a prototype of a clean new (green-field) system involving RPCs, static analysis, and sandboxing. I give it very specific instructions, usually after rounds of critical design discussion, and it generates structurally correct code that passes essentially valid tests. Error handling is a notable weakness. I review the code by hand after each step and often make changes, and I expect to go over the whole thing very carefully at the end, but it has saved me many hours this week.

Perhaps more valuable than the code has been the critical design conversation, in which it is mostly fluent at the level of an experienced engineer and has been able to explain, defend, and justify design choices quite coherently. This saved time I would otherwise have spent debating with coworkers. But it’s not always right and it is easily led astray (and will lead astray), so you need a clear idea in mind, a firm hand, and good judgment.


> This saved time I would otherwise have spent debating with coworkers. But it’s not always right and it is easily led astray (and will lead astray), so you need a clear idea in mind, a firm hand, and good judgment.

The “will lead astray” part is concerning. If you already have a clear idea in mind, you probably don’t need to have the debate with coworkers.

If you are having a debate with coworkers or AI, you would rather that they be knowledgeable enough to not lead you astray.

In cases where I don’t have a clear understanding of some area, yet I don’t have someone knowledgeable to talk to, I have found myself having to discuss the same point with multiple LLMs from multiple angles to tease out the probable right way.

In summary: obviate experts, receive correct guidance, save time -- pick any two.


> The “will lead astray” part is concerning. If you already have a clear idea in mind, you probably don’t need to have the debate with coworkers.

Yeah, I certainly wouldn't trust it to run any distance unattended, and I started this project with strong ideas about the parameters of the design, so I know what I want and what won't fly. But as you say, it can help tease out unexpected pros and cons of certain choices along the way.

> In summary: obviate experts, receive correct guidance, save time -- pick any two.

It's simpler than that: it can't do the first, nor reliably the second, but it has saved me time.


One annoying trope I keep seeing in Gemini output is the punchy invented concept name in a tripartite list:

- “The Pledge”:…

- “The Turn”:…

- “The Prestige”:…

(For this particular example I used real terms from the stage magic world, at least according to Christopher Nolan’s film, as it captures the same meaningless-to-the-uninitiated quality.)


Likewise! I often marvel at the patience of readers of earlier times. Of course, they had more time and fewer distractions, and I suspect that there was a dynamic at work in which both the writer and reader derived a certain satisfaction from long meandering sentences, the writer proving their skill, and the reader proving (to themselves) their stamina.

Nowadays we tend to write in a plainer style demanding a smaller “parser stack”. Some style manuals have excellent examples of sentences of equal length but very different “stack depth” and thus ease of comprehension.


> "what is the role of humans in a scenario where work is no longer necessary?"

People have been fantasizing about this scenario throughout the industrial era--read William Morris' News from Nowhere (1890) for example--but it has failed to come to pass so many times, and the reasons are pretty obvious. The benefits of technology are spread unequally, and increasingly so over time, so only a wealthy few get the option of a post-labor existence. Also, our demands for the products of labor change as labor productivity increases; we prefer (or have been persuaded to act as if we prefer) material riches over lives with less stuff and more time.

We still haven't seen that AI actually replaces labor, as opposed to amplifying it, like a power saw or CNC mill used by a carpenter, so all these discussions about the end of labor seem like unwitting sales pitches for AI.

> “what would be the role of humans in an AI-first society”

The real question is why would anyone want, or want to help build, such an obscenity.


>The real question is why would anyone want, or want to help build, such an obscenity.

Power Saws and CNC mills have no autonomy. They have to be guided every inch or instruction by hand. Autonomous AI agents remove the hand. So if we don't define the role of humans in the process of creation, we get AI building things we didn't ask for or need.

AI is coming regardless. There are advantages that we all accept it can do. But the machine is a 'slave' only if we refuse to be 'masters'.

There is a term called social ecology.

It is based on the conviction that nearly all of our present ecological problems originate in deep-seated social problems. In effect, the way human beings deal with each other as social beings is crucial to addressing ecological crisis.

The point social ecology emphasizes is not that moral and spiritual persuasion and renewal are meaningless or unnecessary; they are necessary and can be educational. But modern capitalism is 'structurally amoral' and hence impervious to moral appeals.

Power will always belong to the elite and commanding strata if it is not institutionalized in face-to-face democracies, among people who are fully empowered as social beings to make decisions in new communal assemblies. Power that does not belong to the people invariably belongs to the state and the exploitative interests it represents.

What is obscene is measuring outputs by 19th century standards. As long as we believe that "being born doesn't entitle you to food", we will stay on the hedonic treadmill until the planet or our psyches break.


It's true that in the very early days Google used cheap computers without ECC memory, and this explains the desire for checksums in older storage formats such as RecordIO and SSTable, but our production machines have used ECC RAM for a long time now.


Very interesting. The Go toolchain has an (off by default) telemetry system. For Go 1.23, I added the runtime/debug.SetCrashOutput function and used it to gather field reports containing stack traces for crashes in any running goroutine. Since we enabled it over a year ago in gopls, our LSP server, we have discovered hundreds of bugs.

Even with only about 1 in 1000 users enabling telemetry, it has been an invaluable source of information about crashes. In most cases it is easy to reconstruct a test case that reproduces the problem, and the bug is fixed within an hour. We have fixed dozens of bugs this way. When the cause is not obvious, we "refine" the crash by adding if-statements and assertions so that after the next release we gain one additional bit of information from the stack trace about the state of execution.

However there was always a stubborn tail of field reports that couldn't be explained: corrupt stack pointers, corrupt g registers (the thread-local pointer to the current goroutine), or panics dereferencing a pointer that had just passed a nil check. All of these point to memory corruption.

In theory anything is possible if you abuse unsafe or have a data race, but I audited every use of unsafe in the executable and am convinced they are safe. Proving the absence of data races is harder, but nonetheless races usually exhibit some kind of locality in what variable gets clobbered, and that wasn't the case here.

In some cases we have even seen crashes in non-memory instructions (e.g. MOV ZR, R1), which implicates misexecution: a fault in the CPU (or a bug in the telemetry bookkeeping, I suppose).

As a programmer I've been burned too many times by prematurely blaming the compiler or runtime for mistakes in one's own code, so it took a long time to gain the confidence to suspect the foundations in this case. But I recently did some napkin math (see https://github.com/golang/go/issues/71425#issuecomment-39685...) and came to the conclusion that the surprising number of inexplicable field reports--about 10/week among our users--is well within the realm of faulty hardware, especially since our users are overwhelmingly using laptops, which don't have parity memory.

I would love to get definitive confirmation though. I wonder what test the Firefox team runs on memory in their crash reporting software.


> In some cases we have even seen crashes in non-memory instructions (e.g. MOV ZR, R1), which implicates misexecution: a fault in the CPU (or a bug in the telemetry bookkeeping, I suppose).

That's the thing. Bit flips impact everything memory-resident, and that includes program code. You have no way of telling what instruction was actually read when executing the line your instrumentation says corresponds to the MOV; or it may have been a legitimate memory operation, but the instrumentation is reporting the wrong offset. There are some ways around it, but generically, if a system runs a program bigger than the processor cache and may have bit flips, its output is useless, including whatever telemetry you use (because the telemetry itself is code executed from RAM and will touch RAM).


Good point: I-cache is memory too. (Indeed it is SRAM, so its bits might be even more fragile than DRAM!)


Why would a 6T cell (SRAM) be more fragile than a 1T1C (DRAM) cell?


Because even SRAM can lose its electrons; we're working with cells a few atoms thick. The loss is not necessarily in L1 (where data is replaced frequently), but in L3, which now has capacity comparable to PC main memory in the early 2000s (and can have data "stuck" in the same physical area for minutes).


You might consider adding the CPU temperature to the report, if there's a reasonable way to get it (haven't tried inside a VM). Then you could at least filter out extremely hot hardware.


CPU model / stepping / microcode versions are probably at least as useful as temperature. I'd also try to get things like the actual DRAM timing + voltage vs. what the XMP extensions (or similar) advertise the manufacturer tested the memory at.

I have at least one motherboard that just re-auto-overclocks itself into a flaky configuration if boot fails a few times in a row (which can happen due to loose power cords, or whatever).


Interesting reading - I've occasionally seen some odd crashes in an iOS app that I'm partly responsible for. It's running some ancient version of New Relic that doesn't give stack traces but it does give line numbers and it's always on something that should never fail (decoding JSON that successfully decoded thousands of times per day).

I never dug too deeply but the app is still running on some out of support iPads so maybe it's random bit flips.


Ive been trying to push my boss towards more analytics/telemetry in production that focus on crashes, thanks for sharing.


> Even with only about 1 in 1000 users enabling telemetry

How do you know the number/proportion of users who run without telemetry enabled, since by definition you're not collecting their data?

(Not imputing any malice, genuinely curious.)


Good question. We don't know the true figure, but we extrapolate the denominator from estimates of the total number of Go users and the fraction of Go users that run gopls.


>All of these point to memory corruption.

Actually "dereferencing a pointer that had just passed a nil check" could be from a flow control fault where the branch fails to be taken correctly.

