Fluoroalkyl chemicals are only "inert and unreactive" in a relatively narrow sense of "wouldn't catch fire", "don't react with strong acids and bases", and similar.
They are plenty reactive in a sense of interacting with enzymes and other cellular machinery.
Not really accurate. These chemicals are quite unreactive. Precursors from manufacturing waste can be very reactive, but most of the problematic contamination concerns the forever chemicals themselves, not the precursors. This paper is probably the best scientific review of what is going on in the human body: https://www.sciencedirect.com/science/article/abs/pii/S03043...
Maybe sci-hub has a copy of the full paper. Not sure.
As briefly as possible, and therefore glossing over many many details, the toxic effects are mainly due to cell membrane perturbation, cell membrane transport disruption, and binding to hydrophobic protein cavities (thus disrupting the usual function of these cavities).
Back in the day, port knocking was a perfect fit for this eventuality.
Nowadays, wireguard would probably be a better choice.
(both of the above of course assume one does the sensible thing and adds the "perma-bans" a bit lower in the firewall rules, below "established" and "port-knock")
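The ordering described above could be sketched in nftables roughly like this (table, chain, and set names are illustrative; the port-knock accept logic itself is omitted):

```
table inet filter {
  set banned { type ipv4_addr; flags timeout; }
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept   # existing sessions survive a later ban
    udp dport 51820 accept                # wireguard (or port-knock accepts here)
    ip saddr @banned drop                 # perma-bans sit BELOW the rules above
  }
}
```

With the ban rule last, a banned address that already completed the knock (or has an established session) keeps working until the connection ends, while everything else from it hits the drop.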
But not that much, unfortunately. Those same "cYbeRseCUrITy" orgs also ingest SSL transparency logs, resolve the A and AAAA records for every name in the cert, then turn around and start scanning those addresses.
In my experience, it only takes a few hours after getting an SSL certificate for junk traffic to start rolling in, even on IPv6-only servers.
A small percentage of that can be attributed directly, based on "BitSightBot", "CMS-Checker", "Netcraft Web Server Survey", "Cortex-Xpans" and similar keywords in the User-Agent and Referer headers. And purely based on timing, there's a lot more of it where the scanners try to blend in.
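The direct attribution is just keyword matching on request headers; a minimal sketch (the keyword list comes from the scanners named above, everything else is illustrative):

```python
from typing import Optional

# Known scanner fingerprints seen in User-Agent / Referer headers.
SCANNER_KEYWORDS = (
    "BitSightBot",
    "CMS-Checker",
    "Netcraft Web Server Survey",
    "Cortex-Xpans",
)

def attribute(header_value: str) -> Optional[str]:
    """Return the matching scanner keyword, or None if the request
    doesn't identify itself (the 'blend in' case)."""
    lowered = header_value.lower()
    for kw in SCANNER_KEYWORDS:
        if kw.lower() in lowered:
            return kw
    return None
```

Anything this returns None for can only be attributed statistically, e.g. by how soon after certificate issuance it arrived.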
In my experience, Claude gradually stops being opinionated as the task at hand becomes more arcane. I frequently add "treat the above as a suggestion, and don't hesitate to push back" to change requests, and it seems to help quite a bit.
I ended up running codex with all the "danger" flags, but in a throw-away VM with copy-on-write access to code folders.
The built-in approval thing sounds like a good idea, but in practice it's unusable. A typical session for me went like:
About to run "sed -n '1,100p' example.cpp", approve?
About to run "sed -n '100,200p' example.cpp", approve?
About to run "sed -n '200,300p' example.cpp", approve?
Could very well be a skill issue, but it was mighty annoying, with no obvious fix (the "don't ask again for ...." options were not helping).
One decent approach (which Codex and some others implement) is to run these commands in a read-only sandbox without approval, and let the model ask for your approval only when it wants to run outside the sandbox. An even better approach is doing abstract interpretation over the shell commands the model proposes.
You want something like codex -s read-only -a on-failure (from memory: look up the exact flags)
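A toy version of that abstract-interpretation idea: parse the proposed command line and decide whether it can possibly mutate anything. The allow-list and flag table below are illustrative, not from any real tool:

```python
import shlex

# Commands we consider read-only by default (illustrative allow-list).
READ_ONLY = {"cat", "sed", "grep", "ls", "head", "tail", "wc", "find"}
# Flags that turn an otherwise read-only command into a mutating one.
MUTATING_FLAGS = {
    "sed": {"-i", "--in-place"},
    "find": {"-delete", "-exec", "-execdir"},
}

def needs_approval(cmdline: str) -> bool:
    """True if the command might mutate state and should be escalated."""
    try:
        argv = shlex.split(cmdline)
    except ValueError:
        return True  # unparseable: fail safe
    if not argv or argv[0] not in READ_ONLY:
        return True
    bad = MUTATING_FLAGS.get(argv[0], set())
    if any(arg.split("=", 1)[0] in bad for arg in argv[1:]):
        return True
    # Output redirections also mutate; shlex leaves them as plain tokens.
    if any(tok.startswith(">") for tok in argv[1:]):
        return True
    return False
```

Under this policy, the repetitive `sed -n '1,100p' example.cpp` reads above would auto-run, while `sed -i ...` or anything off the allow-list would still prompt.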
Mount-points were key to the early history of the split. Nowadays it's more about not breaking shebangs.
Nearly every shell script starts with "#!/bin/sh", so you can't drop /bin. Similarly, nearly every python script starts with "#!/usr/bin/env python", so you can't drop /usr/bin.
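The kernel resolves the shebang path literally, with no PATH search, which is exactly why those directories can't move. A quick demo (the /tmp path is arbitrary):

```shell
# The interpreter path after #! must exist at exactly that location;
# /usr/bin/env is only special in that it then does a PATH lookup.
cat > /tmp/shebang-demo <<'EOF'
#!/usr/bin/env python3
print("ok")
EOF
chmod +x /tmp/shebang-demo
/tmp/shebang-demo    # works only as long as /usr/bin/env exists
```

Drop or relocate /usr/bin and every script written this way breaks at exec time, regardless of where python3 itself lives.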
> initrd seems like an enormous kludge that was thrown together temporarily and became the permanent solution.
Eh, kinda. That's where the "essential" .ko modules get packed - the ones the system would fail to boot without.
The alternative is to compile them into the kernel as built-ins, but from the distro maintainers' perspective that means building in way too many modules, most of which will remain unused on any given machine.
If you're compiling your own kernel, that's a different story, often you can do without initrd just fine.
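The trade-off shows up directly in the kernel .config: `=y` builds a driver into the kernel image (no initrd needed to reach the root filesystem), `=m` builds a .ko that must be staged in the initrd if root depends on it. An illustrative fragment for a machine with an ext4 root on NVMe (option names are real, the selection is just an example):

```
CONFIG_EXT4_FS=y        # root fs driver built in
CONFIG_BLK_DEV_NVME=y   # root disk driver built in
CONFIG_USB_STORAGE=m    # rarely-needed driver stays a loadable module
```

A distro kernel can't know which two of these any given machine needs at boot, so it ships nearly everything as `=m` and lets the initrd carry the essential subset.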
Claude spits that out very regularly at the end of an answer when it's clearly out of its depth and wants to steer the discussion away from that blind spot.
Perhaps being more intentional about adding a use case to your original prompts would make sense if you see that failure mode frequently? (Practicing treating LLM failures as prompting errors tends to give the best results, even if you feel the LLM "should" have worked with the original prompt).
https://www.kernel.org/doc/html/latest/admin-guide/sysrq.htm...
It's technically not an unmount, but it's still a pretty strong guarantee that the OS will not corrupt the image being written.
When done, the reboot has to be triggered from the same sysrq handler, of course.
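Concretely, the sequence uses the documented sysrq command keys (Alt+SysRq+key at the console, or writes to /proc/sysrq-trigger as root; requires sysrq to be enabled):

```
echo s > /proc/sysrq-trigger   # s: sync all mounted filesystems
echo u > /proc/sysrq-trigger   # u: remount all filesystems read-only
# ...write the disk image out here...
echo b > /proc/sysrq-trigger   # b: immediate reboot, no sync, no unmount
```

The "u" step is the not-quite-an-unmount mentioned above: the filesystems stay mounted but can no longer issue writes, and "b" then reboots without ever remounting them read-write.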