If it is self-referential, it indeed is. The parent comment (this comment's grandparent) is not noise. As with almost any question involving the term "random", the answer depends on what you mean by that term.
Oftentimes, one can leave that implicit because there is a commonly understood distribution. For example, "pick a number between 1 and n" typically implicitly assumes a discrete uniform distribution, "pick a number between 0 and 1" a continuous uniform distribution.
For polynomials of degree n, whether picking a random polynomial by randomly selecting its coefficients is more obvious than doing so by randomly selecting its zeroes depends on what you pick as the natural representation of a polynomial.
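For concreteness, here is a small sketch of the two sampling schemes in Python (stdlib only; the function names and the choice of [-1, 1] as the sampling interval are mine, purely for illustration):

```python
import random

def random_coefficient_poly(n):
    """Degree-n polynomial with coefficients drawn uniformly from [-1, 1].

    coeffs[k] is the coefficient of x**k.  The leading coefficient is
    resampled until nonzero so the degree is exactly n.
    """
    coeffs = [random.uniform(-1.0, 1.0) for _ in range(n + 1)]
    while coeffs[n] == 0.0:  # probability-zero event, but be safe
        coeffs[n] = random.uniform(-1.0, 1.0)
    return coeffs

def poly_from_zeros(zeros):
    """Expand (x - r_1)(x - r_2)...(x - r_n) into coefficient form."""
    coeffs = [1.0]
    for r in zeros:
        coeffs = [0.0] + coeffs                # multiply old poly by x ...
        for k in range(len(coeffs) - 1):
            coeffs[k] -= r * coeffs[k + 1]     # ... then subtract r * (old poly)
    return coeffs

def random_zero_poly(n):
    """Monic degree-n polynomial whose zeros are uniform on [-1, 1]."""
    return poly_from_zeros([random.uniform(-1.0, 1.0) for _ in range(n)])
```

Note that the two schemes induce different distributions even over the same set of polynomials: the zero-based one only ever produces monic polynomials with all-real zeros, while the coefficient-based one almost surely produces some complex zeros.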
If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is "/bin/sh -c" on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to "execvp", which is more efficient.
Passing a shell metacharacter to the system function does indeed trigger the vulnerability. Thanks; I didn't realize it wasn't calling the shell otherwise.
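The same split exists in Python's subprocess module, which makes it easy to see the difference (Unix assumed, since shell=True means /bin/sh -c there):

```python
import subprocess

# A list argument goes straight to an execvp-style invocation: no shell
# runs, so metacharacters are inert data inside a single argument.
safe = subprocess.run(["echo", "hello; echo injected"],
                      capture_output=True, text=True)

# A string with shell=True is handed to /bin/sh -c, which parses the
# ';' as a command separator -- the injection risk discussed above.
risky = subprocess.run("echo hello; echo injected",
                       shell=True, capture_output=True, text=True)
```

Here safe.stdout is the single line "hello; echo injected", while risky.stdout is two lines, because the shell actually ran the second command.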
The lure of "ohhh shiny!" is prevalent in tech. I love to use bleeding-edge stuff, play with new toys, etc., but that is purely for home and personal projects. Never for work.
I want my work life to be utterly boring. Nothing breaking. Everything behaving exactly as expected.
I will happily trade a bit of performance and a lack of some shiny features for a system or component that works reliably. The less time I have to spend fighting stuff the more time I can spend tuning, automating or improving what I do have to look after.
XFS performs well. It's reliable, reasonably fast (it trades some speed for durability/reliability), and it just works.
I'm the OP, so I can shed some light on that: Dell's support suggested a file-level copy when I asked them what they recommended (though I'm not entirely sure they understood the implications). Also, time was not a big issue.
I did keep a log file with the output from cp, and it clearly identified the filenames for the inodes with bad blocks. Actually, I'm not sure how dd would handle bad blocks.
I was about to bet on a "read, fail, retry, skip" cycle for dd's behaviour, but looking into the coreutils source code at https://github.com/goj/coreutils/blob/master/src/dd.c , if I'm not mistaken, dd does not try to be intelligent and just uses a zeroed-out buffer, so it would return zeros for unreadable blocks.
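For what it's worth, dd has to be told to keep going at all (conv=noerror), and conv=noerror,sync is what pads failed reads out with zeros. A rough sketch of that zero-fill strategy (this is my own illustrative loop, with a hypothetical read_block callable, not dd's actual code):

```python
def copy_with_zero_fill(read_block, nblocks, blocksize=512):
    """Copy nblocks blocks via read_block(i), substituting a zeroed
    buffer for any block that fails to read -- roughly the behaviour
    of `dd conv=noerror,sync`.

    read_block is a hypothetical callable that returns `blocksize`
    bytes or raises IOError for an unreadable block.
    """
    out = bytearray()
    bad = []                        # indices of unreadable blocks
    for i in range(nblocks):
        try:
            out += read_block(i)
        except IOError:
            bad.append(i)           # record it, like dd's diagnostics
            out += bytes(blocksize) # zero-filled placeholder
    return bytes(out), bad
```

The upside over cp is that the output stays the right size and block-aligned; the downside, as noted, is that bad blocks silently become zeros unless you keep the log of failed indices.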
Glad someone noticed it (I'm the OP). Reading the drives systematically is called "Patrol Read" and is often enabled by default, but you can tweak the parameters.
Not so. Both the sending and the receiving tar processes will need a data structure keeping track of which inodes they've already processed. They can skip inodes with a link count of 1, but if all inodes have multiple links (as in this case), the overhead will be twice that of a single cp process.
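That bookkeeping looks roughly like this (a minimal sketch of the hard-link tracking both cp and tar perform, keyed on device and inode number; the function name is mine):

```python
import os

def files_to_copy(paths):
    """Plan a copy of `paths` that recreates hard links instead of
    duplicating data: each multiply-linked inode is copied once and
    every later path pointing at it becomes a link to the first.
    """
    seen = {}   # (st_dev, st_ino) -> first path seen for that inode
    plan = []   # (path, link_target) -- link_target is None for a real copy
    for p in paths:
        st = os.lstat(p)
        if st.st_nlink > 1:                    # only multiply-linked inodes
            key = (st.st_dev, st.st_ino)       # need to be remembered
            if key in seen:
                plan.append((p, seen[key]))    # recreate as a hard link
                continue
            seen[key] = p
        plan.append((p, None))                 # copy the data
    return plan
```

With tar piped over a network, the sender needs this table to decide which entries carry data, and the receiver needs its own to recreate the links, hence the doubled overhead versus a single cp.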