Hacker News | rbh42's comments

There's also Jesper Kenn Olsen who ran around the world twice; first eastwards, then southwards/northwards: https://en.wikipedia.org/wiki/Jesper_Olsen_(runner)


If you define polynomials by their roots instead of by their coefficients, then the roots have whatever absolute values you choose!


[deleted]


If it is self-referential, it indeed is. The parent comment (this comment's grandparent) is not noise. As with almost any question involving the term "random", the answer depends on what you mean by that term.

Oftentimes, one can leave that implicit because there is a commonly understood distribution. For example, "pick a number between 1 and n" typically implicitly assumes a discrete uniform distribution, "pick a number between 0 and 1" a continuous uniform distribution.

For polynomials of degree n, whether picking a random polynomial by randomly selecting its coefficients is more natural than doing so by randomly selecting its zeroes depends on what you take as the natural representation of a polynomial.
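To make the contrast concrete, here is a small sketch (assuming numpy is available; the degree and the uniform range are arbitrary choices for illustration). Sampling coefficients gives you no direct control over root magnitudes, while sampling roots gives you exactly the magnitudes you drew:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # degree, an arbitrary choice

# Representation 1: pick random coefficients, then find the roots.
coeffs = rng.standard_normal(n + 1)
roots_from_coeffs = np.roots(coeffs)

# Representation 2: pick random roots (here uniform on [0, 5]),
# then expand them into coefficients with np.poly.
chosen_roots = rng.uniform(0, 5, size=n)
coeffs_from_roots = np.poly(chosen_roots)

print(np.sort(np.abs(roots_from_coeffs)))
print(np.sort(chosen_roots))
```

In the first case the root magnitudes are dictated by the coefficient distribution; in the second they are, by construction, whatever distribution you sampled from.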


From "man perlfunc" under "system":

If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is "/bin/sh -c" on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to "execvp", which is more efficient.

So your example does not invoke the shell.
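Python's subprocess module makes the same distinction, but explicitly rather than by sniffing for metacharacters, which may help illustrate what Perl is doing implicitly (this is an analogy, not Perl's actual implementation):

```python
import subprocess

# A list of arguments is passed directly to execvp-style spawning:
# no shell is involved, so metacharacters are treated literally.
out1 = subprocess.run(["echo", "$HOME", ";", "ls"],
                      capture_output=True, text=True).stdout
print(out1)  # "$HOME ; ls" printed verbatim, no expansion

# A single string with shell=True goes through /bin/sh -c, which is
# what Perl's one-argument system() does when it sees metacharacters.
out2 = subprocess.run("echo hello", shell=True,
                      capture_output=True, text=True).stdout
print(out2)
```

The security implication is the same in both languages: the exec-style form cannot be tricked into running extra commands by hostile input, while the shell form can.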


Passing a shell metacharacter to the system function does indeed trigger the vulnerability then. Thanks, I didn't realize it wasn't calling the shell otherwise.


Tested by time (I'm the OP).


> Tested by time (I'm the OP).

This is so important, and so often overlooked.

The lure of "ohhh shiny!" is prevalent in tech. I love to use bleeding-edge stuff, play with new toys, etc., but that is purely for home and personal use. Never for work.

I want my work life to be utterly boring. Nothing breaking. Everything behaving exactly as expected. I will happily trade a bit of performance and a lack of some shiny features for a system or component that works reliably. The less time I have to spend fighting stuff the more time I can spend tuning, automating or improving what I do have to look after.

XFS performs well. It's reliable, reasonably fast (it trades some speed for durability/reliability), and just works.


I'm the OP, so I can shed a bit of light on that: Dell's support suggested a file-level copy when I asked them what they recommended (but I'm not entirely sure they understood the implications). Also, time was not a big issue.

I did keep a log file with the output from cp, and it clearly identified the filenames for the inodes with bad blocks. Actually, I'm not sure how dd would handle bad blocks.


Thank you for the clarification.

I was about to bet on a "read, fail, skip, repeat" cycle for dd's behaviour, but looking into coreutils' source code at https://github.com/goj/coreutils/blob/master/src/dd.c , if I'm not mistaken, dd does not try to be intelligent and just uses a zeroed-out buffer, so it would return zeroes for unreadable blocks.
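One caveat worth noting: plain dd aborts on a read error; the zero-filling behaviour requires conv=noerror,sync, where noerror continues past failed reads and sync pads each short input block to the block size with NULs. A real read error needs a failing device to demonstrate, but the padding half of the behaviour can be shown with an ordinary short file (filenames here are arbitrary):

```shell
# A 100-byte input read with bs=512 produces one short read;
# conv=sync pads that buffer to 512 bytes with NULs, which is
# the same zero-filled buffer that would stand in for an
# unreadable block under conv=noerror.
head -c 100 /dev/zero > input.bin
dd if=input.bin of=output.bin bs=512 conv=noerror,sync 2>/dev/null
wc -c < output.bin   # output is padded to a full 512-byte block
```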


Glad someone noticed it (I'm the OP). Reading the drives systematically is called "Patrol Read" and is often enabled by default, but you can tweak the parameters.


That's probably what I'll do if I run into a problem like that another time (I'm the OP).


Not so. Both the sending and the receiving tar processes will need a data structure keeping track of which inodes they've already processed. They can skip inodes with a link count of 1, but if all inodes have multiple links (as in this case), the overhead will be twice that of a single cp process.
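A minimal sketch of the bookkeeping involved, in Python (the function name and structure are my own illustration, not tar's or cp's actual code): each tool must remember every multiply-linked inode it has seen, keyed by device and inode number, so later paths to the same inode become links rather than fresh copies.

```python
import os

def copy_plan(paths):
    """Decide, per path, whether to copy file data or just record
    a hard link to an inode encountered earlier in the walk."""
    seen = {}   # (st_dev, st_ino) -> first path encountered
    plan = []
    for path in paths:
        st = os.lstat(path)
        if st.st_nlink > 1:                 # one of several links
            key = (st.st_dev, st.st_ino)
            if key in seen:
                plan.append(("link", path, seen[key]))
                continue
            seen[key] = path                # first sighting: copy it
        plan.append(("copy", path, None))
    return plan
```

When every inode has multiple links, as in this case, the `seen` table grows to one entry per inode, and a tar-to-tar pipe maintains that table twice (once in each process), versus once for a single cp.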

