
> I don’t know whether their policy differs from country to country

Would not be surprised if it did. I remember one rental in the Netherlands where the English text said "free parking" but the Dutch text said "parking available". The problem was Airbnb's translation, and they refunded my parking expenses.


A month is not a well-defined unit. There are four different month lengths: 28, 29, 30, and 31 days. If you do calculations respecting this, there will be no problem.
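A quick sanity check with Python's standard library (my own illustration, not from the comment) confirms that all four lengths occur across a common year and a leap year:

```python
import calendar

# Collect the distinct month lengths over a common year (2023)
# and a leap year (2024).
lengths = {calendar.monthrange(year, month)[1]
           for year in (2023, 2024)
           for month in range(1, 13)}

print(sorted(lengths))  # [28, 29, 30, 31]
```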

Complex numbers and vectors are not a good analogy.


I believe you fail to grok GP's point.

The mapping from "month" units to "days" units is well-defined. It's just some function that happens to be non-constant over the month ordinal.

Heck, if we start down the "precise time definition" rabbit hole, then even "day" lacks a philosophically unassailable definition, cf. leap seconds. Even the unit of "second" ends up pulling in a whole heck of a lot of physics machinery just to nail down some semblance of rigor.

Anyway, despite being such an intuitively simple and practically functional concept, the notion of time and time measurement turns out to be surprisingly subtle and to have a fascinating history. I highly recommend jumping down that rabbit hole. Hehe

Anyway, I'm surprised calc.exe doesn't calculate with and store dates using some kind of epoch time.
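For what it's worth, representing dates as day counts (which is what an epoch-based representation amounts to) makes date differences unambiguous. A minimal Python sketch, my own example:

```python
from datetime import date

# datetime.date stores dates as ordinal day counts internally,
# so subtraction yields an unambiguous number of days with no
# "how long is a month?" question.
delta = (date(2025, 3, 1) - date(2025, 1, 31)).days
print(delta)  # 29
```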


This is a good point. From the article: "Once the students were selected, the researchers then administered the Major Field Test in Computer Science, an exam that was developed by the U.S. Educational Testing Service and is regularly updated. The exam was translated for the students in China and Russia."

It seems reasonable to expect a test written for a specific education system (either implicitly or explicitly) will be biased against others.

On the other hand, it is not that surprising that the US is at the top. It is a pretty big country with a long tradition of high-quality education and a lot of funding going into CS departments. However, I will be surprised if this does not shift towards more dominance by India and China, because both countries are investing a lot of resources in this area.


There is an interesting version of this with the Netherlands and Belgium.

The Netherlands: https://www.google.com/maps/place/Baarle-Nassau,+Holland/@51...

Belgium: https://www.google.com/maps/place/Baarle-Hertog,+Belgien/@51...


The phase 2 study primarily looked at safety. There was no correction for multiple hypothesis testing on the efficacy endpoints. So it seems that the only conclusion that is warranted is that the study shows no adverse health effects and that "larger clinical trials are warranted to establish the efficacy of hMSCs in this multisystem disorder." as they state in the conclusion.

It is interesting if it works, but let's wait for the next phase before assuming it does.


“In the first trial 15 frail patients received a single MSC infusion collected from bone marrow donors aged between 20 and 45 years old. Six months later all patients demonstrated improved fitness outcomes, tumor necrosis factor levels and overall quality of life.

The second trial was a randomized, double blind study with placebo group. Again no adverse effects were reported and physical improvements were noted by the researchers as "remarkable".

"There are always caveats associated with interpreting efficacy in small numbers of subjects, yet it is remarkable that a single treatment seems to have generated improvement in key features of frailty that are sustained for many months," writes David G. Le Couter and colleagues in a guest editorial in The Journals of Gerontology praising the research.”


I am not criticizing the study. I am highlighting that the conclusion the paper arrives at is the correct one: that it warrants large-scale studies.

This work is a prime candidate for being misrepresented as showing that this stem cell treatment is effective for age related health issues.

There are 30 participants in the phase 2 trial: two treatment groups with different doses (100M and 200M) and one placebo group, each with 10 participants.

None of the treatment groups showed adverse effects.

There is a difference between asking "Are there any adverse effects?" and "Are there positive effects for parameters 1 to n?" If you ask the second kind of question and do not correct for multiple hypothesis testing, you will make many errors.
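A back-of-the-envelope illustration (my numbers, not the paper's): with independent tests at a 5% significance level, the chance of at least one spurious "positive effect" grows quickly with the number of endpoints tested.

```python
alpha = 0.05  # per-test significance level

# Probability of at least one false positive among n independent
# true-null endpoints, with no multiple-testing correction.
for n in (1, 10, 20):
    fwer = 1 - (1 - alpha) ** n
    print(n, round(fwer, 2))  # 1 -> 0.05, 10 -> 0.4, 20 -> 0.64
```

So with 20 uncorrected endpoints, seeing "improvement" somewhere is more likely than not even if the treatment does nothing.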

The small-dose treatment group (100M) showed improvement in many parameters vs placebo, whereas the other (200M) showed improvement in fewer parameters vs placebo. Since no corrections were made for multiple testing, this only tells us that there were no statistically significant adverse effects.

As I noted initially, I think it is interesting. Once we have seen the results of a couple of large studies, we can talk about the effects of this treatment.


Thank you for your reply, in particular for the specific issue with the results. I quoted the article to show that the researchers seem to believe that the trial results are more promising than a formal analysis would suggest. I agree that further studies are required.


Basically, we can represent any signal as an infinite sum of sinusoids. If you know about the Taylor expansion of a function, then you know that the first-order term is the most important, then the second, and so on. The same principle applies to the sinusoids: if we remove the sinusoids with very high frequency, we remove the terms with the least information.
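This can be made concrete with a short NumPy sketch (my own toy example, not from the article): keep only the low-frequency Fourier coefficients, and the slow component of the signal survives nearly intact.

```python
import numpy as np

# Toy signal: a slow 3-cycle sinusoid plus a fast 60-cycle one.
n = 256
t = np.arange(n)
low = np.sin(2 * np.pi * 3 * t / n)
signal = low + 0.2 * np.sin(2 * np.pi * 60 * t / n)

# Zero out all Fourier coefficients above bin 10, i.e. remove
# the high-frequency sinusoids, then transform back.
coeffs = np.fft.rfft(signal)
coeffs[10:] = 0
approx = np.fft.irfft(coeffs, n)

# The slow component survives essentially unchanged.
err = np.max(np.abs(approx - low))
print(err < 1e-6)  # True
```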


The first thing I noticed was the ringing, which is an artifact of low-pass filtering so it's a nice opportunity to go into problems with that kind of filtering. Other than that I think it was an ok teaser that gives an idea of how compression is done and what the trade-offs are.


It is from Bellman, Held and Karp [0] and has O*(2^n) complexity for both space and time. It is quite simple and based on the idea that

"Every subpath of a path of minimum distance is itself of minimum distance."

[0] https://en.wikipedia.org/wiki/Held%E2%80%93Karp_algorithm
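A minimal Python sketch of that idea (my own implementation of the textbook recurrence from [0], not tuned for performance):

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic program for TSP: O(2^n * n^2) time.

    dist[i][j] is the distance from city i to city j; the tour
    starts and ends at city 0.
    """
    n = len(dist)
    # dp[(subset, j)] = cheapest path from 0 through all cities in
    # `subset`, ending at j (subset excludes city 0).
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            s = frozenset(subset)
            for j in subset:
                # Best way to reach j: extend an optimal subpath,
                # per the quoted principle above.
                dp[(s, j)] = min(dp[(s - {j}, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# A tiny symmetric instance: the optimal tour 0-1-3-2-0 costs 80.
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(held_karp(d))  # 80
```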


As with a lot of "simple" math, the trick is to actually write it down and calculate it, because our intuition (at least for some of us) is often not the best when it comes to this kind of calculation.

In this case we write down the contingency table. Assuming that the test perfectly detects what we are looking for we find

True positives: 1

False positives: 5% of 1000 = 50

True negatives: 949

False negatives: 0

Chance of disease given a positive result = 1/51 = 1.96%
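In code, the same arithmetic (using the comment's rounded numbers):

```python
# Prevalence 1 in 1000, perfect sensitivity, 5% false positive rate.
true_pos = 1
false_pos = 50  # 5% of the ~1000 healthy people, as rounded above
p = true_pos / (true_pos + false_pos)
print(round(100 * p, 2))  # 1.96
```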


Interestingly, there is a discussion in the comments of this paper claiming the authors got it wrong.

But as you said, under the assumption that the test detects perfectly, the results are correct.


This reminds me of a recent interesting publication. A comment in Science (http://science.sciencemag.org/content/351/6277/1037.2.long) arguing that a previous Science study did its statistical analysis wrong is itself being criticized for getting the statistical analysis wrong.


It really depends on how you define "False Positive Rate". There are at least 2 different ways of looking at this:

- Out of 1000 tests 51 will be positive, 50 of which are incorrect.

- Out of 1000 positive tests, 50 will be wrong, 950 will be correct.

The 2 interpretations give vastly different results.


"False positive" has a precise meaning in statistical analysis. The second thing you're talking about is useful to know, but calling it the false positive rate is just wrong.

"Out of 1000 positive tests, 50 were false positives, 950 were true positives" - valid statement.

"The false positive rate was 50 out of 1000" - abuses a common technical term in a way that sounds valid on the face of it, but which is potentially VERY misleading.

We can't even calculate a valid false positive rate from the above data, since that requires taking the ratio vs. all tests and not just positive tests.


Here is a definition of False Positive from wikipedia: "In medical testing, and more generally in binary classification, a false positive is an error in data reporting in which a test result improperly indicates presence of a condition, such as a disease (the result is positive), when in reality it is not"

I don't see anything here that definitively clarifies which of the 2 scenarios above it can exclusively be applied to.


There are many ways in which you can incorrectly interpret statements on Wikipedia, which is why specialized textbooks and so forth still have use.


That's what a "false positive" is but Wikipedia also has a separate article on "false positive rate", which gives the formula

FP / (FP + TN)

Where FP is number of false positives, and TN is number of true negatives. So it's a third option:

- Out of 1000 actually negative samples, 50 were tested as positive.

So in the case of 1000 samples, 949 correctly testing as negative, 50 incorrectly testing as positive, and 1 correctly testing as positive, the false positive rate is 50 / 999.
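Plugging in the thread's numbers (a quick check of my own, not from the article):

```python
# 1000 samples: 1 true positive, 50 false positives, 949 true negatives.
FP, TN = 50, 949
fpr = FP / (FP + TN)
print(fpr)  # just over 5%, i.e. 50/999
```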


Right; this is the definition of the numerator (the number of false positives). The false positive RATE also has a denominator, which is defined as the total number of tests performed (the second case in the parent poster's question).

Dividing by the number of positive tests instead gives what's called the 'false discovery rate' which is pretty rarely used.


The traditional way (that I'm aware of) of defining the false positive rate is derived from the conditional probability of a positive prediction given that the true underlying state is negative:

    False Pos. Rate = P(predict + | state -)
                    = P(predict + and state -) / P(state -)
                    = P(predict + and state -) / (P(predict + and state -) + P(predict - and state -))
                    ~ #FP/samplesize / (#FP/samplesize + #TN/samplesize)
                    = #FP / (#FP + #TN)
The latter quantity is usually given as the definition of false positive rate. Roughly speaking, it's the ratio of how often you predict positive when the state is negative versus how often the state is negative.


The first way is wrong. To see clearly why: if you apply 1000 tests to a sample in which everyone is ill, there are zero false positives, because a false positive requires the person to be healthy.

So to estimate false positives in a mathematical way, you should apply the test only to healthy people; then the proportion positive/total is an estimate of the false positive rate.


The latter ratio is called precision. There is a handy table at https://en.wikipedia.org/wiki/Precision_and_recall#Probabili...


Are there actually two accepted ways of defining type I error rates? (Genuinely curious, I am not a statistician)


Forcing a nation is what war is about, and it rarely works out very well.

Negotiation is about finding a solution to a problem that leaves all parties better off if they follow it than if they don't. It is not always easy, and sometimes coercion, in the form of sanctions as used within the EU and UN, is applied to make one party realize what is best for them - but this also tends not to work out very well.

Not forcing people to do what you want is often a more successful way of getting what you need.

