Not true. In frequentist statistics, from the perspective of Bayesians, your prior is a point distribution derived empirically. It doesn't have the same confidence / uncertainty intervals, but it does carry an unnecessarily overconfident assumption about the nature of the data-generating process.
Not true. In frequentist statistics, from the perspective of Bayesians and non-Bayesians alike, there are no priors.
---
Dear ChatGPT, are there priors in frequentist statistics? (Please answer with a single sentence.)
No — unlike Bayesian statistics, frequentist statistics do not use priors, as they treat parameters as fixed and rely solely on the likelihood derived from the observed data.
There are always priors; they're just "flat", uniform priors (for maximum likelihood methods). But what "flat" means is determined by the parameterization you pick for your model, which is more or less arbitrary. Bayesians would call this an uninformative prior. And you can most likely account for stronger, more informative priors within frequentist statistics by resorting to so-called "robust" methods.
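To make the "flat is parameterization-dependent" point concrete, here is a small sketch (not from the thread, numbers are illustrative): put a uniform prior on a Bernoulli success probability p and see what density it implies on the log-odds phi = log(p / (1 - p)) via the change-of-variables rule.

```python
import math

def implied_density_on_logodds(phi):
    """Density on phi implied by a flat prior on p (change of variables)."""
    p = 1.0 / (1.0 + math.exp(-phi))     # inverse logit
    # pi_phi(phi) = pi_p(p) * |dp/dphi|, with pi_p(p) = 1 on (0, 1)
    return p * (1.0 - p)                  # |dp/dphi| for the logistic map

# The implied density is peaked at phi = 0 and decays in the tails,
# i.e. "flat in p" is decidedly not flat in log-odds:
print(implied_density_on_logodds(0.0))   # 0.25
print(implied_density_on_logodds(3.0))   # well below 0.25
```

So the same prior belief is "uninformative" in one chart and strongly peaked in another, which is exactly the objection raised below.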
First, there is no such thing as an 'uninformative' prior; it's a misnomer. They can change drastically based on your parameterization (cf. change of variables in integration).
Second, I think the nod to robust methods is what's often called regularization in frequentist statistics. There are cases where regularization and priors lead to the same methodology (cf. L1-regularized fits and exponential priors), but the interpretation of the results is different. Bayesians claim they get stronger results, but that's because they make what are ultimately unjustified assumptions. My point is that if those assumptions were fully justified, they would have to use frequentist methods.
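The L1/prior correspondence mentioned above can be sketched in a few lines (an illustration with made-up data, not from the thread; note that for a real-valued coefficient the "exponential prior" is the two-sided exponential, i.e. Laplace, kernel exp(-lam * |w|)). Up to w-independent constants, the negative log posterior under that prior is exactly the L1-regularized least-squares objective, so both approaches pick out the same point while interpreting it differently.

```python
import math

def neg_log_likelihood(w, xs, ys, sigma=1.0):
    # Gaussian noise model y_i ~ N(x_i * w, sigma^2), constants dropped
    return sum((y - x * w) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

def lasso_objective(w, xs, ys, lam):
    # Frequentist reading: penalized fit
    return neg_log_likelihood(w, xs, ys) + lam * abs(w)

def neg_log_posterior(w, xs, ys, lam):
    # Bayesian reading: MAP under a Laplace prior exp(-lam * |w|)
    log_prior = -lam * abs(w)
    return neg_log_likelihood(w, xs, ys) - log_prior

# Toy data; the two objectives agree at every w, so the minimizer agrees too
xs, ys, lam = [1.0, 2.0, 3.0], [1.1, 1.9, 3.2], 0.5
for w in (-1.0, 0.0, 0.7):
    print(w, lasso_objective(w, xs, ys, lam), neg_log_posterior(w, xs, ys, lam))
```

Same numbers, two stories: a penalty term versus a prior belief, which is the interpretive gap the rest of the thread argues about.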
One standard way to get uninformative priors is to make them invariant under the transformation groups which are relevant given the symmetries in the problem.
It’s not true that “there are always priors”. There are no priors when you calculate the area of a triangle, because priors are not a thing in geometry. Priors are not a thing in frequentist inference either.
You may do a Bayesian calculation that looks similar to a frequentist calculation but it will be conceptually different. The result is not really comparable: a frequentist confidence interval and a Bayesian credible interval are completely different things even if the numerical values of the limits coincide.
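A concrete instance of "same numbers, different meaning" (a sketch with simulated data, not from the thread): for a normal mean with known variance, the 95% interval xbar +/- 1.96 * sigma / sqrt(n) is numerically identical to the flat-prior credible interval, but the frequentist claim is about long-run coverage over repeated samples, which we can check directly.

```python
import math
import random

random.seed(0)
mu_true, sigma, n, z = 5.0, 2.0, 25, 1.959964  # z: 97.5% normal quantile
trials, covered = 2000, 0
for _ in range(trials):
    sample = [random.gauss(mu_true, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half = z * sigma / math.sqrt(n)
    # The interval is random; the parameter is fixed.  "95%" is a
    # statement about how often intervals like this one cover mu_true.
    if xbar - half <= mu_true <= xbar + half:
        covered += 1
print(covered / trials)  # close to 0.95
```

The Bayesian reads the identical endpoints as "95% probability the parameter is in here, given this one dataset and a flat prior", which is a different claim even though the arithmetic coincides.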
Frequentist confidence intervals, as generally interpreted, are not even compatible with the likelihood principle. There's really not much of a proper foundation for that interpretation of the "numerical values".
What does “as generally interpreted” mean? There is one valid way to interpret confidence intervals. The point is that it’s not based on a posterior probability and there is no prior probability there either.
If you want to say that a frequentist analysis, which doesn't include any concept of a prior, can produce a result similar in form to the result of a conceptually completely different Bayesian analysis that uses a flat prior (definitely not "a point distribution derived empirically"), that may be correct. It remains true that there is no prior in the frequentist analysis, because priors are not part of frequentist inference at all.
Priors are not used in construction of frequentist approaches, but that does not mean that the analyses aren't isomorphic in theory.
Point distribution <=> point estimate as a sample from an initially flat distribution. A priori vs. a posteriori perspectives, which are equivalent if we take your description of frequentist statistics into account ;)
It’s not my description of frequentist statistics. It’s the frequentist statisticians’ description. This is from Wasserman’s All of Statistics:
The statistical methods that we have discussed so far are known as frequentist (or classical) methods. The frequentist point of view is based on the following postulates:
F1 […]
F2 Parameters are fixed, unknown constants. Because they are not fluctuating, no useful probability statements can be made about parameters.
Your statement is one of those "not even wrong" pedantic ploys that falls apart at the lightest sneeze in its direction.
Money is the only way to exert pressure on society and narratives. If you think that has no effect on elections then you are about as antisocial and antipatriotic a person as I can imagine.
> Money is the only way to exert pressure on society and narratives
It’s not. Every piece of state and federal legislation I personally wrote language into passed before I was wealthy. Showing up is incredibly hard for a lot of people. Being decent and eloquent when you do is impossible for the rest.
I’ve donated to get power and gotten involved. The latter absolutely smites the former, to the point that donors are almost being taken for a ride outside a few idiot candidates who unfailingly lose.
You were paid by rich companies to write those laws, or else given access by people with more money and influence than you. These things are often done in ways that result in those same people making more money. It is incentive and reward all in one.
Reinforcing the status quo is one of the primary reasons lobbying is deployed.
You can trust that people with money are frugal and only spend when they expect to see a return.
If the region was going to go that way anyway, then the lobbying was wasted spend. So what would you rather have as your truth: that the money was spent to overturn public will, or that it was a dumb error to spend that money in the first place? What does that say about the people who see the status quo as something worth preserving?
"Open source" means the source code is open to the public for reading and copying. Licenses have complicated the idealistic definition to restrict copying, but only within the context of taking credit (i.e., implicit relicensure). The only winning move is not to play the game at all.