
And that’s how you know somebody has only second-hand experience of science - having never reflected upon, or examined, the (very human) limits of the knowledge they have been given by others.

What scientific procedure would you use to establish whether one working scientific model “matches” reality, while another working model doesn’t? Nobody has direct access to reality outside of their own theoretical paradigm of understanding.

It isn’t just me saying such “crazy” things…

https://en.wikipedia.org/wiki/Model-dependent_realism



>What scientific procedure would you use to establish whether one working scientific model “matches” reality; while another working model doesn’t?

See what model1 gives as a prediction for input X. See what model2 gives as a prediction for input X.

Apply input X in the world and compare against what actually happened.

The models that more closely predict the observations are more likely to reflect reality (aka being closer to the truth).
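As a toy sketch of that procedure (the data, masses, and error measure here are my own illustration, not anything from the thread): generate predictions from each model, then score them against the same observations.

```python
# Synthetic "observations": a 2 kg mass, known accelerations, measured force.
# Each tuple is (mass m, acceleration a, observed force F). The noise is
# baked into the F values by hand.
observations = [(2.0, 1.0, 2.1), (2.0, 2.0, 3.9), (2.0, 3.0, 6.2)]

def model1(m, a):   # F = m * a
    return m * a

def model2(m, a):   # F = m * a**4
    return m * a ** 4

def sum_squared_error(model, data):
    """Total squared gap between predictions and observations."""
    return sum((f_obs - model(m, a)) ** 2 for m, a, f_obs in data)

err1 = sum_squared_error(model1, observations)
err2 = sum_squared_error(model2, observations)
print(err1 < err2)  # True: F = ma predicts these observations far more closely
```

Nothing here settles the metaphysics, of course - it only operationalises "closer to the observations", which is the comparison the procedure above actually performs.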

You don't have direct access to reality. And yet if someone thinks F = ma isn't closer to the truth than F = ma^4 (for the usual symbols and approximations and blah blah, assume I'm aware Einstein existed) then they got way too drunk on other people's philosophies.

Most scientists I’ve met and worked with care whether their models are close to reality/truth. Epistemological uncertainty does not mean every model is equally untruthful.


You haven’t really tackled the strongest interpretation of my question head on.

If two different models/theories are observationally equivalent, i.e. if two (or more) different curves can be fitted to the same dataset, which model is “more true”?

https://en.wikipedia.org/wiki/Observational_equivalence

https://plato.stanford.edu/entries/scientific-underdetermina...

And how do you go from “F=ma for the usual symbols…” to solving the symbol-grounding problem? Symbols are epistemic entities, not ontological entities.

https://en.wikipedia.org/wiki/Symbol_grounding_problem

The model works. That is all there is to be said about it.

If you begin asking questions such as “is the model true or not? Do the terms in the equation correspond to real world entities?” you are no longer doing science, you are doing philosophy.


>If two different models/theories are observationally equivalent, i.e. if two (or more) different curves can be fitted to the same dataset, which model is “more true”?

You don't know which one is more likely to be true (all other things being equal). However, if all else isn't equal and you are well acquainted with a Mr Bayes you can probably do all sorts of fancy Mathematics to estimate which model is more likely to be true. Or just go with your hunch. Or find yourself a dataset that will invalidate one or both of your models.
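To make the "Mr Bayes" remark concrete, here's a minimal sketch of my own (the data, noise level, and candidate models are invented for illustration). With two fully specified models and equal priors, the Bayes factor reduces to a likelihood ratio: assume Gaussian measurement noise and ask which model makes the data less surprising.

```python
import math

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.1)]  # (x, y) observations
sigma = 0.2                                   # assumed measurement noise std dev

def log_likelihood(model, data):
    """Log-probability of the data under Gaussian noise around the model."""
    return sum(-0.5 * ((y - model(x)) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for x, y in data)

model_a = lambda x: 2 * x     # y = 2x
model_b = lambda x: x ** 2    # y = x^2

# With equal prior odds, the posterior odds are just this likelihood ratio.
log_bayes_factor = log_likelihood(model_a, data) - log_likelihood(model_b, data)
print(log_bayes_factor > 0)  # True: the data favour model_a
```

The catch the parent comment raises still applies: the noise model and the priors are themselves criteria you had to choose before Mr Bayes could say anything.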

Mind you, I don't think this is an issue for most subjects. How often do you get different functions that spew the same output in all observable and theoretical instances?

>If you begin asking questions such as “is the model true or not? Do the terms in the equation correspond to real world entities?” you are no longer doing science, you are doing philosophy

Yes, people often do that when they are engrossed in a subject. Most scientists I met care about the subject they are working on and philosophise a lot about it. Some don't anymore, possibly because working in academia should be considered a carcinogenic hazard by the World Health Organization.

When talking about scientists, people sometimes forget that they are human, often driven toward the career by simply wanting to know. They care whether their models match / are close to reality to the best of their knowledge.

Do remember that we are talking about scientists, not science. Almost everyone I met in my short time in academia fundamentally cared whether their models closely matched reality. Most of them didn't get where they ended up for utilitarian reasons, and lots of them are in a permanent state of annoyance at not being able to fill in some forms with "I'm just curious" to get funds.


>However, if all else isn't equal and you are well acquainted with a Mr Bayes you can probably do all sorts of fancy Mathematics to estimate which model is more likely to be true.

But you still need to tell Mr Bayes what your criteria are for model-selection.

So the output of your calculations is some scalar "truth-value" (higher is better) - what's your input?

In simpler terms: the truthfulness of your model is a function of... what ?

>Mind you, I don't think this is an issue for most subjects. How often do you get different functions that spew the same output in all observable and theoretical instances?

In curve fitting? Practically all the time! It is a mathematical fact that infinitely many curves fit a finite dataset. So we pick curves that fit approximately, not exactly, with the obvious underfitting/overfitting optimisation problem that needs solving in conjunction.
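That "infinitely many curves" fact is easy to demonstrate (toy numbers of my own choosing): if f fits a dataset exactly, then f(x) + c·(x−x1)(x−x2)…(x−xn) also fits it exactly for any constant c, because the added term vanishes at every data point.

```python
xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 5.0]          # these points happen to lie on y = 2x + 1

def f(x):
    """One curve through all three points."""
    return 2 * x + 1

def g(x, c=7.0):
    """A different curve through the same points: f plus a term that is
    zero at every x in the dataset but nonzero everywhere else."""
    return f(x) + c * (x - 0.0) * (x - 1.0) * (x - 2.0)

# Both fit the dataset perfectly...
print(all(abs(f(x) - y) < 1e-9 and abs(g(x) - y) < 1e-9 for x, y in zip(xs, ys)))
# ...yet they disagree everywhere off the data:
print(f(0.5) == g(0.5))  # False
```

Vary c and you get a distinct exactly-fitting curve for every real number, which is why the choice among them has to come from somewhere other than the data itself.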



