The reason water exists on Earth, and not on Mars, may be that life on Earth saved the water by discovering photosynthesis.
The discovery of photosynthesis caused the "great oxygenation event", which pumped oxygen into the atmosphere. Oxygen reacts with atmospheric hydrogen to form water. Without the oxygen, the very light hydrogen molecules would float to the top of the atmosphere and be easily blown away by the solar wind, which is what happened on Mars. But with high oxygen concentrations on Earth, hydrogen molecules react to form heavier water molecules before they have a chance to be blown away, and thus hydrogen and water are retained.
I read about this in the book "Oxygen" by Nick Lane.
That seems like it must be incorrect. Life on earth evolved in the oceans, so the oceans had to exist for eons before life evolved, and then for the further eons it took life to evolve photosynthesis and produce significant amounts of oxygen in the atmosphere. If what you're saying is correct, then the earth either started out with much more water than it has now, or the hydrogen escaped a lot more slowly than I'd expect it to.
The book was written in 2002, and I just looked up Nick Lane's latest 2016 take on it here: [1]
His point in this new article is that instead of one big "oxygenation event" there may have been multiple. But he sticks to his story that the creation of an ozone layer by photosynthesis was the key step in saving the oceans. He argues that both Mars and Earth originally had oceans (confirmed by Mars satellite observations), which were gradually diminished by a process in which ultraviolet light splits atmospheric water and minerals on the surface absorb the oxygen (rusting, making Mars red), leaving the hydrogen to blow away. But life on earth pumped extra oxygen into the atmosphere faster than minerals could absorb it, creating the reactive ozone layer which prevented hydrogen from blowing away, thus saving the oceans from their fate on Mars.
I've been in planetary science and never encountered this theory. I have a default position of skepticism towards it for that reason, but I don't have a reason to object on the face of it. The question is not where the oceans came from (that's settled -- they came out of the mantle as the Earth cooled, and ultimately before that from cometary impacts), but how the oceans did not boil or evaporate off like they did everywhere else. I could understand a theory that they were kept from boiling by being in the habitable zone AND by some combination of life processes, and that this hydrogen-capture mechanism helped replenish the ocean as hydrogen escaped from natural gas upwellings. But I have no sense of the scales and magnitudes involved to see if inputs approximately match outputs without seeing the underlying paper.
<quote>but how the oceans did not boil or evaporate off like they did everywhere else.</quote>
Gravity and the magnetic field? If it were hydrogen or helium it would be stripped off by the solar wind (storms) like everywhere else, but water is quite heavy due to the oxygen.
Also, 'everywhere else' practically means Mercury, Mars, and the asteroids (and the Moon). No idea about Venus.
The ice giants also keep their water due to gravity, and far out on Pluto it's frozen like rock.
> Life on earth evolved in the oceans, so the oceans had to exist for eons before life evolved
I can't intelligently contribute to the overall discussion here, but one of the most interesting things about the currently understood timeline is the apparent lack of eons between the earliest conditions conducive to life (after the planet's initial cooling) and the earliest evidence of life. Don't quote me on exact numbers, but within margins of error, as I understand it, it's in the range of millions of years, not billions (which has all sorts of interesting implications for both the Fermi paradox and religious thinkers). Though, as far as I know, you'd still be correct on the distance to photosynthesis.
Or not; the "land theory" (that it began in shallow, possibly volcanic, terrestrial pools) and the "sea theory" (that it began in the oceans, possibly at hydrothermal vents) have been competing theories essentially forever.
A trick for dealing with periodic coordinates, not discussed in the article, is to convert each periodic coordinate (e.g., an angle theta) into a pair of Cartesian coordinates (x, y) on the unit circle, and then compute the distance in Cartesian space. I.e., convert to x = cos(theta), y = sin(theta).
In the 2d example in the post, you would rescale the coordinates so the unit cell runs from -pi to pi in both dimensions, and then the distance between points (x1, y1) and (x2, y2) would be
d = sqrt((cos x1 - cos x2)^2 + (sin x1 - sin x2)^2 + (cos y1 - cos y2)^2 + (sin y1 - sin y2)^2)
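A minimal Python sketch of the trick (function name mine), for any number of periodic coordinates with period 2*pi:

```python
import math

def periodic_distance(p, q):
    """Distance between two points whose coordinates are all periodic
    with period 2*pi, via embedding each angle on the unit circle.

    Each coordinate theta maps to (cos(theta), sin(theta)); the result
    is the ordinary Euclidean distance in the embedded 2n-dimensional space.
    """
    total = 0.0
    for theta, phi in zip(p, q):
        total += (math.cos(theta) - math.cos(phi)) ** 2
        total += (math.sin(theta) - math.sin(phi)) ** 2
    return math.sqrt(total)

# Wrap-around is handled automatically: coordinates that differ by a
# full period embed to the same point, so the distance is ~0.
assert periodic_distance([0.1, 2.0], [0.1 + 2 * math.pi, 2.0]) < 1e-9
```

Note this is the chord (straight-line) distance in the embedded space, not the arc length along the circle, but it respects periodicity with no min/mod bookkeeping.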
Folks who use this kind of distance measure might note that it also works well as a metric between angles (i.e. rotations) or between points on the projective line. For instance, see https://www.youtube.com/watch?v=oJAn--vsAzc (unfortunately not too self-contained a source)
It depends on how you define "distance". My initial thought when reading your comment was that you couldn't possibly use the Euclidean distance because you'd be drawing a path between two points on the circle that involves points _not_ on the circle.
Using the Euclidean distance (Cartesian distance? I dunno) works because it's a monotonic underestimate of the distance along the circle. So you know if cartesian_dist(A, B) > cartesian_dist(A, C) then circle_dist(A, B) > circle_dist(A, C).
The original problem, though, is about computing distance, not about comparing distances. Comparing distances is just used as an intermediate step in the initial method given.
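For a single angular coordinate the relationship is explicit: the chord equals 2*sin(arc/2), which increases monotonically for arcs in [0, pi], so comparisons agree even though the values differ. A quick numeric check (helper names mine):

```python
import math

def arc_dist(a, b):
    """Shortest angular distance between angles a and b, in [0, pi]."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def chord_dist(a, b):
    """Euclidean distance between (cos a, sin a) and (cos b, sin b)."""
    return math.hypot(math.cos(a) - math.cos(b), math.sin(a) - math.sin(b))

pairs = [(0.0, 0.3), (0.0, 1.0), (0.0, 2.5), (0.0, 3.1)]
arcs = [arc_dist(a, b) for a, b in pairs]
chords = [chord_dist(a, b) for a, b in pairs]

# The chord underestimates the arc, but orders the pairs the same way:
assert all(c <= a for c, a in zip(chords, arcs))
assert chords == sorted(chords) and arcs == sorted(arcs)
```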
"Single Particle Reconstruction" for Cryo-EM reconstruction algorithm, anyone?
SPR is an algorithm already used to reconstruct 3d objects from a set of 2d images produced by an electron microscope, typically of a protein on a flat surface.
I haven't read the paper here, but I suspect a key part of the algorithm is that it can only reconstruct objects with symmetry planes: the airplane, chair, and car are all symmetric across a plane. This greatly constrains the possibilities the algorithm has to search through. In EM reconstruction the user often specifies what they think the symmetries are beforehand.
-- Their trades are profitable f = 51% of the time, and they do N = 3 million trades per day.
-- Their daily fraction of winning trades is thus well approximated by a normal random variable with a mean of f
and a standard deviation of sqrt(f(1-f)/N), or about 3e-4
-- The probability of this value being less than 50% is well approximated by norm.cdf(0.5, 0.51, sqrt(f(1-f)/3e6)), which gives 2.4e-263
In other words, they expect only one losing day per 10^263 days, which is vastly longer than the age of the universe. They are actually doing much worse than expected, because they did lose on one day.
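The arithmetic above can be reproduced with only the standard library, writing the normal CDF in terms of erfc (which stays accurate this far into the tail, where naive formulas underflow):

```python
import math

f = 0.51        # per-trade win probability
N = 3_000_000   # trades per day

# By the CLT, the daily win fraction ~ Normal(f, sqrt(f*(1-f)/N)).
sigma = math.sqrt(f * (1 - f) / N)   # ~2.9e-4
z = (0.5 - f) / sigma                # ~ -34.6 standard deviations

# Phi(z) = erfc(-z / sqrt(2)) / 2
p_losing_day = 0.5 * math.erfc(-z / math.sqrt(2))
print(p_losing_day)                  # ~2.4e-263
```

Of course, a number like 10^-263 mostly tells you the model's independence assumption is wrong, not that the firm defies probability; the one observed losing day makes exactly that point.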
An example of what I would consider red-light camera abuse:
Philadelphia has a number of red light cameras which each generate about 10,000 tickets a year. That's about 30 tickets a day for each camera, $100 per ticket. 12 cameras generate $9 million a year [1].
If 30 people are "running" the red light each day, does that say more about the drivers or about the setup of the red light?
My impression is that physicists who say this usually don't know much about the theory and evidence for evolution. (I am trained in physics; I now do biophysics.)
What about the beautiful and often very precise linear relationship between radioactive dating of the fossil record, and genetic dating using the molecular clock?
What about the beautiful correspondence often found between the principal components of genetic variation and geographical position (isolation by distance)?
What about all the biochemical discoveries related to DNA function (including the existence of DNA itself), how mutations occur, about heritability?
What about everything we've discovered about genome composition and how it changes over time? (duplicate genes, pseudogenes, transposons, hotspots of various kinds).
Is a DNA sequence less precise than a spectral line?
(Bioinformatics) I wonder if he means not that evolution is false/unfalsifiable/irreproducible, but that it is not really a very predictive theory. I agree there is a lot of evidence that it happened and continues to happen, but predicting how it will happen, i.e., how an organism will evolve, what genes will mutate etc, in a certain environment, is very difficult and really basically impossible.
> What about the beautiful correspondence often found between the principal components of genetic variation and geographical position (isolation by distance)?
Funny you mention this. There is a student I work with trying to observe this with metagenomics data with much less success than you might imagine.
> but predicting how it will happen, i.e., how an organism will evolve, what genes will mutate etc, in a certain environment, is very difficult and really basically impossible.
By that measure physics isn't predictive either. For any moderately complex system, the best we can do is statistical models, often with little to no predictive power.
I agree that the problem is complex systems, not biology per se. But physics is able to be quite predictive because it is able to isolate one basic phenomenon at a time and model it with great precision (gravity, electromagnetism, particle physics, etc). Then, if we want to build devices based on these phenomena from the ground up, we can also do that and predict their behavior with great accuracy (e.g., the behavior of an electrical circuit).
This is not currently possible at all in biology because even the most minimal functional, self-reproducing biological system is very complex. Indeed even a single protein is quite complex. I suppose by "complex" in this context I mean: lots of acting entities, and many physical laws operating at once rather than just a few.
Physics does have predictive problems when it is applied to weather, climate, etc, because those are complex systems. But that kind of thing is a minority of the subject matter in physics.
> Physics does have predictive problems when it is applied to weather, climate, etc, because those are complex systems. But that kind of thing is a minority of the subject matter in physics.
There is a far greater number of humans working in applied physics than in characterizing isolated aspects of theoretical systems, so I'd question how you judged "minority" there :)
Perhaps our disagreement is just in choice of words. The idea that "physics", and all that encompasses, is somehow more predictive than a subset of biology was what triggered my response. If instead you said we have excellent models for simple questions in particle physics, we may have agreed :)
As I mentioned on a sibling comment, a simple question like "how an organism will evolve" is of course enormously complex, and if we're going to evaluate the "squishiness" of our answers to it, it's better compared to our ability to predict specific storms a year in advance or how a protoplanetary disk will evolve into a specific configuration of planets. We don't cite those as squishy because we recognize the complexity of the systems involved (and the relative primitiveness of our models).
Well, "physics" really encompasses just about everything, including biology, so I am implicitly limiting it to things which would not fall into another, more specific field, such as engineering or meteorology. Right, word choice.
Using any definition for "physics" close to this, while the percentage of biological questions that involve complex systems is close to 100%, it is much lower in physics. The subsets of physics problems that do involve complex systems will suffer the same predictive problems.
In fact, this conversation has got me wondering whether "complex system" really means anything more than "a system whose behavior is hard to predict". I know that complex systems have other common attributes, but really the unpredictability seems to be the defining feature.
This is a long way of saying "I agree that we don't really disagree" :)
Statistical models don't mean that there is no predictive power.
If we look at a (perfectly random) coin flip we can predict a 50% chance of heads. We can also predict the likelihood of distributions of values over x flips. If the system we are modeling is inherently statistical we would expect our prediction to be statistical.
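As a concrete instance of that: the fair-coin model makes sharp, testable distributional predictions even though it says nothing about any single flip. A small sketch:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n independent flips with P(heads)=p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p5 = binom_pmf(5, 10)                               # exactly 5 heads in 10 flips
p4to6 = sum(binom_pmf(k, 10) for k in range(4, 7))  # 4, 5, or 6 heads
print(round(p5, 4), round(p4to6, 4))                # 0.2461 0.6562
```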
You are also overlooking the fact that the stuff in physics that isn't statistical in nature has extreme precision. Think of how well we know the orbits of planets.
Your overall point is correct, but I would add that there is no such thing as a non-statistical model or prediction in any science or aspect of physical reality IMO. For two reasons: A) reality is inherently statistical at the quantum level, and B) measurement error will always exist.
Thus even our models of planetary orbits are statistical. The inverse-square law, F = GM1M2/r^2, even if it perfectly describes reality (probably, but not entirely certain! see [1]), will have some degree of measurement error in M1, M2, and r (not to mention G), and so the resulting Fg will be a distribution, not a single number, technically speaking.
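That point is easy to illustrate by Monte Carlo: give each input of F = G*M1*M2/r^2 a 1% relative measurement uncertainty and look at the spread of the resulting force. (The input values below are invented, roughly Earth/Moon-scale, purely for illustration.)

```python
import random
import statistics

random.seed(0)
G = 6.674e-11  # m^3 kg^-1 s^-2

def sampled_force():
    # Each "measured" input drawn with 1% relative Gaussian uncertainty:
    m1 = random.gauss(5.0e24, 5.0e22)
    m2 = random.gauss(7.0e22, 7.0e20)
    r = random.gauss(3.8e8, 3.8e6)
    return G * m1 * m2 / r ** 2

forces = [sampled_force() for _ in range(100_000)]
rel_spread = statistics.stdev(forces) / statistics.fmean(forces)
# First-order error propagation predicts sqrt(1^2 + 1^2 + 2^2) % ~ 2.4%
# (r is squared, so its 1% error counts double):
print(rel_spread)
```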
It seems that the situations where physics can best describe things with very high accuracy is when it can abstract away many relatively homogeneous particles or entities into a bigger "thing" with aggregate properties. For example, in fluid dynamics or gravity, you don't attempt to determine the behavior of individual particles, which would be subject to enormous uncertainty, only the behavior of the system-as-a-whole. By the law of large numbers then the uncertainties decrease dramatically.
Yes, but you're implying "predictive" means 100% accurate. No science, no math, no language will ever be 100% accurate. We say things have predictive power if our results reflect our prediction to a reasonable degree. This is definitely true. And most of those equations involve a pi, which doesn't have an end. There ALWAYS has been and ALWAYS will be some uncertainty in our predictions. But is it that big of a deal if we can predict a planet's location down to the nm? Would you even say that it isn't predictive if we were off by 10 km? No, you wouldn't, because it is a planet: if you are looking for a planet and are off by 10 km you will still find it, because the error is small. It would also be unreasonable to calculate the location of a planet down to the Planck scale.
And to your mention of everything being statistical because quantum, well, there's a reason Newton's methods didn't need quantum mechanics to be powerful (useful and predictive): the likelihood of quantum-like events happening on a macro scale is basically zero. Sure, your hand could quantum tunnel through a wall, but would we ever expect to see it within the lifetime of the universe?
We're talking about the relativity of wrong here[1]. Physics wouldn't have become so popular if it wasn't predictive. We don't need to be 100% to be predictive nor useful. Accuracy and predictiveness are two different things.
> Yes, but you're implying "predictive" means 100% accurate.
No, I'm not. Or I didn't intend to; in fact I intended quite the opposite. I completely agree that "wrongness" is relative. "Wrongness" could be more accurately described as the amount of variance in a predictive model plus that model's divergence from reality.
My point was that all models and predictions are statistical/probabilistic, but not all have even the same order of magnitude of error. For shorthand, we pretend that models with very low variance/error are "exact" solutions, but in actual reality, they are not, they are just solutions that have a negligible error rate for the purpose at hand.
I am not implying anything like "well, psychology and physics both have probabilistic models, so they're equally valid". Their variance and error rate are very far apart. I agree physics is very predictive and has high accuracy but it is still probabilistic.
> My point was that all models and predictions are statistical/probabilistic, but not all have even the same order of magnitude of error.
Definitely not. The models used in undergraduate physics classes, or even in high school physics, are not statistical. A good example is Ohm's law. When building circuits it is necessary to use, and it works just great. Now, this is different from any attempt at a GUT, but that's a different ball game, and those are different models.
> For shorthand, we pretend that models with very low variance/error are "exact" solutions
Maybe the public, but not the actual scientists. For shorthand we generally say "is" instead of "to an error we can't measure" because it is easier to say. But if you read the research papers, errors are always included. That's just language; doing otherwise would be pedantic. Yes, the public gets confused, but for all they are concerned with, these predictions might as well be "exact". When the public ventures out of their realm without learning, they get confused by other, more important ideas like "observer" and "information". Don't get me started on how many people believe stupid quantum stuff.
> they are just solutions that have a negligible error rate for the purpose at hand.
This demonstrates that you understand my point too. Or maybe that you don't understand what negligible is, but I think you do. At a certain point we stop worrying. Why would you care if you could predict the location of a planet down to 10^-40 m? I get doing it just for fun and because you want to, but there is no practical purpose. Anything this accurate might as well be exact.
> The models used in undergraduate physics classes, or even to high school physics are not statistical.
You are correct insofar as they are not presented as being statistical. But in reality, they are. Ohm's law is a good example. Resistors in reality do not have the exact resistance specified on the package, but rather are constructed within a certain tolerance, so that the final behavior of the circuit will be, again, a distribution. This would be an example of measurement error. The quantum effects also exist, as Intel can affirm as they try to build very small transistors, whose behavior is probabilistic.
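A small sketch of the resistor-tolerance point (all values invented for illustration): model a nominal 1 kOhm, 5%-tolerance part as a uniform distribution over its tolerance band and see what Ohm's law then predicts for the current.

```python
import random
import statistics

random.seed(1)
V = 5.0          # volts, treated as exact for illustration
R_nom = 1000.0   # ohms, nominal value of a 5%-tolerance part

# I = V / R, with R uniform over the tolerance band:
currents = [V / random.uniform(0.95 * R_nom, 1.05 * R_nom)
            for _ in range(100_000)]

mean_mA = statistics.fmean(currents) * 1000
# The textbook answer I = V/R_nom is exactly 5 mA; the toleranced part
# gives a distribution around (and, by Jensen's inequality, slightly
# above) that value.
print(mean_mA)
```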
> Maybe the public, but not the actual scientists...
Ehh, I'm an "actual scientist". I work in bioinformatics & medical research. I don't care about what the public thinks for the purposes of this conversation. Even actual scientists will sometimes use this shorthand if the error is small enough, which is fine by me.
> At a certain point we stop worrying...but there is no practical purpose.
You're right. When we talk about the error rate in predicting planetary orbits, there is no practical purpose. My only point in my original reply was that the "exact" is a special case and a simplification of the statistical model, which is ubiquitous. If we want to be technically correct, however, I stand by my assertion that all physical laws are inherently statistical.
I think we don't really disagree. This all started because you asserted there are phenomena which are "not statistical in nature", which I disagree with at a pedantic level.
I think we'll agree there. Because while you are technically correct, you aren't practically correct.
Like how the Newtonian equations taught to undergrads literally don't have statistics. It isn't that it isn't presented to them that way, it is that they are using a different model. Going through physics (because this is the experience I have) you just keep learning better and better models.
As for Intel, you're confusing micro and macro scales. With Ohm's law you just measure the resistor before applying it; this would be common procedure, depending on the application. But this conversation is really arguing extremely fine points.
> Statistical models don't mean that there is no predictive power
Sure, but that's not what I said.
The orbit of a single planet in isolation is extremely simple. Take the orbit and self-interaction of a protoplanetary disk around a star instead and you'll find that while our models can make some predictions, they will be able to tell you virtually nothing about the configuration of planets that will eventually form from them. We have weather models, which are actually better characterized than our models of planetary formation, but they will tell you nothing about where hurricanes will make landfall next hurricane season.
We can't make predictions about these things, but we don't call the models we do have "not really very predictive" because we recognize the extreme uncertainty in what we're asking in those cases. That was what I was responding to.
The idea that evolutionary theory is "squishy" because we can't figure out "how an organism will evolve" with all the monumental complexity hidden in that simple question is as silly as calling astrophysics "squishy" because it can't answer the above.
"This work was supported by the Army Research Office (ARO) in the form of a Multidisciplinary University Research Initiative (MURI) (grant number: W911NF-13-1-0387)" -- from the acknowledgements of the paper.
Peter Wadhams, head of the Polar Ocean Physics Group at the University of Cambridge, has repeatedly claimed that Arctic sea ice will melt "next year". E.g., in 2012 he predicted it would fully melt by September 2016. Since that didn't happen, he now predicts it will fully melt in 2017/2018. The IPCC estimates it will fully melt in the late 2030s.
Articles about him in legitimate media are [1][2]. Climate deniers get to say things like [3].
Other climate scientists think he is alarmist [4], and the 2013 IPCC report mentions his papers but is super dismissive: see section 11.3.4 of the IPCC AR5, references to Maslowski.
Melting Arctic ice also changes the surface albedo so more solar energy is absorbed, it changes the ocean salinity, and no longer cools the winds above it.
Thermal expansion of the ocean contributes almost half of total observed sea level rise (1.1 mm out of 2.8 mm total per year).
A lot of journals are starting to do this. For example, PNAS (a top-ranking journal) introduced a required "significance statement" in addition to the abstract, which is meant to be understood by a more general audience. So do the PLoS journals, and many others.
PLoS says "The goal is to make your findings accessible to a wide audience that includes both scientists and non-scientists", and PNAS says it is meant to "explain the relevance of the work in broad context to a broad readership."