I was researching weather prediction not long ago. From my naive perspective, it seems that despite all the increased GPU computational power and advances in machine learning, there have not been any great advances in weather prediction. Is this true?
Edit: Downvotes for simply asking a question. sigh.
I worked as a forecaster for a bit but never made it to the research world (I studied theoretical PDEs rather than computational)... however, at the time huge gains had been made in data assimilation. One fact that has stuck with me was that roughly a third of the computation time for the UK Met Office global model run was consumed by data assimilation. I don't remember the statistics anymore, but data assimilation schemes were a big driver of improved forecast skill.
I also recall the ECMWF had surprisingly accurate long-range forecasts based on ensembles. It could predict 500 mb heights out to two weeks, no sweat.
Re: your comments... My guess is that a GPU isn't suited for use in an operational model due to data-access patterns (and is possibly not even helpful with the solver). But again, I'm not a computational PDE guy. Also, perhaps machine learning would be useful, but that would be post-processing or perhaps parameterizing sub-grid phenomena. There's already a process called model output statistics (MOS) for adjusting raw fields from a weather model.
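The core idea behind MOS is just regression: fit observed conditions against raw model output (plus other predictors) and use the fit to correct systematic model biases. A minimal sketch of that idea, with entirely made-up numbers standing in for real forecast/observation pairs:

```python
import numpy as np

# Toy illustration of the idea behind Model Output Statistics (MOS):
# regress observations on raw model output to remove a systematic bias.
# The "model" and "observed" data here are synthetic, for illustration only.
rng = np.random.default_rng(0)

model_temp = rng.uniform(10.0, 30.0, size=200)              # raw model forecasts
observed = 0.9 * model_temp + 2.0 + rng.normal(0, 1, 200)   # "truth": biased model + noise

# Fit observed ~ a * model_temp + b by ordinary least squares.
A = np.column_stack([model_temp, np.ones_like(model_temp)])
(a, b), *_ = np.linalg.lstsq(A, observed, rcond=None)

corrected = a * model_temp + b
raw_rmse = np.sqrt(np.mean((model_temp - observed) ** 2))
mos_rmse = np.sqrt(np.mean((corrected - observed) ** 2))
print(f"raw RMSE: {raw_rmse:.2f}  corrected RMSE: {mos_rmse:.2f}")
```

Operational MOS uses many predictors and site-specific equations, but the statistical machinery is the same: a regression trained on a history of model output versus verifying observations.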
The physics is pretty well known at this point, and there's only so much you can gain by increasing from second to third order approximation. The errors in the initial conditions are just larger. Most of the action has been on data assimilation and better parameterizations because of that.
I've been out of the field for ten years now, but it's really nice to see improvements to the core physics to this degree.
I'm still skeptical of your supposed two week 500 heights forecast from the ECMWF model. I live near the western Pacific (i.e. the data hole) and it's really easy to find crazy model solutions after 7 days. And I'm pretty sure you weren't looking at the Southern Hemisphere.
> I'm still skeptical of your supposed two week 500 heights forecast from the ECMWF model.
You're probably right to be skeptical; for the record, I was only a forecaster for a short period of time over ten years ago... I didn't even serve my full four-year commitment, as I volunteered to get out under the Air Force "force shaping" at the time. I was stationed near Ramstein and we created forecasts for Europe. I was referring to the ECMWF ensemble products, specifically.
I've done computational physics at the grad level. There, PDEs are converted to finite-difference form, which basically leads to giant sparse linear systems. These are solved using SOR or even more advanced numerical techniques. These techniques tend to be quite GPU friendly.
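For concreteness, here is a minimal sketch of that pipeline: discretize a 1D Poisson problem with second-order finite differences, then relax the resulting sparse tridiagonal system with SOR. The problem, grid size, and relaxation factor are arbitrary choices for illustration:

```python
import numpy as np

# Solve u'' = f on (0, 1) with u(0) = u(1) = 0 using second-order
# finite differences, then relax the sparse system with SOR
# (successive over-relaxation). Toy-sized example.
n = 64
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.sin(np.pi * x)                    # right-hand side
exact = -np.sin(np.pi * x) / np.pi**2    # analytic solution for comparison

u = np.zeros(n)
omega = 1.8                              # relaxation factor, 1 < omega < 2
for _ in range(2000):
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        gs = 0.5 * (left + right - h * h * f[i])   # Gauss-Seidel update
        u[i] = (1.0 - omega) * u[i] + omega * gs

err = np.max(np.abs(u - exact))
print("max error:", err)
```

The inner loop above is inherently sequential (each update reads its freshly updated neighbor), which is part of why plain SOR needs reordering tricks like red-black coloring before it maps well onto a GPU.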
Well, if you're just doing a standard finite difference method, and you have to keep shuffling your matrices between CPU and GPU because other operations don't work well on GPUs, you actually won't have any speedup.
Where GPUs shine for PDEs is if you have a lot of extra work for each node, for instance if you have complex chemical reactions or thermodynamics, or if you have a high-order method that requires lots of intermediate computations.
If you don't believe me, you can download the PETSc code and test the ViennaCL solvers versus the regular ones.
> A modern 5-day forecast is as accurate as a 1-day forecast was in 1980, and useful forecasts now reach 9 to 10 days into the future (1). Predictions have improved for a wide range of hazardous weather conditions, including hurricanes, blizzards, flash floods, hail, and tornadoes, with skill emerging in predictions of seasonal conditions.
> ... Data from the NOAA National Hurricane Center (NHC) (13) show that forecast errors for tropical storms and hurricanes in the Atlantic basin have fallen rapidly in recent decades.
I don't know where I picked up that particular factoid, but if you look at the trend lines in [1] you can see the improvement claimed.
In [2] there is a slightly different claim, "A modern 5-day forecast is as accurate as a 1-day forecast was in 1980, and useful forecasts now reach 9 to 10 days into the future."
Chart 3.2 of [3] shows this; by 2001 the 5th day forecast improved to be as good as the 3rd day of 1980, establishing the trend line.
Googling about yields a few other studies and articles in a similar vein.
It is important to note that forecast improvement is not linear in effort. It takes more complete and accurate sensor data and far more computation to extend the forecast into the later days, due to the chaotic nature of the mechanisms being modeled.
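The classic toy demonstration of why the later days are so expensive is the Lorenz-63 system: two runs started a tiny distance apart diverge exponentially until the forecast carries no information about the initial conditions. A quick sketch (simple Euler stepping, parameters from the standard Lorenz paper, perturbation size chosen arbitrarily):

```python
import numpy as np

# Chaotic error growth in the Lorenz-63 system: two trajectories that
# start almost identically separate exponentially fast, which is why
# each additional forecast day costs disproportionately more skill.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-8, 0.0, 0.0])   # tiny "analysis error"

seps = []
for step in range(3000):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 500 == 0:
        seps.append(float(np.linalg.norm(a - b)))

print(seps)  # separation grows by many orders of magnitude
```

Halving the initial error buys only a fixed amount of extra lead time, so pushing the useful range from day 7 to day 10 demands far better observations and assimilation, not just more compute.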
This was on here a while ago: https://news.ycombinator.com/item?id=19765700 and says that "[m]odern 72-hour predictions of hurricane tracks are more accurate than 24-hour forecasts were 40 years ago"
edit: I see that neuronexmachina found the same article. It's a good read if you want an overview of how weather prediction has changed.
I have heard the opposite - predictions have been significantly increasing in accuracy over time.
On the other hand, I suspect it might not be noticeable if the forecast you always read just says "40% chance rain, high 80, low 50". It might be more noticeable if you look at the hour-by-hour forecast for a specific location and see when the rain is predicted to start and end.
Forecasts have improved dramatically. A 7-day forecast is now somewhat useful when deciding to have your party indoors or outdoors, whereas in the 90s next-day forecasts could hardly compete with the naive assumption of "the weather will stay as it is".
But by mentioning machine learning, I'm guessing you are looking at a different timescale, i.e. "within the last two years". And any progress in the short term will be slow compared to what we have seen in other domains such as image recognition.
I'm no expert on weather forecasting, but I believe the explanation may be that forecasts have long been (among the) best financed "big data" problems out there. That means they incorporate lots and lots of domain-specific work. As a result, naive machine learning models currently still lag all the specialised work, which in turn isn't structured in a way to easily take advantage of progress in, say, GPUs.
For hurricane forecasts issued by the NHC, you can see the official error statistics here [0]. Note that 96 and 120-hour forecasts were so poor prior to year 2003 that they were not issued.
Note that these error statistics do not represent true model error, as the official track and intensity forecasts -- while informed by model output -- are determined by human forecasters.