Author: LorenCobb
Subject: Prediction variability | Date: 2/11/2013 11:28 AM
Recommendations: 9
Weather and climate prediction methods have at least three sources of variation: natural (intrinsic) variability, measurement error, and something often called "model error" -- the errors that come from using incorrect parameters or even the wrong model. Needless to say, the statisticians who work with weather and climate scientists focus obsessively on all three.
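To make the three sources concrete, here is a toy simulation (my own illustration, not from the post) in which each source of variation is injected separately; all numbers are made up:

```python
# Toy decomposition of the three error sources: natural (intrinsic)
# variability, measurement error, and model error. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

true_temp = 15.0 + 2.0 * rng.standard_normal(n)       # natural variability
measured = true_temp + 0.5 * rng.standard_normal(n)   # measurement error (sensor noise)
forecast = 0.9 * true_temp + 1.0                      # model error: biased, mis-specified model

print("measurement RMSE:", np.sqrt(np.mean((measured - true_temp) ** 2)))
print("model RMSE:      ", np.sqrt(np.mean((forecast - true_temp) ** 2)))
```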

The following abstract of an upcoming talk at NCAR (National Center for Atmospheric Research, in Boulder) gives a good picture of the current state of thinking on this complex problem.

STATISTICAL POST-PROCESSING OF TEMPERATURE FORECASTS:
THE IMPORTANCE OF SPATIAL MODELING
Michael Scheuerer
University of Heidelberg

Tuesday, February 12, 2013
Mesa Lab VizLab
12:00 PM

Abstract
In order to represent forecast uncertainty in numerical weather
prediction, ensemble prediction systems generate several different
forecasts of the same weather variable by perturbing initial conditions
and model parameters. The resulting ensemble of forecasts is interpreted
as a sample of a predictive distribution. It offers valuable
probabilistic information, but often turns out to be uncalibrated, i.e.
it suffers from biases and typically underestimates the prediction
uncertainty. Methods for statistical post-processing have therefore been
proposed to re-calibrate the ensemble and turn it into a full predictive
probability distribution.
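The abstract's NGR approach (Gneiting et al.'s non-homogeneous Gaussian regression) models the calibrated forecast as a Gaussian whose mean is affine in the ensemble mean and whose variance is affine in the ensemble variance. A minimal single-station sketch, with variable names of my own choosing:

```python
# Minimal sketch of NGR post-processing at one station: the predictive
# distribution is N(a + b*ens_mean, exp(c) + exp(d)*ens_var), with the
# coefficients fit to training data by maximum likelihood. exp() keeps
# the variance terms positive.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ngr_nll(params, ens_mean, ens_var, obs):
    """Negative log-likelihood of the NGR predictive distribution."""
    a, b, c, d = params
    mu = a + b * ens_mean
    sigma2 = np.exp(c) + np.exp(d) * ens_var
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + (obs - mu) ** 2 / sigma2)

def fit_ngr(ens_mean, ens_var, obs):
    res = minimize(ngr_nll, x0=[0.0, 1.0, 0.0, 0.0],
                   args=(ens_mean, ens_var, obs), method="Nelder-Mead")
    return res.x

# usage (hypothetical data):
# a, b, c, d = fit_ngr(train_mean, train_var, train_obs)
# mu, sd = a + b * new_mean, np.sqrt(np.exp(c) + np.exp(d) * new_var)
# lo, hi = norm.interval(0.9, loc=mu, scale=sd)   # calibrated 90% interval
```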

Weather variables like temperature depend on factors that are quite
variable in space, which suggests that post-processing should be done at
each site individually. If a predictive distribution is desired at
locations where no measurements for calibration are available,
post-processing parameters from neighboring stations must be
interpolated. We propose an extension of the non-homogeneous Gaussian
regression (NGR) approach for temperature post-processing that uses an
intrinsically stationary Gaussian random field model for spatial
interpolation. This model is able to capture large scale fluctuations of
temperature, while additional covariates are integrated into the random
field model to account for altitude-related and other local effects.
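One way to read this part of the abstract: the station-level post-processing parameters are themselves spatially interpolated by a covariate-aware Gaussian random field. The sketch below uses a plain altitude trend plus simple kriging of the residuals with an exponential covariance; the talk's intrinsically stationary model may differ, and all names and values here are my own:

```python
# Hedged sketch: interpolate a post-processing parameter (e.g. the NGR bias
# term) to an unobserved site using an altitude covariate plus a stationary
# Gaussian random field. Illustrative only, not the talk's exact model.
import numpy as np

def exp_cov(d, sill=1.0, range_km=100.0):
    """Exponential covariance as a function of distance d (km)."""
    return sill * np.exp(-d / range_km)

def interpolate_param(coords, alt, values, coords_new, alt_new):
    # 1) detrend: regress parameter values on altitude
    X = np.column_stack([np.ones_like(alt), alt])
    beta, *_ = np.linalg.lstsq(X, values, rcond=None)
    resid = values - X @ beta
    # 2) simple kriging of the residuals
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = exp_cov(D) + 1e-6 * np.eye(len(values))   # small nugget for stability
    k = exp_cov(np.linalg.norm(coords - coords_new, axis=-1))
    w = np.linalg.solve(K, k)
    # 3) recombine altitude trend and kriged residual
    return beta[0] + beta[1] * alt_new + w @ resid
```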

In the second part of this talk we discuss the modeling of spatial
correlations of forecast errors for temperature. This becomes important
whenever probabilistic forecasts at different sites are considered
simultaneously, or when the interest is in composite quantities like
averages, minima, or maxima of temperature over some domain. Again, we
use Gaussian random field models which are now fitted to forecast errors
in a training data set, and permit the simulation of re-calibrated
temperature fields with corrected spatial correlations.
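A rough sketch of what this second step could look like: given per-site calibrated margins (mean and standard deviation from the NGR step) and a spatial correlation model fitted to past forecast errors, one can simulate whole fields with the right error correlations. The exponential correlogram and function names below are my assumptions, not the talk's fitted model:

```python
# Simulate re-calibrated temperature fields whose errors carry a fitted
# spatial correlation. Illustrative sketch only.
import numpy as np

def simulate_calibrated_fields(mu, sigma, coords, range_km, n_sims=100, seed=0):
    """mu, sigma: per-site calibrated predictive means/sds (from NGR);
    coords: (n_sites, 2) site locations in km."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    R = np.exp(-D / range_km)                     # fitted error correlation model
    L = np.linalg.cholesky(R + 1e-8 * np.eye(len(mu)))
    z = rng.standard_normal((n_sims, len(mu)))
    # each row is one temperature field: calibrated margins, correlated errors
    return mu + sigma * (z @ L.T)
```

From such simulated fields, composite quantities like domain averages, minima, or maxima get calibrated distributions for free, which is exactly why the spatial correlation matters.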

Both of these spatial aspects are illustrated with temperature forecasts
by the COSMO-DE-EPS, an ensemble prediction system operated by the
German Meteorological Service, and observational data over Germany.


My research group, known as the UC-Denver Data Assimilation Seminar, has been working on these issues for the past four years. In fact, we are trying to hire away one of the NCAR hotshots for our own faculty.

Loren
Author: bjchip
Number: 40819 of 59043
Subject: Re: Prediction variability | Date: 2/11/2013 12:24 PM
Recommendations: 0
Would this not be cyclical? As in, calibrate the individual stations, then work with their neighbors, then do the individual stations again until the "mesh" is stable?

You'd leave out the uncalibrated stations until the calibrated "mesh" has relaxed/stabilized and only then fill them in?

I'm not that keen on statistics, but making a physical numerical model of something actually work is "interesting" :-)
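(For what it's worth, the relaxation scheme being asked about would look something like the sketch below: blend each station's estimate with its neighbors' until the mesh stops moving. Purely illustrative; nothing in the abstract says the calibration is iterated this way.)

```python
# Jacobi-style relaxation of per-station estimates toward neighbor means,
# iterated until the "mesh" stabilizes. Hypothetical scheme, for illustration.
import numpy as np

def relax_mesh(params, neighbors, alpha=0.5, tol=1e-6, max_iter=500):
    """params: (n,) per-station estimates; neighbors: list of index arrays."""
    p = params.copy()
    for _ in range(max_iter):
        p_new = np.array([(1 - alpha) * p[i] + alpha * p[nb].mean()
                          for i, nb in enumerate(neighbors)])
        if np.max(np.abs(p_new - p)) < tol:   # mesh has stabilized
            break
        p = p_new
    return p
```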
