[Met_help] [rt.rap.ucar.edu #48931] History for verifying model against ACARS profiles

RAL HelpDesk {for John Halley Gotway} met_help at ucar.edu
Tue Aug 16 08:58:26 MDT 2011


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

Hello MET_HELP,

I am preparing ACARS profile data for use in MET for verifying upper-air model data more frequently than 12-hourly raobs.
I haven't yet tried running point_stat on the merged raob + ACARS NetCDF files I have created with ascii2nc, but I'm curious about how MET handles the verification.

What does MET do when verifying model data at obs locations that do not have traditional, mandatory pressure-level data, such as ACARS profiles?
For example, if I specify 700 mb as one of the pressure levels to verify at, will MET first interpolate the observed ACARS profile to obtain a 700-mb obs, or do we have to specify the *exact* pressure level of the non-standard ACARS obs (which naturally varies widely from profile to profile)?

Thank you for the clarification,
Jonathan

--------------------------------------------------------------------------------------------------
Jonathan Case, ENSCO Inc.
NASA Short-term Prediction Research and Transition Center (aka SPoRT Center)
320 Sparkman Drive, Room 3062
Huntsville, AL 35805
Voice: 256.961.7504
Fax: 256.961.7788
Emails: Jonathan.Case-1 at nasa.gov / case.jonathan at ensco.com
--------------------------------------------------------------------------------------------------

"Whether the weather is cold, or whether the weather is hot, we'll weather
  the weather whether we like it or not!"



----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: verifying model against ACARS profiles
From: John Halley Gotway
Time: Mon Aug 15 16:11:46 2011

Jonathan,

Good question.  The Point-Stat tool will *not* interpolate the
observation values to the mandatory pressure levels.  If you specify
in the configuration file that you want to verify at 800mb, it will
only use observations whose pressure level is *exactly* 800mb.

According to your description, it sounds like you're dealing with
observations that do not fall exactly on the mandatory pressure
levels.  In this case, you may want to consider verifying over a range
of pressure levels.  You have two options for how you'd like to do
this:

(1) Interpolate the forecast values vertically to the observation
pressure:
   fcst_field[] = [ "TMP/P775-825" ];
   obs_field[]  = [ "TMP/P775-825" ];

(2) Do not interpolate the forecast values vertically to the
observation pressure:
   fcst_field[] = [ "TMP/P800" ];
   obs_field[]  = [ "TMP/P775-825" ];

In the first setup, we're telling Point-Stat to verify a *range* of
forecast pressure levels against a range of observation values.
Suppose your forecast GRIB files contain temperature at 750, 800, and
850mb.  Point-Stat will read in all levels falling within the range
you specify, plus one above and one below.  In this example, it'll
read in all 3 temperature fields: 800 is in the range, 750 is the
one below, and 850 is the one above.  Then it'll sift through the
observations and only consider ones whose pressure level is between
775 and 825mb.  For each one it finds, it'll interpolate linearly in
the log of pressure from the temperature forecasts above and below it
to the observation pressure level.  The matched pair will then be the
observed value and the interpolated forecast value.
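
To make that vertical interpolation concrete, here's a minimal Python
sketch (for illustration only, not MET source code) of interpolating
linearly in the natural log of pressure; the pressure and temperature
values below are made up:

   import math

   def interp_log_p(p_obs, p1, t1, p2, t2):
       # Weight toward the (p2, t2) level based on log-pressure distance from p1.
       w = (math.log(p_obs) - math.log(p1)) / (math.log(p2) - math.log(p1))
       return t1 + w * (t2 - t1)

   # e.g. an ACARS report at 812mb bracketed by forecast levels at 850 and 800mb
   print(interp_log_p(812.0, 850.0, 285.0, 800.0, 282.0))   # ~282.7 K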

In the second setup, we're telling Point-Stat to verify a *single*
forecast pressure level - 800mb.  But we want to use all observations
falling in the range specified.  In this case, Point-Stat will
not interpolate to the observation pressure level (it'll still
interpolate horizontally to the observed lat/lon).  Instead, it'll
just use the forecast value and observed value, as-is - no vertical
interpolation.

Clear as mud?  I've attached a config file that you can substitute
into the MET test scripts:
   In METv3.0.1/scripts/test_point_stat.sh, use the attached version of
   PointStatConfig.
This demonstrates the difference nicely.
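
For reference, the output pasted below came from running that test
script.  Something like the following should work (exactly where the
script expects to find PointStatConfig is an assumption here; adjust
the copy step to wherever your copy of test_point_stat.sh reads its
config from):

   cd METv3.0.1/scripts
   cp /path/to/attached/PointStatConfig .
   ./test_point_stat.sh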

In this config, I verify temperature both ways described above.  In
both cases, I get 149 matched pairs.  But if you take a look at the
matched pair output lines, you'll notice that while the observed
values are the same in both cases, the forecast values differ
slightly.  That's the difference between interpolating and not
interpolating.  I'll also cut-and-paste at the end of this message what
Point-Stat writes to the screen for this using -v 3.

As for which way is better, it's up to you to decide.  If you're
verifying a large range of pressure levels, it's probably better to
interpolate.  For +/- 10mb, it may be fine not to.

Just let us know if you need more clarification.

Thanks,
John Halley Gotway
met_help at ucar.edu


[johnhg at rambler]% ./test_point_stat.sh
GSL_RNG_TYPE=mt19937
GSL_RNG_SEED=18446744073216018766
Forecast File:
/d1/johnhg/MET/MET_releases/METv3.0.1/data/sample_fcst/2007033000/nam.t00z.awip1236.tm00.20070330.grb
Climatology File: none
Configuration File: PointStatConfig
Observation File:
/d1/johnhg/MET/MET_releases/METv3.0.1/out/pb2nc/sample_pb.nc

--------------------------------------------------------------------------------

Reading records for TMP/P825-775.
GRIB Record 22: Init = 20070330_000000, Valid = 20070331_120000, Accum = 000000, Min = 250, Max = 294
GRIB Record 21: Init = 20070330_000000, Valid = 20070331_120000, Accum = 000000, Min = 248, Max = 289
GRIB Record 23: Init = 20070330_000000, Valid = 20070331_120000, Accum = 000000, Min = 251, Max = 298
For TMP/P825-775 found 3 forecast levels and 0 climatology levels.

--------------------------------------------------------------------------------

Reading records for TMP/P800.
GRIB Record 22: Init = 20070330_000000, Valid = 20070331_120000, Accum = 000000, Min = 250, Max = 294
For TMP/P800 found 1 forecast levels and 0 climatology levels.

--------------------------------------------------------------------------------

Searching 87363 observations from 9394 PrepBufr messages.

--------------------------------------------------------------------------------

Processing TMP/P825-775 versus TMP/P825-775, for observation type
ADPUPA, over region FULL, for interpolation method BILIN(4), using 149
pairs.
Number of matched pairs  = 149
Observations processed   = 87363
Rejected: GRIB code      = 77315
Rejected: valid time     = 0
Rejected: bad obs value  = 0
Rejected: off the grid   = 5
Rejected: level mismatch = 9800
Rejected: message type   = 94
Rejected: masking region = 0
Rejected: bad fcst value = 0

--------------------------------------------------------------------------------

Processing TMP/P800 versus TMP/P825-775, for observation type ADPUPA,
over region FULL, for interpolation method BILIN(4), using 149 pairs.
Number of matched pairs  = 149
Observations processed   = 87363
Rejected: GRIB code      = 77315
Rejected: valid time     = 0
Rejected: bad obs value  = 0
Rejected: off the grid   = 5
Rejected: level mismatch = 9800
Rejected: message type   = 94
Rejected: masking region = 0
Rejected: bad fcst value = 0

--------------------------------------------------------------------------------

Output file: out/point_stat_360000L_20070331_120000V.stat
Output file: out/point_stat_360000L_20070331_120000V_mpr.txt






------------------------------------------------
Subject: verifying model against ACARS profiles
From: John Halley Gotway
Time: Mon Aug 15 16:11:46 2011

////////////////////////////////////////////////////////////////////////////////
//
// Default point_stat configuration file
//
////////////////////////////////////////////////////////////////////////////////

//
// Specify a name to designate the model being verified.  This name will be
// written to the second column of the ASCII output generated.
//
model = "WRF";

//
// Beginning and ending time offset values in seconds for observations
// to be used.  These time offsets are defined in reference to the
// forecast valid time, v.  Observations with a valid time falling in the
// window [v+beg_ds, v+end_ds] will be used.
// These selections are overridden by the command line arguments
// -obs_valid_beg and -obs_valid_end.
//
beg_ds = -5400;
end_ds =  5400;

//
// Specify a comma-separated list of fields to be verified.  The forecast and
// observation fields may be specified separately.  If the obs_field parameter
// is left blank, it will default to the contents of fcst_field.
//
// Each field is specified as a GRIB code or abbreviation followed by an
// accumulation or vertical level indicator for GRIB files or as a variable name
// followed by a list of dimensions for NetCDF files output from p_interp or MET.
//
// Specifying verification fields for GRIB files:
//    GC/ANNN for accumulation interval NNN
//    GC/ZNNN for vertical level NNN
//    GC/ZNNN-NNN for a range of vertical levels (MSL or AGL)
//    GC/PNNN for pressure level NNN in hPa
//    GC/PNNN-NNN for a range of pressure levels in hPa
//    GC/LNNN for a generic level type
//    GC/RNNN for a specific GRIB record number
//    Where GC is the number of or abbreviation for the grib code
//    to be verified.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
//
// Specifying verification fields for NetCDF files:
//    var_name(i,...,j,*,*) for a single field
//    var_name(i-j,*,*) for a range of fields
//    Where var_name is the name of the NetCDF variable,
//    and i,...,j specifies fixed dimension values,
//    and i-j specifies a range of values for a single dimension,
//    and *,* specifies the two dimensions for the gridded field.
//
//    NOTE: To verify winds as vectors rather than scalars,
//          specify UGRD (or 33) followed by VGRD (or 34) with the
//          same level values.
//
//    NOTE: To process a probability field, add "/PROB", such as "POP/Z0/PROB".
//
// e.g. fcst_field[] = [ "SPFH/P500", "TMP/P500" ]; for a GRIB input
// e.g. fcst_field[] = [ "QVAPOR(0,5,*,*)", "TT(0,5,*,*)" ]; for
NetCDF input
//
fcst_field[] = [ "TMP/P775-825", "TMP/P800"     ];
obs_field[]  = [ "TMP/P775-825", "TMP/P775-825" ];

//
// Specify a comma-separated list of groups of thresholds to be applied to the
// fields listed above.  Thresholds for the forecast and observation fields
// may be specified separately.  If the obs_thresh parameter is left blank,
// it will default to the contents of fcst_thresh.
//
// At least one threshold must be provided for each field listed above.  The
// lengths of the "fcst_field" and "fcst_thresh" arrays must match, as must
// lengths of the "obs_field" and "obs_thresh" arrays.  To apply multiple
// thresholds to a field, separate the threshold values with a space.
//
// Each threshold must be preceded by a two letter indicator for the type of
// thresholding to be performed:
//    'lt' for less than     'le' for less than or equal to
//    'eq' for equal to      'ne' for not equal to
//    'gt' for greater than  'ge' for greater than or equal to
//
// NOTE: Thresholds for probabilities must begin with 0.0, end with 1.0,
//       and be preceded by "ge".
//
// e.g. fcst_thresh[] = [ "gt80", "gt273" ];
//
fcst_thresh[] = [ "le273 gt273", "le273 gt273" ];
obs_thresh[]  = [];

//
// Specify a comma-separated list of thresholds to be used when computing
// VL1L2 and VAL1L2 partial sums for winds.  The thresholds are applied to the
// wind speed values derived from each U/V pair.  Only those U/V pairs which meet
// the wind speed threshold criteria are retained.  If the obs_wind_thresh
// parameter is left blank, it will default to the contents of fcst_wind_thresh.
//
// To apply multiple wind speed thresholds, separate the threshold values with a
// space.  Use "NA" to indicate that no wind speed threshold should be applied.
//
// Each threshold must be preceded by a two letter indicator for the type of
// thresholding to be performed:
//    'lt' for less than     'le' for less than or equal to
//    'eq' for equal to      'ne' for not equal to
//    'gt' for greater than  'ge' for greater than or equal to
//    'NA' for no threshold
//
// e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
//
fcst_wind_thresh[] = [ "NA" ];
obs_wind_thresh[]  = [];

//
// Specify a comma-separated list of PrepBufr message types with which
// to perform the verification.  Statistics will be computed separately
// for each message type specified.  At least one PrepBufr message type
// must be provided.
// List of valid message types:
//    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
//    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
//    SFCSHP SPSSMI SYNDAT VADWND
//    ANYAIR (= AIRCAR, AIRCFT)
//    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
//    ONLYSF (= ADPSFC, SFCSHP)
//
// http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
//
// e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
//
message_type[] = [ "ADPUPA" ];

//
// Specify a comma-separated list of grids to be used in masking the data over
// which to perform scoring.  An empty list indicates that no masking grid
// should be performed.  The standard NCEP grids are named "GNNN" where NNN
// indicates the three digit grid number.  Enter "FULL" to score over the
// entire domain.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
//
// e.g. mask_grid[] = [ "FULL" ];
//
mask_grid[] = [ "FULL" ];

//
// Specify a comma-separated list of masking regions to be applied.
// An empty list indicates that no additional masks should be used.
// The masking regions may be defined in one of 4 ways:
//
// (1) An ASCII file containing a lat/lon polygon.
//     Latitude in degrees north and longitude in degrees east.
//     By default, the first and last polygon points are connected.
//     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n points:
//          "poly_name lat1 lon1 lat2 lon2... latn lonn"
//
// (2) The NetCDF output of the gen_poly_mask tool.
//
// (3) A NetCDF data file, followed by the name of the NetCDF variable
//     to be used, and optionally, a threshold to be applied to the field.
//     e.g. "sample.nc var_name gt0.00"
//
// (4) A GRIB data file, followed by a description of the field
//     to be used, and optionally, a threshold to be applied to the field.
//     e.g. "sample.grb APCP/A3 gt0.00"
//
// Any NetCDF or GRIB file used must have the same grid dimensions as the
// data being verified.
//
// MET_BASE may be used in the path for the files above.
//
// e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
//                      "poly_mask.ncf",
//                      "sample.nc APCP",
//                      "sample.grb HGT/Z0 gt100.0" ];
//
mask_poly[] = [];

//
// Specify the name of an ASCII file containing a space-separated list of
// station ID's at which to perform verification.  Each station ID specified
// is treated as an individual masking region.
//
// An empty list file name indicates that no station ID masks should be used.
//
// MET_BASE may be used in the path for the station ID mask file name.
//
// e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
//
mask_sid = "";

//
// Specify a comma-separated list of values for alpha to be used when computing
// confidence intervals.  Values of alpha must be between 0 and 1.
//
// e.g. ci_alpha[] = [ 0.05, 0.10 ];
//
ci_alpha[] = [ 0.05 ];

//
// Specify the method to be used for computing bootstrap confidence intervals.
// The value for this is interpreted as follows:
//    (0) Use the BCa interval method (computationally intensive)
//    (1) Use the percentile interval method
//
boot_interval = 1;

//
// Specify a proportion between 0 and 1 to define the replicate sample size
// to be used when computing percentile intervals.  The replicate sample
// size is set to boot_rep_prop * n, where n is the number of raw data points.
//
// e.g boot_rep_prop = 0.80;
//
boot_rep_prop = 1.0;

//
// Specify the number of times each set of matched pair data should be
// resampled when computing bootstrap confidence intervals.  A value of
// zero disables the computation of bootstrap confidence intervals.
//
// e.g. n_boot_rep = 1000;
//
n_boot_rep = 1000;

//
// Specify the name of the random number generator to be used.  See the MET
// Users Guide for a list of possible random number generators.
//
boot_rng = "mt19937";

//
// Specify the seed value to be used when computing bootstrap confidence
// intervals.  If left unspecified, the seed will change for each run and
// the computed bootstrap confidence intervals will not be reproducible.
//
boot_seed = "";

//
// Specify a comma-separated list of interpolation method(s) to be used
// for comparing the forecast grid to the observation points.  String values
// are interpreted as follows:
//    MIN     = Minimum in the neighborhood
//    MAX     = Maximum in the neighborhood
//    MEDIAN  = Median in the neighborhood
//    UW_MEAN = Unweighted mean in the neighborhood
//    DW_MEAN = Distance-weighted mean in the neighborhood
//    LS_FIT  = Least-squares fit in the neighborhood
//    BILIN   = Bilinear interpolation using the 4 closest points
//
// In all cases, vertical interpolation is performed in the natural log
// of pressure of the levels above and below the observation.
//
// e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
//
interp_method[] = [ "BILIN" ];

//
// Specify a comma-separated list of box widths to be used by the
// interpolation techniques listed above.  A value of 1 indicates that
// the nearest neighbor approach should be used.  For a value of n
// greater than 1, the n*n grid points closest to the observation define
// the neighborhood.
//
// e.g. interp_width = [ 1, 3, 5 ];
//
interp_width[] = [ 2 ];

//
// When interpolating, compute a ratio of the number of valid data points
// to the total number of points in the neighborhood.  If that ratio is
// less than this threshold, do not include the observation.  This
// threshold must be between 0 and 1.  Setting this threshold to 1 will
// require that each observation be surrounded by n*n valid forecast
// points.
//
// e.g. interp_thresh = 1.0;
//
interp_thresh = 1.0;

//
// Specify flags to indicate the type of data to be output:
//    (1) STAT and FHO Text Files, Forecast, Hit, Observation Rates:
//           Total (TOTAL),
//           Forecast Rate (F_RATE),
//           Hit Rate (H_RATE),
//           Observation Rate (O_RATE)
//
//    (2) STAT and CTC Text Files, Contingency Table Counts:
//           Total (TOTAL),
//           Forecast Yes and Observation Yes Count (FY_OY),
//           Forecast Yes and Observation No Count (FY_ON),
//           Forecast No and Observation Yes Count (FN_OY),
//           Forecast No and Observation No Count (FN_ON)
//
//    (3) STAT and CTS Text Files, Contingency Table Scores:
//           Total (TOTAL),
//           Base Rate (BASER),
//           Forecast Mean (FMEAN),
//           Accuracy (ACC),
//           Frequency Bias (FBIAS),
//           Probability of Detecting Yes (PODY),
//           Probability of Detecting No (PODN),
//           Probability of False Detection (POFD),
//           False Alarm Ratio (FAR),
//           Critical Success Index (CSI),
//           Gilbert Skill Score (GSS),
//           Hanssen and Kuipers Discriminant (HK),
//           Heidke Skill Score (HSS),
//           Odds Ratio (ODDS),
//           NOTE: All statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//    (4) STAT and MCTC Text Files, NxN Multi-Category Contingency Table Counts:
//           Total (TOTAL),
//           Number of Categories (N_CAT),
//           Contingency Table Count columns repeated N_CAT*N_CAT times
//
//    (5) STAT and MCTS Text Files, NxN Multi-Category Contingency Table Scores:
//           Total (TOTAL),
//           Number of Categories (N_CAT),
//           Accuracy (ACC),
//           Hanssen and Kuipers Discriminant (HK),
//           Heidke Skill Score (HSS),
//           Gerrity Score (GER),
//           NOTE: All statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//    (6) STAT and CNT Text Files, Statistics of Continuous Variables:
//           Total (TOTAL),
//           Forecast Mean (FBAR),
//           Forecast Standard Deviation (FSTDEV),
//           Observation Mean (OBAR),
//           Observation Standard Deviation (OSTDEV),
//           Pearson's Correlation Coefficient (PR_CORR),
//           Spearman's Rank Correlation Coefficient (SP_CORR),
//           Kendall Tau Rank Correlation Coefficient (KT_CORR),
//           Number of ranks compared (RANKS),
//           Number of tied ranks in the forecast field (FRANK_TIES),
//           Number of tied ranks in the observation field (ORANK_TIES),
//           Mean Error (ME),
//           Standard Deviation of the Error (ESTDEV),
//           Multiplicative Bias (MBIAS = FBAR/OBAR),
//           Mean Absolute Error (MAE),
//           Mean Squared Error (MSE),
//           Bias-Corrected Mean Squared Error (BCMSE),
//           Root Mean Squared Error (RMSE),
//           Percentiles of the Error (E10, E25, E50, E75, E90)
//           NOTE: Most statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
//           Total (TOTAL),
//           Forecast Mean (FBAR),
//              = mean(f)
//           Observation Mean (OBAR),
//              = mean(o)
//           Forecast*Observation Product Mean (FOBAR),
//              = mean(f*o)
//           Forecast Squared Mean (FFBAR),
//              = mean(f^2)
//           Observation Squared Mean (OOBAR)
//              = mean(o^2)
//
//    (8) STAT and SAL1L2 Text Files, Scalar Anomaly Partial Sums:
//           Total (TOTAL),
//           Forecast Anomaly Mean (FABAR),
//              = mean(f-c)
//           Observation Anomaly Mean (OABAR),
//              = mean(o-c)
//           Product of Forecast and Observation Anomalies Mean (FOABAR),
//              = mean((f-c)*(o-c))
//           Forecast Anomaly Squared Mean (FFABAR),
//              = mean((f-c)^2)
//           Observation Anomaly Squared Mean (OOABAR)
//              = mean((o-c)^2)
//
//    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
//           Total (TOTAL),
//           U-Forecast Mean (UFBAR),
//              = mean(uf)
//           V-Forecast Mean (VFBAR),
//              = mean(vf)
//           U-Observation Mean (UOBAR),
//              = mean(uo)
//           V-Observation Mean (VOBAR),
//              = mean(vo)
//           U-Product Plus V-Product (UVFOBAR),
//              = mean(uf*uo+vf*vo)
//           U-Forecast Squared Plus V-Forecast Squared (UVFFBAR),
//              = mean(uf^2+vf^2)
//           U-Observation Squared Plus V-Observation Squared (UVOOBAR)
//              = mean(uo^2+vo^2)
//
//   (10) STAT and VAL1L2 Text Files, Vector Anomaly Partial Sums:
//           U-Forecast Anomaly Mean (UFABAR),
//              = mean(uf-uc)
//           V-Forecast Anomaly Mean (VFABAR),
//              = mean(vf-vc)
//           U-Observation Anomaly Mean (UOABAR),
//              = mean(uo-uc)
//           V-Observation Anomaly Mean (VOABAR),
//              = mean(vo-vc)
//           U-Anomaly Product Plus V-Anomaly Product (UVFOABAR),
//              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
//           U-Forecast Anomaly Squared Plus V-Forecast Anomaly Squared (UVFFABAR),
//              = mean((uf-uc)^2+(vf-vc)^2)
//           U-Observation Anomaly Squared Plus V-Observation Anomaly Squared (UVOOABAR)
//              = mean((uo-uc)^2+(vo-vc)^2)
//
//   (11) STAT and PCT Text Files, Nx2 Probability Contingency Table Counts:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Row Observation Yes Count (OY_i),
//           Row Observation No Count (ON_i),
//           NOTE: Previous 3 columns repeated for each row in the table.
//           Last Probability Threshold Value (THRESH_n)
//
//   (12) STAT and PSTD Text Files, Nx2 Probability Contingency Table Scores:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Base Rate (BASER) with confidence interval limits,
//           Reliability (RELIABILITY),
//           Resolution (RESOLUTION),
//           Uncertainty (UNCERTAINTY),
//           Area Under the ROC Curve (ROC_AUC),
//           Brier Score (BRIER) with confidence interval limits,
//           Probability Threshold Value (THRESH_i)
//           NOTE: Previous column repeated for each probability threshold.
//
//   (13) STAT and PJC Text Files, Joint/Continuous Statistics of
//                                 Probabilistic Variables:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Observation Yes Count Divided by Total (OY_TP_i),
//           Observation No Count Divided by Total (ON_TP_i),
//           Calibration (CALIBRATION_i),
//           Refinement (REFINEMENT_i),
//           Likelihood (LIKELIHOOD_i),
//           Base Rate (BASER_i),
//           NOTE: Previous 7 columns repeated for each row in the table.
//           Last Probability Threshold Value (THRESH_n)
//
//   (14) STAT and PRC Text Files, ROC Curve Points for
//                                 Probabilistic Variables:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Probability of Detecting Yes (PODY_i),
//           Probability of False Detection (POFD_i),
//           NOTE: Previous 3 columns repeated for each row in the table.
//           Last Probability Threshold Value (THRESH_n)
//
//   (15) STAT and MPR Text Files, Matched Pair Data:
//           Total (TOTAL),
//           Index (INDEX),
//           Observation Station ID (OBS_SID),
//           Observation Latitude (OBS_LAT),
//           Observation Longitude (OBS_LON),
//           Observation Level (OBS_LVL),
//           Observation Elevation (OBS_ELV),
//           Forecast Value (FCST),
//           Observation Value (OBS),
//           Climatological Value (CLIMO)
//
//   In the expressions above, f are forecast values, o are observed values,
//   and c are climatological values.
//
// Values for these flags are interpreted as follows:
//    (0) Do not generate output of this type
//    (1) Write output to a STAT file
//    (2) Write output to a STAT file and a text file
//
output_flag[] = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2 ];

//
// Flag to indicate whether Kendall's Tau and Spearman's Rank Correlation
// Coefficients should be computed.  Computing them over large datasets is
// computationally intensive and slows down the runtime execution significantly.
//    (0) Do not compute these correlation coefficients
//    (1) Compute these correlation coefficients
//
rank_corr_flag = 0;

//
// Specify the GRIB Table 2 parameter table version number to be used
// for interpreting GRIB codes.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
//
grib_ptv = 2;

//
// Directory where temporary files should be written.
//
tmp_dir = "/tmp";

//
// Prefix to be used for the output file names.
//
output_prefix = "";

//
// Indicate a version number for the contents of this configuration file.
// The value should generally not be modified.
//
version = "V3.0.1";

------------------------------------------------
Subject: RE: [rt.rap.ucar.edu #48931] verifying model against ACARS profiles
From: Case, Jonathan[ENSCO INC]
Time: Tue Aug 16 07:29:51 2011

Thanks for the very thorough response!  This gives me a clear path
forward on how to do upper-air verification in MET.
Jonathan
------------------------------------------------

