[Met_help] [rt.rap.ucar.edu #51928] History for Re: METV3 Issue

Paul Oldenburg via RT met_help at ucar.edu
Wed Dec 14 10:25:19 MST 2011


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

Ok,

The data should be there now, with out.nc being the obs and wrf.nc
being the forecast.

- Tim

On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino <tmelino at meso.com> wrote:

> Hi,
> I have recently been doing some work with WRF and am trying to add the
> Model Evaluation Tools to our standard model verification system.  I
> started the process by running the pressure interpolation program on a
> single wrfout file, which appeared to finish correctly. I have attached an
> ncdump of the file header to this email; it is called
> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I downloaded a single PrepBUFR
> file from an NCEP repository for the time centered on the forecast period
> and ran PB2NC, which also appeared to finish correctly and output a
> single NetCDF file; the header information is also attached (PB2NC.txt).
> Then I attempted to run the point_stat utility on these two files, but the
> program errors out telling me that there are more forecast fields than
> observation fields: "ERROR: PointStatConfInfo::process_config() -> The
> number fcst_thresh entries provided must match the number of fields
> provided in fcst_field.". I ran the following command from the terminal to
> run point_stat: "bin/point_stat wrfout_d02_2011-12-07_00:00:00_PLEV out.nc
> PointStatConfig".  I am not sure what the problem is; I have read the
> documentation and it appears to be set up correctly, but I am not completely
> sure, as I have never used this software before.  What should these config
> fields look like when using two NetCDF files (one forecast, one obs) to
> verify 10-meter winds? I appreciate your help!
>
> Also, I ran the test-all scripts after compilation, and the code
> completed successfully with no errors.
>
>
>
> Thanks,
> Tim
>


----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: Re: [rt.rap.ucar.edu #51928] Re: METV3 Issue
From: Paul Oldenburg
Time: Thu Dec 08 13:42:56 2011

Tim,

Can you please put the input PrepBUFR file that you pass to pb2nc on
the FTP site?  When I look at the contents of
out.nc, it does not appear that there are any observations in that
file.  I would like to run pb2nc myself to see what
is going on.

I made an incorrect assumption in my earlier emails that you were
trying to verify model data in GRIB format.  Now that
I have your data in hand, I see that it is p_interp output, as you
mentioned in your initial email.  MET support for
p_interp is not as robust as for GRIB.  In particular, grid-relative
wind directions in p_interp data files should not
be compared to lat-long relative wind directions in the PrepBUFR obs.
Would it be possible for you to run your WRF
output through the Unified Post Processor (UPP -
http://www.dtcenter.org/wrf-nmm/users/overview/upp_overview.php)
instead of or in addition to p_interp?  That would simplify MET
verification tasks.  Please let me know if you have any
questions.

Thanks,

Paul




------------------------------------------------
Subject: Re: METV3 Issue
From: Tim Melino
Time: Thu Dec 08 14:28:21 2011

I just put the file on the server that I have been using. As for
running the UPP software, that is not really possible at the moment;
I do not have any of that software installed or configured, as I have
never had a reason to use it.

- Tim


------------------------------------------------
Subject: Re: METV3 Issue
From: Paul Oldenburg
Time: Thu Dec 08 15:10:55 2011

Tim,

I ran the following pb2nc and point_stat commands using the attached
config files to generate point verification data with your PrepBUFR obs
and p_interp model data.  Note that MET_BASE is set to the base folder
of an instance of METv3.0.1.  I pulled both config files, with slight
modifications, from $MET_BASE/scripts/config.

$MET_BASE/bin/pb2nc ndas.t12z.prepbufr.tm12.nr ndas.t12z.nc PB2NCConfig_G212 -v 99

$MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc PointStatConfig -outdir . -v 99
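
Here, MET_BASE is just a shell variable pointing at the top of the MET
installation.  A minimal sketch, assuming a hypothetical install path:

export MET_BASE=/usr/local/METv3.0.1   # adjust to wherever METv3.0.1 was built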

In PointStatConfig, you will see the following settings.  The fcst_field
format reflects the fact that fields in wrf.nc are four-dimensional, with
the last two dimensions being the spatial (x,y) dimensions.  The obs_field
entry specifies surface temperature using a GRIB-style format, because
pb2nc indexes fields in its output by GRIB code.  You should follow a
similar paradigm to verify additional fields beyond temperature.

fcst_field[] = [ "TT(0,0,*,*)" ];
obs_field[]  = [ "TMP/Z2" ];

fcst_thresh[] = [ "le273" ];
obs_thresh[]  = [];
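
Since the original question was about 10-meter winds, a similar pair of
entries might look like the sketch below.  This assumes the p_interp (or
original wrfout) file actually contains the 10-m wind components under
the names U10 and V10 as (Time, y, x) fields, so check the ncdump header
to confirm the variable names and dimensions, and keep in mind the
grid-relative versus earth-relative wind caveat mentioned earlier.  The
ge2.5 thresholds are only placeholders:

fcst_field[] = [ "U10(0,*,*)", "V10(0,*,*)" ];
obs_field[]  = [ "UGRD/Z10", "VGRD/Z10" ];

fcst_thresh[] = [ "ge2.5", "ge2.5" ];
obs_thresh[]  = [];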

If you have any questions or problems, please let me know.

Good luck,

Paul





------------------------------------------------
Subject: Re: METV3 Issue
From: Paul Oldenburg
Time: Thu Dec 08 15:10:55 2011

////////////////////////////////////////////////////////////////////////////////
//
// Default pb2nc configuration file
//
////////////////////////////////////////////////////////////////////////////////

//
// Stratify the observation data in the PrepBufr files in the
// following ways:
//  (1) by message type: supply a list of PrepBufr message types
//      to retain (e.g. AIRCFT)
//  (2) by station id: supply a list of observation stations to retain
//  (3) by valid time: supply starting and ending times in form
//      YYYY-MM-DD HH:MM:SS UTC
//  (4) by location: supply either an NCEP masking grid, a masking
//      lat/lon polygon, or a file containing a lat/lon polygon mask
//  (5) by elevation: supply min/max elevation values
//  (6) by report type (typ): supply a list of report types to retain
//  (7) by instrument type (itp): supply a list of instrument types to
//      retain
//  (8) by vertical level: supply min/max vertical levels
//  (9) by variable type: supply a list of variable types to retain
//      P, Q, T, Z, U, V
// (10) by quality mark: supply a quality mark threshold
// (11) Flag to retain values for all quality marks, or just the first
//      quality mark (highest)
// (12) by data level category: supply a list of category types to
//      retain.
//
//      0 - Surface level (mass reports only)
//      1 - Mandatory level (upper-air profile reports)
//      2 - Significant temperature level (upper-air profile reports)
//      2 - Significant temperature and winds-by-pressure level
//          (future combined mass and wind upper-air reports)
//      3 - Winds-by-pressure level (upper-air profile reports)
//      4 - Winds-by-height level (upper-air profile reports)
//      5 - Tropopause level (upper-air profile reports)
//      6 - Reports on a single level
//          (e.g., aircraft, satellite-wind, surface wind,
//           precipitable water retrievals, etc.)
//      7 - Auxiliary levels generated via interpolation from spanning
//          levels (upper-air profile reports)
//

//
// Specify a comma-separated list of PrepBufr message type strings to
// retain.
// An empty list indicates that all should be retained.
// List of valid message types:
//    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
//    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
//    SFCSHP SPSSMI SYNDAT VADWND
//    ANYAIR (= AIRCAR, AIRCFT)
//    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
//    ONLYSF (= ADPSFC, SFCSHP)
//
// http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
//
// e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
//
message_type[] = [];
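
//
// For example, when the end goal is verifying surface fields such as
// 10-meter winds, one possible (optional) stratification is to keep only
// surface land and marine reports:
//
// e.g. message_type[] = [ "ONLYSF" ];
//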

//
// Specify a comma-separated list of station ID strings to retain.
// An empty list indicates that all should be retained.
//
// e.g. station_id[] = [ "KDEN" ];
//
station_id[] = [];

//
// Beginning and ending time offset values in seconds for observations
// to retain.  The valid time window for retaining observations is
// defined in reference to the observation time.  So observations with
// a valid time falling in the window [obs_time+beg_ds, obs_time+end_ds]
// will be retained.
//
beg_ds = -1800;
end_ds =  1800;
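
//
// e.g. with the values above, only observations whose valid time falls
// within 30 minutes (1800 s) on either side of the reference time
// described above are retained.
//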

//
// Specify the name of a single grid to be used in masking the data.
// An empty string indicates that no grid should be used.  The standard
// NCEP grids are named "GNNN" where NNN indicates the three digit
// grid number.
//
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
//
// e.g. mask_grid = "G212";
//
mask_grid = "G212";

//
// Specify a single ASCII file containing a lat/lon polygon.
// Latitude in degrees north and longitude in degrees east.
// By default, the first and last polygon points are connected.
//
// The lat/lon polygon file should contain a name for the polygon
// followed by a space-separated list of lat/lon points:
//    "name lat1 lon1 lat2 lon2... latn lonn"
//
// MET_BASE may be used in the path for the lat/lon polygon file.
//
// e.g. mask_poly = "MET_BASE/data/poly/EAST.poly";
//
mask_poly = "";

//
// Beginning and ending elevation values in meters for observations
// to retain.
//
beg_elev = -1000;
end_elev = 100000;

//
// Specify a comma-separated list of PrepBufr report type values to
// retain.
// An empty list indicates that all should be retained.
//
//
// http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_4.htm
//
// e.g. pb_report_type[] = [ 120, 133 ];
//
pb_report_type[] = [];

//
// Specify a comma-separated list of input report type values to
// retain.
// An empty list indicates that all should be retained.
//
//
// http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_6.htm
//
// e.g. in_report_type[] = [ 11, 22, 23 ];
//
in_report_type[] = [];

//
// Specify a comma-separated list of instrument type values to retain.
// An empty list indicates that all should be retained.
//
// e.g. instrument_type[] = [ 52, 87 ];
//
instrument_type[] = [];

//
// Beginning and ending vertical levels to retain.
//
beg_level = 1;
end_level = 255;

//
// Specify a comma-separated list of strings containing grib codes or
// corresponding grib code abbreviations to retain or be derived from
// the available observations.
//
// Grib Codes to be RETAINED:
//    SPFH or 51 for Specific Humidity in kg/kg
//    TMP  or 11 for Temperature in K
//    HGT  or 7  for Height in meters
//    UGRD or 33 for the East-West component of the wind in m/s
//    VGRD or 34 for the North-South component of the wind in m/s
//
// Grib Codes to be DERIVED:
//    DPT   or 17 for Dewpoint Temperature in K
//    WIND  or 32 for Wind Speed in m/s
//    RH    or 52 for Relative Humidity in %
//    MIXR  or 53 for Humidity Mixing Ratio in kg/kg
//    PRMSL or  2 for Pressure Reduced to Mean Sea Level in Pa
//
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
//
// e.g. obs_grib_code[] = [ "TMP", "UGRD", "VGRD", "WIND" ];
//
obs_grib_code[] = [ "SPFH", "TMP",  "HGT",  "UGRD", "VGRD",
                    "DPT",  "WIND", "RH",   "MIXR" ];

//
// Quality mark threshold to indicate which observations to retain.
// Observations with a quality mark equal to or LESS THAN this threshold
// will be retained, while observations with a quality mark GREATER THAN
// this threshold will be discarded.
//
// http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_7.htm
//
quality_mark_thresh = 2;
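
//
// e.g. with the threshold above, observations carrying quality marks of
// 0, 1, or 2 are retained, and anything with a larger (poorer) quality
// mark is discarded.
//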

//
// Flag to indicate whether observations should be drawn from the top
// of the event stack (most quality controlled) or the bottom of the
// event stack (most raw).  A value of 1 indicates that the top of the
// event stack should be used while a value of zero indicates that the
// bottom should be used.
//
event_stack_flag = 1;

//
// Specify a comma-separated list of data level category values to
// retain,
// where a value of:
//    0 = Surface level (mass reports only)
//    1 = Mandatory level (upper-air profile reports)
//    2 = Significant temperature level (upper-air profile reports)
//    2 = Significant temperature and winds-by-pressure level
//        (future combined mass and wind upper-air reports)
//    3 = Winds-by-pressure level (upper-air profile reports)
//    4 = Winds-by-height level (upper-air profile reports)
//    5 = Tropopause level (upper-air profile reports)
//    6 = Reports on a single level
//        (e.g., aircraft, satellite-wind, surface wind,
//         precipitable water retrievals, etc.)
//    7 = Auxiliary levels generated via interpolation from spanning
//        levels
//        (upper-air profile reports)
// An empty list indicates that all should be retained.
//
//
// http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
//
// e.g. level_category[] = [ 0, 1 ];
//
level_category[] = [];

//
// Directory where temp files should be written by the PB2NC tool
//
tmp_dir = "/tmp";

//
// Indicate a version number for the contents of this configuration
// file.
// The value should generally not be modified.
//
version = "V3.0";

------------------------------------------------
Subject: Re: METV3 Issue
From: Paul Oldenburg
Time: Thu Dec 08 15:10:55 2011

////////////////////////////////////////////////////////////////////////////////
//
// Default point_stat configuration file
//
////////////////////////////////////////////////////////////////////////////////

//
// Specify a name to designate the model being verified.  This name will
// be written to the second column of the ASCII output generated.
//
model = "WRF";

//
// Beginning and ending time offset values in seconds for observations
// to be used.  These time offsets are defined in reference to the
// forecast valid time, v.  Observations with a valid time falling in
// the window [v+beg_ds, v+end_ds] will be used.
// These selections are overridden by the command line arguments
// -obs_valid_beg and -obs_valid_end.
//
beg_ds = -1800;
end_ds =  1800;

//
// Specify a comma-separated list of fields to be verified.  The forecast
// and observation fields may be specified separately.  If the obs_field
// parameter is left blank, it will default to the contents of fcst_field.
//
// Each field is specified as a GRIB code or abbreviation followed by an
// accumulation or vertical level indicator for GRIB files, or as a
// variable name followed by a list of dimensions for NetCDF files output
// from p_interp or MET.
//
// Specifying verification fields for GRIB files:
//    GC/ANNN for accumulation interval NNN
//    GC/ZNNN for vertical level NNN
//    GC/ZNNN-NNN for a range of vertical levels (MSL or AGL)
//    GC/PNNN for pressure level NNN in hPa
//    GC/PNNN-NNN for a range of pressure levels in hPa
//    GC/LNNN for a generic level type
//    GC/RNNN for a specific GRIB record number
//    Where GC is the number of or abbreviation for the grib code
//    to be verified.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
//
// Specifying verification fields for NetCDF files:
//    var_name(i,...,j,*,*) for a single field
//    var_name(i-j,*,*) for a range of fields
//    Where var_name is the name of the NetCDF variable,
//    and i,...,j specifies fixed dimension values,
//    and i-j specifies a range of values for a single dimension,
//    and *,* specifies the two dimensions for the gridded field.
//
//    NOTE: To verify winds as vectors rather than scalars,
//          specify UGRD (or 33) followed by VGRD (or 34) with the
//          same level values.
//
//    NOTE: To process a probability field, add "/PROB", such as
//          "POP/Z0/PROB".
//
// e.g. fcst_field[] = [ "SPFH/P500", "TMP/P500" ]; for a GRIB input
// e.g. fcst_field[] = [ "QVAPOR(0,5,*,*)", "TT(0,5,*,*)" ]; for NetCDF input
//

fcst_field[] = [ "TT(0,0,*,*)" ];
obs_field[]  = [ "TMP/Z2" ];

//
// Specify a comma-separated list of groups of thresholds to be applied
// to the fields listed above.  Thresholds for the forecast and observation
// fields may be specified separately.  If the obs_thresh parameter is left
// blank, it will default to the contents of fcst_thresh.
//
// At least one threshold must be provided for each field listed above.
// The lengths of the "fcst_field" and "fcst_thresh" arrays must match, as
// must the lengths of the "obs_field" and "obs_thresh" arrays.  To apply
// multiple thresholds to a field, separate the threshold values with a space.
//
// Each threshold must be preceded by a two letter indicator for the
// type of thresholding to be performed:
//    'lt' for less than     'le' for less than or equal to
//    'eq' for equal to      'ne' for not equal to
//    'gt' for greater than  'ge' for greater than or equal to
//
// NOTE: Thresholds for probabilities must begin with 0.0, end with 1.0,
//       and be preceded by "ge".
//
// e.g. fcst_thresh[] = [ "gt80", "gt273" ];
//
fcst_thresh[] = [ "le273" ];
obs_thresh[]  = [];

//
// Specify a comma-separated list of thresholds to be used when computing
// VL1L2 and VAL1L2 partial sums for winds.  The thresholds are applied to
// the wind speed values derived from each U/V pair.  Only those U/V pairs
// which meet the wind speed threshold criteria are retained.  If the
// obs_wind_thresh parameter is left blank, it will default to the
// contents of fcst_wind_thresh.
//
// To apply multiple wind speed thresholds, separate the threshold values
// with a space.  Use "NA" to indicate that no wind speed threshold should
// be applied.
//
// Each threshold must be preceded by a two letter indicator for the
// type of thresholding to be performed:
//    'lt' for less than     'le' for less than or equal to
//    'eq' for equal to      'ne' for not equal to
//    'gt' for greater than  'ge' for greater than or equal to
//    'NA' for no threshold
//
// e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
//
fcst_wind_thresh[] = [ "NA" ];
obs_wind_thresh[]  = [];

//
// Specify a comma-separated list of PrepBufr message types with which
// to perform the verification.  Statistics will be computed separately
// for each message type specified.  At least one PrepBufr message type
// must be provided.
// List of valid message types:
//    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
//    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
//    SFCSHP SPSSMI SYNDAT VADWND
//    ANYAIR (= AIRCAR, AIRCFT)
//    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
//    ONLYSF (= ADPSFC, SFCSHP)
//
// http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
//
// e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
//
message_type[] = [ "ADPSFC" ];

//
// Specify a comma-separated list of grids to be used in masking the data
// over which to perform scoring.  An empty list indicates that no masking
// grid should be used.  The standard NCEP grids are named "GNNN" where
// NNN indicates the three digit grid number.  Enter "FULL" to score over
// the entire domain.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
//
// e.g. mask_grid[] = [ "FULL" ];
//
mask_grid[] = [ "FULL" ];

//
// Specify a comma-separated list of masking regions to be applied.
// An empty list indicates that no additional masks should be used.
// The masking regions may be defined in one of 4 ways:
//
// (1) An ASCII file containing a lat/lon polygon.
//     Latitude in degrees north and longitude in degrees east.
//     By default, the first and last polygon points are connected.
//     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n points:
//          "poly_name lat1 lon1 lat2 lon2... latn lonn"
//
// (2) The NetCDF output of the gen_poly_mask tool.
//
// (3) A NetCDF data file, followed by the name of the NetCDF variable
//     to be used, and optionally, a threshold to be applied to the field.
//     e.g. "sample.nc var_name gt0.00"
//
// (4) A GRIB data file, followed by a description of the field
//     to be used, and optionally, a threshold to be applied to the field.
//     e.g. "sample.grb APCP/A3 gt0.00"
//
// Any NetCDF or GRIB file used must have the same grid dimensions as
// the data being verified.
//
// MET_BASE may be used in the path for the files above.
//
// e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
//                      "poly_mask.ncf",
//                      "sample.nc APCP",
//                      "sample.grb HGT/Z0 gt100.0" ];
//
mask_poly[] = [];

//
// Specify the name of an ASCII file containing a space-separated list
// of station IDs at which to perform verification.  Each station ID
// specified is treated as an individual masking region.
//
// An empty list file name indicates that no station ID masks should
// be used.
//
// MET_BASE may be used in the path for the station ID mask file name.
//
// e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
//
mask_sid = "";

//
// Specify a comma-separated list of values for alpha to be used when
// computing confidence intervals.  Values of alpha must be between 0 and 1.
//
// e.g. ci_alpha[] = [ 0.05, 0.10 ];
//
ci_alpha[] = [ 0.05 ];

//
// Specify the method to be used for computing bootstrap confidence
// intervals.
// The value for this is interpreted as follows:
//    (0) Use the BCa interval method (computationally intensive)
//    (1) Use the percentile interval method
//
boot_interval = 1;

//
// Specify a proportion between 0 and 1 to define the replicate sample
// size to be used when computing percentile intervals.  The replicate
// sample size is set to boot_rep_prop * n, where n is the number of raw
// data points.
//
// e.g boot_rep_prop = 0.80;
//
boot_rep_prop = 1.0;

//
// Specify the number of times each set of matched pair data should be
// resampled when computing bootstrap confidence intervals.  A value of
// zero disables the computation of bootstrap confidence intervals.
//
// e.g. n_boot_rep = 1000;
//
n_boot_rep = 1000;

//
// Specify the name of the random number generator to be used.  See the
// MET Users Guide for a list of possible random number generators.
//
boot_rng = "mt19937";

//
// Specify the seed value to be used when computing bootstrap confidence
// intervals.  If left unspecified, the seed will change for each run and
// the computed bootstrap confidence intervals will not be reproducible.
//
boot_seed = "";

//
// Specify a comma-separated list of interpolation method(s) to be used
// for comparing the forecast grid to the observation points.  String
// values are interpreted as follows:
//    MIN     = Minimum in the neighborhood
//    MAX     = Maximum in the neighborhood
//    MEDIAN  = Median in the neighborhood
//    UW_MEAN = Unweighted mean in the neighborhood
//    DW_MEAN = Distance-weighted mean in the neighborhood
//    LS_FIT  = Least-squares fit in the neighborhood
//    BILIN   = Bilinear interpolation using the 4 closest points
//
// In all cases, vertical interpolation is performed in the natural log
// of pressure of the levels above and below the observation.
//
// e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
//
interp_method[] = [ "MEDIAN", "DW_MEAN" ];

//
// Specify a comma-separated list of box widths to be used by the
// interpolation techniques listed above.  A value of 1 indicates that
// the nearest neighbor approach should be used.  For a value of n
// greater than 1, the n*n grid points closest to the observation define
// the neighborhood.
//
// e.g. interp_width[] = [ 1, 3, 5 ];
//
interp_width[] = [ 1, 3 ];

//
// When interpolating, compute a ratio of the number of valid data points
// to the total number of points in the neighborhood.  If that ratio is
// less than this threshold, do not include the observation.  This
// threshold must be between 0 and 1.  Setting this threshold to 1 will
// require that each observation be surrounded by n*n valid forecast
// points.
//
// e.g. interp_thresh = 1.0;
//
interp_thresh = 1.0;

//
// Specify flags to indicate the type of data to be output:
//    (1) STAT and FHO Text Files, Forecast, Hit, Observation Rates:
//           Total (TOTAL),
//           Forecast Rate (F_RATE),
//           Hit Rate (H_RATE),
//           Observation Rate (O_RATE)
//
//    (2) STAT and CTC Text Files, Contingency Table Counts:
//           Total (TOTAL),
//           Forecast Yes and Observation Yes Count (FY_OY),
//           Forecast Yes and Observation No Count (FY_ON),
//           Forecast No and Observation Yes Count (FN_OY),
//           Forecast No and Observation No Count (FN_ON)
//
//    (3) STAT and CTS Text Files, Contingency Table Scores:
//           Total (TOTAL),
//           Base Rate (BASER),
//           Forecast Mean (FMEAN),
//           Accuracy (ACC),
//           Frequency Bias (FBIAS),
//           Probability of Detecting Yes (PODY),
//           Probability of Detecting No (PODN),
//           Probability of False Detection (POFD),
//           False Alarm Ratio (FAR),
//           Critical Success Index (CSI),
//           Gilbert Skill Score (GSS),
//           Hanssen and Kuipers Discriminant (HK),
//           Heidke Skill Score (HSS),
//           Odds Ratio (ODDS),
//           NOTE: All statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//    (4) STAT and MCTC Text Files, NxN Multi-Category Contingency Table Counts:
//           Total (TOTAL),
//           Number of Categories (N_CAT),
//           Contingency Table Count columns repeated N_CAT*N_CAT times
//
//    (5) STAT and MCTS Text Files, NxN Multi-Category Contingency Table Scores:
//           Total (TOTAL),
//           Number of Categories (N_CAT),
//           Accuracy (ACC),
//           Hanssen and Kuipers Discriminant (HK),
//           Heidke Skill Score (HSS),
//           Gerrity Score (GER),
//           NOTE: All statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//    (6) STAT and CNT Text Files, Statistics of Continuous Variables:
//           Total (TOTAL),
//           Forecast Mean (FBAR),
//           Forecast Standard Deviation (FSTDEV),
//           Observation Mean (OBAR),
//           Observation Standard Deviation (OSTDEV),
//           Pearson's Correlation Coefficient (PR_CORR),
//           Spearman's Rank Correlation Coefficient (SP_CORR),
//           Kendall Tau Rank Correlation Coefficient (KT_CORR),
//           Number of ranks compared (RANKS),
//           Number of tied ranks in the forecast field (FRANK_TIES),
//           Number of tied ranks in the observation field (ORANK_TIES),
//           Mean Error (ME),
//           Standard Deviation of the Error (ESTDEV),
//           Multiplicative Bias (MBIAS = FBAR / OBAR),
//           Mean Absolute Error (MAE),
//           Mean Squared Error (MSE),
//           Bias-Corrected Mean Squared Error (BCMSE),
//           Root Mean Squared Error (RMSE),
//           Percentiles of the Error (E10, E25, E50, E75, E90)
//           NOTE: Most statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
//           Total (TOTAL),
//           Forecast Mean (FBAR),
//              = mean(f)
//           Observation Mean (OBAR),
//              = mean(o)
//           Forecast*Observation Product Mean (FOBAR),
//              = mean(f*o)
//           Forecast Squared Mean (FFBAR),
//              = mean(f^2)
//           Observation Squared Mean (OOBAR)
//              = mean(o^2)
//
//    (8) STAT and SAL1L2 Text Files, Scalar Anomaly Partial Sums:
//           Total (TOTAL),
//           Forecast Anomaly Mean (FABAR),
//              = mean(f-c)
//           Observation Anomaly Mean (OABAR),
//              = mean(o-c)
//           Product of Forecast and Observation Anomalies Mean (FOABAR),
//              = mean((f-c)*(o-c))
//           Forecast Anomaly Squared Mean (FFABAR),
//              = mean((f-c)^2)
//           Observation Anomaly Squared Mean (OOABAR)
//              = mean((o-c)^2)
//
//    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
//           Total (TOTAL),
//           U-Forecast Mean (UFBAR),
//              = mean(uf)
//           V-Forecast Mean (VFBAR),
//              = mean(vf)
//           U-Observation Mean (UOBAR),
//              = mean(uo)
//           V-Observation Mean (VOBAR),
//              = mean(vo)
//           U-Product Plus V-Product (UVFOBAR),
//              = mean(uf*uo+vf*vo)
//           U-Forecast Squared Plus V-Forecast Squared (UVFFBAR),
//              = mean(uf^2+vf^2)
//           U-Observation Squared Plus V-Observation Squared (UVOOBAR)
//              = mean(uo^2+vo^2)
//
//   (10) STAT and VAL1L2 Text Files, Vector Anomaly Partial Sums:
//           U-Forecast Anomaly Mean (UFABAR),
//              = mean(uf-uc)
//           V-Forecast Anomaly Mean (VFABAR),
//              = mean(vf-vc)
//           U-Observation Anomaly Mean (UOABAR),
//              = mean(uo-uc)
//           V-Observation Anomaly Mean (VOABAR),
//              = mean(vo-vc)
//           U-Anomaly Product Plus V-Anomaly Product (UVFOABAR),
//              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
//           U-Forecast Anomaly Squared Plus V-Forecast Anomaly Squared (UVFFABAR),
//              = mean((uf-uc)^2+(vf-vc)^2)
//           U-Observation Anomaly Squared Plus V-Observation Anomaly Squared (UVOOABAR)
//              = mean((uo-uc)^2+(vo-vc)^2)
//
//   (11) STAT and PCT Text Files, Nx2 Probability Contingency Table Counts:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Row Observation Yes Count (OY_i),
//           Row Observation No Count (ON_i),
//           NOTE: Previous 3 columns repeated for each row in the table.
//           Last Probability Threshold Value (THRESH_n)
//
//   (12) STAT and PSTD Text Files, Nx2 Probability Contingency Table Scores:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Base Rate (BASER) with confidence interval limits,
//           Reliability (RELIABILITY),
//           Resolution (RESOLUTION),
//           Uncertainty (UNCERTAINTY),
//           Area Under the ROC Curve (ROC_AUC),
//           Brier Score (BRIER) with confidence interval limits,
//           Probability Threshold Value (THRESH_i)
//           NOTE: Previous column repeated for each probability threshold.
//
//   (13) STAT and PJC Text Files, Joint/Continuous Statistics of
//                                 Probabilistic Variables:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Observation Yes Count Divided by Total (OY_TP_i),
//           Observation No Count Divided by Total (ON_TP_i),
//           Calibration (CALIBRATION_i),
//           Refinement (REFINEMENT_i),
//           Likelihood (LIKELIHOOD_i),
//           Base Rate (BASER_i),
//           NOTE: Previous 7 columns repeated for each row in the table.
//           Last Probability Threshold Value (THRESH_n)
//
//   (14) STAT and PRC Text Files, ROC Curve Points for
//                                 Probabilistic Variables:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Probability of Detecting Yes (PODY_i),
//           Probability of False Detection (POFD_i),
//           NOTE: Previous 3 columns repeated for each row in the table.
//           Last Probability Threshold Value (THRESH_n)
//
//   (15) STAT and MPR Text Files, Matched Pair Data:
//           Total (TOTAL),
//           Index (INDEX),
//           Observation Station ID (OBS_SID),
//           Observation Latitude (OBS_LAT),
//           Observation Longitude (OBS_LON),
//           Observation Level (OBS_LVL),
//           Observation Elevation (OBS_ELV),
//           Forecast Value (FCST),
//           Observation Value (OBS),
//           Climatological Value (CLIMO)
//
//   In the expressions above, f are forecast values, o are observed values,
//   and c are climatological values.
//
// Values for these flags are interpreted as follows:
//    (0) Do not generate output of this type
//    (1) Write output to a STAT file
//    (2) Write output to a STAT file and a text file
//
output_flag[] = [ 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1 ];
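
//
// e.g. to also write the optional text files for the line types enabled
// above, change each 1 to a 2:
//
// output_flag[] = [ 2, 2, 2, 0, 0, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2 ];
//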

//
// Flag to indicate whether Kendall's Tau and Spearman's Rank Correlation
// Coefficients should be computed.  Computing them over large datasets
// is computationally intensive and slows down the runtime execution
// significantly.
//    (0) Do not compute these correlation coefficients
//    (1) Compute these correlation coefficients
//
rank_corr_flag = 1;

//
// Specify the GRIB Table 2 parameter table version number to be used
// for interpreting GRIB codes.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
//
grib_ptv = 2;

//
// Directory where temporary files should be written.
//
tmp_dir = "/tmp";

//
// Prefix to be used for the output file names.
//
output_prefix = "";

//
// Indicate a version number for the contents of this configuration file.
// The value should generally not be modified.
//
version = "V3.0.1";


------------------------------------------------
Subject: Re: METV3 Issue
From: Tim Melino
Time: Thu Dec 08 16:02:22 2011

Paul,
Thanks for all your help!!!

- Tim

On Thu, Dec 8, 2011 at 5:10 PM, Paul Oldenburg via RT
<met_help at ucar.edu>wrote:

> Tim,
>
> I ran the following pb2nc and point_stat commands using the attached
> config files to generate point verification data
> with your PrepBUFR obs and p_interp model data.  Note that MET_BASE
is set
> to the base folder of an instance of
> METv3.0.1.  I pulled both config files, with slight modifications,
from
> $MET_BASE/scripts/config.
>
> $MET_BASE/bin/pb2nc ndas.t12z.prepbufr.tm12.nr
ndas.t12z.ncPB2NCConfig_G212 -v 99
>
> $MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc PointStatConfig -outdir
. -v
> 99
>
> In PointStatConfig, you will see the following settings.  The
fcst_field
> setting format is due to the fact that fields
> in wrf.nc are four dimensional, with the last two dimensions being
the
> spatial (x,y) dimensions.  The obs_field
> specifies surface temperature using a GRIB-style format, because
pb2nc
> indexes fields in its output by GRIB code.  You
> should follow a similar paradigm to verify additional fields beyond
> temperature.
>
> fcst_field[] = [ "TT(0,0,*,*)" ];
> obs_field[]  = [ "TMP/Z2" ];
>
> fcst_thresh[] = [ "le273" ];
> obs_thresh[]  = [];
>
> If you have any questions or problems, please let me know.
>
> Good luck,
>
> Paul
>
>
>
>
> On 12/08/2011 02:28 PM, Tim Melino via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
> >
> > I just put the file on the server that I have been using. As far
as
> running
> > the UPP software, that is not really possible at the moment. I do
not
> have
> > any of that software installed or configured as I have never had a
reason
> > to use it .
> >
> > - Tim
> >
> > On Thu, Dec 8, 2011 at 3:42 PM, Paul Oldenburg via RT
<met_help at ucar.edu
> >wrote:
> >
> >> Tim,
> >>
> >> Can you please put the input PrepBUFR file that you pass to pb2nc
on the
> >> FTP site?  When I look at the contents of
> >> out.nc, it does not appear that there are any observations in
that
> file.
> >>  I would like to run pb2nc myself to see what
> >> is going on.
> >>
> >> I made an incorrect assumption in my earlier emails that you were
trying
> >> to verify model data in GRIB format.  Now that
> >> I have your data in hand, I see that it is p_interp output, as
you
> >> mentioned in your initial email.  MET support for
> >> p_interp is not as robust as for GRIB.  In particular, grid-
relative
> wind
> >> directions in p_interp data files should not
> >> be compared to lat-long relative wind directions in the PrepBUFR
obs.
> >>  Would it be possible for you to run your WRF
> >> output through the Unified Post Processor (UPP -
> >> http://www.dtcenter.org/wrf-nmm/users/overview/upp_overview.php)
> >> instead of or in addition to p_interp?  That would simplify MET
> >> verification tasks.  Please let me know if you have any
> >> questions.
> >>
> >> Thanks,
> >>
> >> Paul
> >>
> >>
> >> On 12/08/2011 11:07 AM, Tim Melino via RT wrote:
> >>>
> >>> Thu Dec 08 11:07:04 2011: Request 51928 was acted upon.
> >>> Transaction: Ticket created by tmelino at meso.com
> >>>        Queue: met_help
> >>>      Subject: Re: METV3 Issue
> >>>        Owner: Nobody
> >>>   Requestors: tmelino at meso.com
> >>>       Status: new
> >>>  Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928>
> >>>
> >>>
> >>> Ok,
> >>>
> >>> The data should be there now. With out.nc being the obs and
wrf.ncbeing
> >>> the forecast
> >>>
> >>> - Tim
> >>>
> >>> On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino <tmelino at meso.com>
wrote:
> >>>
> >>>> Hi,
> >>>> I have recently been doing some work with WRF and am trying to
add the
> >> the
> >>>> model evaluation tools to our standard model verification
system.  I
> >>>> started the process by running the pressure interpolation
program on a
> >>>> single wrfout file, which appeared to finish correctly. I have
> attached
> >> an
> >>>> ncdump of the file header to this email it is called
> >>>> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I downloaded a
single
> >> prebufr
> >>>> file from an NCEP repository for the time centered on the
forecast
> >> period
> >>>> and ran PB2NC and this also appeared to finish correctly and
output a
> >>>> single netcdf file, the header information is also attached
> (PB2NC.txt).
> >>>> Then I attempted to run the point stat utility on these two
files but
> >> the
> >>>> program errors out telling me that there are more forecast
field that
> >>>> observational fields "ERROR:
PointStatConfInfo::process_config() ->
> The
> >>>> number fcst_thresh entries provided must match the number of
fields
> >>>> provided in fcst_field.". I ran the following command from the
> terminal
> >> to
> >>>> run point stat "bin/point_stat wrfout_d02_2011-12-
07_00:00:00_PLEV
> >> out.nc
> >>>> PointStatConfig".  I am not sure what the problem is I have red
the
> >>>> documentation and it appears to be setup correctly but I am not
> >> completely
> >>>> sure as I have never used this software before.  What should
these
> >> namelist
> >>>> fields look like using 2 netcdf files (1.Forecast 1.Obs),
trying to
> >> verify
> >>>> 10 meter winds? I appreciate your help!
> >>>>
> >>>> Also ... I ran the test all scripts after compilation , and the
code
> >>>> completed successfully with no errors.
> >>>>
> >>>>
> >>>>
> >>>> Thanks ,
> >>>> Tim
> >>>>
> >>
> >>
> >>
>
>
>
>
>
////////////////////////////////////////////////////////////////////////////////
> //
> // Default pb2nc configuration file
> //
>
>
////////////////////////////////////////////////////////////////////////////////
>
> //
> // Stratify the observation data in the PrepBufr files in the
following
> // ways:
> //  (1) by message type: supply a list of PrepBufr message types
> //      to retain (i.e. AIRCFT)
> //  (2) by station id: supply a list of observation stations to
retain
> //  (3) by valid time: supply starting and ending times in form
> //      YYYY-MM-DD HH:MM:SS UTC
> //  (4) by location: supply either an NCEP masking grid, a masking
> //      lat/lon polygon or a file to a mask lat/lon polygon
> //  (5) by elevation: supply min/max elevation values
> //  (6) by report type (typ): supply a list of report types to
retain
> //  (7) by instrument type (itp): supply a list of instrument type
to
> //      retain
> //  (8) by vertical level: supply min/max vertical levels
> //  (9) by variable type: supply a list of variable types to retain
> //      P, Q, T, Z, U, V
> // (11) by quality mark: supply a quality mark threshold
> // (12) Flag to retain values for all quality marks, or just the
first
> //      quality mark (highest)
> // (13) by data level category: supply a list of category types to
> //      retain.
> //
> //      0 - Surface level (mass reports only)
> //      1 - Mandatory level (upper-air profile reports)
> //      2 - Significant temperature level (upper-air profile
reports)
> //      2 - Significant temperature and winds-by-pressure level
> //          (future combined mass and wind upper-air reports)
> //      3 - Winds-by-pressure level (upper-air profile reports)
> //      4 - Winds-by-height level (upper-air profile reports)
> //      5 - Tropopause level (upper-air profile reports)
> //      6 - Reports on a single level
> //          (e.g., aircraft, satellite-wind, surface wind,
> //           precipitable water retrievals, etc.)
> //      7 - Auxiliary levels generated via interpolation from
spanning
> levels
> //          (upper-air profile reports)
> //
>
> //
> // Specify a comma-separated list of PrepBufr message type strings
to
> retain.
> // An empty list indicates that all should be retained.
> // List of valid message types:
> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
> //    SFCSHP SPSSMI SYNDAT VADWND
> //    ANYAIR (= AIRCAR, AIRCFT)
> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
> //    ONLYSF (= ADPSFC, SFCSHP)
> //
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
> //
> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
> //
> message_type[] = [];
>
> //
> // Specify a comma-separated list of station ID strings to retain.
> // An empty list indicates that all should be retained.
> //
> // e.g. station_id[] = [ "KDEN" ];
> //
> station_id[] = [];
>
> //
> // Beginning and ending time offset values in seconds for
observations
> // to retain.  The valid time window for retaining observations is
> // defined in reference to the observation time.  So observations
with
> // a valid time falling in the window [obs_time+beg_ds,
obs_time+end_ds]
> // will be retained.
> //
> beg_ds = -1800;
> end_ds =  1800;
>
> //
> // Specify the name of a single grid to be used in masking the data.
> // An empty string indicates that no grid should be used.  The
standard
> // NCEP grids are named "GNNN" where NNN indicates the three digit
grid
> number.
> //
> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
> //
> // e.g. mask_grid = "G212";
> //
> mask_grid = "G212";
>
> //
> // Specify a single ASCII file containing a lat/lon polygon.
> // Latitude in degrees north and longitude in degrees east.
> // By default, the first and last polygon points are connected.
> //
> // The lat/lon polygon file should contain a name for the polygon
> // followed by a space-separated list of lat/lon points:
> //    "name lat1 lon1 lat2 lon2... latn lonn"
> //
> // MET_BASE may be used in the path for the lat/lon polygon file.
> //
> // e.g. mask_poly = "MET_BASE/data/poly/EAST.poly";
> //
> mask_poly = "";
>
> //
> // Beginning and ending elevation values in meters for observations
> // to retain.
> //
> beg_elev = -1000;
> end_elev = 100000;
>
> //
> // Specify a comma-separated list of PrepBufr report type values to
retain.
> // An empty list indicates that all should be retained.
> //
> //
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_4.htm
> //
> // e.g. pb_report_type[] = [ 120, 133 ];
> //
> pb_report_type[] = [];
>
> //
> // Specify a comma-separated list of input report type values to
retain.
> // An empty list indicates that all should be retained.
> //
> //
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_6.htm
> //
> // e.g. in_report_type[] = [ 11, 22, 23 ];
> //
> in_report_type[] = [];
>
> //
> // Specify a comma-separated list of instrument type values to
retain.
> // An empty list indicates that all should be retained.
> //
> // e.g. instrument_type[] = [ 52, 87 ];
> //
> instrument_type[] = [];
>
> //
> // Beginning and ending vertical levels to retain.
> //
> beg_level = 1;
> end_level = 255;
>
> //
> // Specify a comma-separated list of strings containing grib codes
or
> // corresponding grib code abbreviations to retain or be derived
from
> // the available observations.
> //
> // Grib Codes to be RETAINED:
> //    SPFH or 51 for Specific Humidity in kg/kg
> //    TMP  or 11 for Temperature in K
> //    HGT  or 7  for Height in meters
> //    UGRD or 33 for the East-West component of the wind in m/s
> //    VGRD or 34 for the North-South component of the wind in m/s
> //
> // Grib Codes to be DERIVED:
> //    DPT   or 17 for Dewpoint Temperature in K
> //    WIND  or 32 for Wind Speed in m/s
> //    RH    or 52 for Relative Humidity in %
> //    MIXR  or 53 for Humidity Mixing Ratio in kg/kg
> //    PRMSL or  2 for Pressure Reduced to Mean Sea Level in Pa
> //
> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> //
> // e.g. obs_grib_code[] = [ "TMP", "UGRD", "VGRD", "WIND" ];
> //
> obs_grib_code[] = [ "SPFH", "TMP",  "HGT",  "UGRD", "VGRD",
>                    "DPT",  "WIND", "RH",   "MIXR" ];
>
> //
> // Quality mark threshold to indicate which observations to retain.
> // Observations with a quality mark equal to or LESS THAN this
threshold
> // will be retained, while observations with a quality mark GREATER
THAN
> // this threshold will be discarded.
> //
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_7.htm
> //
> quality_mark_thresh = 2;
>
> //
> // Flag to indicate whether observations should be drawn from the
top
> // of the event stack (most quality controlled) or the bottom of the
> // event stack (most raw).  A value of 1 indicates that the top of
the
> // event stack should be used while a value of zero indicates that
the
> // bottom should be used.
> //
> event_stack_flag = 1;
>
> //
> // Space comma-separated list of data level categorie values to
retain,
> // where a value of:
> //    0 = Surface level (mass reports only)
> //    1 = Mandatory level (upper-air profile reports)
> //    2 = Significant temperature level (upper-air profile reports)
> //    2 = Significant temperature and winds-by-pressure level
> //        (future combined mass and wind upper-air reports)
> //    3 = Winds-by-pressure level (upper-air profile reports)
> //    4 = Winds-by-height level (upper-air profile reports)
> //    5 = Tropopause level (upper-air profile reports)
> //    6 = Reports on a single level
> //        (e.g., aircraft, satellite-wind, surface wind,
> //         precipitable water retrievals, etc.)
> //    7 = Auxiliary levels generated via interpolation from spanning
levels
> //        (upper-air profile reports)
> // An empty list indicates that all should be retained.
> //
> //
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
> //
> // e.g. level_category[] = [ 0, 1 ];
> //
> level_category[] = [];
>
> //
> // Directory where temp files should be written by the PB2NC tool
> //
> tmp_dir = "/tmp";
>
> //
> // Indicate a version number for the contents of this configuration
file.
> // The value should generally not be modified.
> //
> version = "V3.0";
>
>
>
////////////////////////////////////////////////////////////////////////////////
> //
> // Default point_stat configuration file
> //
>
>
////////////////////////////////////////////////////////////////////////////////
>
> //
> // Specify a name to designate the model being verified.  This name
will be
> // written to the second column of the ASCII output generated.
> //
> model = "WRF";
>
> //
> // Beginning and ending time offset values in seconds for
observations
> // to be used.  These time offsets are defined in reference to the
> // forecast valid time, v.  Observations with a valid time falling
in the
> // window [v+beg_ds, v+end_ds] will be used.
> // These selections are overridden by the command line arguments
> // -obs_valid_beg and -obs_valid_end.
> //
> beg_ds = -1800;
> end_ds =  1800;
>
> //
> // Specify a comma-separated list of fields to be verified.  The
forecast
> and
> // observation fields may be specified separately.  If the obs_field
> parameter
> // is left blank, it will default to the contents of fcst_field.
> //
> // Each field is specified as a GRIB code or abbreviation followed
by an
> // accumulation or vertical level indicator for GRIB files or as a
> variable name
> // followed by a list of dimensions for NetCDF files output from
p_interp
> or MET.
> //
> // Specifying verification fields for GRIB files:
> //    GC/ANNN for accumulation interval NNN
> //    GC/ZNNN for vertical level NNN
> //    GC/ZNNN-NNN for a range of vertical levels (MSL or AGL)
> //    GC/PNNN for pressure level NNN in hPa
> //    GC/PNNN-NNN for a range of pressure levels in hPa
> //    GC/LNNN for a generic level type
> //    GC/RNNN for a specific GRIB record number
> //    Where GC is the number of or abbreviation for the grib code
> //    to be verified.
> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> //
> // Specifying verification fields for NetCDF files:
> //    var_name(i,...,j,*,*) for a single field
> //    var_name(i-j,*,*) for a range of fields
> //    Where var_name is the name of the NetCDF variable,
> //    and i,...,j specifies fixed dimension values,
> //    and i-j specifies a range of values for a single dimension,
> //    and *,* specifies the two dimensions for the gridded field.
> //
> //    NOTE: To verify winds as vectors rather than scalars,
> //          specify UGRD (or 33) followed by VGRD (or 34) with the
> //          same level values.
> //
> //    NOTE: To process a probability field, add "/PROB", such as
> "POP/Z0/PROB".
> //
> // e.g. fcst_field[] = [ "SPFH/P500", "TMP/P500" ]; for a GRIB input
> // e.g. fcst_field[] = [ "QVAPOR(0,5,*,*)", "TT(0,5,*,*)" ]; for
NetCDF
> input
> //
>
> fcst_field[] = [ "TT(0,0,*,*)" ];
> obs_field[]  = [ "TMP/Z2" ];
>
> //
> // Specify a comma-separated list of groups of thresholds to be
applied to
> the
> // fields listed above.  Thresholds for the forecast and observation
fields
> // may be specified separately.  If the obs_thresh parameter is left
blank,
> // it will default to the contents of fcst_thresh.
> //
> // At least one threshold must be provided for each field listed
above.
>  The
> // lengths of the "fcst_field" and "fcst_thresh" arrays must match,
as must
> // lengths of the "obs_field" and "obs_thresh" arrays.  To apply
multiple
> // thresholds to a field, separate the threshold values with a
space.
> //
> // Each threshold must be preceded by a two letter indicator for the
type
> of
> // thresholding to be performed:
> //    'lt' for less than     'le' for less than or equal to
> //    'eq' for equal to      'ne' for not equal to
> //    'gt' for greater than  'ge' for greater than or equal to
> //
> // NOTE: Thresholds for probabilities must begin with 0.0, end with
1.0,
> //       and be preceded by "ge".
> //
> // e.g. fcst_thresh[] = [ "gt80", "gt273" ];
> //
> fcst_thresh[] = [ "le273" ];
> obs_thresh[]  = [];
>
> //
> // Specify a comma-separated list of thresholds to be used when
computing
> // VL1L2 and VAL1L2 partial sums for winds.  The thresholds are
applied to
> the
> // wind speed values derived from each U/V pair.  Only those U/V
pairs
> which meet
> // the wind speed threshold criteria are retained.  If the
obs_wind_thresh
> // parameter is left blank, it will default to the contents of
> fcst_wind_thresh.
> //
> // To apply multiple wind speed thresholds, separate the threshold
values
> with a
> // space.  Use "NA" to indicate that no wind speed threshold should
be
> applied.
> //
> // Each threshold must be preceded by a two letter indicator for the
type
> of
> // thresholding to be performed:
> //    'lt' for less than     'le' for less than or equal to
> //    'eq' for equal to      'ne' for not equal to
> //    'gt' for greater than  'ge' for greater than or equal to
> //    'NA' for no threshold
> //
> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
> //
> fcst_wind_thresh[] = [ "NA" ];
> obs_wind_thresh[]  = [];
>
> //
> // Specify a comma-separated list of PrepBufr message types with
which
> // to perform the verification.  Statistics will be computed
separately
> // for each message type specified.  At least one PrepBufr message
type
> // must be provided.
> // List of valid message types:
> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
> //    SFCSHP SPSSMI SYNDAT VADWND
> //    ANYAIR (= AIRCAR, AIRCFT)
> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
> //    ONLYSF (= ADPSFC, SFCSHP)
> //
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
> //
> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
> //
> message_type[] = [ "ADPSFC" ];
>
> //
> // Specify a comma-separated list of grids to be used in masking the
data
> over
> // which to perform scoring.  An empty list indicates that no
masking grid
> // should be performed.  The standard NCEP grids are named "GNNN"
where NNN
> // indicates the three digit grid number.  Enter "FULL" to score
over the
> // entire domain.
> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
> //
> // e.g. mask_grid[] = [ "FULL" ];
> //
> mask_grid[] = [ "FULL" ];
>
> //
> // Specify a comma-separated list of masking regions to be applied.
> // An empty list indicates that no additional masks should be used.
> // The masking regions may be defined in one of 4 ways:
> //
> // (1) An ASCII file containing a lat/lon polygon.
> //     Latitude in degrees north and longitude in degrees east.
> //     By default, the first and last polygon points are connected.
> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
> //
> // (2) The NetCDF output of the gen_poly_mask tool.
> //
> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
> //     to be used, and optionally, a threshold to be applied to the
field.
> //     e.g. "sample.nc var_name gt0.00"
> //
> // (4) A GRIB data file, followed by a description of the field
> //     to be used, and optionally, a threshold to be applied to the
field.
> //     e.g. "sample.grb APCP/A3 gt0.00"
> //
> // Any NetCDF or GRIB file used must have the same grid dimensions
as the
> // data being verified.
> //
> // MET_BASE may be used in the path for the files above.
> //
> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
> //                      "poly_mask.ncf",
> //                      "sample.nc APCP",
> //                      "sample.grb HGT/Z0 gt100.0" ];
> //
> mask_poly[] = [];
>
> //
> // Specify the name of an ASCII file containing a space-separated
list of
> // station ID's at which to perform verification.  Each station ID
> specified
> // is treated as an individual masking region.
> //
> // An empty list file name indicates that no station ID masks should
be
> used.
> //
> // MET_BASE may be used in the path for the station ID mask file
name.
> //
> // e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
> //
> mask_sid = "";
>
> //
> // Specify a comma-separated list of values for alpha to be used
when
> computing
> // confidence intervals.  Values of alpha must be between 0 and 1.
> //
> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
> //
> ci_alpha[] = [ 0.05 ];
>
> //
> // Specify the method to be used for computing bootstrap confidence
> intervals.
> // The value for this is interpreted as follows:
> //    (0) Use the BCa interval method (computationally intensive)
> //    (1) Use the percentile interval method
> //
> boot_interval = 1;
>
> //
> // Specify a proportion between 0 and 1 to define the replicate
sample size
> // to be used when computing percentile intervals.  The replicate
sample
> // size is set to boot_rep_prop * n, where n is the number of raw
data
> points.
> //
> // e.g. boot_rep_prop = 0.80;
> //
> boot_rep_prop = 1.0;
>
> //
> // Specify the number of times each set of matched pair data should
be
> // resampled when computing bootstrap confidence intervals.  A value
of
> // zero disables the computation of bootstrap confidence intervals.
> //
> // e.g. n_boot_rep = 1000;
> //
> n_boot_rep = 1000;
>
> //
> // Specify the name of the random number generator to be used.  See
the MET
> // Users Guide for a list of possible random number generators.
> //
> boot_rng = "mt19937";
>
> //
> // Specify the seed value to be used when computing bootstrap
confidence
> // intervals.  If left unspecified, the seed will change for each
run and
> // the computed bootstrap confidence intervals will not be
reproducible.
> //
> boot_seed = "";
>
> //
> // Specify a comma-separated list of interpolation method(s) to be
used
> // for comparing the forecast grid to the observation points.
String
> values
> // are interpreted as follows:
> //    MIN     = Minimum in the neighborhood
> //    MAX     = Maximum in the neighborhood
> //    MEDIAN  = Median in the neighborhood
> //    UW_MEAN = Unweighted mean in the neighborhood
> //    DW_MEAN = Distance-weighted mean in the neighborhood
> //    LS_FIT  = Least-squares fit in the neighborhood
> //    BILIN   = Bilinear interpolation using the 4 closest points
> //
> // In all cases, vertical interpolation is performed in the natural
log
> // of pressure of the levels above and below the observation.
> //
> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
> //
> interp_method[] = [ "MEDIAN", "DW_MEAN" ];
>
> //
> // Specify a comma-separated list of box widths to be used by the
> // interpolation techniques listed above.  A value of 1 indicates
that
> // the nearest neighbor approach should be used.  For a value of n
> // greater than 1, the n*n grid points closest to the observation
define
> // the neighborhood.
> //
> // e.g. interp_width = [ 1, 3, 5 ];
> //
> interp_width[] = [ 1, 3 ];
>
> //
> // When interpolating, compute a ratio of the number of valid data
points
> // to the total number of points in the neighborhood.  If that ratio
is
> // less than this threshold, do not include the observation.  This
> // threshold must be between 0 and 1.  Setting this threshold to 1
will
> // require that each observation be surrounded by n*n valid forecast
> // points.
> //
> // e.g. interp_thresh = 1.0;
> //
> interp_thresh = 1.0;
>
> //
> // Specify flags to indicate the type of data to be output:
> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation Rates:
> //           Total (TOTAL),
> //           Forecast Rate (F_RATE),
> //           Hit Rate (H_RATE),
> //           Observation Rate (O_RATE)
> //
> //    (2) STAT and CTC Text Files, Contingency Table Counts:
> //           Total (TOTAL),
> //           Forecast Yes and Observation Yes Count (FY_OY),
> //           Forecast Yes and Observation No Count (FY_ON),
> //           Forecast No and Observation Yes Count (FN_OY),
> //           Forecast No and Observation No Count (FN_ON)
> //
> //    (3) STAT and CTS Text Files, Contingency Table Scores:
> //           Total (TOTAL),
> //           Base Rate (BASER),
> //           Forecast Mean (FMEAN),
> //           Accuracy (ACC),
> //           Frequency Bias (FBIAS),
> //           Probability of Detecting Yes (PODY),
> //           Probability of Detecting No (PODN),
> //           Probability of False Detection (POFD),
> //           False Alarm Ratio (FAR),
> //           Critical Success Index (CSI),
> //           Gilbert Skill Score (GSS),
> //           Hanssen and Kuipers Discriminant (HK),
> //           Heidke Skill Score (HSS),
> //           Odds Ratio (ODDS),
> //           NOTE: All statistics listed above contain parametric
and/or
> //                 non-parametric confidence interval limits.
> //
> //    (4) STAT and MCTC Text Files, NxN Multi-Category Contingency
Table
> Counts:
> //           Total (TOTAL),
> //           Number of Categories (N_CAT),
> //           Contingency Table Count columns repeated N_CAT*N_CAT
times
> //
> //    (5) STAT and MCTS Text Files, NxN Multi-Category Contingency
Table
> Scores:
> //           Total (TOTAL),
> //           Number of Categories (N_CAT),
> //           Accuracy (ACC),
> //           Hanssen and Kuipers Discriminant (HK),
> //           Heidke Skill Score (HSS),
> //           Gerrity Score (GER),
> //           NOTE: All statistics listed above contain parametric
and/or
> //                 non-parametric confidence interval limits.
> //
> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
> //           Total (TOTAL),
> //           Forecast Mean (FBAR),
> //           Forecast Standard Deviation (FSTDEV),
> //           Observation Mean (OBAR),
> //           Observation Standard Deviation (OSTDEV),
> //           Pearson's Correlation Coefficient (PR_CORR),
> //           Spearman's Rank Correlation Coefficient (SP_CORR),
> //           Kendall Tau Rank Correlation Coefficient (KT_CORR),
> //           Number of ranks compared (RANKS),
> //           Number of tied ranks in the forecast field
(FRANK_TIES),
> //           Number of tied ranks in the observation field
(ORANK_TIES),
> //           Mean Error (ME),
> //           Standard Deviation of the Error (ESTDEV),
> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
> //           Mean Absolute Error (MAE),
> //           Mean Squared Error (MSE),
> //           Bias-Corrected Mean Squared Error (BCMSE),
> //           Root Mean Squared Error (RMSE),
> //           Percentiles of the Error (E10, E25, E50, E75, E90)
> //           NOTE: Most statistics listed above contain parametric
and/or
> //                 non-parametric confidence interval limits.
> //
> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
> //           Total (TOTAL),
> //           Forecast Mean (FBAR),
> //              = mean(f)
> //           Observation Mean (OBAR),
> //              = mean(o)
> //           Forecast*Observation Product Mean (FOBAR),
> //              = mean(f*o)
> //           Forecast Squared Mean (FFBAR),
> //              = mean(f^2)
> //           Observation Squared Mean (OOBAR)
> //              = mean(o^2)
> //
> //    (8) STAT and SAL1L2 Text Files, Scalar Anomaly Partial Sums:
> //           Total (TOTAL),
> //           Forecast Anomaly Mean (FABAR),
> //              = mean(f-c)
> //           Observation Anomaly Mean (OABAR),
> //              = mean(o-c)
> //           Product of Forecast and Observation Anomalies Mean
(FOABAR),
> //              = mean((f-c)*(o-c))
> //           Forecast Anomaly Squared Mean (FFABAR),
> //              = mean((f-c)^2)
> //           Observation Anomaly Squared Mean (OOABAR)
> //              = mean((o-c)^2)
> //
> //    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
> //           Total (TOTAL),
> //           U-Forecast Mean (UFBAR),
> //              = mean(uf)
> //           V-Forecast Mean (VFBAR),
> //              = mean(vf)
> //           U-Observation Mean (UOBAR),
> //              = mean(uo)
> //           V-Observation Mean (VOBAR),
> //              = mean(vo)
> //           U-Product Plus V-Product (UVFOBAR),
> //              = mean(uf*uo+vf*vo)
> //           U-Forecast Squared Plus V-Forecast Squared (UVFFBAR),
> //              = mean(uf^2+vf^2)
> //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
> //              = mean(uo^2+vo^2)
> //
> //   (10) STAT and VAL1L2 Text Files, Vector Anomaly Partial Sums:
> //           U-Forecast Anomaly Mean (UFABAR),
> //              = mean(uf-uc)
> //           V-Forecast Anomaly Mean (VFABAR),
> //              = mean(vf-vc)
> //           U-Observation Anomaly Mean (UOABAR),
> //              = mean(uo-uc)
> //           V-Observation Anomaly Mean (VOABAR),
> //              = mean(vo-vc)
> //           U-Anomaly Product Plus V-Anomaly Product (UVFOABAR),
> //              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
> //           U-Forecast Anomaly Squared Plus V-Forecast Anomaly
Squared
> (UVFFABAR),
> //              = mean((uf-uc)^2+(vf-vc)^2)
> //           U-Observation Anomaly Squared Plus V-Observation
Anomaly
> Squared (UVOOABAR)
> //              = mean((uo-uc)^2+(vo-vc)^2)
> //
> //   (11) STAT and PCT Text Files, Nx2 Probability Contingency Table
> Counts:
> //           Total (TOTAL),
> //           Number of Forecast Probability Thresholds (N_THRESH),
> //           Probability Threshold Value (THRESH_i),
> //           Row Observation Yes Count (OY_i),
> //           Row Observation No Count (ON_i),
> //           NOTE: Previous 3 columns repeated for each row in the
table.
> //           Last Probability Threshold Value (THRESH_n)
> //
> //   (12) STAT and PSTD Text Files, Nx2 Probability Contingency
Table
> Scores:
> //           Total (TOTAL),
> //           Number of Forecast Probability Thresholds (N_THRESH),
> //           Base Rate (BASER) with confidence interval limits,
> //           Reliability (RELIABILITY),
> //           Resolution (RESOLUTION),
> //           Uncertainty (UNCERTAINTY),
> //           Area Under the ROC Curve (ROC_AUC),
> //           Brier Score (BRIER) with confidence interval limits,
> //           Probability Threshold Value (THRESH_i)
> //           NOTE: Previous column repeated for each probability
threshold.
> //
> //   (13) STAT and PJC Text Files, Joint/Continuous Statistics of
> //                                 Probabilistic Variables:
> //           Total (TOTAL),
> //           Number of Forecast Probability Thresholds (N_THRESH),
> //           Probability Threshold Value (THRESH_i),
> //           Observation Yes Count Divided by Total (OY_TP_i),
> //           Observation No Count Divided by Total (ON_TP_i),
> //           Calibration (CALIBRATION_i),
> //           Refinement (REFINEMENT_i),
> //           Likelihood (LIKELIHOOD_i),
> //           Base Rate (BASER_i),
> //           NOTE: Previous 7 columns repeated for each row in the
table.
> //           Last Probability Threshold Value (THRESH_n)
> //
> //   (14) STAT and PRC Text Files, ROC Curve Points for
> //                                 Probabilistic Variables:
> //           Total (TOTAL),
> //           Number of Forecast Probability Thresholds (N_THRESH),
> //           Probability Threshold Value (THRESH_i),
> //           Probability of Detecting Yes (PODY_i),
> //           Probability of False Detection (POFD_i),
> //           NOTE: Previous 3 columns repeated for each row in the
table.
> //           Last Probability Threshold Value (THRESH_n)
> //
> //   (15) STAT and MPR Text Files, Matched Pair Data:
> //           Total (TOTAL),
> //           Index (INDEX),
> //           Observation Station ID (OBS_SID),
> //           Observation Latitude (OBS_LAT),
> //           Observation Longitude (OBS_LON),
> //           Observation Level (OBS_LVL),
> //           Observation Elevation (OBS_ELV),
> //           Forecast Value (FCST),
> //           Observation Value (OBS),
> //           Climatological Value (CLIMO)
> //
> //   In the expressions above, f are forecast values, o are observed
> values,
> //   and c are climatological values.
> //
> // Values for these flags are interpreted as follows:
> //    (0) Do not generate output of this type
> //    (1) Write output to a STAT file
> //    (2) Write output to a STAT file and a text file
> //
> output_flag[] = [ 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1 ];
>
> //
> // Flag to indicate whether Kendall's Tau and Spearman's Rank
Correlation
> // Coefficients should be computed.  Computing them over large
datasets is
> // computationally intensive and slows down the runtime execution
> significantly.
> //    (0) Do not compute these correlation coefficients
> //    (1) Compute these correlation coefficients
> //
> rank_corr_flag = 1;
>
> //
> // Specify the GRIB Table 2 parameter table version number to be
used
> // for interpreting GRIB codes.
> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> //
> grib_ptv = 2;
>
> //
> // Directory where temporary files should be written.
> //
> tmp_dir = "/tmp";
>
> //
> // Prefix to be used for the output file names.
> //
> output_prefix = "";
>
> //
> // Indicate a version number for the contents of this configuration
file.
> // The value should generally not be modified.
> //
> version = "V3.0.1";
>
>
>

------------------------------------------------
Subject: Re: METV3 Issue
From: Tim Melino
Time: Fri Dec 09 08:16:43 2011

Paul,
I tried running again using your configuration settings, but while
running point_stat I am still receiving errors. The error comes up as
the following:

[wind at conus1 METv3.0.1]$ $MET_BASE/bin/point_stat wrf.nc
ndas.t12z.nc PointStatConfig -outdir . -v 99
GSL_RNG_TYPE=mt19937
GSL_RNG_SEED=18446744071864509006
Forecast File: wrf.nc
Climatology File: none
Configuration File: PointStatConfig
Observation File: ndas.t12z.nc

--------------------------------------------------------------------------------

Reading records for TT(0,0,*,*).


  LongArray::operator[](int) -> range check error ... 4



- Tim
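
A quick way to sanity-check the field specification against the file,
assuming the wrf.nc above is the p_interp output discussed earlier, is
to list the dimensions of the TT variable, for example:

    ncdump -h wrf.nc | grep "TT("

The range check error from point_stat suggests that one of the fixed
indices in the "TT(0,0,*,*)" specification may not line up with the
number or size of the dimensions that ncdump reports for that variable.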


On Thu, Dec 8, 2011 at 5:10 PM, Paul Oldenburg via RT
<met_help at ucar.edu>wrote:

> Tim,
>
> I ran the following pb2nc and point_stat commands using the attached
> config files to generate point verification data
> with your PrepBUFR obs and p_interp model data.  Note that MET_BASE
is set
> to the base folder of an instance of
> METv3.0.1.  I pulled both config files, with slight modifications,
from
> $MET_BASE/scripts/config.
>
> $MET_BASE/bin/pb2nc ndas.t12z.prepbufr.tm12.nr
ndas.t12z.nc PB2NCConfig_G212 -v 99
>
> $MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc PointStatConfig -outdir
. -v
> 99
>
> In PointStatConfig, you will see the following settings.  The
fcst_field
> setting format is due to the fact that fields
> in wrf.nc are four dimensional, with the last two dimensions being
the
> spatial (x,y) dimensions.  The obs_field
> specifies surface temperature using a GRIB-style format, because
pb2nc
> indexes fields in its output by GRIB code.  You
> should follow a similar paradigm to verify additional fields beyond
> temperature.
>
> fcst_field[] = [ "TT(0,0,*,*)" ];
> obs_field[]  = [ "TMP/Z2" ];
>
> fcst_thresh[] = [ "le273" ];
> obs_thresh[]  = [];
>
> If you have any questions or problems, please let me know.
>
> Good luck,
>
> Paul
>
>
>
>
> On 12/08/2011 02:28 PM, Tim Melino via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
> >
> > I just put the file on the server that I have been using. As far
as
> running
> > the UPP software, that is not really possible at the moment. I do
not
> have
> > any of that software installed or configured as I have never had a
reason
> > to use it .
> >
> > - Tim
> >
> > On Thu, Dec 8, 2011 at 3:42 PM, Paul Oldenburg via RT
<met_help at ucar.edu
> >wrote:
> >
> >> Tim,
> >>
> >> Can you please put the input PrepBUFR file that you pass to pb2nc
on the
> >> FTP site?  When I look at the contents of
> >> out.nc, it does not appear that there are any observations in
that
> file.
> >>  I would like to run pb2nc myself to see what
> >> is going on.
> >>
> >> I made an incorrect assumption in my earlier emails that you were
trying
> >> to verify model data in GRIB format.  Now that
> >> I have your data in hand, I see that it is p_interp output, as
you
> >> mentioned in your initial email.  MET support for
> >> p_interp is not as robust as for GRIB.  In particular, grid-
relative
> wind
> >> directions in p_interp data files should not
> >> be compared to lat-long relative wind directions in the PrepBUFR
obs.
> >>  Would it be possible for you to run your WRF
> >> output through the Unified Post Processor (UPP -
> >> http://www.dtcenter.org/wrf-nmm/users/overview/upp_overview.php)
> >> instead of or in addition to p_interp?  That would simplify MET
> >> verification tasks.  Please let me know if you have any
> >> questions.
> >>
> >> Thanks,
> >>
> >> Paul
> >>
> >>
> >> On 12/08/2011 11:07 AM, Tim Melino via RT wrote:
> >>>
> >>> Thu Dec 08 11:07:04 2011: Request 51928 was acted upon.
> >>> Transaction: Ticket created by tmelino at meso.com
> >>>        Queue: met_help
> >>>      Subject: Re: METV3 Issue
> >>>        Owner: Nobody
> >>>   Requestors: tmelino at meso.com
> >>>       Status: new
> >>>  Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928>
> >>>
> >>>
> >>> Ok,
> >>>
> >>> The data should be there now. With out.nc being the obs and
wrf.nc being
> >>> the forecast
> >>>
> >>> - Tim
> >>>
> >>> On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino <tmelino at meso.com>
wrote:
> >>>
> >>>> Hi,
> >>>> I have recently been doing some work with WRF and am trying to
add the
> >> the
> >>>> model evaluation tools to our standard model verification
system.  I
> >>>> started the process by running the pressure interpolation
program on a
> >>>> single wrfout file, which appeared to finish correctly. I have
> attached
> >> an
> >>>> ncdump of the file header to this email it is called
> >>>> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I downloaded a
single
> >> prebufr
> >>>> file from an NCEP repository for the time centered on the
forecast
> >> period
> >>>> and ran PB2NC and this also appeared to finish correctly and
output a
> >>>> single netcdf file, the header information is also attached
> (PB2NC.txt).
> >>>> Then I attempted to run the point stat utility on these two
files but
> >> the
> >>>> program errors out telling me that there are more forecast
field that
> >>>> observational fields "ERROR:
PointStatConfInfo::process_config() ->
> The
> >>>> number fcst_thresh entries provided must match the number of
fields
> >>>> provided in fcst_field.". I ran the following command from the
> terminal
> >> to
> >>>> run point stat "bin/point_stat wrfout_d02_2011-12-
07_00:00:00_PLEV
> >> out.nc
> >>>> PointStatConfig".  I am not sure what the problem is I have red
the
> >>>> documentation and it appears to be setup correctly but I am not
> >> completely
> >>>> sure as I have never used this software before.  What should
these
> >> namelist
> >>>> fields look like using 2 netcdf files (1.Forecast 1.Obs),
trying to
> >> verify
> >>>> 10 meter winds? I appreciate your help!
> >>>>
> >>>> Also ... I ran the test all scripts after compilation , and the
code
> >>>> completed successfully with no errors.
> >>>>
> >>>>
> >>>>
> >>>> Thanks ,
> >>>> Tim
> >>>>
> >>
> >>
> >>
>
>
>
>
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51928] Re: METV3 Issue
From: Paul Oldenburg
Time: Fri Dec 09 09:38:56 2011

Tim,

We are not able to reproduce the error that you are reporting.  Are
you using the same exact data and config files that
you sent me and I tested with?  In any case, can you create a tar
archive of all the files involved in the point_stat
command that throws the error and put it on the FTP site?  I will need
to be able to reproduce this error, otherwise it
will be difficult for me to diagnose the problem.

Thanks,

Paul


On 12/09/2011 08:16 AM, Tim Melino via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>
> Paul,
> I tried running again using your configuration settings, but while
running
> pointstat I am still receiving errors. The error comes up as the
following
> ....
>
> [wind at conus1 METv3.0.1]$ $MET_BASE/bin/point_stat wrf.nc
> ndas.t12z.nc PointStatConfig -outdir . -v 99
> GSL_RNG_TYPE=mt19937
> GSL_RNG_SEED=18446744071864509006
> Forecast File: wrf.nc
> Climatology File: none
> Configuration File: PointStatConfig
> Observation File: ndas.t12z.nc
>
>
--------------------------------------------------------------------------------
>
> Reading records for TT(0,0,*,*).
>
>
>   LongArray::operator[](int) -> range check error ... 4
>
>
>
> - Tim
>
>
> On Thu, Dec 8, 2011 at 5:10 PM, Paul Oldenburg via RT
<met_help at ucar.edu>wrote:
>
>> Tim,
>>
>> I ran the following pb2nc and point_stat commands using the
attached
>> config files to generate point verification data
>> with your PrepBUFR obs and p_interp model data.  Note that MET_BASE
is set
>> to the base folder of an instance of
>> METv3.0.1.  I pulled both config files, with slight modifications,
from
>> $MET_BASE/scripts/config.
>>
>> $MET_BASE/bin/pb2nc ndas.t12z.prepbufr.tm12.nr
ndas.t12z.nc PB2NCConfig_G212 -v 99
>>
>> $MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc PointStatConfig
-outdir . -v
>> 99
>>
>> In PointStatConfig, you will see the following settings.  The
fcst_field
>> setting format is due to the fact that fields
>> in wrf.nc are four dimensional, with the last two dimensions being
the
>> spatial (x,y) dimensions.  The obs_field
>> specifies surface temperature using a GRIB-style format, because
pb2nc
>> indexes fields in its output by GRIB code.  You
>> should follow a similar paradigm to verify additional fields beyond
>> temperature.
>>
>> fcst_field[] = [ "TT(0,0,*,*)" ];
>> obs_field[]  = [ "TMP/Z2" ];
>>
>> fcst_thresh[] = [ "le273" ];
>> obs_thresh[]  = [];
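
Since the original question concerned 10-meter winds, the same paradigm might be extended as sketched below. This is only an illustration: it assumes the p_interp output still carries WRF's 2-D 10-m wind fields under the names U10 and V10 (confirm against the ncdump header), and the threshold values are arbitrary. Note that the field and threshold lists must have matching lengths, which is what the earlier fcst_thresh error message was checking.

   fcst_field[]  = [ "U10(0,*,*)", "V10(0,*,*)" ];
   obs_field[]   = [ "UGRD/Z10",   "VGRD/Z10" ];

   fcst_thresh[] = [ "ge2.5", "ge2.5" ];
   obs_thresh[]  = [];
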
>>
>> If you have any questions or problems, please let me know.
>>
>> Good luck,
>>
>> Paul
>>
>>
>>
>>
>> On 12/08/2011 02:28 PM, Tim Melino via RT wrote:
>>>
>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>
>>> I just put the file on the server that I have been using. As far
as
>> running
>>> the UPP software, that is not really possible at the moment. I do
not
>> have
>>> any of that software installed or configured as I have never had a
reason
>>> to use it .
>>>
>>> - Tim
>>>
>>> On Thu, Dec 8, 2011 at 3:42 PM, Paul Oldenburg via RT
<met_help at ucar.edu
>>> wrote:
>>>
>>>> Tim,
>>>>
>>>> Can you please put the input PrepBUFR file that you pass to pb2nc
on the
>>>> FTP site?  When I look at the contents of
>>>> out.nc, it does not appear that there are any observations in
that
>> file.
>>>>  I would like to run pb2nc myself to see what
>>>> is going on.
>>>>
>>>> I made an incorrect assumption in my earlier emails that you were
trying
>>>> to verify model data in GRIB format.  Now that
>>>> I have your data in hand, I see that it is p_interp output, as
you
>>>> mentioned in your initial email.  MET support for
>>>> p_interp is not as robust as for GRIB.  In particular, grid-
relative
>> wind
>>>> directions in p_interp data files should not
>>>> be compared to lat-long relative wind directions in the PrepBUFR
obs.
>>>>  Would it be possible for you to run your WRF
>>>> output through the Unified Post Processor (UPP -
>>>> http://www.dtcenter.org/wrf-nmm/users/overview/upp_overview.php)
>>>> instead of or in addition to p_interp?  That would simplify MET
>>>> verification tasks.  Please let me know if you have any
>>>> questions.
>>>>
>>>> Thanks,
>>>>
>>>> Paul
>>>>
>>>>
>>>> On 12/08/2011 11:07 AM, Tim Melino via RT wrote:
>>>>>
>>>>> Thu Dec 08 11:07:04 2011: Request 51928 was acted upon.
>>>>> Transaction: Ticket created by tmelino at meso.com
>>>>>        Queue: met_help
>>>>>      Subject: Re: METV3 Issue
>>>>>        Owner: Nobody
>>>>>   Requestors: tmelino at meso.com
>>>>>       Status: new
>>>>>  Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928>
>>>>>
>>>>>
>>>>> Ok,
>>>>>
>>>>> The data should be there now. With out.nc being the obs and
wrf.nc being
>>>>> the forecast
>>>>>
>>>>> - Tim
>>>>>
>>>>> On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino <tmelino at meso.com>
wrote:
>>>>>
>>>>>> Hi,
>>>>>> I have recently been doing some work with WRF and am trying to
add the
>>>>>> model evaluation tools to our standard model verification
system.  I
>>>>>> started the process by running the pressure interpolation
program on a
>>>>>> single wrfout file, which appeared to finish correctly. I have
>> attached
>>>> an
>>>>>> ncdump of the file header to this email it is called
>>>>>> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I downloaded a
single
>>>> prebufr
>>>>>> file from an NCEP repository for the time centered on the
forecast
>>>> period
>>>>>> and ran PB2NC and this also appeared to finish correctly and
output a
>>>>>> single netcdf file, the header information is also attached
>> (PB2NC.txt).
>>>>>> Then I attempted to run the point stat utility on these two
files but
>>>> the
>>>>>> program errors out telling me that there are more forecast
fields than
>>>>>> observational fields "ERROR:
PointStatConfInfo::process_config() ->
>> The
>>>>>> number fcst_thresh entries provided must match the number of
fields
>>>>>> provided in fcst_field.". I ran the following command from the
>> terminal
>>>> to
>>>>>> run point stat "bin/point_stat wrfout_d02_2011-12-
07_00:00:00_PLEV
>>>> out.nc
>>>>>> PointStatConfig".  I am not sure what the problem is I have read
the
>>>>>> documentation and it appears to be setup correctly but I am not
>>>> completely
>>>>>> sure as I have never used this software before.  What should
these
>>>> namelist
>>>>>> fields look like using 2 netcdf files (1.Forecast 1.Obs),
trying to
>>>> verify
>>>>>> 10 meter winds? I appreciate your help!
>>>>>>
>>>>>> Also ... I ran the test all scripts after compilation , and the
code
>>>>>> completed successfully with no errors.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Thanks ,
>>>>>> Tim
>>>>>>
>>>>
>>>>
>>>>
>>
>>
>>
>>
>>
////////////////////////////////////////////////////////////////////////////////
>> //
>> // Default pb2nc configuration file
>> //
>>
>>
////////////////////////////////////////////////////////////////////////////////
>>
>> //
>> // Stratify the observation data in the PrepBufr files in the
following
>> // ways:
>> //  (1) by message type: supply a list of PrepBufr message types
>> //      to retain (i.e. AIRCFT)
>> //  (2) by station id: supply a list of observation stations to
retain
>> //  (3) by valid time: supply starting and ending times in form
>> //      YYYY-MM-DD HH:MM:SS UTC
>> //  (4) by location: supply either an NCEP masking grid, a masking
>> //      lat/lon polygon, or a file containing a masking lat/lon polygon
>> //  (5) by elevation: supply min/max elevation values
>> //  (6) by report type (typ): supply a list of report types to
retain
>> //  (7) by instrument type (itp): supply a list of instrument type
to
>> //      retain
>> //  (8) by vertical level: supply min/max vertical levels
>> //  (9) by variable type: supply a list of variable types to retain
>> //      P, Q, T, Z, U, V
>> // (11) by quality mark: supply a quality mark threshold
>> // (12) Flag to retain values for all quality marks, or just the
first
>> //      quality mark (highest)
>> // (13) by data level category: supply a list of category types to
>> //      retain.
>> //
>> //      0 - Surface level (mass reports only)
>> //      1 - Mandatory level (upper-air profile reports)
>> //      2 - Significant temperature level (upper-air profile
reports)
>> //      2 - Significant temperature and winds-by-pressure level
>> //          (future combined mass and wind upper-air reports)
>> //      3 - Winds-by-pressure level (upper-air profile reports)
>> //      4 - Winds-by-height level (upper-air profile reports)
>> //      5 - Tropopause level (upper-air profile reports)
>> //      6 - Reports on a single level
>> //          (e.g., aircraft, satellite-wind, surface wind,
>> //           precipitable water retrievals, etc.)
>> //      7 - Auxiliary levels generated via interpolation from
spanning
>> levels
>> //          (upper-air profile reports)
>> //
>>
>> //
>> // Specify a comma-separated list of PrepBufr message type strings
to
>> retain.
>> // An empty list indicates that all should be retained.
>> // List of valid message types:
>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
>> //    SFCSHP SPSSMI SYNDAT VADWND
>> //    ANYAIR (= AIRCAR, AIRCFT)
>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
>> //    ONLYSF (= ADPSFC, SFCSHP)
>> //
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>> //
>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
>> //
>> message_type[] = [];
>>
>> //
>> // Specify a comma-separated list of station ID strings to retain.
>> // An empty list indicates that all should be retained.
>> //
>> // e.g. station_id[] = [ "KDEN" ];
>> //
>> station_id[] = [];
>>
>> //
>> // Beginning and ending time offset values in seconds for
observations
>> // to retain.  The valid time window for retaining observations is
>> // defined in reference to the observation time.  So observations
with
>> // a valid time falling in the window [obs_time+beg_ds,
obs_time+end_ds]
>> // will be retained.
>> //
>> beg_ds = -1800;
>> end_ds =  1800;
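
A quick arithmetic check of that window (the times here are illustrative only):

   // beg_ds = -1800 and end_ds = 1800 define a one-hour window;
   // centered on 12:00:00 UTC it spans 11:30:00 through 12:30:00 UTC.
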
>>
>> //
>> // Specify the name of a single grid to be used in masking the
data.
>> // An empty string indicates that no grid should be used.  The
standard
>> // NCEP grids are named "GNNN" where NNN indicates the three digit
grid
>> number.
>> //
>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>> //
>> // e.g. mask_grid = "G212";
>> //
>> mask_grid = "G212";
>>
>> //
>> // Specify a single ASCII file containing a lat/lon polygon.
>> // Latitude in degrees north and longitude in degrees east.
>> // By default, the first and last polygon points are connected.
>> //
>> // The lat/lon polygon file should contain a name for the polygon
>> // followed by a space-separated list of lat/lon points:
>> //    "name lat1 lon1 lat2 lon2... latn lonn"
>> //
>> // MET_BASE may be used in the path for the lat/lon polygon file.
>> //
>> // e.g. mask_poly = "MET_BASE/data/poly/EAST.poly";
>> //
>> mask_poly = "";
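
For reference, a hypothetical polygon file in the format described above (the name and corner coordinates here are made up) would hold a single line such as:

   NE_BOX 45.0 -105.0 45.0 -95.0 35.0 -95.0 35.0 -105.0
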
>>
>> //
>> // Beginning and ending elevation values in meters for observations
>> // to retain.
>> //
>> beg_elev = -1000;
>> end_elev = 100000;
>>
>> //
>> // Specify a comma-separated list of PrepBufr report type values to
retain.
>> // An empty list indicates that all should be retained.
>> //
>> //
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_4.htm
>> //
>> // e.g. pb_report_type[] = [ 120, 133 ];
>> //
>> pb_report_type[] = [];
>>
>> //
>> // Specify a comma-separated list of input report type values to
retain.
>> // An empty list indicates that all should be retained.
>> //
>> //
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_6.htm
>> //
>> // e.g. in_report_type[] = [ 11, 22, 23 ];
>> //
>> in_report_type[] = [];
>>
>> //
>> // Specify a comma-separated list of instrument type values to
retain.
>> // An empty list indicates that all should be retained.
>> //
>> // e.g. instrument_type[] = [ 52, 87 ];
>> //
>> instrument_type[] = [];
>>
>> //
>> // Beginning and ending vertical levels to retain.
>> //
>> beg_level = 1;
>> end_level = 255;
>>
>> //
>> // Specify a comma-separated list of strings containing grib codes
or
>> // corresponding grib code abbreviations to retain or be derived
from
>> // the available observations.
>> //
>> // Grib Codes to be RETAINED:
>> //    SPFH or 51 for Specific Humidity in kg/kg
>> //    TMP  or 11 for Temperature in K
>> //    HGT  or 7  for Height in meters
>> //    UGRD or 33 for the East-West component of the wind in m/s
>> //    VGRD or 34 for the North-South component of the wind in m/s
>> //
>> // Grib Codes to be DERIVED:
>> //    DPT   or 17 for Dewpoint Temperature in K
>> //    WIND  or 32 for Wind Speed in m/s
>> //    RH    or 52 for Relative Humidity in %
>> //    MIXR  or 53 for Humidity Mixing Ratio in kg/kg
>> //    PRMSL or  2 for Pressure Reduced to Mean Sea Level in Pa
>> //
>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>> //
>> // e.g. obs_grib_code[] = [ "TMP", "UGRD", "VGRD", "WIND" ];
>> //
>> obs_grib_code[] = [ "SPFH", "TMP",  "HGT",  "UGRD", "VGRD",
>>                    "DPT",  "WIND", "RH",   "MIXR" ];
>>
>> //
>> // Quality mark threshold to indicate which observations to retain.
>> // Observations with a quality mark equal to or LESS THAN this
threshold
>> // will be retained, while observations with a quality mark GREATER
THAN
>> // this threshold will be discarded.
>> //
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_7.htm
>> //
>> quality_mark_thresh = 2;
>>
>> //
>> // Flag to indicate whether observations should be drawn from the
top
>> // of the event stack (most quality controlled) or the bottom of
the
>> // event stack (most raw).  A value of 1 indicates that the top of
the
>> // event stack should be used while a value of zero indicates that
the
>> // bottom should be used.
>> //
>> event_stack_flag = 1;
>>
>> //
>> // Specify a comma-separated list of data level category values to
retain,
>> // where a value of:
>> //    0 = Surface level (mass reports only)
>> //    1 = Mandatory level (upper-air profile reports)
>> //    2 = Significant temperature level (upper-air profile reports)
>> //    2 = Significant temperature and winds-by-pressure level
>> //        (future combined mass and wind upper-air reports)
>> //    3 = Winds-by-pressure level (upper-air profile reports)
>> //    4 = Winds-by-height level (upper-air profile reports)
>> //    5 = Tropopause level (upper-air profile reports)
>> //    6 = Reports on a single level
>> //        (e.g., aircraft, satellite-wind, surface wind,
>> //         precipitable water retrievals, etc.)
>> //    7 = Auxiliary levels generated via interpolation from
spanning levels
>> //        (upper-air profile reports)
>> // An empty list indicates that all should be retained.
>> //
>> //
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>> //
>> // e.g. level_category[] = [ 0, 1 ];
>> //
>> level_category[] = [];
>>
>> //
>> // Directory where temp files should be written by the PB2NC tool
>> //
>> tmp_dir = "/tmp";
>>
>> //
>> // Indicate a version number for the contents of this configuration
file.
>> // The value should generally not be modified.
>> //
>> version = "V3.0";
>>
>>
>>
////////////////////////////////////////////////////////////////////////////////
>> //
>> // Default point_stat configuration file
>> //
>>
>>
////////////////////////////////////////////////////////////////////////////////
>>
>> //
>> // Specify a name to designate the model being verified.  This name
will be
>> // written to the second column of the ASCII output generated.
>> //
>> model = "WRF";
>>
>> //
>> // Beginning and ending time offset values in seconds for
observations
>> // to be used.  These time offsets are defined in reference to the
>> // forecast valid time, v.  Observations with a valid time falling
in the
>> // window [v+beg_ds, v+end_ds] will be used.
>> // These selections are overridden by the command line arguments
>> // -obs_valid_beg and -obs_valid_end.
>> //
>> beg_ds = -1800;
>> end_ds =  1800;
>>
>> //
>> // Specify a comma-separated list of fields to be verified.  The
forecast
>> and
>> // observation fields may be specified separately.  If the
obs_field
>> parameter
>> // is left blank, it will default to the contents of fcst_field.
>> //
>> // Each field is specified as a GRIB code or abbreviation followed
by an
>> // accumulation or vertical level indicator for GRIB files or as a
>> variable name
>> // followed by a list of dimensions for NetCDF files output from
p_interp
>> or MET.
>> //
>> // Specifying verification fields for GRIB files:
>> //    GC/ANNN for accumulation interval NNN
>> //    GC/ZNNN for vertical level NNN
>> //    GC/ZNNN-NNN for a range of vertical levels (MSL or AGL)
>> //    GC/PNNN for pressure level NNN in hPa
>> //    GC/PNNN-NNN for a range of pressure levels in hPa
>> //    GC/LNNN for a generic level type
>> //    GC/RNNN for a specific GRIB record number
>> //    Where GC is the number of or abbreviation for the grib code
>> //    to be verified.
>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>> //
>> // Specifying verification fields for NetCDF files:
>> //    var_name(i,...,j,*,*) for a single field
>> //    var_name(i-j,*,*) for a range of fields
>> //    Where var_name is the name of the NetCDF variable,
>> //    and i,...,j specifies fixed dimension values,
>> //    and i-j specifies a range of values for a single dimension,
>> //    and *,* specifies the two dimensions for the gridded field.
>> //
>> //    NOTE: To verify winds as vectors rather than scalars,
>> //          specify UGRD (or 33) followed by VGRD (or 34) with the
>> //          same level values.
>> //
>> //    NOTE: To process a probability field, add "/PROB", such as
>> "POP/Z0/PROB".
>> //
>> // e.g. fcst_field[] = [ "SPFH/P500", "TMP/P500" ]; for a GRIB
input
>> // e.g. fcst_field[] = [ "QVAPOR(0,5,*,*)", "TT(0,5,*,*)" ]; for
NetCDF
>> input
>> //
>>
>> fcst_field[] = [ "TT(0,0,*,*)" ];
>> obs_field[]  = [ "TMP/Z2" ];
>>
>> //
>> // Specify a comma-separated list of groups of thresholds to be
applied to
>> the
>> // fields listed above.  Thresholds for the forecast and
observation fields
>> // may be specified separately.  If the obs_thresh parameter is
left blank,
>> // it will default to the contents of fcst_thresh.
>> //
>> // At least one threshold must be provided for each field listed
above.
>>  The
>> // lengths of the "fcst_field" and "fcst_thresh" arrays must match,
as must
>> // lengths of the "obs_field" and "obs_thresh" arrays.  To apply
multiple
>> // thresholds to a field, separate the threshold values with a
space.
>> //
>> // Each threshold must be preceded by a two letter indicator for
the type
>> of
>> // thresholding to be performed:
>> //    'lt' for less than     'le' for less than or equal to
>> //    'eq' for equal to      'ne' for not equal to
>> //    'gt' for greater than  'ge' for greater than or equal to
>> //
>> // NOTE: Thresholds for probabilities must begin with 0.0, end with
1.0,
>> //       and be preceded by "ge".
>> //
>> // e.g. fcst_thresh[] = [ "gt80", "gt273" ];
>> //
>> fcst_thresh[] = [ "le273" ];
>> obs_thresh[]  = [];
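
To apply more than one threshold to the single field configured above, the documented space-separated form would look like the following sketch (the threshold values are arbitrary):

   fcst_thresh[] = [ "le273 le268" ];
   obs_thresh[]  = [];
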
>>
>> //
>> // Specify a comma-separated list of thresholds to be used when
computing
>> // VL1L2 and VAL1L2 partial sums for winds.  The thresholds are
applied to
>> the
>> // wind speed values derived from each U/V pair.  Only those U/V
pairs
>> which meet
>> // the wind speed threshold criteria are retained.  If the
obs_wind_thresh
>> // parameter is left blank, it will default to the contents of
>> fcst_wind_thresh.
>> //
>> // To apply multiple wind speed thresholds, separate the threshold
values
>> with a
>> // space.  Use "NA" to indicate that no wind speed threshold should
be
>> applied.
>> //
>> // Each threshold must be preceded by a two letter indicator for
the type
>> of
>> // thresholding to be performed:
>> //    'lt' for less than     'le' for less than or equal to
>> //    'eq' for equal to      'ne' for not equal to
>> //    'gt' for greater than  'ge' for greater than or equal to
>> //    'NA' for no threshold
>> //
>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
>> //
>> fcst_wind_thresh[] = [ "NA" ];
>> obs_wind_thresh[]  = [];
>>
>> //
>> // Specify a comma-separated list of PrepBufr message types with
which
>> // to perform the verification.  Statistics will be computed
separately
>> // for each message type specified.  At least one PrepBufr message
type
>> // must be provided.
>> // List of valid message types:
>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
>> //    SFCSHP SPSSMI SYNDAT VADWND
>> //    ANYAIR (= AIRCAR, AIRCFT)
>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
>> //    ONLYSF (= ADPSFC, SFCSHP)
>> //
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>> //
>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
>> //
>> message_type[] = [ "ADPSFC" ];
>>
>> //
>> // Specify a comma-separated list of grids to be used in masking
the data
>> over
>> // which to perform scoring.  An empty list indicates that no
masking grid
>> // should be performed.  The standard NCEP grids are named "GNNN"
where NNN
>> // indicates the three digit grid number.  Enter "FULL" to score
over the
>> // entire domain.
>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>> //
>> // e.g. mask_grid[] = [ "FULL" ];
>> //
>> mask_grid[] = [ "FULL" ];
>>
>> //
>> // Specify a comma-separated list of masking regions to be applied.
>> // An empty list indicates that no additional masks should be used.
>> // The masking regions may be defined in one of 4 ways:
>> //
>> // (1) An ASCII file containing a lat/lon polygon.
>> //     Latitude in degrees north and longitude in degrees east.
>> //     By default, the first and last polygon points are connected.
>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
>> //
>> // (2) The NetCDF output of the gen_poly_mask tool.
>> //
>> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
>> //     to be used, and optionally, a threshold to be applied to the
field.
>> //     e.g. "sample.nc var_name gt0.00"
>> //
>> // (4) A GRIB data file, followed by a description of the field
>> //     to be used, and optionally, a threshold to be applied to the
field.
>> //     e.g. "sample.grb APCP/A3 gt0.00"
>> //
>> // Any NetCDF or GRIB file used must have the same grid dimensions
as the
>> // data being verified.
>> //
>> // MET_BASE may be used in the path for the files above.
>> //
>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
>> //                      "poly_mask.ncf",
>> //                      "sample.nc APCP",
>> //                      "sample.grb HGT/Z0 gt100.0" ];
>> //
>> mask_poly[] = [];
>>
>> //
>> // Specify the name of an ASCII file containing a space-separated
list of
>> // station ID's at which to perform verification.  Each station ID
>> specified
>> // is treated as an individual masking region.
>> //
>> // An empty list file name indicates that no station ID masks
should be
>> used.
>> //
>> // MET_BASE may be used in the path for the station ID mask file
name.
>> //
>> // e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
>> //
>> mask_sid = "";
>>
>> //
>> // Specify a comma-separated list of values for alpha to be used
when
>> computing
>> // confidence intervals.  Values of alpha must be between 0 and 1.
>> //
>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
>> //
>> ci_alpha[] = [ 0.05 ];
>>
>> //
>> // Specify the method to be used for computing bootstrap confidence
>> intervals.
>> // The value for this is interpreted as follows:
>> //    (0) Use the BCa interval method (computationally intensive)
>> //    (1) Use the percentile interval method
>> //
>> boot_interval = 1;
>>
>> //
>> // Specify a proportion between 0 and 1 to define the replicate
sample size
>> // to be used when computing percentile intervals.  The replicate
sample
>> // size is set to boot_rep_prop * n, where n is the number of raw
data
>> points.
>> //
>> // e.g. boot_rep_prop = 0.80;
>> //
>> boot_rep_prop = 1.0;
>>
>> //
>> // Specify the number of times each set of matched pair data should
be
>> // resampled when computing bootstrap confidence intervals.  A
value of
>> // zero disables the computation of bootstrap confidence intervals.
>> //
>> // e.g. n_boot_rep = 1000;
>> //
>> n_boot_rep = 1000;
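
Taken together, a lighter-weight bootstrap setup consistent with the comments above might look like the sketch below (the values are illustrative only; setting n_boot_rep to 0 disables bootstrap confidence intervals entirely):

   boot_interval = 1;
   boot_rep_prop = 0.80;
   n_boot_rep    = 500;
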
>>
>> //
>> // Specify the name of the random number generator to be used.  See
the MET
>> // Users Guide for a list of possible random number generators.
>> //
>> boot_rng = "mt19937";
>>
>> //
>> // Specify the seed value to be used when computing bootstrap
confidence
>> // intervals.  If left unspecified, the seed will change for each
run and
>> // the computed bootstrap confidence intervals will not be
reproducible.
>> //
>> boot_seed = "";
>>
>> //
>> // Specify a comma-separated list of interpolation method(s) to be
used
>> // for comparing the forecast grid to the observation points.
String
>> values
>> // are interpreted as follows:
>> //    MIN     = Minimum in the neighborhood
>> //    MAX     = Maximum in the neighborhood
>> //    MEDIAN  = Median in the neighborhood
>> //    UW_MEAN = Unweighted mean in the neighborhood
>> //    DW_MEAN = Distance-weighted mean in the neighborhood
>> //    LS_FIT  = Least-squares fit in the neighborhood
>> //    BILIN   = Bilinear interpolation using the 4 closest points
>> //
>> // In all cases, vertical interpolation is performed in the natural
log
>> // of pressure of the levels above and below the observation.
>> //
>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
>> //
>> interp_method[] = [ "MEDIAN", "DW_MEAN" ];
>>
>> //
>> // Specify a comma-separated list of box widths to be used by the
>> // interpolation techniques listed above.  A value of 1 indicates
that
>> // the nearest neighbor approach should be used.  For a value of n
>> // greater than 1, the n*n grid points closest to the observation
define
>> // the neighborhood.
>> //
>> // e.g. interp_width = [ 1, 3, 5 ];
>> //
>> interp_width[] = [ 1, 3 ];
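
As another illustration of how these two lists combine (not taken from the ticket), a nearest-neighbor match plus a 5x5 unweighted mean would be configured as below; per the note above, a width of 1 reduces to nearest neighbor regardless of the method listed:

   interp_method[] = [ "UW_MEAN" ];
   interp_width[]  = [ 1, 5 ];
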
>>
>> //
>> // When interpolating, compute a ratio of the number of valid data
points
>> // to the total number of points in the neighborhood.  If that
ratio is
>> // less than this threshold, do not include the observation.  This
>> // threshold must be between 0 and 1.  Setting this threshold to 1
will
>> // require that each observation be surrounded by n*n valid
forecast
>> // points.
>> //
>> // e.g. interp_thresh = 1.0;
>> //
>> interp_thresh = 1.0;
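
A quick check of the arithmetic for the widths configured above:

   // for interp_width = 3 (a 3x3 = 9 point neighborhood),
   // interp_thresh = 1.0 requires all 9 forecast points to be valid,
   // while a value of 0.5 would require at least 5 of the 9.
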
>>
>> //
>> // Specify flags to indicate the type of data to be output:
>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
>> //           Total (TOTAL),
>> //           Forecast Rate (F_RATE),
>> //           Hit Rate (H_RATE),
>> //           Observation Rate (O_RATE)
>> //
>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
>> //           Total (TOTAL),
>> //           Forecast Yes and Observation Yes Count (FY_OY),
>> //           Forecast Yes and Observation No Count (FY_ON),
>> //           Forecast No and Observation Yes Count (FN_OY),
>> //           Forecast No and Observation No Count (FN_ON)
>> //
>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
>> //           Total (TOTAL),
>> //           Base Rate (BASER),
>> //           Forecast Mean (FMEAN),
>> //           Accuracy (ACC),
>> //           Frequency Bias (FBIAS),
>> //           Probability of Detecting Yes (PODY),
>> //           Probability of Detecting No (PODN),
>> //           Probability of False Detection (POFD),
>> //           False Alarm Ratio (FAR),
>> //           Critical Success Index (CSI),
>> //           Gilbert Skill Score (GSS),
>> //           Hanssen and Kuipers Discriminant (HK),
>> //           Heidke Skill Score (HSS),
>> //           Odds Ratio (ODDS),
>> //           NOTE: All statistics listed above contain parametric
and/or
>> //                 non-parametric confidence interval limits.
>> //
>> //    (4) STAT and MCTC Text Files, NxN Multi-Category Contingency
Table
>> Counts:
>> //           Total (TOTAL),
>> //           Number of Categories (N_CAT),
>> //           Contingency Table Count columns repeated N_CAT*N_CAT
times
>> //
>> //    (5) STAT and MCTS Text Files, NxN Multi-Category Contingency
Table
>> Scores:
>> //           Total (TOTAL),
>> //           Number of Categories (N_CAT),
>> //           Accuracy (ACC),
>> //           Hanssen and Kuipers Discriminant (HK),
>> //           Heidke Skill Score (HSS),
>> //           Gerrity Score (GER),
>> //           NOTE: All statistics listed above contain parametric
and/or
>> //                 non-parametric confidence interval limits.
>> //
>> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
>> //           Total (TOTAL),
>> //           Forecast Mean (FBAR),
>> //           Forecast Standard Deviation (FSTDEV),
>> //           Observation Mean (OBAR),
>> //           Observation Standard Deviation (OSTDEV),
>> //           Pearson's Correlation Coefficient (PR_CORR),
>> //           Spearman's Rank Correlation Coefficient (SP_CORR),
>> //           Kendall Tau Rank Correlation Coefficient (KT_CORR),
>> //           Number of ranks compared (RANKS),
>> //           Number of tied ranks in the forecast field
(FRANK_TIES),
>> //           Number of tied ranks in the observation field
(ORANK_TIES),
>> //           Mean Error (ME),
>> //           Standard Deviation of the Error (ESTDEV),
>> //           Multiplicative Bias (MBIAS = FBAR/OBAR),
>> //           Mean Absolute Error (MAE),
>> //           Mean Squared Error (MSE),
>> //           Bias-Corrected Mean Squared Error (BCMSE),
>> //           Root Mean Squared Error (RMSE),
>> //           Percentiles of the Error (E10, E25, E50, E75, E90)
>> //           NOTE: Most statistics listed above contain parametric
and/or
>> //                 non-parametric confidence interval limits.
>> //
>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
>> //           Total (TOTAL),
>> //           Forecast Mean (FBAR),
>> //              = mean(f)
>> //           Observation Mean (OBAR),
>> //              = mean(o)
>> //           Forecast*Observation Product Mean (FOBAR),
>> //              = mean(f*o)
>> //           Forecast Squared Mean (FFBAR),
>> //              = mean(f^2)
>> //           Observation Squared Mean (OOBAR)
>> //              = mean(o^2)
>> //
>> //    (8) STAT and SAL1L2 Text Files, Scalar Anomaly Partial Sums:
>> //           Total (TOTAL),
>> //           Forecast Anomaly Mean (FABAR),
>> //              = mean(f-c)
>> //           Observation Anomaly Mean (OABAR),
>> //              = mean(o-c)
>> //           Product of Forecast and Observation Anomalies Mean
(FOABAR),
>> //              = mean((f-c)*(o-c))
>> //           Forecast Anomaly Squared Mean (FFABAR),
>> //              = mean((f-c)^2)
>> //           Observation Anomaly Squared Mean (OOABAR)
>> //              = mean((o-c)^2)
>> //
>> //    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
>> //           Total (TOTAL),
>> //           U-Forecast Mean (UFBAR),
>> //              = mean(uf)
>> //           V-Forecast Mean (VFBAR),
>> //              = mean(vf)
>> //           U-Observation Mean (UOBAR),
>> //              = mean(uo)
>> //           V-Observation Mean (VOBAR),
>> //              = mean(vo)
>> //           U-Product Plus V-Product (UVFOBAR),
>> //              = mean(uf*uo+vf*vo)
>> //           U-Forecast Squared Plus V-Forecast Squared (UVFFBAR),
>> //              = mean(uf^2+vf^2)
>> //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
>> //              = mean(uo^2+vo^2)
>> //
>> //   (10) STAT and VAL1L2 Text Files, Vector Anomaly Partial Sums:
>> //           U-Forecast Anomaly Mean (UFABAR),
>> //              = mean(uf-uc)
>> //           V-Forecast Anomaly Mean (VFABAR),
>> //              = mean(vf-vc)
>> //           U-Observation Anomaly Mean (UOABAR),
>> //              = mean(uo-uc)
>> //           V-Observation Anomaly Mean (VOABAR),
>> //              = mean(vo-vc)
>> //           U-Anomaly Product Plus V-Anomaly Product (UVFOABAR),
>> //              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
>> //           U-Forecast Anomaly Squared Plus V-Forecast Anomaly
Squared
>> (UVFFABAR),
>> //              = mean((uf-uc)^2+(vf-vc)^2)
>> //           U-Observation Anomaly Squared Plus V-Observation
Anomaly
>> Squared (UVOOABAR)
>> //              = mean((uo-uc)^2+(vo-vc)^2)
>> //
>> //   (11) STAT and PCT Text Files, Nx2 Probability Contingency
Table
>> Counts:
>> //           Total (TOTAL),
>> //           Number of Forecast Probability Thresholds (N_THRESH),
>> //           Probability Threshold Value (THRESH_i),
>> //           Row Observation Yes Count (OY_i),
>> //           Row Observation No Count (ON_i),
>> //           NOTE: Previous 3 columns repeated for each row in the
table.
>> //           Last Probability Threshold Value (THRESH_n)
>> //
>> //   (12) STAT and PSTD Text Files, Nx2 Probability Contingency
Table
>> Scores:
>> //           Total (TOTAL),
>> //           Number of Forecast Probability Thresholds (N_THRESH),
>> //           Base Rate (BASER) with confidence interval limits,
>> //           Reliability (RELIABILITY),
>> //           Resolution (RESOLUTION),
>> //           Uncertainty (UNCERTAINTY),
>> //           Area Under the ROC Curve (ROC_AUC),
>> //           Brier Score (BRIER) with confidence interval limits,
>> //           Probability Threshold Value (THRESH_i)
>> //           NOTE: Previous column repeated for each probability
threshold.
>> //
>> //   (13) STAT and PJC Text Files, Joint/Continuous Statistics of
>> //                                 Probabilistic Variables:
>> //           Total (TOTAL),
>> //           Number of Forecast Probability Thresholds (N_THRESH),
>> //           Probability Threshold Value (THRESH_i),
>> //           Observation Yes Count Divided by Total (OY_TP_i),
>> //           Observation No Count Divided by Total (ON_TP_i),
>> //           Calibration (CALIBRATION_i),
>> //           Refinement (REFINEMENT_i),
>> //           Likelihood (LIKELIHOOD_i),
>> //           Base Rate (BASER_i),
>> //           NOTE: Previous 7 columns repeated for each row in the
table.
>> //           Last Probability Threshold Value (THRESH_n)
>> //
>> //   (14) STAT and PRC Text Files, ROC Curve Points for
>> //                                 Probabilistic Variables:
>> //           Total (TOTAL),
>> //           Number of Forecast Probability Thresholds (N_THRESH),
>> //           Probability Threshold Value (THRESH_i),
>> //           Probability of Detecting Yes (PODY_i),
>> //           Probability of False Detection (POFD_i),
>> //           NOTE: Previous 3 columns repeated for each row in the
table.
>> //           Last Probability Threshold Value (THRESH_n)
>> //
>> //   (15) STAT and MPR Text Files, Matched Pair Data:
>> //           Total (TOTAL),
>> //           Index (INDEX),
>> //           Observation Station ID (OBS_SID),
>> //           Observation Latitude (OBS_LAT),
>> //           Observation Longitude (OBS_LON),
>> //           Observation Level (OBS_LVL),
>> //           Observation Elevation (OBS_ELV),
>> //           Forecast Value (FCST),
>> //           Observation Value (OBS),
>> //           Climatological Value (CLIMO)
>> //
>> //   In the expressions above, f are forecast values, o are
observed
>> values,
>> //   and c are climatological values.
>> //
>> // Values for these flags are interpreted as follows:
>> //    (0) Do not generate output of this type
>> //    (1) Write output to a STAT file
>> //    (2) Write output to a STAT file and a text file
>> //
>> output_flag[] = [ 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1 ];
>>
>> //
>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
Correlation
>> // Coefficients should be computed.  Computing them over large
datasets is
>> // computationally intensive and slows down the runtime execution
>> significantly.
>> //    (0) Do not compute these correlation coefficients
>> //    (1) Compute these correlation coefficients
>> //
>> rank_corr_flag = 1;
>>
>> //
>> // Specify the GRIB Table 2 parameter table version number to be
used
>> // for interpreting GRIB codes.
>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>> //
>> grib_ptv = 2;
>>
>> //
>> // Directory where temporary files should be written.
>> //
>> tmp_dir = "/tmp";
>>
>> //
>> // Prefix to be used for the output file names.
>> //
>> output_prefix = "";
>>
>> //
>> // Indicate a version number for the contents of this configuration
file.
>> // The value should generally not be modified.
>> //
>> version = "V3.0.1";
>>
>>
>>


------------------------------------------------
Subject: Re: METV3 Issue
From: Tim Melino
Time: Fri Dec 09 11:38:35 2011

Paul,
I put everything into a tar file and uploaded it.

- Tim


On Fri, Dec 9, 2011 at 11:38 AM, Paul Oldenburg via RT
<met_help at ucar.edu>wrote:

> Tim,
>
> We are not able to reproduce the error that you are reporting.  Are
you
> using the same exact data and config files that
> you sent me and I tested with?  In any case, can you create a tar
archive
> of all the files involved in the point_stat
> command that throws the error and put it on the FTP site?  I will
need to
> be able to reproduce this error, otherwise it
> will be difficult for me to diagnose the problem.
>
> Thanks,
>
> Paul
>
>
> On 12/09/2011 08:16 AM, Tim Melino via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
> >
> > Paul,
> > I tried running again using your configuration settings, but while
> running
> > pointstat I am still receiving errors. The error comes up as the
> following
> > ....
> >
> > [wind at conus1 METv3.0.1]$ $MET_BASE/bin/point_stat wrf.nc
> > ndas.t12z.nc PointStatConfig -outdir . -v 99
> > GSL_RNG_TYPE=mt19937
> > GSL_RNG_SEED=18446744071864509006
> > Forecast File: wrf.nc
> > Climatology File: none
> > Configuration File: PointStatConfig
> > Observation File: ndas.t12z.nc
> >
> >
>
--------------------------------------------------------------------------------
> >
> > Reading records for TT(0,0,*,*).
> >
> >
> >   LongArray::operator[](int) -> range check error ... 4
> >
> >
> >
> > - Tim
> >
> >
> > On Thu, Dec 8, 2011 at 5:10 PM, Paul Oldenburg via RT
<met_help at ucar.edu
> >wrote:
> >
> >> Tim,
> >>
> >> I ran the following pb2nc and point_stat commands using the
attached
> >> config files to generate point verification data
> >> with your PrepBUFR obs and p_interp model data.  Note that
MET_BASE is
> set
> >> to the base folder of an instance of
> >> METv3.0.1.  I pulled both config files, with slight
modifications, from
> >> $MET_BASE/scripts/config.
> >>
> >> $MET_BASE/bin/pb2nc
ndas.t12z.prepbufr.tm12.nr ndas.t12z.nc PB2NCConfig_G212 -v 99
> >>
> >> $MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc PointStatConfig
-outdir .
> -v
> >> 99
> >>
> >> In PointStatConfig, you will see the following settings.  The
fcst_field
> >> setting format is due to the fact that fields
> >> in wrf.nc are four dimensional, with the last two dimensions
being the
> >> spatial (x,y) dimensions.  The obs_field
> >> specifies surface temperature using a GRIB-style format, because
pb2nc
> >> indexes fields in its output by GRIB code.  You
> >> should follow a similar paradigm to verify additional fields
beyond
> >> temperature.
> >>
> >> fcst_field[] = [ "TT(0,0,*,*)" ];
> >> obs_field[]  = [ "TMP/Z2" ];
> >>
> >> fcst_thresh[] = [ "le273" ];
> >> obs_thresh[]  = [];
> >>
> >> If you have any questions or problems, please let me know.
> >>
> >> Good luck,
> >>
> >> Paul
> >>
> >>
> >>
> >>
> >> On 12/08/2011 02:28 PM, Tim Melino via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
> >>>
> >>> I just put the file on the server that I have been using. As far
as
> >> running
> >>> the UPP software, that is not really possible at the moment. I
do not
> >> have
> >>> any of that software installed or configured as I have never had
a
> reason
> >>> to use it .
> >>>
> >>> - Tim
> >>>
> >>> On Thu, Dec 8, 2011 at 3:42 PM, Paul Oldenburg via RT <
> met_help at ucar.edu
> >>> wrote:
> >>>
> >>>> Tim,
> >>>>
> >>>> Can you please put the input PrepBUFR file that you pass to
pb2nc on
> the
> >>>> FTP site?  When I look at the contents of
> >>>> out.nc, it does not appear that there are any observations in
that
> >> file.
> >>>>  I would like to run pb2nc myself to see what
> >>>> is going on.
> >>>>
> >>>> I made an incorrect assumption in my earlier emails that you
were
> trying
> >>>> to verify model data in GRIB format.  Now that
> >>>> I have your data in hand, I see that it is p_interp output, as
you
> >>>> mentioned in your initial email.  MET support for
> >>>> p_interp is not as robust as for GRIB.  In particular, grid-
relative
> >> wind
> >>>> directions in p_interp data files should not
> >>>> be compared to lat-long relative wind directions in the
PrepBUFR obs.
> >>>>  Would it be possible for you to run your WRF
> >>>> output through the Unified Post Processor (UPP -
> >>>> http://www.dtcenter.org/wrf-
nmm/users/overview/upp_overview.php)
> >>>> instead of or in addition to p_interp?  That would simplify MET
> >>>> verification tasks.  Please let me know if you have any
> >>>> questions.
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Paul
> >>>>
> >>>>
> >>>> On 12/08/2011 11:07 AM, Tim Melino via RT wrote:
> >>>>>
> >>>>> Thu Dec 08 11:07:04 2011: Request 51928 was acted upon.
> >>>>> Transaction: Ticket created by tmelino at meso.com
> >>>>>        Queue: met_help
> >>>>>      Subject: Re: METV3 Issue
> >>>>>        Owner: Nobody
> >>>>>   Requestors: tmelino at meso.com
> >>>>>       Status: new
> >>>>>  Ticket <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928>
> >>>>>
> >>>>>
> >>>>> Ok,
> >>>>>
> >>>>> The data should be there now. With out.nc being the obs and
> wrf.nc being
> >>>>> the forecast
> >>>>>
> >>>>> - Tim
> >>>>>
> >>>>> On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino <tmelino at meso.com>
wrote:
> >>>>>
> >>>>>> Hi,
> >>>>>> I have recently been doing some work with WRF and am trying
to add
> the
> >>>>>> model evaluation tools to our standard model verification
system.  I
> >>>>>> started the process by running the pressure interpolation
program
> on a
> >>>>>> single wrfout file, which appeared to finish correctly. I
have
> >> attached
> >>>> an
> >>>>>> ncdump of the file header to this email it is called
> >>>>>> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I downloaded a
single
> >>>> prebufr
> >>>>>> file from an NCEP repository for the time centered on the
forecast
> >>>> period
> >>>>>> and ran PB2NC and this also appeared to finish correctly and
output
> a
> >>>>>> single netcdf file, the header information is also attached
> >> (PB2NC.txt).
> >>>>>> Then I attempted to run the point stat utility on these two
files
> but
> >>>> the
> >>>>>> program errors out telling me that there are more forecast
fields
> than
> >>>>>> observational fields "ERROR:
PointStatConfInfo::process_config() ->
> >> The
> >>>>>> number fcst_thresh entries provided must match the number of
fields
> >>>>>> provided in fcst_field.". I ran the following command from
the
> >> terminal
> >>>> to
> >>>>>> run point stat "bin/point_stat wrfout_d02_2011-12-
07_00:00:00_PLEV
> >>>> out.nc
> >>>>>> PointStatConfig".  I am not sure what the problem is I have
read the
> >>>>>> documentation and it appears to be setup correctly but I am
not
> >>>> completely
> >>>>>> sure as I have never used this software before.  What should
these
> >>>> namelist
> >>>>>> fields look like using 2 netcdf files (1.Forecast 1.Obs),
trying to
> >>>> verify
> >>>>>> 10 meter winds? I appreciate your help!
> >>>>>>
> >>>>>> Also ... I ran the test all scripts after compilation , and
the code
> >>>>>> completed successfully with no errors.
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> Thanks ,
> >>>>>> Tim
> >>>>>>
> >>>>
> >>>>
> >>>>
> >>
> >>
> >>
> >>
> >>
>
////////////////////////////////////////////////////////////////////////////////
> >> //
> >> // Default pb2nc configuration file
> >> //
> >>
> >>
>
////////////////////////////////////////////////////////////////////////////////
> >>
> >> //
> >> // Stratify the observation data in the PrepBufr files in the
following
> >> // ways:
> >> //  (1) by message type: supply a list of PrepBufr message types
> >> //      to retain (i.e. AIRCFT)
> >> //  (2) by station id: supply a list of observation stations to
retain
> >> //  (3) by valid time: supply starting and ending times in form
> >> //      YYYY-MM-DD HH:MM:SS UTC
> >> //  (4) by location: supply either an NCEP masking grid, a
masking
> >> //      lat/lon polygon, or a file containing a masking lat/lon polygon
> >> //  (5) by elevation: supply min/max elevation values
> >> //  (6) by report type (typ): supply a list of report types to
retain
> >> //  (7) by instrument type (itp): supply a list of instrument
type to
> >> //      retain
> >> //  (8) by vertical level: supply min/max vertical levels
> >> //  (9) by variable type: supply a list of variable types to
retain
> >> //      P, Q, T, Z, U, V
> >> // (11) by quality mark: supply a quality mark threshold
> >> // (12) Flag to retain values for all quality marks, or just the
first
> >> //      quality mark (highest)
> >> // (13) by data level category: supply a list of category types
to
> >> //      retain.
> >> //
> >> //      0 - Surface level (mass reports only)
> >> //      1 - Mandatory level (upper-air profile reports)
> >> //      2 - Significant temperature level (upper-air profile
reports)
> >> //      2 - Significant temperature and winds-by-pressure level
> >> //          (future combined mass and wind upper-air reports)
> >> //      3 - Winds-by-pressure level (upper-air profile reports)
> >> //      4 - Winds-by-height level (upper-air profile reports)
> >> //      5 - Tropopause level (upper-air profile reports)
> >> //      6 - Reports on a single level
> >> //          (e.g., aircraft, satellite-wind, surface wind,
> >> //           precipitable water retrievals, etc.)
> >> //      7 - Auxiliary levels generated via interpolation from
spanning
> >> levels
> >> //          (upper-air profile reports)
> >> //
> >>
> >> //
> >> // Specify a comma-separated list of PrepBufr message type
strings to
> >> retain.
> >> // An empty list indicates that all should be retained.
> >> // List of valid message types:
> >> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
> >> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
> >> //    SFCSHP SPSSMI SYNDAT VADWND
> >> //    ANYAIR (= AIRCAR, AIRCFT)
> >> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
> >> //    ONLYSF (= ADPSFC, SFCSHP)
> >> //
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
> >> //
> >> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
> >> //
> >> message_type[] = [];
> >>
> >> //
> >> // Specify a comma-separated list of station ID strings to
retain.
> >> // An empty list indicates that all should be retained.
> >> //
> >> // e.g. station_id[] = [ "KDEN" ];
> >> //
> >> station_id[] = [];
> >>
> >> //
> >> // Beginning and ending time offset values in seconds for
observations
> >> // to retain.  The valid time window for retaining observations
is
> >> // defined in reference to the observation time.  So observations
with
> >> // a valid time falling in the window [obs_time+beg_ds,
obs_time+end_ds]
> >> // will be retained.
> >> //
> >> beg_ds = -1800;
> >> end_ds =  1800;
> >>
> >> //
> >> // Specify the name of a single grid to be used in masking the
data.
> >> // An empty string indicates that no grid should be used.  The
standard
> >> // NCEP grids are named "GNNN" where NNN indicates the three
digit grid
> >> number.
> >> //
> >> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
> >> //
> >> // e.g. mask_grid = "G212";
> >> //
> >> mask_grid = "G212";
> >>
> >> //
> >> // Specify a single ASCII file containing a lat/lon polygon.
> >> // Latitude in degrees north and longitude in degrees east.
> >> // By default, the first and last polygon points are connected.
> >> //
> >> // The lat/lon polygon file should contain a name for the polygon
> >> // followed by a space-separated list of lat/lon points:
> >> //    "name lat1 lon1 lat2 lon2... latn lonn"
> >> //
> >> // MET_BASE may be used in the path for the lat/lon polygon file.
> >> //
> >> // e.g. mask_poly = "MET_BASE/data/poly/EAST.poly";
> >> //
> >> mask_poly = "";
> >>
> >> //
> >> // Beginning and ending elevation values in meters for
observations
> >> // to retain.
> >> //
> >> beg_elev = -1000;
> >> end_elev = 100000;
> >>
> >> //
> >> // Specify a comma-separated list of PrepBufr report type values
to
> retain.
> >> // An empty list indicates that all should be retained.
> >> //
> >> //
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_4.htm
> >> //
> >> // e.g. pb_report_type[] = [ 120, 133 ];
> >> //
> >> pb_report_type[] = [];
> >>
> >> //
> >> // Specify a comma-separated list of input report type values to
retain.
> >> // An empty list indicates that all should be retained.
> >> //
> >> //
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_6.htm
> >> //
> >> // e.g. in_report_type[] = [ 11, 22, 23 ];
> >> //
> >> in_report_type[] = [];
> >>
> >> //
> >> // Specify a comma-separated list of instrument type values to
retain.
> >> // An empty list indicates that all should be retained.
> >> //
> >> // e.g. instrument_type[] = [ 52, 87 ];
> >> //
> >> instrument_type[] = [];
> >>
> >> //
> >> // Beginning and ending vertical levels to retain.
> >> //
> >> beg_level = 1;
> >> end_level = 255;
> >>
> >> //
> >> // Specify a comma-separated list of strings containing grib
codes or
> >> // corresponding grib code abbreviations to retain or be derived
from
> >> // the available observations.
> >> //
> >> // Grib Codes to be RETAINED:
> >> //    SPFH or 51 for Specific Humidity in kg/kg
> >> //    TMP  or 11 for Temperature in K
> >> //    HGT  or 7  for Height in meters
> >> //    UGRD or 33 for the East-West component of the wind in m/s
> >> //    VGRD or 34 for the North-South component of the wind in m/s
> >> //
> >> // Grib Codes to be DERIVED:
> >> //    DPT   or 17 for Dewpoint Temperature in K
> >> //    WIND  or 32 for Wind Speed in m/s
> >> //    RH    or 52 for Relative Humidity in %
> >> //    MIXR  or 53 for Humidity Mixing Ratio in kg/kg
> >> //    PRMSL or  2 for Pressure Reduced to Mean Sea Level in Pa
> >> //
> >> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >> //
> >> // e.g. obs_grib_code[] = [ "TMP", "UGRD", "VGRD", "WIND" ];
> >> //
> >> obs_grib_code[] = [ "SPFH", "TMP",  "HGT",  "UGRD", "VGRD",
> >>                    "DPT",  "WIND", "RH",   "MIXR" ];
> >>
> >> //
> >> // Quality mark threshold to indicate which observations to
retain.
> >> // Observations with a quality mark equal to or LESS THAN this
threshold
> >> // will be retained, while observations with a quality mark
GREATER THAN
> >> // this threshold will be discarded.
> >> //
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_7.htm
> >> //
> >> quality_mark_thresh = 2;
> >>
> >> //
> >> // Flag to indicate whether observations should be drawn from the
top
> >> // of the event stack (most quality controlled) or the bottom of
the
> >> // event stack (most raw).  A value of 1 indicates that the top
of the
> >> // event stack should be used while a value of zero indicates
that the
> >> // bottom should be used.
> >> //
> >> event_stack_flag = 1;
> >>
> >> //
> >> // Specify a comma-separated list of data level category values to
retain,
> >> // where a value of:
> >> //    0 = Surface level (mass reports only)
> >> //    1 = Mandatory level (upper-air profile reports)
> >> //    2 = Significant temperature level (upper-air profile
reports)
> >> //    2 = Significant temperature and winds-by-pressure level
> >> //        (future combined mass and wind upper-air reports)
> >> //    3 = Winds-by-pressure level (upper-air profile reports)
> >> //    4 = Winds-by-height level (upper-air profile reports)
> >> //    5 = Tropopause level (upper-air profile reports)
> >> //    6 = Reports on a single level
> >> //        (e.g., aircraft, satellite-wind, surface wind,
> >> //         precipitable water retrievals, etc.)
> >> //    7 = Auxiliary levels generated via interpolation from
spanning
> levels
> >> //        (upper-air profile reports)
> >> // An empty list indicates that all should be retained.
> >> //
> >> //
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
> >> //
> >> // e.g. level_category[] = [ 0, 1 ];
> >> //
> >> level_category[] = [];
> >>
> >> //
> >> // Directory where temp files should be written by the PB2NC tool
> >> //
> >> tmp_dir = "/tmp";
> >>
> >> //
> >> // Indicate a version number for the contents of this
configuration
> file.
> >> // The value should generally not be modified.
> >> //
> >> version = "V3.0";
> >>
> >>
> >>
>
////////////////////////////////////////////////////////////////////////////////
> >> //
> >> // Default point_stat configuration file
> >> //
> >>
> >>
>
////////////////////////////////////////////////////////////////////////////////
> >>
> >> //
> >> // Specify a name to designate the model being verified.  This
name
> will be
> >> // written to the second column of the ASCII output generated.
> >> //
> >> model = "WRF";
> >>
> >> //
> >> // Beginning and ending time offset values in seconds for
observations
> >> // to be used.  These time offsets are defined in reference to
the
> >> // forecast valid time, v.  Observations with a valid time
falling in
> the
> >> // window [v+beg_ds, v+end_ds] will be used.
> >> // These selections are overridden by the command line arguments
> >> // -obs_valid_beg and -obs_valid_end.
> >> //
> >> beg_ds = -1800;
> >> end_ds =  1800;
> >>
> >> //
> >> // Specify a comma-separated list of fields to be verified.  The
> forecast
> >> and
> >> // observation fields may be specified separately.  If the
obs_field
> >> parameter
> >> // is left blank, it will default to the contents of fcst_field.
> >> //
> >> // Each field is specified as a GRIB code or abbreviation
followed by an
> >> // accumulation or vertical level indicator for GRIB files or as
a
> >> variable name
> >> // followed by a list of dimensions for NetCDF files output from
> p_interp
> >> or MET.
> >> //
> >> // Specifying verification fields for GRIB files:
> >> //    GC/ANNN for accumulation interval NNN
> >> //    GC/ZNNN for vertical level NNN
> >> //    GC/ZNNN-NNN for a range of vertical levels (MSL or AGL)
> >> //    GC/PNNN for pressure level NNN in hPa
> >> //    GC/PNNN-NNN for a range of pressure levels in hPa
> >> //    GC/LNNN for a generic level type
> >> //    GC/RNNN for a specific GRIB record number
> >> //    Where GC is the number of or abbreviation for the grib code
> >> //    to be verified.
> >> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >> //
> >> // Specifying verification fields for NetCDF files:
> >> //    var_name(i,...,j,*,*) for a single field
> >> //    var_name(i-j,*,*) for a range of fields
> >> //    Where var_name is the name of the NetCDF variable,
> >> //    and i,...,j specifies fixed dimension values,
> >> //    and i-j specifies a range of values for a single dimension,
> >> //    and *,* specifies the two dimensions for the gridded field.
> >> //
> >> //    NOTE: To verify winds as vectors rather than scalars,
> >> //          specify UGRD (or 33) followed by VGRD (or 34) with
the
> >> //          same level values.
> >> //
> >> //    NOTE: To process a probability field, add "/PROB", such as
> >> "POP/Z0/PROB".
> >> //
> >> // e.g. fcst_field[] = [ "SPFH/P500", "TMP/P500" ]; for a GRIB
input
> >> // e.g. fcst_field[] = [ "QVAPOR(0,5,*,*)", "TT(0,5,*,*)" ]; for
NetCDF
> >> input
> >> //
> >>
> >> fcst_field[] = [ "TT(0,0,*,*)" ];
> >> obs_field[]  = [ "TMP/Z2" ];
> >>
> >> //
> >> // Specify a comma-separated list of groups of thresholds to be
applied
> to
> >> the
> >> // fields listed above.  Thresholds for the forecast and
observation
> fields
> >> // may be specified separately.  If the obs_thresh parameter is
left
> blank,
> >> // it will default to the contents of fcst_thresh.
> >> //
> >> // At least one threshold must be provided for each field listed
above.
> >>  The
> >> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as
> must
> >> // lengths of the "obs_field" and "obs_thresh" arrays.  To apply
> multiple
> >> // thresholds to a field, separate the threshold values with a
space.
> >> //
> >> // Each threshold must be preceded by a two letter indicator for
the
> type
> >> of
> >> // thresholding to be performed:
> >> //    'lt' for less than     'le' for less than or equal to
> >> //    'eq' for equal to      'ne' for not equal to
> >> //    'gt' for greater than  'ge' for greater than or equal to
> >> //
> >> // NOTE: Thresholds for probabilities must begin with 0.0, end
with 1.0,
> >> //       and be preceded by "ge".
> >> //
> >> // e.g. fcst_thresh[] = [ "gt80", "gt273" ];
> >> //
> >> fcst_thresh[] = [ "le273" ];
> >> obs_thresh[]  = [];
> >>
> >> //
> >> // Specify a comma-separated list of thresholds to be used when
> computing
> >> // VL1L2 and VAL1L2 partial sums for winds.  The thresholds are
applied
> to
> >> the
> >> // wind speed values derived from each U/V pair.  Only those U/V
pairs
> >> which meet
> >> // the wind speed threshold criteria are retained.  If the
> obs_wind_thresh
> >> // parameter is left blank, it will default to the contents of
> >> fcst_wind_thresh.
> >> //
> >> // To apply multiple wind speed thresholds, separate the
threshold
> values
> >> with a
> >> // space.  Use "NA" to indicate that no wind speed threshold
should be
> >> applied.
> >> //
> >> // Each threshold must be preceded by a two letter indicator for
the
> type
> >> of
> >> // thresholding to be performed:
> >> //    'lt' for less than     'le' for less than or equal to
> >> //    'eq' for equal to      'ne' for not equal to
> >> //    'gt' for greater than  'ge' for greater than or equal to
> >> //    'NA' for no threshold
> >> //
> >> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
> >> //
> >> fcst_wind_thresh[] = [ "NA" ];
> >> obs_wind_thresh[]  = [];
> >>
> >> //
> >> // Specify a comma-separated list of PrepBufr message types with
which
> >> // to perform the verification.  Statistics will be computed
separately
> >> // for each message type specified.  At least one PrepBufr
message type
> >> // must be provided.
> >> // List of valid message types:
> >> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
> >> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
> >> //    SFCSHP SPSSMI SYNDAT VADWND
> >> //    ANYAIR (= AIRCAR, AIRCFT)
> >> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
> >> //    ONLYSF (= ADPSFC, SFCSHP)
> >> //
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
> >> //
> >> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
> >> //
> >> message_type[] = [ "ADPSFC" ];
> >>
> >> //
> >> // Specify a comma-separated list of grids to be used in masking
the
> data
> >> over
> >> // which to perform scoring.  An empty list indicates that no
masking
> grid
> >> // should be applied.  The standard NCEP grids are named "GNNN"
where
> NNN
> >> // indicates the three digit grid number.  Enter "FULL" to score
over
> the
> >> // entire domain.
> >> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
> >> //
> >> // e.g. mask_grid[] = [ "FULL" ];
> >> //
> >> mask_grid[] = [ "FULL" ];
> >>
> >> //
> >> // Specify a comma-separated list of masking regions to be
applied.
> >> // An empty list indicates that no additional masks should be
used.
> >> // The masking regions may be defined in one of 4 ways:
> >> //
> >> // (1) An ASCII file containing a lat/lon polygon.
> >> //     Latitude in degrees north and longitude in degrees east.
> >> //     By default, the first and last polygon points are
connected.
> >> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
> >> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
> >> //
> >> // (2) The NetCDF output of the gen_poly_mask tool.
> >> //
> >> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
> >> //     to be used, and optionally, a threshold to be applied to
the
> field.
> >> //     e.g. "sample.nc var_name gt0.00"
> >> //
> >> // (4) A GRIB data file, followed by a description of the field
> >> //     to be used, and optionally, a threshold to be applied to
the
> field.
> >> //     e.g. "sample.grb APCP/A3 gt0.00"
> >> //
> >> // Any NetCDF or GRIB file used must have the same grid
dimensions as
> the
> >> // data being verified.
> >> //
> >> // MET_BASE may be used in the path for the files above.
> >> //
> >> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
> >> //                      "poly_mask.ncf",
> >> //                      "sample.nc APCP",
> >> //                      "sample.grb HGT/Z0 gt100.0" ];
> >> //
> >> mask_poly[] = [];
> >>
> >> //
> >> // Specify the name of an ASCII file containing a space-separated
list
> of
> >> // station ID's at which to perform verification.  Each station
ID
> >> specified
> >> // is treated as an individual masking region.
> >> //
> >> // An empty list file name indicates that no station ID masks
should be
> >> used.
> >> //
> >> // MET_BASE may be used in the path for the station ID mask file
name.
> >> //
> >> // e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
> >> //
> >> mask_sid = "";
> >>
> >> //
> >> // Specify a comma-separated list of values for alpha to be used
when
> >> computing
> >> // confidence intervals.  Values of alpha must be between 0 and
1.
> >> //
> >> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
> >> //
> >> ci_alpha[] = [ 0.05 ];
> >>
> >> //
> >> // Specify the method to be used for computing bootstrap
confidence
> >> intervals.
> >> // The value for this is interpreted as follows:
> >> //    (0) Use the BCa interval method (computationally intensive)
> >> //    (1) Use the percentile interval method
> >> //
> >> boot_interval = 1;
> >>
> >> //
> >> // Specify a proportion between 0 and 1 to define the replicate
sample
> size
> >> // to be used when computing percentile intervals.  The replicate
sample
> >> // size is set to boot_rep_prop * n, where n is the number of raw
data
> >> points.
> >> //
> >> // e.g boot_rep_prop = 0.80;
> >> //
> >> boot_rep_prop = 1.0;
> >>
> >> //
> >> // Specify the number of times each set of matched pair data
should be
> >> // resampled when computing bootstrap confidence intervals.  A
value of
> >> // zero disables the computation of bootstrap confidence
intervals.
> >> //
> >> // e.g. n_boot_rep = 1000;
> >> //
> >> n_boot_rep = 1000;
> >>
> >> //
> >> // Specify the name of the random number generator to be used.
See the
> MET
> >> // Users Guide for a list of possible random number generators.
> >> //
> >> boot_rng = "mt19937";
> >>
> >> //
> >> // Specify the seed value to be used when computing bootstrap
confidence
> >> // intervals.  If left unspecified, the seed will change for each
run
> and
> >> // the computed bootstrap confidence intervals will not be
reproducible.
> >> //
> >> boot_seed = "";
> >>
> >> //
> >> // Specify a comma-separated list of interpolation method(s) to
be used
> >> // for comparing the forecast grid to the observation points.
String
> >> values
> >> // are interpreted as follows:
> >> //    MIN     = Minimum in the neighborhood
> >> //    MAX     = Maximum in the neighborhood
> >> //    MEDIAN  = Median in the neighborhood
> >> //    UW_MEAN = Unweighted mean in the neighborhood
> >> //    DW_MEAN = Distance-weighted mean in the neighborhood
> >> //    LS_FIT  = Least-squares fit in the neighborhood
> >> //    BILIN   = Bilinear interpolation using the 4 closest points
> >> //
> >> // In all cases, vertical interpolation is performed in the
natural log
> >> // of pressure of the levels above and below the observation.
> >> //
> >> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
> >> //
> >> interp_method[] = [ "MEDIAN", "DW_MEAN" ];
> >>
> >> //
> >> // Specify a comma-separated list of box widths to be used by the
> >> // interpolation techniques listed above.  A value of 1 indicates
that
> >> // the nearest neighbor approach should be used.  For a value of
n
> >> // greater than 1, the n*n grid points closest to the observation
define
> >> // the neighborhood.
> >> //
> >> // e.g. interp_width = [ 1, 3, 5 ];
> >> //
> >> interp_width[] = [ 1, 3 ];
> >>
> >> //
> >> // When interpolating, compute a ratio of the number of valid
data
> points
> >> // to the total number of points in the neighborhood.  If that
ratio is
> >> // less than this threshold, do not include the observation.
This
> >> // threshold must be between 0 and 1.  Setting this threshold to
1 will
> >> // require that each observation be surrounded by n*n valid
forecast
> >> // points.
> >> //
> >> // e.g. interp_thresh = 1.0;
> >> //
> >> interp_thresh = 1.0;
> >>
> >> //
> >> // Specify flags to indicate the type of data to be output:
> >> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
> >> //           Total (TOTAL),
> >> //           Forecast Rate (F_RATE),
> >> //           Hit Rate (H_RATE),
> >> //           Observation Rate (O_RATE)
> >> //
> >> //    (2) STAT and CTC Text Files, Contingency Table Counts:
> >> //           Total (TOTAL),
> >> //           Forecast Yes and Observation Yes Count (FY_OY),
> >> //           Forecast Yes and Observation No Count (FY_ON),
> >> //           Forecast No and Observation Yes Count (FN_OY),
> >> //           Forecast No and Observation No Count (FN_ON)
> >> //
> >> //    (3) STAT and CTS Text Files, Contingency Table Scores:
> >> //           Total (TOTAL),
> >> //           Base Rate (BASER),
> >> //           Forecast Mean (FMEAN),
> >> //           Accuracy (ACC),
> >> //           Frequency Bias (FBIAS),
> >> //           Probability of Detecting Yes (PODY),
> >> //           Probability of Detecting No (PODN),
> >> //           Probability of False Detection (POFD),
> >> //           False Alarm Ratio (FAR),
> >> //           Critical Success Index (CSI),
> >> //           Gilbert Skill Score (GSS),
> >> //           Hanssen and Kuipers Discriminant (HK),
> >> //           Heidke Skill Score (HSS),
> >> //           Odds Ratio (ODDS),
> >> //           NOTE: All statistics listed above contain parametric
and/or
> >> //                 non-parametric confidence interval limits.
> >> //
> >> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency Table
> >> Counts:
> >> //           Total (TOTAL),
> >> //           Number of Categories (N_CAT),
> >> //           Contingency Table Count columns repeated N_CAT*N_CAT
times
> >> //
> >> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency Table
> >> Scores:
> >> //           Total (TOTAL),
> >> //           Number of Categories (N_CAT),
> >> //           Accuracy (ACC),
> >> //           Hanssen and Kuipers Discriminant (HK),
> >> //           Heidke Skill Score (HSS),
> >> //           Gerrity Score (GER),
> >> //           NOTE: All statistics listed above contain parametric
and/or
> >> //                 non-parametric confidence interval limits.
> >> //
> >> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
> >> //           Total (TOTAL),
> >> //           Forecast Mean (FBAR),
> >> //           Forecast Standard Deviation (FSTDEV),
> >> //           Observation Mean (OBAR),
> >> //           Observation Standard Deviation (OSTDEV),
> >> //           Pearson's Correlation Coefficient (PR_CORR),
> >> //           Spearman's Rank Correlation Coefficient (SP_CORR),
> >> //           Kendall Tau Rank Correlation Coefficient (KT_CORR),
> >> //           Number of ranks compared (RANKS),
> >> //           Number of tied ranks in the forecast field
(FRANK_TIES),
> >> //           Number of tied ranks in the observation field
(ORANK_TIES),
> >> //           Mean Error (ME),
> >> //           Standard Deviation of the Error (ESTDEV),
> >> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
> >> //           Mean Absolute Error (MAE),
> >> //           Mean Squared Error (MSE),
> >> //           Bias-Corrected Mean Squared Error (BCMSE),
> >> //           Root Mean Squared Error (RMSE),
> >> //           Percentiles of the Error (E10, E25, E50, E75, E90)
> >> //           NOTE: Most statistics listed above contain
parametric
> and/or
> >> //                 non-parametric confidence interval limits.
> >> //
> >> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
> >> //           Total (TOTAL),
> >> //           Forecast Mean (FBAR),
> >> //              = mean(f)
> >> //           Observation Mean (OBAR),
> >> //              = mean(o)
> >> //           Forecast*Observation Product Mean (FOBAR),
> >> //              = mean(f*o)
> >> //           Forecast Squared Mean (FFBAR),
> >> //              = mean(f^2)
> >> //           Observation Squared Mean (OOBAR)
> >> //              = mean(o^2)
> >> //
> >> //    (8) STAT and SAL1L2 Text Files, Scalar Anomaly Partial
Sums:
> >> //           Total (TOTAL),
> >> //           Forecast Anomaly Mean (FABAR),
> >> //              = mean(f-c)
> >> //           Observation Anomaly Mean (OABAR),
> >> //              = mean(o-c)
> >> //           Product of Forecast and Observation Anomalies Mean
> (FOABAR),
> >> //              = mean((f-c)*(o-c))
> >> //           Forecast Anomaly Squared Mean (FFABAR),
> >> //              = mean((f-c)^2)
> >> //           Observation Anomaly Squared Mean (OOABAR)
> >> //              = mean((o-c)^2)
> >> //
> >> //    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
> >> //           Total (TOTAL),
> >> //           U-Forecast Mean (UFBAR),
> >> //              = mean(uf)
> >> //           V-Forecast Mean (VFBAR),
> >> //              = mean(vf)
> >> //           U-Observation Mean (UOBAR),
> >> //              = mean(uo)
> >> //           V-Observation Mean (VOBAR),
> >> //              = mean(vo)
> >> //           U-Product Plus V-Product (UVFOBAR),
> >> //              = mean(uf*uo+vf*vo)
> >> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
> >> //              = mean(uf^2+vf^2)
> >> //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
> >> //              = mean(uo^2+vo^2)
> >> //
> >> //   (10) STAT and VAL1L2 Text Files, Vector Anomaly Partial
Sums:
> >> //           U-Forecast Anomaly Mean (UFABAR),
> >> //              = mean(uf-uc)
> >> //           V-Forecast Anomaly Mean (VFABAR),
> >> //              = mean(vf-vc)
> >> //           U-Observation Anomaly Mean (UOABAR),
> >> //              = mean(uo-uc)
> >> //           V-Observation Anomaly Mean (VOABAR),
> >> //              = mean(vo-vc)
> >> //           U-Anomaly Product Plus V-Anomaly Product (UVFOABAR),
> >> //              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
> >> //           U-Forecast Anomaly Squared Plus V-Forecast Anomaly
Squared
> >> (UVFFABAR),
> >> //              = mean((uf-uc)^2+(vf-vc)^2)
> >> //           U-Observation Anomaly Squared Plus V-Observation
Anomaly
> >> Squared (UVOOABAR)
> >> //              = mean((uo-uc)^2+(vo-vc)^2)
> >> //
> >> //   (11) STAT and PCT Text Files, Nx2 Probability Contingency
Table
> >> Counts:
> >> //           Total (TOTAL),
> >> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >> //           Probability Threshold Value (THRESH_i),
> >> //           Row Observation Yes Count (OY_i),
> >> //           Row Observation No Count (ON_i),
> >> //           NOTE: Previous 3 columns repeated for each row in
the
> table.
> >> //           Last Probability Threshold Value (THRESH_n)
> >> //
> >> //   (12) STAT and PSTD Text Files, Nx2 Probability Contingency
Table
> >> Scores:
> >> //           Total (TOTAL),
> >> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >> //           Base Rate (BASER) with confidence interval limits,
> >> //           Reliability (RELIABILITY),
> >> //           Resolution (RESOLUTION),
> >> //           Uncertainty (UNCERTAINTY),
> >> //           Area Under the ROC Curve (ROC_AUC),
> >> //           Brier Score (BRIER) with confidence interval limits,
> >> //           Probability Threshold Value (THRESH_i)
> >> //           NOTE: Previous column repeated for each probability
> threshold.
> >> //
> >> //   (13) STAT and PJC Text Files, Joint/Continuous Statistics of
> >> //                                 Probabilistic Variables:
> >> //           Total (TOTAL),
> >> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >> //           Probability Threshold Value (THRESH_i),
> >> //           Observation Yes Count Divided by Total (OY_TP_i),
> >> //           Observation No Count Divided by Total (ON_TP_i),
> >> //           Calibration (CALIBRATION_i),
> >> //           Refinement (REFINEMENT_i),
> >> //           Likelihood (LIKELIHOOD_i),
> >> //           Base Rate (BASER_i),
> >> //           NOTE: Previous 7 columns repeated for each row in
the
> table.
> >> //           Last Probability Threshold Value (THRESH_n)
> >> //
> >> //   (14) STAT and PRC Text Files, ROC Curve Points for
> >> //                                 Probabilistic Variables:
> >> //           Total (TOTAL),
> >> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >> //           Probability Threshold Value (THRESH_i),
> >> //           Probability of Detecting Yes (PODY_i),
> >> //           Probability of False Detection (POFD_i),
> >> //           NOTE: Previous 3 columns repeated for each row in
the
> table.
> >> //           Last Probability Threshold Value (THRESH_n)
> >> //
> >> //   (15) STAT and MPR Text Files, Matched Pair Data:
> >> //           Total (TOTAL),
> >> //           Index (INDEX),
> >> //           Observation Station ID (OBS_SID),
> >> //           Observation Latitude (OBS_LAT),
> >> //           Observation Longitude (OBS_LON),
> >> //           Observation Level (OBS_LVL),
> >> //           Observation Elevation (OBS_ELV),
> >> //           Forecast Value (FCST),
> >> //           Observation Value (OBS),
> >> //           Climatological Value (CLIMO)
> >> //
> >> //   In the expressions above, f are forecast values, o are
observed
> >> values,
> >> //   and c are climatological values.
> >> //
> >> // Values for these flags are interpreted as follows:
> >> //    (0) Do not generate output of this type
> >> //    (1) Write output to a STAT file
> >> //    (2) Write output to a STAT file and a text file
> >> //
> >> output_flag[] = [ 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1 ];
> >>
> >> //
> >> // Flag to indicate whether Kendall's Tau and Spearman's Rank
> Correlation
> >> // Coefficients should be computed.  Computing them over large
datasets
> is
> >> // computationally intensive and slows down the runtime execution
> >> significantly.
> >> //    (0) Do not compute these correlation coefficients
> >> //    (1) Compute these correlation coefficients
> >> //
> >> rank_corr_flag = 1;
> >>
> >> //
> >> // Specify the GRIB Table 2 parameter table version number to be
used
> >> // for interpreting GRIB codes.
> >> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >> //
> >> grib_ptv = 2;
> >>
> >> //
> >> // Directory where temporary files should be written.
> >> //
> >> tmp_dir = "/tmp";
> >>
> >> //
> >> // Prefix to be used for the output file names.
> >> //
> >> output_prefix = "";
> >>
> >> //
> >> // Indicate a version number for the contents of this
configuration
> file.
> >> // The value should generally not be modified.
> >> //
> >> version = "V3.0.1";
> >>
> >>
> >>
>
>
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51928] Re: METV3 Issue
From: Paul Oldenburg
Time: Fri Dec 09 11:49:23 2011

Tim,

I'm still not able to reproduce the error that you reported.  Have you
applied all of the latest patches to METv3.0.1?
The latest patch tarball and instructions on how to apply it can be
found here:
http://www.dtcenter.org/met/users/support/known_issues/METv3.0.1/index.php.
Can you tell me what version of NetCDF you
linked MET against?  What family of compilers did you use to compile
MET (e.g. GNU/PGI/intel)?  I think we are down to a
configuration/environment problem at this point.  Sorry for the
trouble.

Paul
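
For reference, here is a minimal sketch of the environment check and
patch rebuild, assuming the patch tarball simply unpacks over the
METv3.0.1 source tree as described on the known_issues page above.
The tarball name and paths below are placeholders, not actual file
names:

   # confirm the NetCDF and compiler versions on the path
   nc-config --version
   icc --version; ifort --version

   # unpack the patched source over the build tree and recompile
   cd /path/to/METv3.0.1
   tar -xzf METv3.0.1_patches.tar.gz
   make clean
   make >& make_patched.log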


On 12/09/2011 11:38 AM, Tim Melino via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>
> Paul,
> I put everything into a tar file and uploaded it.
>
> - Tim
>
>
> On Fri, Dec 9, 2011 at 11:38 AM, Paul Oldenburg via RT
<met_help at ucar.edu>wrote:
>
>> Tim,
>>
>> We are not able to reproduce the error that you are reporting.  Are
you
>> using the same exact data and config files that
>> you sent me and I tested with?  In any case, can you create a tar
archive
>> of all the files involved in the point_stat
>> command that throws the error and put it on the FTP site?  I will
need to
>> be able to reproduce this error, otherwise it
>> will be difficult for me to diagnose the problem.
>>
>> Thanks,
>>
>> Paul
>>
>>
>> On 12/09/2011 08:16 AM, Tim Melino via RT wrote:
>>>
>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>
>>> Paul,
>>> I tried running again using your configuration settings, but while
>> running
>>> pointstat I am still receiving errors. The error comes up as the
>> following
>>> ....
>>>
>>> [wind at conus1 METv3.0.1]$ $MET_BASE/bin/point_stat wrf.nc
>>> ndas.t12z.nc PointStatConfig -outdir . -v 99
>>> GSL_RNG_TYPE=mt19937
>>> GSL_RNG_SEED=18446744071864509006
>>> Forecast File: wrf.nc
>>> Climatology File: none
>>> Configuration File: PointStatConfig
>>> Observation File: ndas.t12z.nc
>>>
>>>
>>
--------------------------------------------------------------------------------
>>>
>>> Reading records for TT(0,0,*,*).
>>>
>>>
>>>   LongArray::operator[](int) -> range check error ... 4
>>>
>>>
>>>
>>> - Tim
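
A quick way to sanity check the "TT(0,0,*,*)" request above against the
file itself is to dump the header and confirm that TT really has four
dimensions, with the last two being the gridded (y,x) dimensions.  The
dimension names shown here are only the usual WRF/p_interp conventions,
not taken from this particular file:

   ncdump -h wrf.nc | grep "float TT("
   # expect something like:
   #    float TT(Time, num_pressure_levels, south_north, west_east)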
>>>
>>>
>>> On Thu, Dec 8, 2011 at 5:10 PM, Paul Oldenburg via RT
<met_help at ucar.edu
>>> wrote:
>>>
>>>> Tim,
>>>>
>>>> I ran the following pb2nc and point_stat commands using the
attached
>>>> config files to generate point verification data
>>>> with your PrepBUFR obs and p_interp model data.  Note that
MET_BASE is
>> set
>>>> to the base folder of an instance of
>>>> METv3.0.1.  I pulled both config files, with slight
modifications, from
>>>> $MET_BASE/scripts/config.
>>>>
>>>> $MET_BASE/bin/pb2nc
ndas.t12z.prepbufr.tm12.nr ndas.t12z.nc PB2NCConfig_G212 -v 99
>>>>
>>>> $MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc PointStatConfig
-outdir .
>> -v
>>>> 99
>>>>
>>>> In PointStatConfig, you will see the following settings.  The
fcst_field
>>>> setting format is due to the fact that fields
>>>> in wrf.nc are four dimensional, with the last two dimensions
being the
>>>> spatial (x,y) dimensions.  The obs_field
>>>> specifies surface temperature using a GRIB-style format, because
pb2nc
>>>> indexes fields in its output by GRIB code.  You
>>>> should follow a similar paradigm to verify additional fields
beyond
>>>> temperature.
>>>>
>>>> fcst_field[] = [ "TT(0,0,*,*)" ];
>>>> obs_field[]  = [ "TMP/Z2" ];
>>>>
>>>> fcst_thresh[] = [ "le273" ];
>>>> obs_thresh[]  = [];
>>>>
>>>> If you have any questions or problems, please let me know.
>>>>
>>>> Good luck,
>>>>
>>>> Paul
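
Following the same paradigm, the 10 m winds from the original request
could be verified as a U/V vector pair.  The forecast variable names,
dimension counts and thresholds below are only illustrative (U10/V10
with dimensions (Time, south_north, west_east) are assumed); the real
names and dimension order come from the ncdump header of the p_interp
output, and the note in the config file about listing UGRD before VGRD
with matching levels still applies:

   fcst_field[] = [ "U10(0,*,*)", "V10(0,*,*)" ];
   obs_field[]  = [ "UGRD/Z10",   "VGRD/Z10" ];

   fcst_thresh[] = [ "ge2.572", "ge2.572" ];
   obs_thresh[]  = [];

The "ge2.572" entries (roughly 5 kt) are purely example categorical
thresholds, not a recommendation.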
>>>>
>>>>
>>>>
>>>>
>>>> On 12/08/2011 02:28 PM, Tim Melino via RT wrote:
>>>>>
>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>>>
>>>>> I just put the file on the server that I have been using. As far
as
>>>> running
>>>>> the UPP software, that is not really possible at the moment. I
do not
>>>> have
>>>>> any of that software installed or configured as I have never had
a
>> reason
>>>>> to use it .
>>>>>
>>>>> - Tim
>>>>>
>>>>> On Thu, Dec 8, 2011 at 3:42 PM, Paul Oldenburg via RT <
>> met_help at ucar.edu
>>>>> wrote:
>>>>>
>>>>>> Tim,
>>>>>>
>>>>>> Can you please put the input PrepBUFR file that you pass to
pb2nc on
>> the
>>>>>> FTP site?  When I look at the contents of
>>>>>> out.nc, it does not appear that there are any observations in
that
>>>> file.
>>>>>>  I would like to run pb2nc myself to see what
>>>>>> is going on.
>>>>>>
>>>>>> I made an incorrect assumption in my earlier emails that you
were
>> trying
>>>>>> to verify model data in GRIB format.  Now that
>>>>>> I have your data in hand, I see that it is p_interp output, as
you
>>>>>> mentioned in your initial email.  MET support for
>>>>>> p_interp is not as robust as for GRIB.  In particular, grid-
relative
>>>> wind
>>>>>> directions in p_interp data files should not
>>>>>> be compared to lat-long relative wind directions in the
PrepBUFR obs.
>>>>>>  Would it be possible for you to run your WRF
>>>>>> output through the Unified Post Processor (UPP -
>>>>>> http://www.dtcenter.org/wrf-
nmm/users/overview/upp_overview.php)
>>>>>> instead of or in addition to p_interp?  That would simplify MET
>>>>>> verification tasks.  Please let me know if you have any
>>>>>> questions.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Paul
>>>>>>
>>>>>>
>>>>>> On 12/08/2011 11:07 AM, Tim Melino via RT wrote:
>>>>>>>
>>>>>>> Thu Dec 08 11:07:04 2011: Request 51928 was acted upon.
>>>>>>> Transaction: Ticket created by tmelino at meso.com
>>>>>>>        Queue: met_help
>>>>>>>      Subject: Re: METV3 Issue
>>>>>>>        Owner: Nobody
>>>>>>>   Requestors: tmelino at meso.com
>>>>>>>       Status: new
>>>>>>>  Ticket <URL:
>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928>
>>>>>>>
>>>>>>>
>>>>>>> Ok,
>>>>>>>
>>>>>>> The data should be there now. With out.nc being the obs and
>> wrf.nc being
>>>>>>> the forecast
>>>>>>>
>>>>>>> - Tim
>>>>>>>
>>>>>>> On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino <tmelino at meso.com>
wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>> I have recently been doing some work with WRF and am trying
to add
>> the
>>>>>>>> model evaluation tools to our standard model verification
system.  I
>>>>>>>> started the process by running the pressure interpolation
program
>> on a
>>>>>>>> single wrfout file, which appeared to finish correctly. I
have
>>>> attached
>>>>>> an
>>>>>>>> ncdump of the file header to this email it is called
>>>>>>>> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I downloaded a
single
>>>>>> prebufr
>>>>>>>> file from an NCEP repository for the time centered on the
forecast
>>>>>> period
>>>>>>>> and ran PB2NC and this also appeared to finish correctly and
output
>> a
>>>>>>>> single netcdf file, the header information is also attached
>>>> (PB2NC.txt).
>>>>>>>> Then I attempted to run the point stat utility on these two
files
>> but
>>>>>> the
>>>>>>>> program errors out telling me that there are more forecast
fields
>> than
>>>>>>>> observational fields "ERROR:
PointStatConfInfo::process_config() ->
>>>> The
>>>>>>>> number fcst_thresh entries provided must match the number of
fields
>>>>>>>> provided in fcst_field.". I ran the following command from
the
>>>> terminal
>>>>>> to
>>>>>>>> run point stat "bin/point_stat wrfout_d02_2011-12-
07_00:00:00_PLEV
>>>>>> out.nc
>>>>>>>> PointStatConfig".  I am not sure what the problem is I have
read the
>>>>>>>> documentation and it appears to be setup correctly but I am
not
>>>>>> completely
>>>>>>>> sure as I have never used this software before.  What should
these
>>>>>> namelist
>>>>>>>> fields look like using 2 netcdf files (1.Forecast 1.Obs),
trying to
>>>>>> verify
>>>>>>>> 10 meter winds? I appreciate your help!
>>>>>>>>
>>>>>>>> Also ... I ran the test all scripts after compilation , and
the code
>>>>>>>> completed successfully with no errors.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks ,
>>>>>>>> Tim
>>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>> //
>>>> // Default pb2nc configuration file
>>>> //
>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>
>>>> //
>>>> // Stratify the observation data in the PrepBufr files in the
following
>>>> // ways:
>>>> //  (1) by message type: supply a list of PrepBufr message types
>>>> //      to retain (i.e. AIRCFT)
>>>> //  (2) by station id: supply a list of observation stations to
retain
>>>> //  (3) by valid time: supply starting and ending times in form
>>>> //      YYYY-MM-DD HH:MM:SS UTC
>>>> //  (4) by location: supply either an NCEP masking grid, a
masking
>>>> //      lat/lon polygon or a file to a mask lat/lon polygon
>>>> //  (5) by elevation: supply min/max elevation values
>>>> //  (6) by report type (typ): supply a list of report types to
retain
>>>> //  (7) by instrument type (itp): supply a list of instrument
type to
>>>> //      retain
>>>> //  (8) by vertical level: supply min/max vertical levels
>>>> //  (9) by variable type: supply a list of variable types to
retain
>>>> //      P, Q, T, Z, U, V
>>>> // (10) by quality mark: supply a quality mark threshold
>>>> // (11) Flag to retain values for all quality marks, or just the
first
>>>> //      quality mark (highest)
>>>> // (12) by data level category: supply a list of category types
to
>>>> //      retain.
>>>> //
>>>> //      0 - Surface level (mass reports only)
>>>> //      1 - Mandatory level (upper-air profile reports)
>>>> //      2 - Significant temperature level (upper-air profile
reports)
>>>> //      2 - Significant temperature and winds-by-pressure level
>>>> //          (future combined mass and wind upper-air reports)
>>>> //      3 - Winds-by-pressure level (upper-air profile reports)
>>>> //      4 - Winds-by-height level (upper-air profile reports)
>>>> //      5 - Tropopause level (upper-air profile reports)
>>>> //      6 - Reports on a single level
>>>> //          (e.g., aircraft, satellite-wind, surface wind,
>>>> //           precipitable water retrievals, etc.)
>>>> //      7 - Auxiliary levels generated via interpolation from
spanning
>>>> levels
>>>> //          (upper-air profile reports)
>>>> //
>>>>
>>>> //
>>>> // Specify a comma-separated list of PrepBufr message type
strings to
>>>> retain.
>>>> // An empty list indicates that all should be retained.
>>>> // List of valid message types:
>>>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
>>>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
>>>> //    SFCSHP SPSSMI SYNDAT VADWND
>>>> //    ANYAIR (= AIRCAR, AIRCFT)
>>>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
>>>> //    ONLYSF (= ADPSFC, SFCSHP)
>>>> //
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>> //
>>>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
>>>> //
>>>> message_type[] = [];
>>>>
>>>> //
>>>> // Specify a comma-separated list of station ID strings to
retain.
>>>> // An empty list indicates that all should be retained.
>>>> //
>>>> // e.g. station_id[] = [ "KDEN" ];
>>>> //
>>>> station_id[] = [];
>>>>
>>>> //
>>>> // Beginning and ending time offset values in seconds for
observations
>>>> // to retain.  The valid time window for retaining observations
is
>>>> // defined in reference to the observation time.  So observations
with
>>>> // a valid time falling in the window [obs_time+beg_ds,
obs_time+end_ds]
>>>> // will be retained.
>>>> //
>>>> beg_ds = -1800;
>>>> end_ds =  1800;
>>>>
>>>> //
>>>> // Specify the name of a single grid to be used in masking the
data.
>>>> // An empty string indicates that no grid should be used.  The
standard
>>>> // NCEP grids are named "GNNN" where NNN indicates the three
digit grid
>>>> number.
>>>> //
>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>>>> //
>>>> // e.g. mask_grid = "G212";
>>>> //
>>>> mask_grid = "G212";
>>>>
>>>> //
>>>> // Specify a single ASCII file containing a lat/lon polygon.
>>>> // Latitude in degrees north and longitude in degrees east.
>>>> // By default, the first and last polygon points are connected.
>>>> //
>>>> // The lat/lon polygon file should contain a name for the polygon
>>>> // followed by a space-separated list of lat/lon points:
>>>> //    "name lat1 lon1 lat2 lon2... latn lonn"
>>>> //
>>>> // MET_BASE may be used in the path for the lat/lon polygon file.
>>>> //
>>>> // e.g. mask_poly = "MET_BASE/data/poly/EAST.poly";
>>>> //
>>>> mask_poly = "";
>>>>
>>>> //
>>>> // Beginning and ending elevation values in meters for
observations
>>>> // to retain.
>>>> //
>>>> beg_elev = -1000;
>>>> end_elev = 100000;
>>>>
>>>> //
>>>> // Specify a comma-separated list of PrepBufr report type values
to
>> retain.
>>>> // An empty list indicates that all should be retained.
>>>> //
>>>> //
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_4.htm
>>>> //
>>>> // e.g. pb_report_type[] = [ 120, 133 ];
>>>> //
>>>> pb_report_type[] = [];
>>>>
>>>> //
>>>> // Specify a comma-separated list of input report type values to
retain.
>>>> // An empty list indicates that all should be retained.
>>>> //
>>>> //
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_6.htm
>>>> //
>>>> // e.g. in_report_type[] = [ 11, 22, 23 ];
>>>> //
>>>> in_report_type[] = [];
>>>>
>>>> //
>>>> // Specify a comma-separated list of instrument type values to
retain.
>>>> // An empty list indicates that all should be retained.
>>>> //
>>>> // e.g. instrument_type[] = [ 52, 87 ];
>>>> //
>>>> instrument_type[] = [];
>>>>
>>>> //
>>>> // Beginning and ending vertical levels to retain.
>>>> //
>>>> beg_level = 1;
>>>> end_level = 255;
>>>>
>>>> //
>>>> // Specify a comma-separated list of strings containing grib
codes or
>>>> // corresponding grib code abbreviations to retain or be derived
from
>>>> // the available observations.
>>>> //
>>>> // Grib Codes to be RETAINED:
>>>> //    SPFH or 51 for Specific Humidity in kg/kg
>>>> //    TMP  or 11 for Temperature in K
>>>> //    HGT  or 7  for Height in meters
>>>> //    UGRD or 33 for the East-West component of the wind in m/s
>>>> //    VGRD or 34 for the North-South component of the wind in m/s
>>>> //
>>>> // Grib Codes to be DERIVED:
>>>> //    DPT   or 17 for Dewpoint Temperature in K
>>>> //    WIND  or 32 for Wind Speed in m/s
>>>> //    RH    or 52 for Relative Humidity in %
>>>> //    MIXR  or 53 for Humidity Mixing Ratio in kg/kg
>>>> //    PRMSL or  2 for Pressure Reduced to Mean Sea Level in Pa
>>>> //
>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>> //
>>>> // e.g. obs_grib_code[] = [ "TMP", "UGRD", "VGRD", "WIND" ];
>>>> //
>>>> obs_grib_code[] = [ "SPFH", "TMP",  "HGT",  "UGRD", "VGRD",
>>>>                    "DPT",  "WIND", "RH",   "MIXR" ];
>>>>
>>>> //
>>>> // Quality mark threshold to indicate which observations to
retain.
>>>> // Observations with a quality mark equal to or LESS THAN this
threshold
>>>> // will be retained, while observations with a quality mark
GREATER THAN
>>>> // this threshold will be discarded.
>>>> //
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_7.htm
>>>> //
>>>> quality_mark_thresh = 2;
>>>>
>>>> //
>>>> // Flag to indicate whether observations should be drawn from the
top
>>>> // of the event stack (most quality controlled) or the bottom of
the
>>>> // event stack (most raw).  A value of 1 indicates that the top
of the
>>>> // event stack should be used while a value of zero indicates
that the
>>>> // bottom should be used.
>>>> //
>>>> event_stack_flag = 1;
>>>>
>>>> //
>>>> // Specify a comma-separated list of data level category values to
retain,
>>>> // where a value of:
>>>> //    0 = Surface level (mass reports only)
>>>> //    1 = Mandatory level (upper-air profile reports)
>>>> //    2 = Significant temperature level (upper-air profile
reports)
>>>> //    2 = Significant temperature and winds-by-pressure level
>>>> //        (future combined mass and wind upper-air reports)
>>>> //    3 = Winds-by-pressure level (upper-air profile reports)
>>>> //    4 = Winds-by-height level (upper-air profile reports)
>>>> //    5 = Tropopause level (upper-air profile reports)
>>>> //    6 = Reports on a single level
>>>> //        (e.g., aircraft, satellite-wind, surface wind,
>>>> //         precipitable water retrievals, etc.)
>>>> //    7 = Auxiliary levels generated via interpolation from
spanning
>> levels
>>>> //        (upper-air profile reports)
>>>> // An empty list indicates that all should be retained.
>>>> //
>>>> //
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>> //
>>>> // e.g. level_category[] = [ 0, 1 ];
>>>> //
>>>> level_category[] = [];
>>>>
>>>> //
>>>> // Directory where temp files should be written by the PB2NC tool
>>>> //
>>>> tmp_dir = "/tmp";
>>>>
>>>> //
>>>> // Indicate a version number for the contents of this
configuration
>> file.
>>>> // The value should generally not be modified.
>>>> //
>>>> version = "V3.0";
>>>>
>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>> //
>>>> // Default point_stat configuration file
>>>> //
>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>
>>>> //
>>>> // Specify a name to designate the model being verified.  This
name
>> will be
>>>> // written to the second column of the ASCII output generated.
>>>> //
>>>> model = "WRF";
>>>>
>>>> //
>>>> // Beginning and ending time offset values in seconds for
observations
>>>> // to be used.  These time offsets are defined in reference to
the
>>>> // forecast valid time, v.  Observations with a valid time
falling in
>> the
>>>> // window [v+beg_ds, v+end_ds] will be used.
>>>> // These selections are overridden by the command line arguments
>>>> // -obs_valid_beg and -obs_valid_end.
>>>> //
>>>> beg_ds = -1800;
>>>> end_ds =  1800;
>>>>
>>>> //
>>>> // Specify a comma-separated list of fields to be verified.  The
>> forecast
>>>> and
>>>> // observation fields may be specified separately.  If the
obs_field
>>>> parameter
>>>> // is left blank, it will default to the contents of fcst_field.
>>>> //
>>>> // Each field is specified as a GRIB code or abbreviation
followed by an
>>>> // accumulation or vertical level indicator for GRIB files or as
a
>>>> variable name
>>>> // followed by a list of dimensions for NetCDF files output from
>> p_interp
>>>> or MET.
>>>> //
>>>> // Specifying verification fields for GRIB files:
>>>> //    GC/ANNN for accumulation interval NNN
>>>> //    GC/ZNNN for vertical level NNN
>>>> //    GC/ZNNN-NNN for a range of vertical levels (MSL or AGL)
>>>> //    GC/PNNN for pressure level NNN in hPa
>>>> //    GC/PNNN-NNN for a range of pressure levels in hPa
>>>> //    GC/LNNN for a generic level type
>>>> //    GC/RNNN for a specific GRIB record number
>>>> //    Where GC is the number of or abbreviation for the grib code
>>>> //    to be verified.
>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>> //
>>>> // Specifying verification fields for NetCDF files:
>>>> //    var_name(i,...,j,*,*) for a single field
>>>> //    var_name(i-j,*,*) for a range of fields
>>>> //    Where var_name is the name of the NetCDF variable,
>>>> //    and i,...,j specifies fixed dimension values,
>>>> //    and i-j specifies a range of values for a single dimension,
>>>> //    and *,* specifies the two dimensions for the gridded field.
>>>> //
>>>> //    NOTE: To verify winds as vectors rather than scalars,
>>>> //          specify UGRD (or 33) followed by VGRD (or 34) with
the
>>>> //          same level values.
>>>> //
>>>> //    NOTE: To process a probability field, add "/PROB", such as
>>>> "POP/Z0/PROB".
>>>> //
>>>> // e.g. fcst_field[] = [ "SPFH/P500", "TMP/P500" ]; for a GRIB
input
>>>> // e.g. fcst_field[] = [ "QVAPOR(0,5,*,*)", "TT(0,5,*,*)" ]; for
NetCDF
>>>> input
>>>> //
>>>>
>>>> fcst_field[] = [ "TT(0,0,*,*)" ];
>>>> obs_field[]  = [ "TMP/Z2" ];
>>>>
>>>> //
>>>> // Specify a comma-separated list of groups of thresholds to be
applied
>> to
>>>> the
>>>> // fields listed above.  Thresholds for the forecast and
observation
>> fields
>>>> // may be specified separately.  If the obs_thresh parameter is
left
>> blank,
>>>> // it will default to the contents of fcst_thresh.
>>>> //
>>>> // At least one threshold must be provided for each field listed
above.
>>>>  The
>>>> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as
>> must
>>>> // lengths of the "obs_field" and "obs_thresh" arrays.  To apply
>> multiple
>>>> // thresholds to a field, separate the threshold values with a
space.
>>>> //
>>>> // Each threshold must be preceded by a two letter indicator for
the
>> type
>>>> of
>>>> // thresholding to be performed:
>>>> //    'lt' for less than     'le' for less than or equal to
>>>> //    'eq' for equal to      'ne' for not equal to
>>>> //    'gt' for greater than  'ge' for greater than or equal to
>>>> //
>>>> // NOTE: Thresholds for probabilities must begin with 0.0, end
with 1.0,
>>>> //       and be preceded by "ge".
>>>> //
>>>> // e.g. fcst_thresh[] = [ "gt80", "gt273" ];
>>>> //
>>>> fcst_thresh[] = [ "le273" ];
>>>> obs_thresh[]  = [];
>>>>
>>>> //
>>>> // Specify a comma-separated list of thresholds to be used when
>> computing
>>>> // VL1L2 and VAL1L2 partial sums for winds.  The thresholds are
applied
>> to
>>>> the
>>>> // wind speed values derived from each U/V pair.  Only those U/V
pairs
>>>> which meet
>>>> // the wind speed threshold criteria are retained.  If the
>> obs_wind_thresh
>>>> // parameter is left blank, it will default to the contents of
>>>> fcst_wind_thresh.
>>>> //
>>>> // To apply multiple wind speed thresholds, separate the
threshold
>> values
>>>> with a
>>>> // space.  Use "NA" to indicate that no wind speed threshold
should be
>>>> applied.
>>>> //
>>>> // Each threshold must be preceded by a two letter indicator for
the
>> type
>>>> of
>>>> // thresholding to be performed:
>>>> //    'lt' for less than     'le' for less than or equal to
>>>> //    'eq' for equal to      'ne' for not equal to
>>>> //    'gt' for greater than  'ge' for greater than or equal to
>>>> //    'NA' for no threshold
>>>> //
>>>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
>>>> //
>>>> fcst_wind_thresh[] = [ "NA" ];
>>>> obs_wind_thresh[]  = [];
>>>>
>>>> //
>>>> // Specify a comma-separated list of PrepBufr message types with
which
>>>> // to perform the verification.  Statistics will be computed
separately
>>>> // for each message type specified.  At least one PrepBufr
message type
>>>> // must be provided.
>>>> // List of valid message types:
>>>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
>>>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
>>>> //    SFCSHP SPSSMI SYNDAT VADWND
>>>> //    ANYAIR (= AIRCAR, AIRCFT)
>>>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
>>>> //    ONLYSF (= ADPSFC, SFCSHP)
>>>> //
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>> //
>>>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
>>>> //
>>>> message_type[] = [ "ADPSFC" ];
>>>>
>>>> //
>>>> // Specify a comma-separated list of grids to be used in masking
the
>> data
>>>> over
>>>> // which to perform scoring.  An empty list indicates that no
masking
>> grid
>>>> // should be applied.  The standard NCEP grids are named "GNNN"
where
>> NNN
>>>> // indicates the three digit grid number.  Enter "FULL" to score
over
>> the
>>>> // entire domain.
>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>>>> //
>>>> // e.g. mask_grid[] = [ "FULL" ];
>>>> //
>>>> mask_grid[] = [ "FULL" ];
>>>>
>>>> //
>>>> // Specify a comma-separated list of masking regions to be
applied.
>>>> // An empty list indicates that no additional masks should be
used.
>>>> // The masking regions may be defined in one of 4 ways:
>>>> //
>>>> // (1) An ASCII file containing a lat/lon polygon.
>>>> //     Latitude in degrees north and longitude in degrees east.
>>>> //     By default, the first and last polygon points are
connected.
>>>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
>>>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
>>>> //
>>>> // (2) The NetCDF output of the gen_poly_mask tool.
>>>> //
>>>> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
>>>> //     to be used, and optionally, a threshold to be applied to
the
>> field.
>>>> //     e.g. "sample.nc var_name gt0.00"
>>>> //
>>>> // (4) A GRIB data file, followed by a description of the field
>>>> //     to be used, and optionally, a threshold to be applied to
the
>> field.
>>>> //     e.g. "sample.grb APCP/A3 gt0.00"
>>>> //
>>>> // Any NetCDF or GRIB file used must have the same grid
dimensions as
>> the
>>>> // data being verified.
>>>> //
>>>> // MET_BASE may be used in the path for the files above.
>>>> //
>>>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
>>>> //                      "poly_mask.ncf",
>>>> //                      "sample.nc APCP",
>>>> //                      "sample.grb HGT/Z0 gt100.0" ];
>>>> //
>>>> mask_poly[] = [];
>>>>
>>>> //
>>>> // Specify the name of an ASCII file containing a space-separated
list
>> of
>>>> // station ID's at which to perform verification.  Each station
ID
>>>> specified
>>>> // is treated as an individual masking region.
>>>> //
>>>> // An empty list file name indicates that no station ID masks
should be
>>>> used.
>>>> //
>>>> // MET_BASE may be used in the path for the station ID mask file
name.
>>>> //
>>>> // e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
>>>> //
>>>> mask_sid = "";
>>>>
>>>> //
>>>> // Specify a comma-separated list of values for alpha to be used
when
>>>> computing
>>>> // confidence intervals.  Values of alpha must be between 0 and
1.
>>>> //
>>>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
>>>> //
>>>> ci_alpha[] = [ 0.05 ];
>>>>
>>>> //
>>>> // Specify the method to be used for computing bootstrap
confidence
>>>> intervals.
>>>> // The value for this is interpreted as follows:
>>>> //    (0) Use the BCa interval method (computationally intensive)
>>>> //    (1) Use the percentile interval method
>>>> //
>>>> boot_interval = 1;
>>>>
>>>> //
>>>> // Specify a proportion between 0 and 1 to define the replicate
sample
>> size
>>>> // to be used when computing percentile intervals.  The replicate
sample
>>>> // size is set to boot_rep_prop * n, where n is the number of raw
data
>>>> points.
>>>> //
>>>> // e.g boot_rep_prop = 0.80;
>>>> //
>>>> boot_rep_prop = 1.0;
>>>>
>>>> //
>>>> // Specify the number of times each set of matched pair data
should be
>>>> // resampled when computing bootstrap confidence intervals.  A
value of
>>>> // zero disables the computation of bootstrap confidence
intervals.
>>>> //
>>>> // e.g. n_boot_rep = 1000;
>>>> //
>>>> n_boot_rep = 1000;
>>>>
>>>> //
>>>> // Specify the name of the random number generator to be used.
See the
>> MET
>>>> // Users Guide for a list of possible random number generators.
>>>> //
>>>> boot_rng = "mt19937";
>>>>
>>>> //
>>>> // Specify the seed value to be used when computing bootstrap
confidence
>>>> // intervals.  If left unspecified, the seed will change for each
run
>> and
>>>> // the computed bootstrap confidence intervals will not be
reproducible.
>>>> //
>>>> boot_seed = "";
>>>>
>>>> //
>>>> // Specify a comma-separated list of interpolation method(s) to
be used
>>>> // for comparing the forecast grid to the observation points.
String
>>>> values
>>>> // are interpreted as follows:
>>>> //    MIN     = Minimum in the neighborhood
>>>> //    MAX     = Maximum in the neighborhood
>>>> //    MEDIAN  = Median in the neighborhood
>>>> //    UW_MEAN = Unweighted mean in the neighborhood
>>>> //    DW_MEAN = Distance-weighted mean in the neighborhood
>>>> //    LS_FIT  = Least-squares fit in the neighborhood
>>>> //    BILIN   = Bilinear interpolation using the 4 closest points
>>>> //
>>>> // In all cases, vertical interpolation is performed in the
natural log
>>>> // of pressure of the levels above and below the observation.
>>>> //
>>>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
>>>> //
>>>> interp_method[] = [ "MEDIAN", "DW_MEAN" ];
>>>>
>>>> //
>>>> // Specify a comma-separated list of box widths to be used by the
>>>> // interpolation techniques listed above.  A value of 1 indicates
that
>>>> // the nearest neighbor approach should be used.  For a value of
n
>>>> // greater than 1, the n*n grid points closest to the observation
define
>>>> // the neighborhood.
>>>> //
>>>> // e.g. interp_width = [ 1, 3, 5 ];
>>>> //
>>>> interp_width[] = [ 1, 3 ];
>>>>
>>>> //
>>>> // When interpolating, compute a ratio of the number of valid
data
>> points
>>>> // to the total number of points in the neighborhood.  If that
ratio is
>>>> // less than this threshold, do not include the observation.
This
>>>> // threshold must be between 0 and 1.  Setting this threshold to
1 will
>>>> // require that each observation be surrounded by n*n valid
forecast
>>>> // points.
>>>> //
>>>> // e.g. interp_thresh = 1.0;
>>>> //
>>>> interp_thresh = 1.0;
>>>>
>>>> //
>>>> // Specify flags to indicate the type of data to be output:
>>>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
>>>> //           Total (TOTAL),
>>>> //           Forecast Rate (F_RATE),
>>>> //           Hit Rate (H_RATE),
>>>> //           Observation Rate (O_RATE)
>>>> //
>>>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
>>>> //           Total (TOTAL),
>>>> //           Forecast Yes and Observation Yes Count (FY_OY),
>>>> //           Forecast Yes and Observation No Count (FY_ON),
>>>> //           Forecast No and Observation Yes Count (FN_OY),
>>>> //           Forecast No and Observation No Count (FN_ON)
>>>> //
>>>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
>>>> //           Total (TOTAL),
>>>> //           Base Rate (BASER),
>>>> //           Forecast Mean (FMEAN),
>>>> //           Accuracy (ACC),
>>>> //           Frequency Bias (FBIAS),
>>>> //           Probability of Detecting Yes (PODY),
>>>> //           Probability of Detecting No (PODN),
>>>> //           Probability of False Detection (POFD),
>>>> //           False Alarm Ratio (FAR),
>>>> //           Critical Success Index (CSI),
>>>> //           Gilbert Skill Score (GSS),
>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>> //           Heidke Skill Score (HSS),
>>>> //           Odds Ratio (ODDS),
>>>> //           NOTE: All statistics listed above contain parametric
and/or
>>>> //                 non-parametric confidence interval limits.
>>>> //
>>>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency Table
>>>> Counts:
>>>> //           Total (TOTAL),
>>>> //           Number of Categories (N_CAT),
>>>> //           Contingency Table Count columns repeated N_CAT*N_CAT
times
>>>> //
>>>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency Table
>>>> Scores:
>>>> //           Total (TOTAL),
>>>> //           Number of Categories (N_CAT),
>>>> //           Accuracy (ACC),
>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>> //           Heidke Skill Score (HSS),
>>>> //           Gerrity Score (GER),
>>>> //           NOTE: All statistics listed above contain parametric
and/or
>>>> //                 non-parametric confidence interval limits.
>>>> //
>>>> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
>>>> //           Total (TOTAL),
>>>> //           Forecast Mean (FBAR),
>>>> //           Forecast Standard Deviation (FSTDEV),
>>>> //           Observation Mean (OBAR),
>>>> //           Observation Standard Deviation (OSTDEV),
>>>> //           Pearson's Correlation Coefficient (PR_CORR),
>>>> //           Spearman's Rank Correlation Coefficient (SP_CORR),
>>>> //           Kendall Tau Rank Correlation Coefficient (KT_CORR),
>>>> //           Number of ranks compared (RANKS),
>>>> //           Number of tied ranks in the forecast field
(FRANK_TIES),
>>>> //           Number of tied ranks in the observation field
(ORANK_TIES),
>>>> //           Mean Error (ME),
>>>> //           Standard Deviation of the Error (ESTDEV),
>>>> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
>>>> //           Mean Absolute Error (MAE),
>>>> //           Mean Squared Error (MSE),
>>>> //           Bias-Corrected Mean Squared Error (BCMSE),
>>>> //           Root Mean Squared Error (RMSE),
>>>> //           Percentiles of the Error (E10, E25, E50, E75, E90)
>>>> //           NOTE: Most statistics listed above contain
parametric
>> and/or
>>>> //                 non-parametric confidence interval limits.
>>>> //
>>>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
>>>> //           Total (TOTAL),
>>>> //           Forecast Mean (FBAR),
>>>> //              = mean(f)
>>>> //           Observation Mean (OBAR),
>>>> //              = mean(o)
>>>> //           Forecast*Observation Product Mean (FOBAR),
>>>> //              = mean(f*o)
>>>> //           Forecast Squared Mean (FFBAR),
>>>> //              = mean(f^2)
>>>> //           Observation Squared Mean (OOBAR)
>>>> //              = mean(o^2)
>>>> //
>>>> //    (8) STAT and SAL1L2 Text Files, Scalar Anomaly Partial
Sums:
>>>> //           Total (TOTAL),
>>>> //           Forecast Anomaly Mean (FABAR),
>>>> //              = mean(f-c)
>>>> //           Observation Anomaly Mean (OABAR),
>>>> //              = mean(o-c)
>>>> //           Product of Forecast and Observation Anomalies Mean
>> (FOABAR),
>>>> //              = mean((f-c)*(o-c))
>>>> //           Forecast Anomaly Squared Mean (FFABAR),
>>>> //              = mean((f-c)^2)
>>>> //           Observation Anomaly Squared Mean (OOABAR)
>>>> //              = mean((o-c)^2)
>>>> //
>>>> //    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
>>>> //           Total (TOTAL),
>>>> //           U-Forecast Mean (UFBAR),
>>>> //              = mean(uf)
>>>> //           V-Forecast Mean (VFBAR),
>>>> //              = mean(vf)
>>>> //           U-Observation Mean (UOBAR),
>>>> //              = mean(uo)
>>>> //           V-Observation Mean (VOBAR),
>>>> //              = mean(vo)
>>>> //           U-Product Plus V-Product (UVFOBAR),
>>>> //              = mean(uf*uo+vf*vo)
>>>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
>>>> //              = mean(uf^2+vf^2)
>>>> //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
>>>> //              = mean(uo^2+vo^2)
>>>> //
>>>> //   (10) STAT and VAL1L2 Text Files, Vector Anomaly Partial
Sums:
>>>> //           U-Forecast Anomaly Mean (UFABAR),
>>>> //              = mean(uf-uc)
>>>> //           V-Forecast Anomaly Mean (VFABAR),
>>>> //              = mean(vf-vc)
>>>> //           U-Observation Anomaly Mean (UOABAR),
>>>> //              = mean(uo-uc)
>>>> //           V-Observation Anomaly Mean (VOABAR),
>>>> //              = mean(vo-vc)
>>>> //           U-Anomaly Product Plus V-Anomaly Product (UVFOABAR),
>>>> //              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
>>>> //           U-Forecast Anomaly Squared Plus V-Forecast Anomaly
Squared
>>>> (UVFFABAR),
>>>> //              = mean((uf-uc)^2+(vf-vc)^2)
>>>> //           U-Observation Anomaly Squared Plus V-Observation
Anomaly
>>>> Squared (UVOOABAR)
>>>> //              = mean((uo-uc)^2+(vo-vc)^2)
>>>> //
>>>> //   (11) STAT and PCT Text Files, Nx2 Probability Contingency
Table
>>>> Counts:
>>>> //           Total (TOTAL),
>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>> //           Probability Threshold Value (THRESH_i),
>>>> //           Row Observation Yes Count (OY_i),
>>>> //           Row Observation No Count (ON_i),
>>>> //           NOTE: Previous 3 columns repeated for each row in
the
>> table.
>>>> //           Last Probability Threshold Value (THRESH_n)
>>>> //
>>>> //   (12) STAT and PSTD Text Files, Nx2 Probability Contingency
Table
>>>> Scores:
>>>> //           Total (TOTAL),
>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>> //           Base Rate (BASER) with confidence interval limits,
>>>> //           Reliability (RELIABILITY),
>>>> //           Resolution (RESOLUTION),
>>>> //           Uncertainty (UNCERTAINTY),
>>>> //           Area Under the ROC Curve (ROC_AUC),
>>>> //           Brier Score (BRIER) with confidence interval limits,
>>>> //           Probability Threshold Value (THRESH_i)
>>>> //           NOTE: Previous column repeated for each probability
>> threshold.
>>>> //
>>>> //   (13) STAT and PJC Text Files, Joint/Continuous Statistics of
>>>> //                                 Probabilistic Variables:
>>>> //           Total (TOTAL),
>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>> //           Probability Threshold Value (THRESH_i),
>>>> //           Observation Yes Count Divided by Total (OY_TP_i),
>>>> //           Observation No Count Divided by Total (ON_TP_i),
>>>> //           Calibration (CALIBRATION_i),
>>>> //           Refinement (REFINEMENT_i),
>>>> //           Likelihood (LIKELIHOOD_i),
>>>> //           Base Rate (BASER_i),
>>>> //           NOTE: Previous 7 columns repeated for each row in
the
>> table.
>>>> //           Last Probability Threshold Value (THRESH_n)
>>>> //
>>>> //   (14) STAT and PRC Text Files, ROC Curve Points for
>>>> //                                 Probabilistic Variables:
>>>> //           Total (TOTAL),
>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>> //           Probability Threshold Value (THRESH_i),
>>>> //           Probability of Detecting Yes (PODY_i),
>>>> //           Probability of False Detection (POFD_i),
>>>> //           NOTE: Previous 3 columns repeated for each row in
the
>> table.
>>>> //           Last Probability Threshold Value (THRESH_n)
>>>> //
>>>> //   (15) STAT and MPR Text Files, Matched Pair Data:
>>>> //           Total (TOTAL),
>>>> //           Index (INDEX),
>>>> //           Observation Station ID (OBS_SID),
>>>> //           Observation Latitude (OBS_LAT),
>>>> //           Observation Longitude (OBS_LON),
>>>> //           Observation Level (OBS_LVL),
>>>> //           Observation Elevation (OBS_ELV),
>>>> //           Forecast Value (FCST),
>>>> //           Observation Value (OBS),
>>>> //           Climatological Value (CLIMO)
>>>> //
>>>> //   In the expressions above, f are forecast values, o are
observed
>>>> values,
>>>> //   and c are climatological values.
>>>> //
>>>> // Values for these flags are interpreted as follows:
>>>> //    (0) Do not generate output of this type
>>>> //    (1) Write output to a STAT file
>>>> //    (2) Write output to a STAT file and a text file
>>>> //
>>>> output_flag[] = [ 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1 ];
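Read against the fifteen numbered line types above, that setting requests STAT-only output (value 1) for the FHO, CTC, CTS, CNT, SL1L2, SAL1L2, VL1L2, VAL1L2 and MPR lines and disables (value 0) the MCTC, MCTS, PCT, PSTD, PJC and PRC lines.  For example, changing the first and last entries to 2,

   output_flag[] = [ 2, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 2 ];

would also write the FHO and MPR output to their own text files in addition to the STAT file.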
>>>>
>>>> //
>>>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
>> Correlation
>>>> // Coefficients should be computed.  Computing them over large
datasets
>> is
>>>> // computationally intensive and slows down the runtime execution
>>>> significantly.
>>>> //    (0) Do not compute these correlation coefficients
>>>> //    (1) Compute these correlation coefficients
>>>> //
>>>> rank_corr_flag = 1;
>>>>
>>>> //
>>>> // Specify the GRIB Table 2 parameter table version number to be
used
>>>> // for interpreting GRIB codes.
>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>> //
>>>> grib_ptv = 2;
>>>>
>>>> //
>>>> // Directory where temporary files should be written.
>>>> //
>>>> tmp_dir = "/tmp";
>>>>
>>>> //
>>>> // Prefix to be used for the output file names.
>>>> //
>>>> output_prefix = "";
>>>>
>>>> //
>>>> // Indicate a version number for the contents of this
configuration
>> file.
>>>> // The value should generally not be modified.
>>>> //
>>>> version = "V3.0.1";
>>>>
>>>>
>>>>
>>
>>
>>


------------------------------------------------
Subject: Re: METV3 Issue
From: Tim Melino
Time: Fri Dec 09 12:49:15 2011

Ok,
Here are some of the specifications and I will install the latest
patch
now.

netCDF version 4.1.1
INTEL-11.1.072 Compilers


- Tim


On Fri, Dec 9, 2011 at 1:49 PM, Paul Oldenburg via RT
<met_help at ucar.edu>wrote:

> Tim,
>
> I'm still not able to reproduce the error that you reported.  Have
you
> applied all of the latest patches to METv3.0.1?
> The latest patch tarball and instructions on how to apply it can be
found
> here:
>
http://www.dtcenter.org/met/users/support/known_issues/METv3.0.1/index.php.
>  Can you tell me what version of NetCDF you
> linked MET against?  What family of compilers did you use to compile
MET
> (e.g. GNU/PGI/intel)?  I think we are down to a
> configuration/environment problem at this point.  Sorry for the
trouble.
>
> Paul
>
>
> On 12/09/2011 11:38 AM, Tim Melino via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
> >
> > Paul,
> > I put everything into a tar file and uploaded it.
> >
> > - Tim
> >
> >
> > On Fri, Dec 9, 2011 at 11:38 AM, Paul Oldenburg via RT <
> met_help at ucar.edu>wrote:
> >
> >> Tim,
> >>
> >> We are not able to reproduce the error that you are reporting.
Are you
> >> using the same exact data and config files that
> >> you sent me and I tested with?  In any case, can you create a tar
> archive
> >> of all the files involved in the point_stat
> >> command that throws the error and put it on the FTP site?  I will
need
> to
> >> be able to reproduce this error, otherwise it
> >> will be difficult for me to diagnose the problem.
> >>
> >> Thanks,
> >>
> >> Paul
> >>
> >>
> >> On 12/09/2011 08:16 AM, Tim Melino via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
> >>>
> >>> Paul,
> >>> I tried running again using your configuration settings, but
while
> >> running
> >>> pointstat I am still receiving errors. The error comes up as the
> >> following
> >>> ....
> >>>
> >>> [wind at conus1 METv3.0.1]$ $MET_BASE/bin/point_stat wrf.nc
> >>> ndas.t12z.nc PointStatConfig -outdir . -v 99
> >>> GSL_RNG_TYPE=mt19937
> >>> GSL_RNG_SEED=18446744071864509006
> >>> Forecast File: wrf.nc
> >>> Climatology File: none
> >>> Configuration File: PointStatConfig
> >>> Observation File: ndas.t12z.nc
> >>>
> >>>
> >>
>
--------------------------------------------------------------------------------
> >>>
> >>> Reading records for TT(0,0,*,*).
> >>>
> >>>
> >>>   LongArray::operator[](int) -> range check error ... 4
> >>>
> >>>
> >>>
> >>> - Tim
> >>>
> >>>
> >>> On Thu, Dec 8, 2011 at 5:10 PM, Paul Oldenburg via RT <
> met_help at ucar.edu
> >>> wrote:
> >>>
> >>>> Tim,
> >>>>
> >>>> I ran the following pb2nc and point_stat commands using the
attached
> >>>> config files to generate point verification data
> >>>> with your PrepBUFR obs and p_interp model data.  Note that
MET_BASE is
> >> set
> >>>> to the base folder of an instance of
> >>>> METv3.0.1.  I pulled both config files, with slight
modifications,
> from
> >>>> $MET_BASE/scripts/config.
> >>>>
> >>>> $MET_BASE/bin/pb2nc
> ndas.t12z.prepbufr.tm12.nr ndas.t12z.nc PB2NCConfig_G212 -v 99
> >>>>
> >>>> $MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc PointStatConfig
-outdir
> .
> >> -v
> >>>> 99
> >>>>
> >>>> In PointStatConfig, you will see the following settings.  The
> fcst_field
> >>>> setting format is due to the fact that fields
> >>>> in wrf.nc are four dimensional, with the last two dimensions
being
> the
> >>>> spatial (x,y) dimensions.  The obs_field
> >>>> specifies surface temperature using a GRIB-style format,
because pb2nc
> >>>> indexes fields in its output by GRIB code.  You
> >>>> should follow a similar paradigm to verify additional fields
beyond
> >>>> temperature.
> >>>>
> >>>> fcst_field[] = [ "TT(0,0,*,*)" ];
> >>>> obs_field[]  = [ "TMP/Z2" ];
> >>>>
> >>>> fcst_thresh[] = [ "le273" ];
> >>>> obs_thresh[]  = [];
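To extend that pattern to the 10 m winds asked about in the original request, keep one fcst_thresh entry per fcst_field entry (a length mismatch between those two arrays is exactly what produced the earlier process_config() error), and note that the NetCDF variable names and dimension counts below are only guesses at how the p_interp output stores its 10 m winds, to be checked against an ncdump of wrf.nc:

   fcst_field[]  = [ "U10(0,*,*)", "V10(0,*,*)" ];
   obs_field[]   = [ "UGRD/Z10", "VGRD/Z10" ];

   fcst_thresh[] = [ "ge1.0", "ge1.0" ];
   obs_thresh[]  = [];

Listing the U component immediately followed by the V component at the same level also lets point_stat treat the winds as vectors for the VL1L2 output, per the note in the full configuration file below.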
> >>>>
> >>>> If you have any questions or problems, please let me know.
> >>>>
> >>>> Good luck,
> >>>>
> >>>> Paul
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> On 12/08/2011 02:28 PM, Tim Melino via RT wrote:
> >>>>>
> >>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928
>
> >>>>>
> >>>>> I just put the file on the server that I have been using. As
far as
> >>>> running
> >>>>> the UPP software, that is not really possible at the moment. I
do not
> >>>> have
> >>>>> any of that software installed or configured as I have never
had a
> >> reason
> >>>>> to use it .
> >>>>>
> >>>>> - Tim
> >>>>>
> >>>>> On Thu, Dec 8, 2011 at 3:42 PM, Paul Oldenburg via RT <
> >> met_help at ucar.edu
> >>>>> wrote:
> >>>>>
> >>>>>> Tim,
> >>>>>>
> >>>>>> Can you please put the input PrepBUFR file that you pass to
pb2nc on
> >> the
> >>>>>> FTP site?  When I look at the contents of
> >>>>>> out.nc, it does not appear that there are any observations in
that
> >>>> file.
> >>>>>>  I would like to run pb2nc myself to see what
> >>>>>> is going on.
> >>>>>>
> >>>>>> I made an incorrect assumption in my earlier emails that you
were
> >> trying
> >>>>>> to verify model data in GRIB format.  Now that
> >>>>>> I have your data in hand, I see that it is p_interp output,
as you
> >>>>>> mentioned in your initial email.  MET support for
> >>>>>> p_interp is not as robust as for GRIB.  In particular, grid-
relative
> >>>> wind
> >>>>>> directions in p_interp data files should not
> >>>>>> be compared to lat-long relative wind directions in the
PrepBUFR
> obs.
> >>>>>>  Would it be possible for you to run your WRF
> >>>>>> output through the Unified Post Processor (UPP -
> >>>>>> http://www.dtcenter.org/wrf-
nmm/users/overview/upp_overview.php)
> >>>>>> instead of or in addition to p_interp?  That would simplify
MET
> >>>>>> verification tasks.  Please let me know if you have any
> >>>>>> questions.
> >>>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>> Paul
> >>>>>>
> >>>>>>
> >>>>>> On 12/08/2011 11:07 AM, Tim Melino via RT wrote:
> >>>>>>>
> >>>>>>> Thu Dec 08 11:07:04 2011: Request 51928 was acted upon.
> >>>>>>> Transaction: Ticket created by tmelino at meso.com
> >>>>>>>        Queue: met_help
> >>>>>>>      Subject: Re: METV3 Issue
> >>>>>>>        Owner: Nobody
> >>>>>>>   Requestors: tmelino at meso.com
> >>>>>>>       Status: new
> >>>>>>>  Ticket <URL:
> >> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928>
> >>>>>>>
> >>>>>>>
> >>>>>>> Ok,
> >>>>>>>
> >>>>>>> The data should be there now. With out.nc being the obs and
> >> wrf.nc being
> >>>>>>> the forecast
> >>>>>>>
> >>>>>>> - Tim
> >>>>>>>
> >>>>>>> On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino
<tmelino at meso.com>
> wrote:
> >>>>>>>
> >>>>>>>> Hi,
> >>>>>>>> I have recently been doing some work with WRF and am trying
to add
> >> the
> >>>>>> the
> >>>>>>>> model evaluation tools to our standard model verification
system.
>  I
> >>>>>>>> started the process by running the pressure interpolation
program
> >> on a
> >>>>>>>> single wrfout file, which appeared to finish correctly. I
have
> >>>> attached
> >>>>>> an
> >>>>>>>> ncdump of the file header to this email it is called
> >>>>>>>> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I downloaded
a
> single
> >>>>>> prebufr
> >>>>>>>> file from an NCEP repository for the time centered on the
forecast
> >>>>>> period
> >>>>>>>> and ran PB2NC and this also appeared to finish correctly
and
> output
> >> a
> >>>>>>>> single netcdf file, the header information is also attached
> >>>> (PB2NC.txt).
> >>>>>>>> Then I attempted to run the point stat utility on these two
files
> >> but
> >>>>>> the
> >>>>>>>> program errors out telling me that there are more forecast
field
> >> that
> >>>>>>>> observational fields "ERROR:
PointStatConfInfo::process_config()
> ->
> >>>> The
> >>>>>>>> number fcst_thresh entries provided must match the number
of
> fields
> >>>>>>>> provided in fcst_field.". I ran the following command from
the
> >>>> terminal
> >>>>>> to
> >>>>>>>> run point stat "bin/point_stat wrfout_d02_2011-12-
07_00:00:00_PLEV
> >>>>>> out.nc
> >>>>>>>> PointStatConfig".  I am not sure what the problem is I have
red
> the
> >>>>>>>> documentation and it appears to be setup correctly but I am
not
> >>>>>> completely
> >>>>>>>> sure as I have never used this software before.  What
should these
> >>>>>> namelist
> >>>>>>>> fields look like using 2 netcdf files (1.Forecast 1.Obs),
trying
> to
> >>>>>> verify
> >>>>>>>> 10 meter winds? I appreciate your help!
> >>>>>>>>
> >>>>>>>> Also ... I ran the test all scripts after compilation , and
the
> code
> >>>>>>>> completed successfully with no errors.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Thanks ,
> >>>>>>>> Tim
> >>>>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>
>
////////////////////////////////////////////////////////////////////////////////
> >>>> //
> >>>> // Default pb2nc configuration file
> >>>> //
> >>>>
> >>>>
> >>
>
////////////////////////////////////////////////////////////////////////////////
> >>>>
> >>>> //
> >>>> // Stratify the observation data in the PrepBufr files in the
> following
> >>>> // ways:
> >>>> //  (1) by message type: supply a list of PrepBufr message
types
> >>>> //      to retain (i.e. AIRCFT)
> >>>> //  (2) by station id: supply a list of observation stations to
retain
> >>>> //  (3) by valid time: supply starting and ending times in form
> >>>> //      YYYY-MM-DD HH:MM:SS UTC
> >>>> //  (4) by location: supply either an NCEP masking grid, a
masking
> >>>> //      lat/lon polygon, or a file containing a masking lat/lon polygon
> >>>> //  (5) by elevation: supply min/max elevation values
> >>>> //  (6) by report type (typ): supply a list of report types to
retain
> >>>> //  (7) by instrument type (itp): supply a list of instrument
type to
> >>>> //      retain
> >>>> //  (8) by vertical level: supply min/max vertical levels
> >>>> //  (9) by variable type: supply a list of variable types to
retain
> >>>> //      P, Q, T, Z, U, V
> >>>> // (11) by quality mark: supply a quality mark threshold
> >>>> // (12) Flag to retain values for all quality marks, or just
the first
> >>>> //      quality mark (highest)
> >>>> // (13) by data level category: supply a list of category types
to
> >>>> //      retain.
> >>>> //
> >>>> //      0 - Surface level (mass reports only)
> >>>> //      1 - Mandatory level (upper-air profile reports)
> >>>> //      2 - Significant temperature level (upper-air profile
reports)
> >>>> //      2 - Significant temperature and winds-by-pressure level
> >>>> //          (future combined mass and wind upper-air reports)
> >>>> //      3 - Winds-by-pressure level (upper-air profile reports)
> >>>> //      4 - Winds-by-height level (upper-air profile reports)
> >>>> //      5 - Tropopause level (upper-air profile reports)
> >>>> //      6 - Reports on a single level
> >>>> //          (e.g., aircraft, satellite-wind, surface wind,
> >>>> //           precipitable water retrievals, etc.)
> >>>> //      7 - Auxiliary levels generated via interpolation from
spanning
> >>>> levels
> >>>> //          (upper-air profile reports)
> >>>> //
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of PrepBufr message type
strings to
> >>>> retain.
> >>>> // An empty list indicates that all should be retained.
> >>>> // List of valid message types:
> >>>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
> >>>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
> >>>> //    SFCSHP SPSSMI SYNDAT VADWND
> >>>> //    ANYAIR (= AIRCAR, AIRCFT)
> >>>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
> >>>> //    ONLYSF (= ADPSFC, SFCSHP)
> >>>> //
> >>>>
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
> >>>> //
> >>>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
> >>>> //
> >>>> message_type[] = [];
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of station ID strings to
retain.
> >>>> // An empty list indicates that all should be retained.
> >>>> //
> >>>> // e.g. station_id[] = [ "KDEN" ];
> >>>> //
> >>>> station_id[] = [];
> >>>>
> >>>> //
> >>>> // Beginning and ending time offset values in seconds for
observations
> >>>> // to retain.  The valid time window for retaining observations
is
> >>>> // defined in reference to the PrepBufr file's center time.  So
observations with
> >>>> // a valid time falling in the window [center_time+beg_ds,
> center_time+end_ds]
> >>>> // will be retained.
> >>>> //
> >>>> beg_ds = -1800;
> >>>> end_ds =  1800;
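With these defaults the retention window spans one hour, 1800 seconds on either side of the reference time; for a 12:00 UTC reference time, for example, only observations valid between 11:30 and 12:30 UTC are kept.  Widening the window just means adjusting both offsets, e.g.:

   beg_ds = -3600;
   end_ds =  3600;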
> >>>>
> >>>> //
> >>>> // Specify the name of a single grid to be used in masking the
data.
> >>>> // An empty string indicates that no grid should be used.  The
> standard
> >>>> // NCEP grids are named "GNNN" where NNN indicates the three
digit
> grid
> >>>> number.
> >>>> //
> >>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
> >>>> //
> >>>> // e.g. mask_grid = "G212";
> >>>> //
> >>>> mask_grid = "G212";
> >>>>
> >>>> //
> >>>> // Specify a single ASCII file containing a lat/lon polygon.
> >>>> // Latitude in degrees north and longitude in degrees east.
> >>>> // By default, the first and last polygon points are connected.
> >>>> //
> >>>> // The lat/lon polygon file should contain a name for the
polygon
> >>>> // followed by a space-separated list of lat/lon points:
> >>>> //    "name lat1 lon1 lat2 lon2... latn lonn"
> >>>> //
> >>>> // MET_BASE may be used in the path for the lat/lon polygon
file.
> >>>> //
> >>>> // e.g. mask_poly = "MET_BASE/data/poly/EAST.poly";
> >>>> //
> >>>> mask_poly = "";
> >>>>
> >>>> //
> >>>> // Beginning and ending elevation values in meters for
observations
> >>>> // to retain.
> >>>> //
> >>>> beg_elev = -1000;
> >>>> end_elev = 100000;
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of PrepBufr report type
values to
> >> retain.
> >>>> // An empty list indicates that all should be retained.
> >>>> //
> >>>> //
> >>>>
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_4.htm
> >>>> //
> >>>> // e.g. pb_report_type[] = [ 120, 133 ];
> >>>> //
> >>>> pb_report_type[] = [];
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of input report type values
to
> retain.
> >>>> // An empty list indicates that all should be retained.
> >>>> //
> >>>> //
> >>>>
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_6.htm
> >>>> //
> >>>> // e.g. in_report_type[] = [ 11, 22, 23 ];
> >>>> //
> >>>> in_report_type[] = [];
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of instrument type values to
retain.
> >>>> // An empty list indicates that all should be retained.
> >>>> //
> >>>> // e.g. instrument_type[] = [ 52, 87 ];
> >>>> //
> >>>> instrument_type[] = [];
> >>>>
> >>>> //
> >>>> // Beginning and ending vertical levels to retain.
> >>>> //
> >>>> beg_level = 1;
> >>>> end_level = 255;
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of strings containing grib
codes or
> >>>> // corresponding grib code abbreviations to retain or be
derived from
> >>>> // the available observations.
> >>>> //
> >>>> // Grib Codes to be RETAINED:
> >>>> //    SPFH or 51 for Specific Humidity in kg/kg
> >>>> //    TMP  or 11 for Temperature in K
> >>>> //    HGT  or 7  for Height in meters
> >>>> //    UGRD or 33 for the East-West component of the wind in m/s
> >>>> //    VGRD or 34 for the North-South component of the wind in
m/s
> >>>> //
> >>>> // Grib Codes to be DERIVED:
> >>>> //    DPT   or 17 for Dewpoint Temperature in K
> >>>> //    WIND  or 32 for Wind Speed in m/s
> >>>> //    RH    or 52 for Relative Humidity in %
> >>>> //    MIXR  or 53 for Humidity Mixing Ratio in kg/kg
> >>>> //    PRMSL or  2 for Pressure Reduced to Mean Sea Level in Pa
> >>>> //
> >>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >>>> //
> >>>> // e.g. obs_grib_code[] = [ "TMP", "UGRD", "VGRD", "WIND" ];
> >>>> //
> >>>> obs_grib_code[] = [ "SPFH", "TMP",  "HGT",  "UGRD", "VGRD",
> >>>>                    "DPT",  "WIND", "RH",   "MIXR" ];
> >>>>
> >>>> //
> >>>> // Quality mark threshold to indicate which observations to
retain.
> >>>> // Observations with a quality mark equal to or LESS THAN this
> threshold
> >>>> // will be retained, while observations with a quality mark
GREATER
> THAN
> >>>> // this threshold will be discarded.
> >>>> //
> >>>>
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_7.htm
> >>>> //
> >>>> quality_mark_thresh = 2;
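With the threshold set to 2, only observations carrying quality marks of 0, 1 or 2 are kept; anything with a larger mark is discarded, as described above.  Relaxing the filter is a one-line change, e.g.:

   quality_mark_thresh = 3;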
> >>>>
> >>>> //
> >>>> // Flag to indicate whether observations should be drawn from
the top
> >>>> // of the event stack (most quality controlled) or the bottom
of the
> >>>> // event stack (most raw).  A value of 1 indicates that the top
of the
> >>>> // event stack should be used while a value of zero indicates
that the
> >>>> // bottom should be used.
> >>>> //
> >>>> event_stack_flag = 1;
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of data level category values to
> retain,
> >>>> // where a value of:
> >>>> //    0 = Surface level (mass reports only)
> >>>> //    1 = Mandatory level (upper-air profile reports)
> >>>> //    2 = Significant temperature level (upper-air profile
reports)
> >>>> //    2 = Significant temperature and winds-by-pressure level
> >>>> //        (future combined mass and wind upper-air reports)
> >>>> //    3 = Winds-by-pressure level (upper-air profile reports)
> >>>> //    4 = Winds-by-height level (upper-air profile reports)
> >>>> //    5 = Tropopause level (upper-air profile reports)
> >>>> //    6 = Reports on a single level
> >>>> //        (e.g., aircraft, satellite-wind, surface wind,
> >>>> //         precipitable water retrievals, etc.)
> >>>> //    7 = Auxiliary levels generated via interpolation from
spanning
> >> levels
> >>>> //        (upper-air profile reports)
> >>>> // An empty list indicates that all should be retained.
> >>>> //
> >>>> //
> >>>>
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
> >>>> //
> >>>> // e.g. level_category[] = [ 0, 1 ];
> >>>> //
> >>>> level_category[] = [];
> >>>>
> >>>> //
> >>>> // Directory where temp files should be written by the PB2NC
tool
> >>>> //
> >>>> tmp_dir = "/tmp";
> >>>>
> >>>> //
> >>>> // Indicate a version number for the contents of this
configuration
> >> file.
> >>>> // The value should generally not be modified.
> >>>> //
> >>>> version = "V3.0";
> >>>>
> >>>>
> >>>>
> >>
>
////////////////////////////////////////////////////////////////////////////////
> >>>> //
> >>>> // Default point_stat configuration file
> >>>> //
> >>>>
> >>>>
> >>
>
////////////////////////////////////////////////////////////////////////////////
> >>>>
> >>>> //
> >>>> // Specify a name to designate the model being verified.  This
name
> >> will be
> >>>> // written to the second column of the ASCII output generated.
> >>>> //
> >>>> model = "WRF";
> >>>>
> >>>> //
> >>>> // Beginning and ending time offset values in seconds for
observations
> >>>> // to be used.  These time offsets are defined in reference to
the
> >>>> // forecast valid time, v.  Observations with a valid time
falling in
> >> the
> >>>> // window [v+beg_ds, v+end_ds] will be used.
> >>>> // These selections are overridden by the command line
arguments
> >>>> // -obs_valid_beg and -obs_valid_end.
> >>>> //
> >>>> beg_ds = -1800;
> >>>> end_ds =  1800;
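These offsets work like the pb2nc window but are anchored on the forecast valid time: for a forecast valid at 2011-12-07 12:00 UTC, the defaults above admit observations valid between 11:30 and 12:30 UTC.  They can also be overridden per run with the -obs_valid_beg and -obs_valid_end command line arguments mentioned above.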
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of fields to be verified.
The
> >> forecast
> >>>> and
> >>>> // observation fields may be specified separately.  If the
obs_field
> >>>> parameter
> >>>> // is left blank, it will default to the contents of
fcst_field.
> >>>> //
> >>>> // Each field is specified as a GRIB code or abbreviation
followed by
> an
> >>>> // accumulation or vertical level indicator for GRIB files or
as a
> >>>> variable name
> >>>> // followed by a list of dimensions for NetCDF files output
from
> >> p_interp
> >>>> or MET.
> >>>> //
> >>>> // Specifying verification fields for GRIB files:
> >>>> //    GC/ANNN for accumulation interval NNN
> >>>> //    GC/ZNNN for vertical level NNN
> >>>> //    GC/ZNNN-NNN for a range of vertical levels (MSL or AGL)
> >>>> //    GC/PNNN for pressure level NNN in hPa
> >>>> //    GC/PNNN-NNN for a range of pressure levels in hPa
> >>>> //    GC/LNNN for a generic level type
> >>>> //    GC/RNNN for a specific GRIB record number
> >>>> //    Where GC is the number of or abbreviation for the grib
code
> >>>> //    to be verified.
> >>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >>>> //
> >>>> // Specifying verification fields for NetCDF files:
> >>>> //    var_name(i,...,j,*,*) for a single field
> >>>> //    var_name(i-j,*,*) for a range of fields
> >>>> //    Where var_name is the name of the NetCDF variable,
> >>>> //    and i,...,j specifies fixed dimension values,
> >>>> //    and i-j specifies a range of values for a single
dimension,
> >>>> //    and *,* specifies the two dimensions for the gridded
field.
> >>>> //
> >>>> //    NOTE: To verify winds as vectors rather than scalars,
> >>>> //          specify UGRD (or 33) followed by VGRD (or 34) with
the
> >>>> //          same level values.
> >>>> //
> >>>> //    NOTE: To process a probability field, add "/PROB", such
as
> >>>> "POP/Z0/PROB".
> >>>> //
> >>>> // e.g. fcst_field[] = [ "SPFH/P500", "TMP/P500" ]; for a GRIB
input
> >>>> // e.g. fcst_field[] = [ "QVAPOR(0,5,*,*)", "TT(0,5,*,*)" ];
for
> NetCDF
> >>>> input
> >>>> //
> >>>>
> >>>> fcst_field[] = [ "TT(0,0,*,*)" ];
> >>>> obs_field[]  = [ "TMP/Z2" ];
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of groups of thresholds to be
> applied
> >> to
> >>>> the
> >>>> // fields listed above.  Thresholds for the forecast and
observation
> >> fields
> >>>> // may be specified separately.  If the obs_thresh parameter is
left
> >> blank,
> >>>> // it will default to the contents of fcst_thresh.
> >>>> //
> >>>> // At least one threshold must be provided for each field
listed
> above.
> >>>>  The
> >>>> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as
> >> must
> >>>> // lengths of the "obs_field" and "obs_thresh" arrays.  To
apply
> >> multiple
> >>>> // thresholds to a field, separate the threshold values with a
space.
> >>>> //
> >>>> // Each threshold must be preceded by a two letter indicator
for the
> >> type
> >>>> of
> >>>> // thresholding to be performed:
> >>>> //    'lt' for less than     'le' for less than or equal to
> >>>> //    'eq' for equal to      'ne' for not equal to
> >>>> //    'gt' for greater than  'ge' for greater than or equal to
> >>>> //
> >>>> // NOTE: Thresholds for probabilities must begin with 0.0, end
with
> 1.0,
> >>>> //       and be preceded by "ge".
> >>>> //
> >>>> // e.g. fcst_thresh[] = [ "gt80", "gt273" ];
> >>>> //
> >>>> fcst_thresh[] = [ "le273" ];
> >>>> obs_thresh[]  = [];
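The length rule above is worth spelling out, since a mismatch between fcst_field and fcst_thresh is what triggered the "number fcst_thresh entries provided must match the number of fields" error in the original report.  Verifying two fields therefore needs two threshold entries, and several thresholds for one field go inside a single space-separated string, for example (the field names here are purely illustrative):

   fcst_field[]  = [ "TT(0,0,*,*)", "TT(0,5,*,*)" ];
   fcst_thresh[] = [ "le273 le268", "le253" ];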
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of thresholds to be used when
> >> computing
> >>>> // VL1L2 and VAL1L2 partial sums for winds.  The thresholds are
> applied
> >> to
> >>>> the
> >>>> // wind speed values derived from each U/V pair.  Only those
U/V pairs
> >>>> which meet
> >>>> // the wind speed threshold criteria are retained.  If the
> >> obs_wind_thresh
> >>>> // parameter is left blank, it will default to the contents of
> >>>> fcst_wind_thresh.
> >>>> //
> >>>> // To apply multiple wind speed thresholds, separate the
threshold
> >> values
> >>>> with a
> >>>> // space.  Use "NA" to indicate that no wind speed threshold
should be
> >>>> applied.
> >>>> //
> >>>> // Each threshold must be preceded by a two letter indicator
for the
> >> type
> >>>> of
> >>>> // thresholding to be performed:
> >>>> //    'lt' for less than     'le' for less than or equal to
> >>>> //    'eq' for equal to      'ne' for not equal to
> >>>> //    'gt' for greater than  'ge' for greater than or equal to
> >>>> //    'NA' for no threshold
> >>>> //
> >>>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
> >>>> //
> >>>> fcst_wind_thresh[] = [ "NA" ];
> >>>> obs_wind_thresh[]  = [];
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of PrepBufr message types
with which
> >>>> // to perform the verification.  Statistics will be computed
> separately
> >>>> // for each message type specified.  At least one PrepBufr
message
> type
> >>>> // must be provided.
> >>>> // List of valid message types:
> >>>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
> >>>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
> >>>> //    SFCSHP SPSSMI SYNDAT VADWND
> >>>> //    ANYAIR (= AIRCAR, AIRCFT)
> >>>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
> >>>> //    ONLYSF (= ADPSFC, SFCSHP)
> >>>> //
> >>>>
> >>
>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
> >>>> //
> >>>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
> >>>> //
> >>>> message_type[] = [ "ADPSFC" ];
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of grids to be used in
masking the
> >> data
> >>>> over
> >>>> // which to perform scoring.  An empty list indicates that no
masking
> >> grid
> >>>> // should be performed.  The standard NCEP grids are named
"GNNN"
> where
> >> NNN
> >>>> // indicates the three digit grid number.  Enter "FULL" to
score over
> >> the
> >>>> // entire domain.
> >>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
> >>>> //
> >>>> // e.g. mask_grid[] = [ "FULL" ];
> >>>> //
> >>>> mask_grid[] = [ "FULL" ];
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of masking regions to be
applied.
> >>>> // An empty list indicates that no additional masks should be
used.
> >>>> // The masking regions may be defined in one of 4 ways:
> >>>> //
> >>>> // (1) An ASCII file containing a lat/lon polygon.
> >>>> //     Latitude in degrees north and longitude in degrees east.
> >>>> //     By default, the first and last polygon points are
connected.
> >>>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
> >>>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
> >>>> //
> >>>> // (2) The NetCDF output of the gen_poly_mask tool.
> >>>> //
> >>>> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
> >>>> //     to be used, and optionally, a threshold to be applied to
the
> >> field.
> >>>> //     e.g. "sample.nc var_name gt0.00"
> >>>> //
> >>>> // (4) A GRIB data file, followed by a description of the field
> >>>> //     to be used, and optionally, a threshold to be applied to
the
> >> field.
> >>>> //     e.g. "sample.grb APCP/A3 gt0.00"
> >>>> //
> >>>> // Any NetCDF or GRIB file used must have the same grid
dimensions as
> >> the
> >>>> // data being verified.
> >>>> //
> >>>> // MET_BASE may be used in the path for the files above.
> >>>> //
> >>>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
> >>>> //                      "poly_mask.ncf",
> >>>> //                      "sample.nc APCP",
> >>>> //                      "sample.grb HGT/Z0 gt100.0" ];
> >>>> //
> >>>> mask_poly[] = [];
> >>>>
> >>>> //
> >>>> // Specify the name of an ASCII file containing a space-
separated list
> >> of
> >>>> // station ID's at which to perform verification.  Each station
ID
> >>>> specified
> >>>> // is treated as an individual masking region.
> >>>> //
> >>>> // An empty list file name indicates that no station ID masks
should
> be
> >>>> used.
> >>>> //
> >>>> // MET_BASE may be used in the path for the station ID mask
file name.
> >>>> //
> >>>> // e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
> >>>> //
> >>>> mask_sid = "";
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of values for alpha to be
used when
> >>>> computing
> >>>> // confidence intervals.  Values of alpha must be between 0 and
1.
> >>>> //
> >>>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
> >>>> //
> >>>> ci_alpha[] = [ 0.05 ];
> >>>>
> >>>> //
> >>>> // Specify the method to be used for computing bootstrap
confidence
> >>>> intervals.
> >>>> // The value for this is interpreted as follows:
> >>>> //    (0) Use the BCa interval method (computationally
intensive)
> >>>> //    (1) Use the percentile interval method
> >>>> //
> >>>> boot_interval = 1;
> >>>>
> >>>> //
> >>>> // Specify a proportion between 0 and 1 to define the replicate
sample
> >> size
> >>>> // to be used when computing percentile intervals.  The
replicate
> sample
> >>>> // size is set to boot_rep_prop * n, where n is the number of
raw data
> >>>> points.
> >>>> //
> >>>> // e.g. boot_rep_prop = 0.80;
> >>>> //
> >>>> boot_rep_prop = 1.0;
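As a quick check of the arithmetic: with 500 matched pairs and boot_rep_prop = 0.80, each bootstrap replicate would draw 0.80 * 500 = 400 pairs, while the default of 1.0 keeps the replicate size equal to the full sample.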
> >>>>
> >>>> //
> >>>> // Specify the number of times each set of matched pair data
should be
> >>>> // resampled when computing bootstrap confidence intervals.  A
value
> of
> >>>> // zero disables the computation of bootstrap confidence
intervals.
> >>>> //
> >>>> // e.g. n_boot_rep = 1000;
> >>>> //
> >>>> n_boot_rep = 1000;
> >>>>
> >>>> //
> >>>> // Specify the name of the random number generator to be used.
See
> the
> >> MET
> >>>> // Users Guide for a list of possible random number generators.
> >>>> //
> >>>> boot_rng = "mt19937";
> >>>>
> >>>> //
> >>>> // Specify the seed value to be used when computing bootstrap
> confidence
> >>>> // intervals.  If left unspecified, the seed will change for
each run
> >> and
> >>>> // the computed bootstrap confidence intervals will not be
> reproducible.
> >>>> //
> >>>> boot_seed = "";
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of interpolation method(s) to
be
> used
> >>>> // for comparing the forecast grid to the observation points.
String
> >>>> values
> >>>> // are interpreted as follows:
> >>>> //    MIN     = Minimum in the neighborhood
> >>>> //    MAX     = Maximum in the neighborhood
> >>>> //    MEDIAN  = Median in the neighborhood
> >>>> //    UW_MEAN = Unweighted mean in the neighborhood
> >>>> //    DW_MEAN = Distance-weighted mean in the neighborhood
> >>>> //    LS_FIT  = Least-squares fit in the neighborhood
> >>>> //    BILIN   = Bilinear interpolation using the 4 closest
points
> >>>> //
> >>>> // In all cases, vertical interpolation is performed in the
natural
> log
> >>>> // of pressure of the levels above and below the observation.
> >>>> //
> >>>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
> >>>> //
> >>>> interp_method[] = [ "MEDIAN", "DW_MEAN" ];
> >>>>
> >>>> //
> >>>> // Specify a comma-separated list of box widths to be used by
the
> >>>> // interpolation techniques listed above.  A value of 1
indicates that
> >>>> // the nearest neighbor approach should be used.  For a value
of n
> >>>> // greater than 1, the n*n grid points closest to the
observation
> define
> >>>> // the neighborhood.
> >>>> //
> >>>> // e.g. interp_width = [ 1, 3, 5 ];
> >>>> //
> >>>> interp_width[] = [ 1, 3 ];
> >>>>
> >>>> //
> >>>> // When interpolating, compute a ratio of the number of valid
data
> >> points
> >>>> // to the total number of points in the neighborhood.  If that
ratio
> is
> >>>> // less than this threshold, do not include the observation.
This
> >>>> // threshold must be between 0 and 1.  Setting this threshold
to 1
> will
> >>>> // require that each observation be surrounded by n*n valid
forecast
> >>>> // points.
> >>>> //
> >>>> // e.g. interp_thresh = 1.0;
> >>>> //
> >>>> interp_thresh = 1.0;
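Taking the last two settings together: an interp_width of 3 defines a 3*3 = 9 point neighborhood around each observation, and interp_thresh = 1.0 requires all 9 forecast points in that neighborhood to be valid before the match is used.  Lowering the threshold, e.g.

   interp_thresh = 0.5;

would accept neighborhoods in which at least half of the points are valid.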
> >>>>
> >>>> //
> >>>> // Specify flags to indicate the type of data to be output:
> >>>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
> >>>> //           Total (TOTAL),
> >>>> //           Forecast Rate (F_RATE),
> >>>> //           Hit Rate (H_RATE),
> >>>> //           Observation Rate (O_RATE)
> >>>> //
> >>>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
> >>>> //           Total (TOTAL),
> >>>> //           Forecast Yes and Observation Yes Count (FY_OY),
> >>>> //           Forecast Yes and Observation No Count (FY_ON),
> >>>> //           Forecast No and Observation Yes Count (FN_OY),
> >>>> //           Forecast No and Observation No Count (FN_ON)
> >>>> //
> >>>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
> >>>> //           Total (TOTAL),
> >>>> //           Base Rate (BASER),
> >>>> //           Forecast Mean (FMEAN),
> >>>> //           Accuracy (ACC),
> >>>> //           Frequency Bias (FBIAS),
> >>>> //           Probability of Detecting Yes (PODY),
> >>>> //           Probability of Detecting No (PODN),
> >>>> //           Probability of False Detection (POFD),
> >>>> //           False Alarm Ratio (FAR),
> >>>> //           Critical Success Index (CSI),
> >>>> //           Gilbert Skill Score (GSS),
> >>>> //           Hanssen and Kuipers Discriminant (HK),
> >>>> //           Heidke Skill Score (HSS),
> >>>> //           Odds Ratio (ODDS),
> >>>> //           NOTE: All statistics listed above contain
parametric
> and/or
> >>>> //                 non-parametric confidence interval limits.
> >>>> //
> >>>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency
> Table
> >>>> Counts:
> >>>> //           Total (TOTAL),
> >>>> //           Number of Categories (N_CAT),
> >>>> //           Contingency Table Count columns repeated
N_CAT*N_CAT
> times
> >>>> //
> >>>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency
> Table
> >>>> Scores:
> >>>> //           Total (TOTAL),
> >>>> //           Number of Categories (N_CAT),
> >>>> //           Accuracy (ACC),
> >>>> //           Hanssen and Kuipers Discriminant (HK),
> >>>> //           Heidke Skill Score (HSS),
> >>>> //           Gerrity Score (GER),
> >>>> //           NOTE: All statistics listed above contain
parametric
> and/or
> >>>> //                 non-parametric confidence interval limits.
> >>>> //
> >>>> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
> >>>> //           Total (TOTAL),
> >>>> //           Forecast Mean (FBAR),
> >>>> //           Forecast Standard Deviation (FSTDEV),
> >>>> //           Observation Mean (OBAR),
> >>>> //           Observation Standard Deviation (OSTDEV),
> >>>> //           Pearson's Correlation Coefficient (PR_CORR),
> >>>> //           Spearman's Rank Correlation Coefficient (SP_CORR),
> >>>> //           Kendall Tau Rank Correlation Coefficient
(KT_CORR),
> >>>> //           Number of ranks compared (RANKS),
> >>>> //           Number of tied ranks in the forecast field
(FRANK_TIES),
> >>>> //           Number of tied ranks in the observation field
> (ORANK_TIES),
> >>>> //           Mean Error (ME),
> >>>> //           Standard Deviation of the Error (ESTDEV),
> >>>> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
> >>>> //           Mean Absolute Error (MAE),
> >>>> //           Mean Squared Error (MSE),
> >>>> //           Bias-Corrected Mean Squared Error (BCMSE),
> >>>> //           Root Mean Squared Error (RMSE),
> >>>> //           Percentiles of the Error (E10, E25, E50, E75, E90)
> >>>> //           NOTE: Most statistics listed above contain
parametric
> >> and/or
> >>>> //                 non-parametric confidence interval limits.
> >>>> //
> >>>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
> >>>> //           Total (TOTAL),
> >>>> //           Forecast Mean (FBAR),
> >>>> //              = mean(f)
> >>>> //           Observation Mean (OBAR),
> >>>> //              = mean(o)
> >>>> //           Forecast*Observation Product Mean (FOBAR),
> >>>> //              = mean(f*o)
> >>>> //           Forecast Squared Mean (FFBAR),
> >>>> //              = mean(f^2)
> >>>> //           Observation Squared Mean (OOBAR)
> >>>> //              = mean(o^2)
> >>>> //
> >>>> //    (8) STAT and SAL1L2 Text Files, Scalar Anomaly Partial
Sums:
> >>>> //           Total (TOTAL),
> >>>> //           Forecast Anomaly Mean (FABAR),
> >>>> //              = mean(f-c)
> >>>> //           Observation Anomaly Mean (OABAR),
> >>>> //              = mean(o-c)
> >>>> //           Product of Forecast and Observation Anomalies Mean
> >> (FOABAR),
> >>>> //              = mean((f-c)*(o-c))
> >>>> //           Forecast Anomaly Squared Mean (FFABAR),
> >>>> //              = mean((f-c)^2)
> >>>> //           Observation Anomaly Squared Mean (OOABAR)
> >>>> //              = mean((o-c)^2)
> >>>> //
> >>>> //    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
> >>>> //           Total (TOTAL),
> >>>> //           U-Forecast Mean (UFBAR),
> >>>> //              = mean(uf)
> >>>> //           V-Forecast Mean (VFBAR),
> >>>> //              = mean(vf)
> >>>> //           U-Observation Mean (UOBAR),
> >>>> //              = mean(uo)
> >>>> //           V-Observation Mean (VOBAR),
> >>>> //              = mean(vo)
> >>>> //           U-Product Plus V-Product (UVFOBAR),
> >>>> //              = mean(uf*uo+vf*vo)
> >>>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
> >>>> //              = mean(uf^2+vf^2)
> >>>> //           U-Observation Squared Plus V-Observation Squared
> (UVOOBAR)
> >>>> //              = mean(uo^2+vo^2)
> >>>> //
> >>>> //   (10) STAT and VAL1L2 Text Files, Vector Anomaly Partial
Sums:
> >>>> //           U-Forecast Anomaly Mean (UFABAR),
> >>>> //              = mean(uf-uc)
> >>>> //           V-Forecast Anomaly Mean (VFABAR),
> >>>> //              = mean(vf-vc)
> >>>> //           U-Observation Anomaly Mean (UOABAR),
> >>>> //              = mean(uo-uc)
> >>>> //           V-Observation Anomaly Mean (VOABAR),
> >>>> //              = mean(vo-vc)
> >>>> //           U-Anomaly Product Plus V-Anomaly Product
(UVFOABAR),
> >>>> //              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
> >>>> //           U-Forecast Anomaly Squared Plus V-Forecast Anomaly
> Squared
> >>>> (UVFFABAR),
> >>>> //              = mean((uf-uc)^2+(vf-vc)^2)
> >>>> //           U-Observation Anomaly Squared Plus V-Observation
Anomaly
> >>>> Squared (UVOOABAR)
> >>>> //              = mean((uo-uc)^2+(vo-vc)^2)
> >>>> //
> >>>> //   (11) STAT and PCT Text Files, Nx2 Probability Contingency
Table
> >>>> Counts:
> >>>> //           Total (TOTAL),
> >>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>>> //           Probability Threshold Value (THRESH_i),
> >>>> //           Row Observation Yes Count (OY_i),
> >>>> //           Row Observation No Count (ON_i),
> >>>> //           NOTE: Previous 3 columns repeated for each row in
the
> >> table.
> >>>> //           Last Probability Threshold Value (THRESH_n)
> >>>> //
> >>>> //   (12) STAT and PSTD Text Files, Nx2 Probability Contingency
Table
> >>>> Scores:
> >>>> //           Total (TOTAL),
> >>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>>> //           Base Rate (BASER) with confidence interval limits,
> >>>> //           Reliability (RELIABILITY),
> >>>> //           Resolution (RESOLUTION),
> >>>> //           Uncertainty (UNCERTAINTY),
> >>>> //           Area Under the ROC Curve (ROC_AUC),
> >>>> //           Brier Score (BRIER) with confidence interval
limits,
> >>>> //           Probability Threshold Value (THRESH_i)
> >>>> //           NOTE: Previous column repeated for each
probability
> >> threshold.
> >>>> //
> >>>> //   (13) STAT and PJC Text Files, Joint/Continuous Statistics
of
> >>>> //                                 Probabilistic Variables:
> >>>> //           Total (TOTAL),
> >>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>>> //           Probability Threshold Value (THRESH_i),
> >>>> //           Observation Yes Count Divided by Total (OY_TP_i),
> >>>> //           Observation No Count Divided by Total (ON_TP_i),
> >>>> //           Calibration (CALIBRATION_i),
> >>>> //           Refinement (REFINEMENT_i),
> >>>> //           Likelihood (LIKELIHOOD_i),
> >>>> //           Base Rate (BASER_i),
> >>>> //           NOTE: Previous 7 columns repeated for each row in
the
> >> table.
> >>>> //           Last Probability Threshold Value (THRESH_n)
> >>>> //
> >>>> //   (14) STAT and PRC Text Files, ROC Curve Points for
> >>>> //                                 Probabilistic Variables:
> >>>> //           Total (TOTAL),
> >>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>>> //           Probability Threshold Value (THRESH_i),
> >>>> //           Probability of Detecting Yes (PODY_i),
> >>>> //           Probability of False Detection (POFD_i),
> >>>> //           NOTE: Previous 3 columns repeated for each row in
the
> >> table.
> >>>> //           Last Probability Threshold Value (THRESH_n)
> >>>> //
> >>>> //   (15) STAT and MPR Text Files, Matched Pair Data:
> >>>> //           Total (TOTAL),
> >>>> //           Index (INDEX),
> >>>> //           Observation Station ID (OBS_SID),
> >>>> //           Observation Latitude (OBS_LAT),
> >>>> //           Observation Longitude (OBS_LON),
> >>>> //           Observation Level (OBS_LVL),
> >>>> //           Observation Elevation (OBS_ELV),
> >>>> //           Forecast Value (FCST),
> >>>> //           Observation Value (OBS),
> >>>> //           Climatological Value (CLIMO)
> >>>> //
> >>>> //   In the expressions above, f are forecast values, o are
observed
> >>>> values,
> >>>> //   and c are climatological values.
> >>>> //
> >>>> // Values for these flags are interpreted as follows:
> >>>> //    (0) Do not generate output of this type
> >>>> //    (1) Write output to a STAT file
> >>>> //    (2) Write output to a STAT file and a text file
> >>>> //
> >>>> output_flag[] = [ 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1
];
> >>>>
> >>>> //
> >>>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
> >> Correlation
> >>>> // Coefficients should be computed.  Computing them over large
> datasets
> >> is
> >>>> // computationally intensive and slows down the runtime
execution
> >>>> significantly.
> >>>> //    (0) Do not compute these correlation coefficients
> >>>> //    (1) Compute these correlation coefficients
> >>>> //
> >>>> rank_corr_flag = 1;
> >>>>
> >>>> //
> >>>> // Specify the GRIB Table 2 parameter table version number to
be used
> >>>> // for interpreting GRIB codes.
> >>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >>>> //
> >>>> grib_ptv = 2;
> >>>>
> >>>> //
> >>>> // Directory where temporary files should be written.
> >>>> //
> >>>> tmp_dir = "/tmp";
> >>>>
> >>>> //
> >>>> // Prefix to be used for the output file names.
> >>>> //
> >>>> output_prefix = "";
> >>>>
> >>>> //
> >>>> // Indicate a version number for the contents of this
configuration
> >> file.
> >>>> // The value should generally not be modified.
> >>>> //
> >>>> version = "V3.0.1";
> >>>>
> >>>>
> >>>>
> >>
> >>
> >>
>
>
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51928] Re: METV3 Issue
From: Paul Oldenburg
Time: Fri Dec 09 14:04:50 2011

Tim,

I was able to reproduce the error you reported when running METv3.0.1
compiled with intel compilers.  This may take us a
little time to sort out.  Thanks for reporting this issue, and we'll
let you know when we have a solution for you.

Thanks,

Paul


On 12/09/2011 12:49 PM, Tim Melino via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>
> Ok,
> Here are some of the specifications and I will install the latest
patch
> now.
>
> netCDF version 4.1.1
> INTEL-11.1.072 Compilers
>
>
> - Tim
>
>
> On Fri, Dec 9, 2011 at 1:49 PM, Paul Oldenburg via RT
<met_help at ucar.edu>wrote:
>
>> Tim,
>>
>> I'm still not able to reproduce the error that you reported.  Have
you
>> applied all of the latest patches to METv3.0.1?
>> The latest patch tarball and instructions on how to apply it can be
found
>> here:
>>
http://www.dtcenter.org/met/users/support/known_issues/METv3.0.1/index.php.
>>  Can you tell me what version of NetCDF you
>> linked MET against?  What family of compilers did you use to
compile MET
>> (e.g. GNU/PGI/intel)?  I think we are down to a
>> configuration/environment problem at this point.  Sorry for the
trouble.
>>
>> Paul
>>
>>
>> On 12/09/2011 11:38 AM, Tim Melino via RT wrote:
>>>
>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>
>>> Paul,
>>> I put everything into a tar file and uploaded it.
>>>
>>> - Tim
>>>
>>>
>>> On Fri, Dec 9, 2011 at 11:38 AM, Paul Oldenburg via RT <
>> met_help at ucar.edu>wrote:
>>>
>>>> Tim,
>>>>
>>>> We are not able to reproduce the error that you are reporting.
Are you
>>>> using the same exact data and config files that
>>>> you sent me and I tested with?  In any case, can you create a tar
>> archive
>>>> of all the files involved in the point_stat
>>>> command that throws the error and put it on the FTP site?  I will
need
>> to
>>>> be able to reproduce this error, otherwise it
>>>> will be difficult for me to diagnose the problem.
>>>>
>>>> Thanks,
>>>>
>>>> Paul
>>>>
>>>>
>>>> On 12/09/2011 08:16 AM, Tim Melino via RT wrote:
>>>>>
>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>>>
>>>>> Paul,
>>>>> I tried running again using your configuration settings, but
while
>>>> running
>>>>> pointstat I am still receiving errors. The error comes up as the
>>>> following
>>>>> ....
>>>>>
>>>>> [wind at conus1 METv3.0.1]$ $MET_BASE/bin/point_stat wrf.nc
>>>>> ndas.t12z.nc PointStatConfig -outdir . -v 99
>>>>> GSL_RNG_TYPE=mt19937
>>>>> GSL_RNG_SEED=18446744071864509006
>>>>> Forecast File: wrf.nc
>>>>> Climatology File: none
>>>>> Configuration File: PointStatConfig
>>>>> Observation File: ndas.t12z.nc
>>>>>
>>>>>
>>>>
>>
--------------------------------------------------------------------------------
>>>>>
>>>>> Reading records for TT(0,0,*,*).
>>>>>
>>>>>
>>>>>   LongArray::operator[](int) -> range check error ... 4
>>>>>
>>>>>
>>>>>
>>>>> - Tim
>>>>>
>>>>>
>>>>> On Thu, Dec 8, 2011 at 5:10 PM, Paul Oldenburg via RT <
>> met_help at ucar.edu
>>>>> wrote:
>>>>>
>>>>>> Tim,
>>>>>>
>>>>>> I ran the following pb2nc and point_stat commands using the
attached
>>>>>> config files to generate point verification data
>>>>>> with your PrepBUFR obs and p_interp model data.  Note that
MET_BASE is
>>>> set
>>>>>> to the base folder of an instance of
>>>>>> METv3.0.1.  I pulled both config files, with slight
modifications,
>> from
>>>>>> $MET_BASE/scripts/config.
>>>>>>
>>>>>> $MET_BASE/bin/pb2nc
>> ndas.t12z.prepbufr.tm12.nr ndas.t12z.nc PB2NCConfig_G212 -v 99
>>>>>>
>>>>>> $MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc PointStatConfig
-outdir
>> .
>>>> -v
>>>>>> 99
>>>>>>
>>>>>> In PointStatConfig, you will see the following settings.  The
>> fcst_field
>>>>>> setting format is due to the fact that fields
>>>>>> in wrf.nc are four dimensional, with the last two dimensions
being
>> the
>>>>>> spatial (x,y) dimensions.  The obs_field
>>>>>> specifies surface temperature using a GRIB-style format,
because pb2nc
>>>>>> indexes fields in its output by GRIB code.  You
>>>>>> should follow a similar paradigm to verify additional fields
beyond
>>>>>> temperature.
>>>>>>
>>>>>> fcst_field[] = [ "TT(0,0,*,*)" ];
>>>>>> obs_field[]  = [ "TMP/Z2" ];
>>>>>>
>>>>>> fcst_thresh[] = [ "le273" ];
>>>>>> obs_thresh[]  = [];
>>>>>>
>>>>>> If you have any questions or problems, please let me know.
>>>>>>
>>>>>> Good luck,
>>>>>>
>>>>>> Paul
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 12/08/2011 02:28 PM, Tim Melino via RT wrote:
>>>>>>>
>>>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928
>
>>>>>>>
>>>>>>> I just put the file on the server that I have been using. As
far as
>>>>>> running
>>>>>>> the UPP software, that is not really possible at the moment. I
do not
>>>>>> have
>>>>>>> any of that software installed or configured as I have never
had a
>>>> reason
>>>>>>> to use it .
>>>>>>>
>>>>>>> - Tim
>>>>>>>
>>>>>>> On Thu, Dec 8, 2011 at 3:42 PM, Paul Oldenburg via RT <
>>>> met_help at ucar.edu
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Tim,
>>>>>>>>
>>>>>>>> Can you please put the input PrepBUFR file that you pass to
pb2nc on
>>>> the
>>>>>>>> FTP site?  When I look at the contents of
>>>>>>>> out.nc, it does not appear that there are any observations in
that
>>>>>> file.
>>>>>>>>  I would like to run pb2nc myself to see what
>>>>>>>> is going on.
>>>>>>>>
>>>>>>>> I made an incorrect assumption in my earlier emails that you
were
>>>> trying
>>>>>>>> to verify model data in GRIB format.  Now that
>>>>>>>> I have your data in hand, I see that it is p_interp output,
as you
>>>>>>>> mentioned in your initial email.  MET support for
>>>>>>>> p_interp is not as robust as for GRIB.  In particular, grid-
relative
>>>>>> wind
>>>>>>>> directions in p_interp data files should not
>>>>>>>> be compared to lat-long relative wind directions in the
PrepBUFR
>> obs.
>>>>>>>>  Would it be possible for you to run your WRF
>>>>>>>> output through the Unified Post Processor (UPP -
>>>>>>>> http://www.dtcenter.org/wrf-
nmm/users/overview/upp_overview.php)
>>>>>>>> instead of or in addition to p_interp?  That would simplify
MET
>>>>>>>> verification tasks.  Please let me know if you have any
>>>>>>>> questions.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Paul
>>>>>>>>
>>>>>>>>
>>>>>>>> On 12/08/2011 11:07 AM, Tim Melino via RT wrote:
>>>>>>>>>
>>>>>>>>> Thu Dec 08 11:07:04 2011: Request 51928 was acted upon.
>>>>>>>>> Transaction: Ticket created by tmelino at meso.com
>>>>>>>>>        Queue: met_help
>>>>>>>>>      Subject: Re: METV3 Issue
>>>>>>>>>        Owner: Nobody
>>>>>>>>>   Requestors: tmelino at meso.com
>>>>>>>>>       Status: new
>>>>>>>>>  Ticket <URL:
>>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Ok,
>>>>>>>>>
>>>>>>>>> The data should be there now. With out.nc being the obs and
>>>> wrf.nc being
>>>>>>>>> the forecast
>>>>>>>>>
>>>>>>>>> - Tim
>>>>>>>>>
>>>>>>>>> On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino
<tmelino at meso.com>
>> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>> I have recently been doing some work with WRF and am trying
to add
>>>> the
>>>>>>>> the
>>>>>>>>>> model evaluation tools to our standard model verification
system.
>>  I
>>>>>>>>>> started the process by running the pressure interpolation
program
>>>> on a
>>>>>>>>>> single wrfout file, which appeared to finish correctly. I
have
>>>>>> attached
>>>>>>>> an
>>>>>>>>>> ncdump of the file header to this email it is called
>>>>>>>>>> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I downloaded
a
>> single
>>>>>>>> prebufr
>>>>>>>>>> file from an NCEP repository for the time centered on the
forecast
>>>>>>>> period
>>>>>>>>>> and ran PB2NC and this also appeared to finish correctly
and
>> output
>>>> a
>>>>>>>>>> single netcdf file, the header information is also attached
>>>>>> (PB2NC.txt).
>>>>>>>>>> Then I attempted to run the point stat utility on these two
files
>>>> but
>>>>>>>> the
>>>>>>>>>> program errors out telling me that there are more forecast
field
>>>> that
>>>>>>>>>> observational fields "ERROR:
PointStatConfInfo::process_config()
>> ->
>>>>>> The
>>>>>>>>>> number fcst_thresh entries provided must match the number
of
>> fields
>>>>>>>>>> provided in fcst_field.". I ran the following command from
the
>>>>>> terminal
>>>>>>>> to
>>>>>>>>>> run point stat "bin/point_stat wrfout_d02_2011-12-
07_00:00:00_PLEV
>>>>>>>> out.nc
>>>>>>>>>> PointStatConfig".  I am not sure what the problem is I have
red
>> the
>>>>>>>>>> documentation and it appears to be setup correctly but I am
not
>>>>>>>> completely
>>>>>>>>>> sure as I have never used this software before.  What
should these
>>>>>>>> namelist
>>>>>>>>>> fields look like using 2 netcdf files (1.Forecast 1.Obs),
trying
>> to
>>>>>>>> verify
>>>>>>>>>> 10 meter winds? I appreciate your help!
>>>>>>>>>>
>>>>>>>>>> Also ... I ran the test all scripts after compilation , and
the
>> code
>>>>>>>>>> completed successfully with no errors.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks ,
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>> //
>>>>>> // Default pb2nc configuration file
>>>>>> //
>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>>
>>>>>> //
>>>>>> // Stratify the observation data in the PrepBufr files in the
>> following
>>>>>> // ways:
>>>>>> //  (1) by message type: supply a list of PrepBufr message
types
>>>>>> //      to retain (i.e. AIRCFT)
>>>>>> //  (2) by station id: supply a list of observation stations to
retain
>>>>>> //  (3) by valid time: supply starting and ending times in form
>>>>>> //      YYYY-MM-DD HH:MM:SS UTC
>>>>>> //  (4) by location: supply either an NCEP masking grid, a
masking
>>>>>> //      lat/lon polygon, or a file containing a masking lat/lon polygon
>>>>>> //  (5) by elevation: supply min/max elevation values
>>>>>> //  (6) by report type (typ): supply a list of report types to
retain
>>>>>> //  (7) by instrument type (itp): supply a list of instrument
type to
>>>>>> //      retain
>>>>>> //  (8) by vertical level: supply min/max vertical levels
>>>>>> //  (9) by variable type: supply a list of variable types to
retain
>>>>>> //      P, Q, T, Z, U, V
>>>>>> // (11) by quality mark: supply a quality mark threshold
>>>>>> // (12) Flag to retain values for all quality marks, or just
the first
>>>>>> //      quality mark (highest)
>>>>>> // (13) by data level category: supply a list of category types
to
>>>>>> //      retain.
>>>>>> //
>>>>>> //      0 - Surface level (mass reports only)
>>>>>> //      1 - Mandatory level (upper-air profile reports)
>>>>>> //      2 - Significant temperature level (upper-air profile
reports)
>>>>>> //      2 - Significant temperature and winds-by-pressure level
>>>>>> //          (future combined mass and wind upper-air reports)
>>>>>> //      3 - Winds-by-pressure level (upper-air profile reports)
>>>>>> //      4 - Winds-by-height level (upper-air profile reports)
>>>>>> //      5 - Tropopause level (upper-air profile reports)
>>>>>> //      6 - Reports on a single level
>>>>>> //          (e.g., aircraft, satellite-wind, surface wind,
>>>>>> //           precipitable water retrievals, etc.)
>>>>>> //      7 - Auxiliary levels generated via interpolation from
spanning
>>>>>> levels
>>>>>> //          (upper-air profile reports)
>>>>>> //
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of PrepBufr message type
strings to
>>>>>> retain.
>>>>>> // An empty list indicates that all should be retained.
>>>>>> // List of valid message types:
>>>>>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
>>>>>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
>>>>>> //    SFCSHP SPSSMI SYNDAT VADWND
>>>>>> //    ANYAIR (= AIRCAR, AIRCFT)
>>>>>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
>>>>>> //    ONLYSF (= ADPSFC, SFCSHP)
>>>>>> //
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>>>> //
>>>>>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
>>>>>> //
>>>>>> message_type[] = [];
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of station ID strings to
retain.
>>>>>> // An empty list indicates that all should be retained.
>>>>>> //
>>>>>> // e.g. station_id[] = [ "KDEN" ];
>>>>>> //
>>>>>> station_id[] = [];
>>>>>>
>>>>>> //
>>>>>> // Beginning and ending time offset values in seconds for
observations
>>>>>> // to retain.  The valid time window for retaining observations
is
>>>>>> // defined in reference to the observation time.  So
observations with
>>>>>> // a valid time falling in the window [obs_time+beg_ds,
>> obs_time+end_ds]
>>>>>> // will be retained.
>>>>>> //
>>>>>> beg_ds = -1800;
>>>>>> end_ds =  1800;
>>>>>>
>>>>>> //
>>>>>> // Specify the name of a single grid to be used in masking the
data.
>>>>>> // An empty string indicates that no grid should be used.  The
>> standard
>>>>>> // NCEP grids are named "GNNN" where NNN indicates the three
digit
>> grid
>>>>>> number.
>>>>>> //
>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>>>>>> //
>>>>>> // e.g. mask_grid = "G212";
>>>>>> //
>>>>>> mask_grid = "G212";
>>>>>>
>>>>>> //
>>>>>> // Specify a single ASCII file containing a lat/lon polygon.
>>>>>> // Latitude in degrees north and longitude in degrees east.
>>>>>> // By default, the first and last polygon points are connected.
>>>>>> //
>>>>>> // The lat/lon polygon file should contain a name for the
polygon
>>>>>> // followed by a space-separated list of lat/lon points:
>>>>>> //    "name lat1 lon1 lat2 lon2... latn lonn"
>>>>>> //
>>>>>> // MET_BASE may be used in the path for the lat/lon polygon
file.
>>>>>> //
>>>>>> // e.g. mask_poly = "MET_BASE/data/poly/EAST.poly";
>>>>>> //
>>>>>> mask_poly = "";
>>>>>>
>>>>>> //
>>>>>> // Beginning and ending elevation values in meters for
observations
>>>>>> // to retain.
>>>>>> //
>>>>>> beg_elev = -1000;
>>>>>> end_elev = 100000;
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of PrepBufr report type
values to
>>>> retain.
>>>>>> // An empty list indicates that all should be retained.
>>>>>> //
>>>>>> //
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_4.htm
>>>>>> //
>>>>>> // e.g. pb_report_type[] = [ 120, 133 ];
>>>>>> //
>>>>>> pb_report_type[] = [];
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of input report type values
to
>> retain.
>>>>>> // An empty list indicates that all should be retained.
>>>>>> //
>>>>>> //
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_6.htm
>>>>>> //
>>>>>> // e.g. in_report_type[] = [ 11, 22, 23 ];
>>>>>> //
>>>>>> in_report_type[] = [];
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of instrument type values to
retain.
>>>>>> // An empty list indicates that all should be retained.
>>>>>> //
>>>>>> // e.g. instrument_type[] = [ 52, 87 ];
>>>>>> //
>>>>>> instrument_type[] = [];
>>>>>>
>>>>>> //
>>>>>> // Beginning and ending vertical levels to retain.
>>>>>> //
>>>>>> beg_level = 1;
>>>>>> end_level = 255;
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of strings containing grib
codes or
>>>>>> // corresponding grib code abbreviations to retain or be
derived from
>>>>>> // the available observations.
>>>>>> //
>>>>>> // Grib Codes to be RETAINED:
>>>>>> //    SPFH or 51 for Specific Humidity in kg/kg
>>>>>> //    TMP  or 11 for Temperature in K
>>>>>> //    HGT  or 7  for Height in meters
>>>>>> //    UGRD or 33 for the East-West component of the wind in m/s
>>>>>> //    VGRD or 34 for the North-South component of the wind in
m/s
>>>>>> //
>>>>>> // Grib Codes to be DERIVED:
>>>>>> //    DPT   or 17 for Dewpoint Temperature in K
>>>>>> //    WIND  or 32 for Wind Speed in m/s
>>>>>> //    RH    or 52 for Relative Humidity in %
>>>>>> //    MIXR  or 53 for Humidity Mixing Ratio in kg/kg
>>>>>> //    PRMSL or  2 for Pressure Reduced to Mean Sea Level in Pa
>>>>>> //
>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>>> //
>>>>>> // e.g. obs_grib_code[] = [ "TMP", "UGRD", "VGRD", "WIND" ];
>>>>>> //
>>>>>> obs_grib_code[] = [ "SPFH", "TMP",  "HGT",  "UGRD", "VGRD",
>>>>>>                    "DPT",  "WIND", "RH",   "MIXR" ];
>>>>>>
>>>>>> //
>>>>>> // Quality mark threshold to indicate which observations to
retain.
>>>>>> // Observations with a quality mark equal to or LESS THAN this
>> threshold
>>>>>> // will be retained, while observations with a quality mark
GREATER
>> THAN
>>>>>> // this threshold will be discarded.
>>>>>> //
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_7.htm
>>>>>> //
>>>>>> quality_mark_thresh = 2;
>>>>>>
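As a concrete reading of the threshold above: with quality_mark_thresh = 2, observations carrying quality marks 0, 1, or 2 are retained, while anything marked 3 or higher is discarded; raising the value admits more loosely quality-controlled reports.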
>>>>>> //
>>>>>> // Flag to indicate whether observations should be drawn from
the top
>>>>>> // of the event stack (most quality controlled) or the bottom
of the
>>>>>> // event stack (most raw).  A value of 1 indicates that the top
of the
>>>>>> // event stack should be used while a value of zero indicates
that the
>>>>>> // bottom should be used.
>>>>>> //
>>>>>> event_stack_flag = 1;
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of data level category values to retain,
>>>>>> // where a value of:
>>>>>> //    0 = Surface level (mass reports only)
>>>>>> //    1 = Mandatory level (upper-air profile reports)
>>>>>> //    2 = Significant temperature level (upper-air profile
reports)
>>>>>> //    2 = Significant temperature and winds-by-pressure level
>>>>>> //        (future combined mass and wind upper-air reports)
>>>>>> //    3 = Winds-by-pressure level (upper-air profile reports)
>>>>>> //    4 = Winds-by-height level (upper-air profile reports)
>>>>>> //    5 = Tropopause level (upper-air profile reports)
>>>>>> //    6 = Reports on a single level
>>>>>> //        (e.g., aircraft, satellite-wind, surface wind,
>>>>>> //         precipitable water retrievals, etc.)
>>>>>> //    7 = Auxiliary levels generated via interpolation from
spanning
>>>> levels
>>>>>> //        (upper-air profile reports)
>>>>>> // An empty list indicates that all should be retained.
>>>>>> //
>>>>>> //
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>>>> //
>>>>>> // e.g. level_category[] = [ 0, 1 ];
>>>>>> //
>>>>>> level_category[] = [];
>>>>>>
>>>>>> //
>>>>>> // Directory where temp files should be written by the PB2NC
tool
>>>>>> //
>>>>>> tmp_dir = "/tmp";
>>>>>>
>>>>>> //
>>>>>> // Indicate a version number for the contents of this
configuration
>>>> file.
>>>>>> // The value should generally not be modified.
>>>>>> //
>>>>>> version = "V3.0";
>>>>>>
>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>> //
>>>>>> // Default point_stat configuration file
>>>>>> //
>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>>
>>>>>> //
>>>>>> // Specify a name to designate the model being verified.  This
name
>>>> will be
>>>>>> // written to the second column of the ASCII output generated.
>>>>>> //
>>>>>> model = "WRF";
>>>>>>
>>>>>> //
>>>>>> // Beginning and ending time offset values in seconds for
observations
>>>>>> // to be used.  These time offsets are defined in reference to
the
>>>>>> // forecast valid time, v.  Observations with a valid time
falling in
>>>> the
>>>>>> // window [v+beg_ds, v+end_ds] will be used.
>>>>>> // These selections are overridden by the command line
arguments
>>>>>> // -obs_valid_beg and -obs_valid_end.
>>>>>> //
>>>>>> beg_ds = -1800;
>>>>>> end_ds =  1800;
>>>>>>
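To make the window concrete: with beg_ds = -1800 and end_ds = 1800, a forecast valid at 12:00 UTC is matched against observations stamped between 11:30 UTC and 12:30 UTC.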
>>>>>> //
>>>>>> // Specify a comma-separated list of fields to be verified.
The
>>>> forecast
>>>>>> and
>>>>>> // observation fields may be specified separately.  If the
obs_field
>>>>>> parameter
>>>>>> // is left blank, it will default to the contents of
fcst_field.
>>>>>> //
>>>>>> // Each field is specified as a GRIB code or abbreviation
followed by
>> an
>>>>>> // accumulation or vertical level indicator for GRIB files or
as a
>>>>>> variable name
>>>>>> // followed by a list of dimensions for NetCDF files output
from
>>>> p_interp
>>>>>> or MET.
>>>>>> //
>>>>>> // Specifying verification fields for GRIB files:
>>>>>> //    GC/ANNN for accumulation interval NNN
>>>>>> //    GC/ZNNN for vertical level NNN
>>>>>> //    GC/ZNNN-NNN for a range of vertical levels (MSL or AGL)
>>>>>> //    GC/PNNN for pressure level NNN in hPa
>>>>>> //    GC/PNNN-NNN for a range of pressure levels in hPa
>>>>>> //    GC/LNNN for a generic level type
>>>>>> //    GC/RNNN for a specific GRIB record number
>>>>>> //    Where GC is the number of or abbreviation for the grib
code
>>>>>> //    to be verified.
>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>>> //
>>>>>> // Specifying verification fields for NetCDF files:
>>>>>> //    var_name(i,...,j,*,*) for a single field
>>>>>> //    var_name(i-j,*,*) for a range of fields
>>>>>> //    Where var_name is the name of the NetCDF variable,
>>>>>> //    and i,...,j specifies fixed dimension values,
>>>>>> //    and i-j specifies a range of values for a single
dimension,
>>>>>> //    and *,* specifies the two dimensions for the gridded
field.
>>>>>> //
>>>>>> //    NOTE: To verify winds as vectors rather than scalars,
>>>>>> //          specify UGRD (or 33) followed by VGRD (or 34) with
the
>>>>>> //          same level values.
>>>>>> //
>>>>>> //    NOTE: To process a probability field, add "/PROB", such
as
>>>>>> "POP/Z0/PROB".
>>>>>> //
>>>>>> // e.g. fcst_field[] = [ "SPFH/P500", "TMP/P500" ]; for a GRIB
input
>>>>>> // e.g. fcst_field[] = [ "QVAPOR(0,5,*,*)", "TT(0,5,*,*)" ];
for
>> NetCDF
>>>>>> input
>>>>>> //
>>>>>>
>>>>>> fcst_field[] = [ "TT(0,0,*,*)" ];
>>>>>> obs_field[]  = [ "TMP/Z2" ];
>>>>>>
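Since the original question concerned 10-meter winds, a minimal sketch of that comparison follows. The forecast variable names U10 and V10 (and their three dimensions) are only an assumption about what the p_interp output contains, so confirm them against the file header; the observation entries follow the GRIB-style convention used above, and listing UGRD immediately before VGRD at the same level allows the winds to be verified as vectors per the note above.

   // Sketch only: U10/V10 names and dimensionality are assumed, not confirmed.
   fcst_field[] = [ "U10(0,*,*)", "V10(0,*,*)" ];
   obs_field[]  = [ "UGRD/Z10",   "VGRD/Z10" ];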
>>>>>> //
>>>>>> // Specify a comma-separated list of groups of thresholds to be
>> applied
>>>> to
>>>>>> the
>>>>>> // fields listed above.  Thresholds for the forecast and
observation
>>>> fields
>>>>>> // may be specified separately.  If the obs_thresh parameter is
left
>>>> blank,
>>>>>> // it will default to the contents of fcst_thresh.
>>>>>> //
>>>>>> // At least one threshold must be provided for each field
listed
>> above.
>>>>>>  The
>>>>>> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as
>>>> must
>>>>>> // lengths of the "obs_field" and "obs_thresh" arrays.  To
apply
>>>> multiple
>>>>>> // thresholds to a field, separate the threshold values with a
space.
>>>>>> //
>>>>>> // Each threshold must be preceded by a two letter indicator
for the
>>>> type
>>>>>> of
>>>>>> // thresholding to be performed:
>>>>>> //    'lt' for less than     'le' for less than or equal to
>>>>>> //    'eq' for equal to      'ne' for not equal to
>>>>>> //    'gt' for greater than  'ge' for greater than or equal to
>>>>>> //
>>>>>> // NOTE: Thresholds for probabilities must begin with 0.0, end
with
>> 1.0,
>>>>>> //       and be preceded by "ge".
>>>>>> //
>>>>>> // e.g. fcst_thresh[] = [ "gt80", "gt273" ];
>>>>>> //
>>>>>> fcst_thresh[] = [ "le273" ];
>>>>>> obs_thresh[]  = [];
>>>>>>
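A small sketch of the length-matching rule described above, with arbitrary example threshold values: each field gets exactly one entry in fcst_thresh, and several thresholds for the same field sit inside a single space-separated string.

   fcst_field[]  = [ "TT(0,0,*,*)", "TT(0,5,*,*)" ];
   fcst_thresh[] = [ "le273",       "le263 le253" ];

Leaving obs_thresh[] empty makes it default to the contents of fcst_thresh, as noted above.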
>>>>>> //
>>>>>> // Specify a comma-separated list of thresholds to be used when
>>>> computing
>>>>>> // VL1L2 and VAL1L2 partial sums for winds.  The thresholds are
>> applied
>>>> to
>>>>>> the
>>>>>> // wind speed values derived from each U/V pair.  Only those
U/V pairs
>>>>>> which meet
>>>>>> // the wind speed threshold criteria are retained.  If the
>>>> obs_wind_thresh
>>>>>> // parameter is left blank, it will default to the contents of
>>>>>> fcst_wind_thresh.
>>>>>> //
>>>>>> // To apply multiple wind speed thresholds, separate the
threshold
>>>> values
>>>>>> with a
>>>>>> // space.  Use "NA" to indicate that no wind speed threshold
should be
>>>>>> applied.
>>>>>> //
>>>>>> // Each threshold must be preceded by a two letter indicator
for the
>>>> type
>>>>>> of
>>>>>> // thresholding to be performed:
>>>>>> //    'lt' for less than     'le' for less than or equal to
>>>>>> //    'eq' for equal to      'ne' for not equal to
>>>>>> //    'gt' for greater than  'ge' for greater than or equal to
>>>>>> //    'NA' for no threshold
>>>>>> //
>>>>>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
>>>>>> //
>>>>>> fcst_wind_thresh[] = [ "NA" ];
>>>>>> obs_wind_thresh[]  = [];
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of PrepBufr message types
with which
>>>>>> // to perform the verification.  Statistics will be computed
>> separately
>>>>>> // for each message type specified.  At least one PrepBufr
message
>> type
>>>>>> // must be provided.
>>>>>> // List of valid message types:
>>>>>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
>>>>>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
>>>>>> //    SFCSHP SPSSMI SYNDAT VADWND
>>>>>> //    ANYAIR (= AIRCAR, AIRCFT)
>>>>>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
>>>>>> //    ONLYSF (= ADPSFC, SFCSHP)
>>>>>> //
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>>>> //
>>>>>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
>>>>>> //
>>>>>> message_type[] = [ "ADPSFC" ];
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of grids to be used in
masking the
>>>> data
>>>>>> over
>>>>>> // which to perform scoring.  An empty list indicates that no
masking
>>>> grid
>>>>>> // should be performed.  The standard NCEP grids are named
"GNNN"
>> where
>>>> NNN
>>>>>> // indicates the three digit grid number.  Enter "FULL" to
score over
>>>> the
>>>>>> // entire domain.
>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>>>>>> //
>>>>>> // e.g. mask_grid[] = [ "FULL" ];
>>>>>> //
>>>>>> mask_grid[] = [ "FULL" ];
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of masking regions to be
applied.
>>>>>> // An empty list indicates that no additional masks should be
used.
>>>>>> // The masking regions may be defined in one of 4 ways:
>>>>>> //
>>>>>> // (1) An ASCII file containing a lat/lon polygon.
>>>>>> //     Latitude in degrees north and longitude in degrees east.
>>>>>> //     By default, the first and last polygon points are
connected.
>>>>>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
>>>>>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
>>>>>> //
>>>>>> // (2) The NetCDF output of the gen_poly_mask tool.
>>>>>> //
>>>>>> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
>>>>>> //     to be used, and optionally, a threshold to be applied to
the
>>>> field.
>>>>>> //     e.g. "sample.nc var_name gt0.00"
>>>>>> //
>>>>>> // (4) A GRIB data file, followed by a description of the field
>>>>>> //     to be used, and optionally, a threshold to be applied to
the
>>>> field.
>>>>>> //     e.g. "sample.grb APCP/A3 gt0.00"
>>>>>> //
>>>>>> // Any NetCDF or GRIB file used must have the same grid
dimensions as
>>>> the
>>>>>> // data being verified.
>>>>>> //
>>>>>> // MET_BASE may be used in the path for the files above.
>>>>>> //
>>>>>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
>>>>>> //                      "poly_mask.ncf",
>>>>>> //                      "sample.nc APCP",
>>>>>> //                      "sample.grb HGT/Z0 gt100.0" ];
>>>>>> //
>>>>>> mask_poly[] = [];
>>>>>>
>>>>>> //
>>>>>> // Specify the name of an ASCII file containing a space-
separated list
>>>> of
>>>>>> // station ID's at which to perform verification.  Each station
ID
>>>>>> specified
>>>>>> // is treated as an individual masking region.
>>>>>> //
>>>>>> // An empty list file name indicates that no station ID masks
should
>> be
>>>>>> used.
>>>>>> //
>>>>>> // MET_BASE may be used in the path for the station ID mask
file name.
>>>>>> //
>>>>>> // e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
>>>>>> //
>>>>>> mask_sid = "";
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of values for alpha to be
used when
>>>>>> computing
>>>>>> // confidence intervals.  Values of alpha must be between 0 and
1.
>>>>>> //
>>>>>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
>>>>>> //
>>>>>> ci_alpha[] = [ 0.05 ];
>>>>>>
>>>>>> //
>>>>>> // Specify the method to be used for computing bootstrap
confidence
>>>>>> intervals.
>>>>>> // The value for this is interpreted as follows:
>>>>>> //    (0) Use the BCa interval method (computationally
intensive)
>>>>>> //    (1) Use the percentile interval method
>>>>>> //
>>>>>> boot_interval = 1;
>>>>>>
>>>>>> //
>>>>>> // Specify a proportion between 0 and 1 to define the replicate
sample
>>>> size
>>>>>> // to be used when computing percentile intervals.  The
replicate
>> sample
>>>>>> // size is set to boot_rep_prop * n, where n is the number of
raw data
>>>>>> points.
>>>>>> //
>>>>>> // e.g. boot_rep_prop = 0.80;
>>>>>> //
>>>>>> boot_rep_prop = 1.0;
>>>>>>
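For example, if a run produced 500 matched pairs and boot_rep_prop were set to 0.80, each bootstrap replicate would resample 0.80 * 500 = 400 pairs; the default of 1.0 resamples all 500 each time.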
>>>>>> //
>>>>>> // Specify the number of times each set of matched pair data should be
>>>>>> // resampled when computing bootstrap confidence intervals.  A value of
>>>>>> // zero disables the computation of bootstrap confidence intervals.
>>>>>> //
>>>>>> // e.g. n_boot_rep = 1000;
>>>>>> //
>>>>>> n_boot_rep = 1000;
>>>>>>
>>>>>> //
>>>>>> // Specify the name of the random number generator to be used.
See
>> the
>>>> MET
>>>>>> // Users Guide for a list of possible random number generators.
>>>>>> //
>>>>>> boot_rng = "mt19937";
>>>>>>
>>>>>> //
>>>>>> // Specify the seed value to be used when computing bootstrap
>> confidence
>>>>>> // intervals.  If left unspecified, the seed will change for each run
>>>>>> // and the computed bootstrap confidence intervals will not be
>>>>>> // reproducible.
>>>>>> //
>>>>>> boot_seed = "";
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of interpolation method(s) to
be
>> used
>>>>>> // for comparing the forecast grid to the observation points.
String
>>>>>> values
>>>>>> // are interpreted as follows:
>>>>>> //    MIN     = Minimum in the neighborhood
>>>>>> //    MAX     = Maximum in the neighborhood
>>>>>> //    MEDIAN  = Median in the neighborhood
>>>>>> //    UW_MEAN = Unweighted mean in the neighborhood
>>>>>> //    DW_MEAN = Distance-weighted mean in the neighborhood
>>>>>> //    LS_FIT  = Least-squares fit in the neighborhood
>>>>>> //    BILIN   = Bilinear interpolation using the 4 closest
points
>>>>>> //
>>>>>> // In all cases, vertical interpolation is performed in the
natural
>> log
>>>>>> // of pressure of the levels above and below the observation.
>>>>>> //
>>>>>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
>>>>>> //
>>>>>> interp_method[] = [ "MEDIAN", "DW_MEAN" ];
>>>>>>
>>>>>> //
>>>>>> // Specify a comma-separated list of box widths to be used by
the
>>>>>> // interpolation techniques listed above.  A value of 1
indicates that
>>>>>> // the nearest neighbor approach should be used.  For a value
of n
>>>>>> // greater than 1, the n*n grid points closest to the
observation
>> define
>>>>>> // the neighborhood.
>>>>>> //
>>>>>> // e.g. interp_width = [ 1, 3, 5 ];
>>>>>> //
>>>>>> interp_width[] = [ 1, 3 ];
>>>>>>
>>>>>> //
>>>>>> // When interpolating, compute a ratio of the number of valid
data
>>>> points
>>>>>> // to the total number of points in the neighborhood.  If that
ratio
>> is
>>>>>> // less than this threshold, do not include the observation.
This
>>>>>> // threshold must be between 0 and 1.  Setting this threshold
to 1
>> will
>>>>>> // require that each observation be surrounded by n*n valid
forecast
>>>>>> // points.
>>>>>> //
>>>>>> // e.g. interp_thresh = 1.0;
>>>>>> //
>>>>>> interp_thresh = 1.0;
>>>>>>
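Taken together with the interp_width values above: a width of 3 defines a 3 x 3 = 9-point neighborhood around each observation, and interp_thresh = 1.0 requires all 9 forecast points to be valid before the pair is used; lowering it to 0.5, for instance, would accept the observation once at least 5 of the 9 points are valid.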
>>>>>> //
>>>>>> // Specify flags to indicate the type of data to be output:
>>>>>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
>>>>>> //           Total (TOTAL),
>>>>>> //           Forecast Rate (F_RATE),
>>>>>> //           Hit Rate (H_RATE),
>>>>>> //           Observation Rate (O_RATE)
>>>>>> //
>>>>>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
>>>>>> //           Total (TOTAL),
>>>>>> //           Forecast Yes and Observation Yes Count (FY_OY),
>>>>>> //           Forecast Yes and Observation No Count (FY_ON),
>>>>>> //           Forecast No and Observation Yes Count (FN_OY),
>>>>>> //           Forecast No and Observation No Count (FN_ON)
>>>>>> //
>>>>>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
>>>>>> //           Total (TOTAL),
>>>>>> //           Base Rate (BASER),
>>>>>> //           Forecast Mean (FMEAN),
>>>>>> //           Accuracy (ACC),
>>>>>> //           Frequency Bias (FBIAS),
>>>>>> //           Probability of Detecting Yes (PODY),
>>>>>> //           Probability of Detecting No (PODN),
>>>>>> //           Probability of False Detection (POFD),
>>>>>> //           False Alarm Ratio (FAR),
>>>>>> //           Critical Success Index (CSI),
>>>>>> //           Gilbert Skill Score (GSS),
>>>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>>>> //           Heidke Skill Score (HSS),
>>>>>> //           Odds Ratio (ODDS),
>>>>>> //           NOTE: All statistics listed above contain
parametric
>> and/or
>>>>>> //                 non-parametric confidence interval limits.
>>>>>> //
>>>>>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency
>> Table
>>>>>> Counts:
>>>>>> //           Total (TOTAL),
>>>>>> //           Number of Categories (N_CAT),
>>>>>> //           Contingency Table Count columns repeated
N_CAT*N_CAT
>> times
>>>>>> //
>>>>>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency
>> Table
>>>>>> Scores:
>>>>>> //           Total (TOTAL),
>>>>>> //           Number of Categories (N_CAT),
>>>>>> //           Accuracy (ACC),
>>>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>>>> //           Heidke Skill Score (HSS),
>>>>>> //           Gerrity Score (GER),
>>>>>> //           NOTE: All statistics listed above contain
parametric
>> and/or
>>>>>> //                 non-parametric confidence interval limits.
>>>>>> //
>>>>>> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
>>>>>> //           Total (TOTAL),
>>>>>> //           Forecast Mean (FBAR),
>>>>>> //           Forecast Standard Deviation (FSTDEV),
>>>>>> //           Observation Mean (OBAR),
>>>>>> //           Observation Standard Deviation (OSTDEV),
>>>>>> //           Pearson's Correlation Coefficient (PR_CORR),
>>>>>> //           Spearman's Rank Correlation Coefficient (SP_CORR),
>>>>>> //           Kendall Tau Rank Correlation Coefficient
(KT_CORR),
>>>>>> //           Number of ranks compared (RANKS),
>>>>>> //           Number of tied ranks in the forecast field
(FRANK_TIES),
>>>>>> //           Number of tied ranks in the observation field
>> (ORANK_TIES),
>>>>>> //           Mean Error (ME),
>>>>>> //           Standard Deviation of the Error (ESTDEV),
>>>>>> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
>>>>>> //           Mean Absolute Error (MAE),
>>>>>> //           Mean Squared Error (MSE),
>>>>>> //           Bias-Corrected Mean Squared Error (BCMSE),
>>>>>> //           Root Mean Squared Error (RMSE),
>>>>>> //           Percentiles of the Error (E10, E25, E50, E75, E90)
>>>>>> //           NOTE: Most statistics listed above contain
parametric
>>>> and/or
>>>>>> //                 non-parametric confidence interval limits.
>>>>>> //
>>>>>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
>>>>>> //           Total (TOTAL),
>>>>>> //           Forecast Mean (FBAR),
>>>>>> //              = mean(f)
>>>>>> //           Observation Mean (OBAR),
>>>>>> //              = mean(o)
>>>>>> //           Forecast*Observation Product Mean (FOBAR),
>>>>>> //              = mean(f*o)
>>>>>> //           Forecast Squared Mean (FFBAR),
>>>>>> //              = mean(f^2)
>>>>>> //           Observation Squared Mean (OOBAR)
>>>>>> //              = mean(o^2)
>>>>>> //
>>>>>> //    (8) STAT and SAL1L2 Text Files, Scalar Anomaly Partial
Sums:
>>>>>> //           Total (TOTAL),
>>>>>> //           Forecast Anomaly Mean (FABAR),
>>>>>> //              = mean(f-c)
>>>>>> //           Observation Anomaly Mean (OABAR),
>>>>>> //              = mean(o-c)
>>>>>> //           Product of Forecast and Observation Anomalies Mean
>>>> (FOABAR),
>>>>>> //              = mean((f-c)*(o-c))
>>>>>> //           Forecast Anomaly Squared Mean (FFABAR),
>>>>>> //              = mean((f-c)^2)
>>>>>> //           Observation Anomaly Squared Mean (OOABAR)
>>>>>> //              = mean((o-c)^2)
>>>>>> //
>>>>>> //    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
>>>>>> //           Total (TOTAL),
>>>>>> //           U-Forecast Mean (UFBAR),
>>>>>> //              = mean(uf)
>>>>>> //           V-Forecast Mean (VFBAR),
>>>>>> //              = mean(vf)
>>>>>> //           U-Observation Mean (UOBAR),
>>>>>> //              = mean(uo)
>>>>>> //           V-Observation Mean (VOBAR),
>>>>>> //              = mean(vo)
>>>>>> //           U-Product Plus V-Product (UVFOBAR),
>>>>>> //              = mean(uf*uo+vf*vo)
>>>>>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
>>>>>> //              = mean(uf^2+vf^2)
>>>>>> //           U-Observation Squared Plus V-Observation Squared
>> (UVOOBAR)
>>>>>> //              = mean(uo^2+vo^2)
>>>>>> //
>>>>>> //   (10) STAT and VAL1L2 Text Files, Vector Anomaly Partial
Sums:
>>>>>> //           U-Forecast Anomaly Mean (UFABAR),
>>>>>> //              = mean(uf-uc)
>>>>>> //           V-Forecast Anomaly Mean (VFABAR),
>>>>>> //              = mean(vf-vc)
>>>>>> //           U-Observation Anomaly Mean (UOABAR),
>>>>>> //              = mean(uo-uc)
>>>>>> //           V-Observation Anomaly Mean (VOABAR),
>>>>>> //              = mean(vo-vc)
>>>>>> //           U-Anomaly Product Plus V-Anomaly Product
(UVFOABAR),
>>>>>> //              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
>>>>>> //           U-Forecast Anomaly Squared Plus V-Forecast Anomaly
>> Squared
>>>>>> (UVFFABAR),
>>>>>> //              = mean((uf-uc)^2+(vf-vc)^2)
>>>>>> //           U-Observation Anomaly Squared Plus V-Observation
Anomaly
>>>>>> Squared (UVOOABAR)
>>>>>> //              = mean((uo-uc)^2+(vo-vc)^2)
>>>>>> //
>>>>>> //   (11) STAT and PCT Text Files, Nx2 Probability Contingency
Table
>>>>>> Counts:
>>>>>> //           Total (TOTAL),
>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>> //           Probability Threshold Value (THRESH_i),
>>>>>> //           Row Observation Yes Count (OY_i),
>>>>>> //           Row Observation No Count (ON_i),
>>>>>> //           NOTE: Previous 3 columns repeated for each row in
the
>>>> table.
>>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>>> //
>>>>>> //   (12) STAT and PSTD Text Files, Nx2 Probability Contingency
Table
>>>>>> Scores:
>>>>>> //           Total (TOTAL),
>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>> //           Base Rate (BASER) with confidence interval limits,
>>>>>> //           Reliability (RELIABILITY),
>>>>>> //           Resolution (RESOLUTION),
>>>>>> //           Uncertainty (UNCERTAINTY),
>>>>>> //           Area Under the ROC Curve (ROC_AUC),
>>>>>> //           Brier Score (BRIER) with confidence interval
limits,
>>>>>> //           Probability Threshold Value (THRESH_i)
>>>>>> //           NOTE: Previous column repeated for each
probability
>>>> threshold.
>>>>>> //
>>>>>> //   (13) STAT and PJC Text Files, Joint/Continuous Statistics
of
>>>>>> //                                 Probabilistic Variables:
>>>>>> //           Total (TOTAL),
>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>> //           Probability Threshold Value (THRESH_i),
>>>>>> //           Observation Yes Count Divided by Total (OY_TP_i),
>>>>>> //           Observation No Count Divided by Total (ON_TP_i),
>>>>>> //           Calibration (CALIBRATION_i),
>>>>>> //           Refinement (REFINEMENT_i),
>>>>>> //           Likelihood (LIKELIHOOD_i),
>>>>>> //           Base Rate (BASER_i),
>>>>>> //           NOTE: Previous 7 columns repeated for each row in
the
>>>> table.
>>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>>> //
>>>>>> //   (14) STAT and PRC Text Files, ROC Curve Points for
>>>>>> //                                 Probabilistic Variables:
>>>>>> //           Total (TOTAL),
>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>> //           Probability Threshold Value (THRESH_i),
>>>>>> //           Probability of Detecting Yes (PODY_i),
>>>>>> //           Probability of False Detection (POFD_i),
>>>>>> //           NOTE: Previous 3 columns repeated for each row in
the
>>>> table.
>>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>>> //
>>>>>> //   (15) STAT and MPR Text Files, Matched Pair Data:
>>>>>> //           Total (TOTAL),
>>>>>> //           Index (INDEX),
>>>>>> //           Observation Station ID (OBS_SID),
>>>>>> //           Observation Latitude (OBS_LAT),
>>>>>> //           Observation Longitude (OBS_LON),
>>>>>> //           Observation Level (OBS_LVL),
>>>>>> //           Observation Elevation (OBS_ELV),
>>>>>> //           Forecast Value (FCST),
>>>>>> //           Observation Value (OBS),
>>>>>> //           Climatological Value (CLIMO)
>>>>>> //
>>>>>> //   In the expressions above, f are forecast values, o are
observed
>>>>>> values,
>>>>>> //   and c are climatological values.
>>>>>> //
>>>>>> // Values for these flags are interpreted as follows:
>>>>>> //    (0) Do not generate output of this type
>>>>>> //    (1) Write output to a STAT file
>>>>>> //    (2) Write output to a STAT file and a text file
>>>>>> //
>>>>>> output_flag[] = [ 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1 ];
>>>>>>
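Reading the array above against the fifteen line types enumerated in the comments (FHO, CTC, CTS, MCTC, MCTS, CNT, SL1L2, SAL1L2, VL1L2, VAL1L2, PCT, PSTD, PJC, PRC, MPR, in that order), this configuration writes STAT output for FHO, CTC, CTS, CNT, SL1L2, SAL1L2, VL1L2, VAL1L2, and MPR and disables the multi-category and probabilistic types. The same array with its positions labeled, purely for readability:

   // index:       1    2    3    4     5     6    7      8       9      10     11   12    13   14   15
   // line type:   FHO  CTC  CTS  MCTC  MCTS  CNT  SL1L2  SAL1L2  VL1L2  VAL1L2  PCT  PSTD  PJC  PRC  MPR
   // value above: 1    1    1    0     0     1    1      1       1      1       0    0     0    0    1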
>>>>>> //
>>>>>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
>>>> Correlation
>>>>>> // Coefficients should be computed.  Computing them over large
>> datasets
>>>> is
>>>>>> // computationally intensive and slows down the runtime
execution
>>>>>> significantly.
>>>>>> //    (0) Do not compute these correlation coefficients
>>>>>> //    (1) Compute these correlation coefficients
>>>>>> //
>>>>>> rank_corr_flag = 1;
>>>>>>
>>>>>> //
>>>>>> // Specify the GRIB Table 2 parameter table version number to
be used
>>>>>> // for interpreting GRIB codes.
>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>>> //
>>>>>> grib_ptv = 2;
>>>>>>
>>>>>> //
>>>>>> // Directory where temporary files should be written.
>>>>>> //
>>>>>> tmp_dir = "/tmp";
>>>>>>
>>>>>> //
>>>>>> // Prefix to be used for the output file names.
>>>>>> //
>>>>>> output_prefix = "";
>>>>>>
>>>>>> //
>>>>>> // Indicate a version number for the contents of this
configuration
>>>> file.
>>>>>> // The value should generally not be modified.
>>>>>> //
>>>>>> version = "V3.0.1";
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>>>
>>
>>
>>


------------------------------------------------
Subject: Re: METV3 Issue
From: Tim Melino
Time: Fri Dec 09 14:12:43 2011

Thanks, Paul, for all your help on this. I installed everything with
gfortran and gcc, which seems to work. I will switch back to Intel when
the issue is resolved.

- Tim

On Fri, Dec 9, 2011 at 4:04 PM, Paul Oldenburg via RT
<met_help at ucar.edu>wrote:

> Tim,
>
> I was able to reproduce the error you reported when running
METv3.0.1
> compiled with intel compilers.  This may take us a
> little time to sort out.  Thanks for reporting this issue, and we'll
let
> you know when we have a solution for you.
>
> Thanks,
>
> Paul
>
>
> On 12/09/2011 12:49 PM, Tim Melino via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
> >
> > Ok,
> > Here are some of the specifications and I will install the latest
patch
> > now.
> >
> > netCDF version 4.1.1
> > INTEL-11.1.072 Compilers
> >
> >
> > - Tim
> >
> >
> > On Fri, Dec 9, 2011 at 1:49 PM, Paul Oldenburg via RT
<met_help at ucar.edu
> >wrote:
> >
> >> Tim,
> >>
> >> I'm still not able to reproduce the error that you reported.
Have you
> >> applied all of the latest patches to METv3.0.1?
> >> The latest patch tarball and instructions on how to apply it can
be
> found
> >> here:
> >>
>
http://www.dtcenter.org/met/users/support/known_issues/METv3.0.1/index.php
> .
> >>  Can you tell me what version of NetCDF you
> >> linked MET against?  What family of compilers did you use to
compile MET
> >> (e.g. GNU/PGI/intel)?  I think we are down to a
> >> configuration/environment problem at this point.  Sorry for the
trouble.
> >>
> >> Paul
> >>
> >>
> >> On 12/09/2011 11:38 AM, Tim Melino via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
> >>>
> >>> Paul,
> >>> I put everything into a tar file and uploaded it.
> >>>
> >>> - Tim
> >>>
> >>>
> >>> On Fri, Dec 9, 2011 at 11:38 AM, Paul Oldenburg via RT <
> >> met_help at ucar.edu>wrote:
> >>>
> >>>> Tim,
> >>>>
> >>>> We are not able to reproduce the error that you are reporting.
Are
> you
> >>>> using the same exact data and config files that
> >>>> you sent me and I tested with?  In any case, can you create a
tar
> >> archive
> >>>> of all the files involved in the point_stat
> >>>> command that throws the error and put it on the FTP site?  I
will need
> >> to
> >>>> be able to reproduce this error, otherwise it
> >>>> will be difficult for me to diagnose the problem.
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Paul
> >>>>
> >>>>
> >>>> On 12/09/2011 08:16 AM, Tim Melino via RT wrote:
> >>>>>
> >>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928
>
> >>>>>
> >>>>> Paul,
> >>>>> I tried running again using your configuration settings, but
while
> >>>> running
> >>>>> pointstat I am still receiving errors. The error comes up as
the
> >>>> following
> >>>>> ....
> >>>>>
> >>>>> [wind at conus1 METv3.0.1]$ $MET_BASE/bin/point_stat wrf.nc
> >>>>> ndas.t12z.nc PointStatConfig -outdir . -v 99
> >>>>> GSL_RNG_TYPE=mt19937
> >>>>> GSL_RNG_SEED=18446744071864509006
> >>>>> Forecast File: wrf.nc
> >>>>> Climatology File: none
> >>>>> Configuration File: PointStatConfig
> >>>>> Observation File: ndas.t12z.nc
> >>>>>
> >>>>>
> >>>>
> >>
>
--------------------------------------------------------------------------------
> >>>>>
> >>>>> Reading records for TT(0,0,*,*).
> >>>>>
> >>>>>
> >>>>>   LongArray::operator[](int) -> range check error ... 4
> >>>>>
> >>>>>
> >>>>>
> >>>>> - Tim
> >>>>>
> >>>>>
> >>>>> On Thu, Dec 8, 2011 at 5:10 PM, Paul Oldenburg via RT <
> >> met_help at ucar.edu
> >>>>> wrote:
> >>>>>
> >>>>>> Tim,
> >>>>>>
> >>>>>> I ran the following pb2nc and point_stat commands using the
attached
> >>>>>> config files to generate point verification data
> >>>>>> with your PrepBUFR obs and p_interp model data.  Note that
MET_BASE
> is
> >>>> set
> >>>>>> to the base folder of an instance of
> >>>>>> METv3.0.1.  I pulled both config files, with slight
modifications,
> >> from
> >>>>>> $MET_BASE/scripts/config.
> >>>>>>
> >>>>>> $MET_BASE/bin/pb2nc
> >> ndas.t12z.prepbufr.tm12.nr ndas.t12z.nc PB2NCConfig_G212 -v 99
> >>>>>>
> >>>>>> $MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc PointStatConfig
> -outdir
> >> .
> >>>> -v
> >>>>>> 99
> >>>>>>
> >>>>>> In PointStatConfig, you will see the following settings.  The
> >> fcst_field
> >>>>>> setting format is due to the fact that fields
> >>>>>> in wrf.nc are four dimensional, with the last two dimensions
being
> >> the
> >>>>>> spatial (x,y) dimensions.  The obs_field
> >>>>>> specifies surface temperature using a GRIB-style format,
because
> pb2nc
> >>>>>> indexes fields in its output by GRIB code.  You
> >>>>>> should follow a similar paradigm to verify additional fields
beyond
> >>>>>> temperature.
> >>>>>>
> >>>>>> fcst_field[] = [ "TT(0,0,*,*)" ];
> >>>>>> obs_field[]  = [ "TMP/Z2" ];
> >>>>>>
> >>>>>> fcst_thresh[] = [ "le273" ];
> >>>>>> obs_thresh[]  = [];
> >>>>>>
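As one hedged illustration of the paradigm Paul describes (not something supplied in the thread): an upper-air temperature comparison could pair the p_interp temperature at the index of the desired pressure level with a GRIB-style pressure-level observation entry. The index 5 below assumes the sixth value of the PLEV file's pressure dimension is 500 hPa, so check the file header, and the threshold values are arbitrary; upper-air matches would also likely require adding an upper-air type such as "ADPUPA" to message_type[].

   fcst_field[]  = [ "TT(0,0,*,*)", "TT(0,5,*,*)" ];
   obs_field[]   = [ "TMP/Z2",      "TMP/P500" ];
   fcst_thresh[] = [ "le273",       "le263" ];
   obs_thresh[]  = [];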
> >>>>>> If you have any questions or problems, please let me know.
> >>>>>>
> >>>>>> Good luck,
> >>>>>>
> >>>>>> Paul
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On 12/08/2011 02:28 PM, Tim Melino via RT wrote:
> >>>>>>>
> >>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
> >>>>>>>
> >>>>>>> I just put the file on the server that I have been using. As
far as
> >>>>>> running
> >>>>>>> the UPP software, that is not really possible at the moment.
I do
> not
> >>>>>> have
> >>>>>>> any of that software installed or configured as I have never
had a
> >>>> reason
> >>>>>>> to use it .
> >>>>>>>
> >>>>>>> - Tim
> >>>>>>>
> >>>>>>> On Thu, Dec 8, 2011 at 3:42 PM, Paul Oldenburg via RT <
> >>>> met_help at ucar.edu
> >>>>>>> wrote:
> >>>>>>>
> >>>>>>>> Tim,
> >>>>>>>>
> >>>>>>>> Can you please put the input PrepBUFR file that you pass to
pb2nc
> on
> >>>> the
> >>>>>>>> FTP site?  When I look at the contents of
> >>>>>>>> out.nc, it does not appear that there are any observations
in
> that
> >>>>>> file.
> >>>>>>>>  I would like to run pb2nc myself to see what
> >>>>>>>> is going on.
> >>>>>>>>
> >>>>>>>> I made an incorrect assumption in my earlier emails that
you were
> >>>> trying
> >>>>>>>> to verify model data in GRIB format.  Now that
> >>>>>>>> I have your data in hand, I see that it is p_interp output,
as you
> >>>>>>>> mentioned in your initial email.  MET support for
> >>>>>>>> p_interp is not as robust as for GRIB.  In particular,
> grid-relative
> >>>>>> wind
> >>>>>>>> directions in p_interp data files should not
> >>>>>>>> be compared to lat-long relative wind directions in the
PrepBUFR
> >> obs.
> >>>>>>>>  Would it be possible for you to run your WRF
> >>>>>>>> output through the Unified Post Processor (UPP -
> >>>>>>>> http://www.dtcenter.org/wrf-
nmm/users/overview/upp_overview.php)
> >>>>>>>> instead of or in addition to p_interp?  That would simplify
MET
> >>>>>>>> verification tasks.  Please let me know if you have any
> >>>>>>>> questions.
> >>>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>>
> >>>>>>>> Paul
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On 12/08/2011 11:07 AM, Tim Melino via RT wrote:
> >>>>>>>>>
> >>>>>>>>> Thu Dec 08 11:07:04 2011: Request 51928 was acted upon.
> >>>>>>>>> Transaction: Ticket created by tmelino at meso.com
> >>>>>>>>>        Queue: met_help
> >>>>>>>>>      Subject: Re: METV3 Issue
> >>>>>>>>>        Owner: Nobody
> >>>>>>>>>   Requestors: tmelino at meso.com
> >>>>>>>>>       Status: new
> >>>>>>>>>  Ticket <URL:
> >>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Ok,
> >>>>>>>>>
> >>>>>>>>> The data should be there now. With out.nc being the obs and
> >>>>>>>>> wrf.nc being the forecast
> >>>>>>>>>
> >>>>>>>>> - Tim
> >>>>>>>>>
> >>>>>>>>> On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino
<tmelino at meso.com>
> >> wrote:
> >>>>>>>>>
> >>>>>>>>>> Hi,
> >>>>>>>>>> I have recently been doing some work with WRF and am
trying to
> add
> >>>> the
> >>>>>>>> the
> >>>>>>>>>> model evaluation tools to our standard model verification
> system.
> >>  I
> >>>>>>>>>> started the process by running the pressure interpolation
> program
> >>>> on a
> >>>>>>>>>> single wrfout file, which appeared to finish correctly. I
have
> >>>>>> attached
> >>>>>>>> an
> >>>>>>>>>> ncdump of the file header to this email it is called
> >>>>>>>>>> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I
downloaded a
> >> single
> >>>>>>>> prebufr
> >>>>>>>>>> file from an NCEP repository for the time centered on the
> forecast
> >>>>>>>> period
> >>>>>>>>>> and ran PB2NC and this also appeared to finish correctly
and
> >> output
> >>>> a
> >>>>>>>>>> single netcdf file, the header information is also
attached
> >>>>>> (PB2NC.txt).
> >>>>>>>>>> Then I attempted to run the point stat utility on these
two
> files
> >>>> but
> >>>>>>>> the
> >>>>>>>>>> program errors out telling me that there are more
forecast field
> >>>> that
> >>>>>>>>>> observational fields "ERROR:
PointStatConfInfo::process_config()
> >> ->
> >>>>>> The
> >>>>>>>>>> number fcst_thresh entries provided must match the number
of
> >> fields
> >>>>>>>>>> provided in fcst_field.". I ran the following command
from the
> >>>>>> terminal
> >>>>>>>> to
> >>>>>>>>>> run point stat "bin/point_stat
> wrfout_d02_2011-12-07_00:00:00_PLEV
> >>>>>>>> out.nc
> >>>>>>>>>> PointStatConfig".  I am not sure what the problem is I
have red
> >> the
> >>>>>>>>>> documentation and it appears to be setup correctly but I
am not
> >>>>>>>> completely
> >>>>>>>>>> sure as I have never used this software before.  What
should
> these
> >>>>>>>> namelist
> >>>>>>>>>> fields look like using 2 netcdf files (1.Forecast 1.Obs),
trying
> >> to
> >>>>>>>> verify
> >>>>>>>>>> 10 meter winds? I appreciate your help!
> >>>>>>>>>>
> >>>>>>>>>> Also ... I ran the test all scripts after compilation ,
and the
> >> code
> >>>>>>>>>> completed successfully with no errors.
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Thanks ,
> >>>>>>>>>> Tim
> >>>>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>
> >>
>
> >>>>>> //
> >>>>>> // An empty list file name indicates that no station ID masks
should
> >> be
> >>>>>> used.
> >>>>>> //
> >>>>>> // MET_BASE may be used in the path for the station ID mask
file
> name.
> >>>>>> //
> >>>>>> // e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
> >>>>>> //
> >>>>>> mask_sid = "";
> >>>>>>
> >>>>>> //
> >>>>>> // Specify a comma-separated list of values for alpha to be
used
> when
> >>>>>> computing
> >>>>>> // confidence intervals.  Values of alpha must be between 0
and 1.
> >>>>>> //
> >>>>>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
> >>>>>> //
> >>>>>> ci_alpha[] = [ 0.05 ];
> >>>>>>
> >>>>>> //
> >>>>>> // Specify the method to be used for computing bootstrap
confidence
> >>>>>> intervals.
> >>>>>> // The value for this is interpreted as follows:
> >>>>>> //    (0) Use the BCa interval method (computationally
intensive)
> >>>>>> //    (1) Use the percentile interval method
> >>>>>> //
> >>>>>> boot_interval = 1;
> >>>>>>
> >>>>>> //
> >>>>>> // Specify a proportion between 0 and 1 to define the
replicate
> sample
> >>>> size
> >>>>>> // to be used when computing percentile intervals.  The
replicate
> >> sample
> >>>>>> // size is set to boot_rep_prop * n, where n is the number of
raw
> data
> >>>>>> points.
> >>>>>> //
> >>>>>> // e.g. boot_rep_prop = 0.80;
> >>>>>> //
> >>>>>> boot_rep_prop = 1.0;
> >>>>>>
> >>>>>> //
> >>>>>> // Specify the number of times each set of matched pair data
should
> be
> >>>>>> // resampled when computing bootstrap confidence intervals.
A value
> >> of
> >>>>>> // zero disables the computation of bootstrap confidence
intervals.
> >>>>>> //
> >>>>>> // e.g. n_boot_rep = 1000;
> >>>>>> //
> >>>>>> n_boot_rep = 1000;
> >>>>>>
> >>>>>> //
> >>>>>> // Specify the name of the random number generator to be
used.  See
> >> the
> >>>> MET
> >>>>>> // Users Guide for a list of possible random number
generators.
> >>>>>> //
> >>>>>> boot_rng = "mt19937";
> >>>>>>
> >>>>>> //
> >>>>>> // Specify the seed value to be used when computing bootstrap
> >> confidence
> >>>>>> // intervals.  If left unspecified, the seed will change for
each
> run
> >>>> and
> >>>>>> // the computed bootstrap confidence intervals will not be
> >> reproducible.
> >>>>>> //
> >>>>>> boot_seed = "";
> >>>>>>
> >>>>>> //
> >>>>>> // Specify a comma-separated list of interpolation method(s)
to be
> >> used
> >>>>>> // for comparing the forecast grid to the observation points.
>  String
> >>>>>> values
> >>>>>> // are interpreted as follows:
> >>>>>> //    MIN     = Minimum in the neighborhood
> >>>>>> //    MAX     = Maximum in the neighborhood
> >>>>>> //    MEDIAN  = Median in the neighborhood
> >>>>>> //    UW_MEAN = Unweighted mean in the neighborhood
> >>>>>> //    DW_MEAN = Distance-weighted mean in the neighborhood
> >>>>>> //    LS_FIT  = Least-squares fit in the neighborhood
> >>>>>> //    BILIN   = Bilinear interpolation using the 4 closest
points
> >>>>>> //
> >>>>>> // In all cases, vertical interpolation is performed in the
natural
> >> log
> >>>>>> // of pressure of the levels above and below the observation.
> >>>>>> //
> >>>>>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
> >>>>>> //
> >>>>>> interp_method[] = [ "MEDIAN", "DW_MEAN" ];
> >>>>>>
> >>>>>> //
> >>>>>> // Specify a comma-separated list of box widths to be used by
the
> >>>>>> // interpolation techniques listed above.  A value of 1
indicates
> that
> >>>>>> // the nearest neighbor approach should be used.  For a value
of n
> >>>>>> // greater than 1, the n*n grid points closest to the
observation
> >> define
> >>>>>> // the neighborhood.
> >>>>>> //
> >>>>>> // e.g. interp_width = [ 1, 3, 5 ];
> >>>>>> //
> >>>>>> interp_width[] = [ 1, 3 ];
> >>>>>>
> >>>>>> //
> >>>>>> // When interpolating, compute a ratio of the number of valid
data
> >>>> points
> >>>>>> // to the total number of points in the neighborhood.  If
that ratio
> >> is
> >>>>>> // less than this threshold, do not include the observation.
This
> >>>>>> // threshold must be between 0 and 1.  Setting this threshold
to 1
> >> will
> >>>>>> // require that each observation be surrounded by n*n valid
forecast
> >>>>>> // points.
> >>>>>> //
> >>>>>> // e.g. interp_thresh = 1.0;
> >>>>>> //
> >>>>>> interp_thresh = 1.0;
> >>>>>>
> >>>>>> //
> >>>>>> // Specify flags to indicate the type of data to be output:
> >>>>>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Forecast Rate (F_RATE),
> >>>>>> //           Hit Rate (H_RATE),
> >>>>>> //           Observation Rate (O_RATE)
> >>>>>> //
> >>>>>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Forecast Yes and Observation Yes Count (FY_OY),
> >>>>>> //           Forecast Yes and Observation No Count (FY_ON),
> >>>>>> //           Forecast No and Observation Yes Count (FN_OY),
> >>>>>> //           Forecast No and Observation No Count (FN_ON)
> >>>>>> //
> >>>>>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Base Rate (BASER),
> >>>>>> //           Forecast Mean (FMEAN),
> >>>>>> //           Accuracy (ACC),
> >>>>>> //           Frequency Bias (FBIAS),
> >>>>>> //           Probability of Detecting Yes (PODY),
> >>>>>> //           Probability of Detecting No (PODN),
> >>>>>> //           Probability of False Detection (POFD),
> >>>>>> //           False Alarm Ratio (FAR),
> >>>>>> //           Critical Success Index (CSI),
> >>>>>> //           Gilbert Skill Score (GSS),
> >>>>>> //           Hanssen and Kuipers Discriminant (HK),
> >>>>>> //           Heidke Skill Score (HSS),
> >>>>>> //           Odds Ratio (ODDS),
> >>>>>> //           NOTE: All statistics listed above contain
parametric
> >> and/or
> >>>>>> //                 non-parametric confidence interval limits.
> >>>>>> //
> >>>>>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency
> >> Table
> >>>>>> Counts:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Number of Categories (N_CAT),
> >>>>>> //           Contingency Table Count columns repeated
N_CAT*N_CAT
> >> times
> >>>>>> //
> >>>>>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency
> >> Table
> >>>>>> Scores:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Number of Categories (N_CAT),
> >>>>>> //           Accuracy (ACC),
> >>>>>> //           Hanssen and Kuipers Discriminant (HK),
> >>>>>> //           Heidke Skill Score (HSS),
> >>>>>> //           Gerrity Score (GER),
> >>>>>> //           NOTE: All statistics listed above contain
parametric
> >> and/or
> >>>>>> //                 non-parametric confidence interval limits.
> >>>>>> //
> >>>>>> //    (6) STAT and CNT Text Files, Statistics of Continuous
> Variables:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Forecast Mean (FBAR),
> >>>>>> //           Forecast Standard Deviation (FSTDEV),
> >>>>>> //           Observation Mean (OBAR),
> >>>>>> //           Observation Standard Deviation (OSTDEV),
> >>>>>> //           Pearson's Correlation Coefficient (PR_CORR),
> >>>>>> //           Spearman's Rank Correlation Coefficient
(SP_CORR),
> >>>>>> //           Kendall Tau Rank Correlation Coefficient
(KT_CORR),
> >>>>>> //           Number of ranks compared (RANKS),
> >>>>>> //           Number of tied ranks in the forecast field
> (FRANK_TIES),
> >>>>>> //           Number of tied ranks in the observation field
> >> (ORANK_TIES),
> >>>>>> //           Mean Error (ME),
> >>>>>> //           Standard Deviation of the Error (ESTDEV),
> >>>>>> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
> >>>>>> //           Mean Absolute Error (MAE),
> >>>>>> //           Mean Squared Error (MSE),
> >>>>>> //           Bias-Corrected Mean Squared Error (BCMSE),
> >>>>>> //           Root Mean Squared Error (RMSE),
> >>>>>> //           Percentiles of the Error (E10, E25, E50, E75,
E90)
> >>>>>> //           NOTE: Most statistics listed above contain
parametric
> >>>> and/or
> >>>>>> //                 non-parametric confidence interval limits.
> >>>>>> //
> >>>>>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Forecast Mean (FBAR),
> >>>>>> //              = mean(f)
> >>>>>> //           Observation Mean (OBAR),
> >>>>>> //              = mean(o)
> >>>>>> //           Forecast*Observation Product Mean (FOBAR),
> >>>>>> //              = mean(f*o)
> >>>>>> //           Forecast Squared Mean (FFBAR),
> >>>>>> //              = mean(f^2)
> >>>>>> //           Observation Squared Mean (OOBAR)
> >>>>>> //              = mean(o^2)
> >>>>>> //
> >>>>>> //    (8) STAT and SAL1L2 Text Files, Scalar Anomaly Partial
Sums:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Forecast Anomaly Mean (FABAR),
> >>>>>> //              = mean(f-c)
> >>>>>> //           Observation Anomaly Mean (OABAR),
> >>>>>> //              = mean(o-c)
> >>>>>> //           Product of Forecast and Observation Anomalies
Mean
> >>>> (FOABAR),
> >>>>>> //              = mean((f-c)*(o-c))
> >>>>>> //           Forecast Anomaly Squared Mean (FFABAR),
> >>>>>> //              = mean((f-c)^2)
> >>>>>> //           Observation Anomaly Squared Mean (OOABAR)
> >>>>>> //              = mean((o-c)^2)
> >>>>>> //
> >>>>>> //    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           U-Forecast Mean (UFBAR),
> >>>>>> //              = mean(uf)
> >>>>>> //           V-Forecast Mean (VFBAR),
> >>>>>> //              = mean(vf)
> >>>>>> //           U-Observation Mean (UOBAR),
> >>>>>> //              = mean(uo)
> >>>>>> //           V-Observation Mean (VOBAR),
> >>>>>> //              = mean(vo)
> >>>>>> //           U-Product Plus V-Product (UVFOBAR),
> >>>>>> //              = mean(uf*uo+vf*vo)
> >>>>>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
> >>>>>> //              = mean(uf^2+vf^2)
> >>>>>> //           U-Observation Squared Plus V-Observation Squared
> >> (UVOOBAR)
> >>>>>> //              = mean(uo^2+vo^2)
> >>>>>> //
> >>>>>> //   (10) STAT and VAL1L2 Text Files, Vector Anomaly Partial
Sums:
> >>>>>> //           U-Forecast Anomaly Mean (UFABAR),
> >>>>>> //              = mean(uf-uc)
> >>>>>> //           V-Forecast Anomaly Mean (VFABAR),
> >>>>>> //              = mean(vf-vc)
> >>>>>> //           U-Observation Anomaly Mean (UOABAR),
> >>>>>> //              = mean(uo-uc)
> >>>>>> //           V-Observation Anomaly Mean (VOABAR),
> >>>>>> //              = mean(vo-vc)
> >>>>>> //           U-Anomaly Product Plus V-Anomaly Product
(UVFOABAR),
> >>>>>> //              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
> >>>>>> //           U-Forecast Anomaly Squared Plus V-Forecast
Anomaly
> >> Squared
> >>>>>> (UVFFABAR),
> >>>>>> //              = mean((uf-uc)^2+(vf-vc)^2)
> >>>>>> //           U-Observation Anomaly Squared Plus V-Observation
> Anomaly
> >>>>>> Squared (UVOOABAR)
> >>>>>> //              = mean((uo-uc)^2+(vo-vc)^2)
> >>>>>> //
> >>>>>> //   (11) STAT and PCT Text Files, Nx2 Probability
Contingency Table
> >>>>>> Counts:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>>>>> //           Probability Threshold Value (THRESH_i),
> >>>>>> //           Row Observation Yes Count (OY_i),
> >>>>>> //           Row Observation No Count (ON_i),
> >>>>>> //           NOTE: Previous 3 columns repeated for each row
in the
> >>>> table.
> >>>>>> //           Last Probability Threshold Value (THRESH_n)
> >>>>>> //
> >>>>>> //   (12) STAT and PSTD Text Files, Nx2 Probability
Contingency
> Table
> >>>>>> Scores:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>>>>> //           Base Rate (BASER) with confidence interval
limits,
> >>>>>> //           Reliability (RELIABILITY),
> >>>>>> //           Resolution (RESOLUTION),
> >>>>>> //           Uncertainty (UNCERTAINTY),
> >>>>>> //           Area Under the ROC Curve (ROC_AUC),
> >>>>>> //           Brier Score (BRIER) with confidence interval
limits,
> >>>>>> //           Probability Threshold Value (THRESH_i)
> >>>>>> //           NOTE: Previous column repeated for each
probability
> >>>> threshold.
> >>>>>> //
> >>>>>> //   (13) STAT and PJC Text Files, Joint/Continuous
Statistics of
> >>>>>> //                                 Probabilistic Variables:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>>>>> //           Probability Threshold Value (THRESH_i),
> >>>>>> //           Observation Yes Count Divided by Total
(OY_TP_i),
> >>>>>> //           Observation No Count Divided by Total (ON_TP_i),
> >>>>>> //           Calibration (CALIBRATION_i),
> >>>>>> //           Refinement (REFINEMENT_i),
> >>>>>> //           Likelihood (LIKELIHOOD_i),
> >>>>>> //           Base Rate (BASER_i),
> >>>>>> //           NOTE: Previous 7 columns repeated for each row
in the
> >>>> table.
> >>>>>> //           Last Probability Threshold Value (THRESH_n)
> >>>>>> //
> >>>>>> //   (14) STAT and PRC Text Files, ROC Curve Points for
> >>>>>> //                                 Probabilistic Variables:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>>>>> //           Probability Threshold Value (THRESH_i),
> >>>>>> //           Probability of Detecting Yes (PODY_i),
> >>>>>> //           Probability of False Detection (POFD_i),
> >>>>>> //           NOTE: Previous 3 columns repeated for each row
in the
> >>>> table.
> >>>>>> //           Last Probability Threshold Value (THRESH_n)
> >>>>>> //
> >>>>>> //   (15) STAT and MPR Text Files, Matched Pair Data:
> >>>>>> //           Total (TOTAL),
> >>>>>> //           Index (INDEX),
> >>>>>> //           Observation Station ID (OBS_SID),
> >>>>>> //           Observation Latitude (OBS_LAT),
> >>>>>> //           Observation Longitude (OBS_LON),
> >>>>>> //           Observation Level (OBS_LVL),
> >>>>>> //           Observation Elevation (OBS_ELV),
> >>>>>> //           Forecast Value (FCST),
> >>>>>> //           Observation Value (OBS),
> >>>>>> //           Climatological Value (CLIMO)
> >>>>>> //
> >>>>>> //   In the expressions above, f are forecast values, o are
observed
> >>>>>> values,
> >>>>>> //   and c are climatological values.
> >>>>>> //
> >>>>>> // Values for these flags are interpreted as follows:
> >>>>>> //    (0) Do not generate output of this type
> >>>>>> //    (1) Write output to a STAT file
> >>>>>> //    (2) Write output to a STAT file and a text file
> >>>>>> //
> >>>>>> output_flag[] = [ 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1
];
> >>>>>>
> >>>>>> //
> >>>>>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
> >>>> Correlation
> >>>>>> // Coefficients should be computed.  Computing them over
large
> >> datasets
> >>>> is
> >>>>>> // computationally intensive and slows down the runtime
execution
> >>>>>> significantly.
> >>>>>> //    (0) Do not compute these correlation coefficients
> >>>>>> //    (1) Compute these correlation coefficients
> >>>>>> //
> >>>>>> rank_corr_flag = 1;
> >>>>>>
> >>>>>> //
> >>>>>> // Specify the GRIB Table 2 parameter table version number to
be
> used
> >>>>>> // for interpreting GRIB codes.
> >>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >>>>>> //
> >>>>>> grib_ptv = 2;
> >>>>>>
> >>>>>> //
> >>>>>> // Directory where temporary files should be written.
> >>>>>> //
> >>>>>> tmp_dir = "/tmp";
> >>>>>>
> >>>>>> //
> >>>>>> // Prefix to be used for the output file names.
> >>>>>> //
> >>>>>> output_prefix = "";
> >>>>>>
> >>>>>> //
> >>>>>> // Indicate a version number for the contents of this
configuration
> >>>> file.
> >>>>>> // The value should generally not be modified.
> >>>>>> //
> >>>>>> version = "V3.0.1";
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>
> >>>>
> >>>>
> >>
> >>
> >>
>
>
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51928] Re: METV3 Issue
From: Paul Oldenburg
Time: Fri Dec 09 14:46:57 2011

Tim,

We did some testing and found that when MET is compiled with the intel
compilers using the -g (debug) flag, point_stat
runs with no error.  You can apply this by adding the -g flag to the
CXX_FLAGS and FC_FLAGS in the top-level MET
Makefile and then doing a clean remake.  We can't explain why this is
the case, but we will do more testing and let you
know if we turn up anything.  It seems like you turned up a fairly
subtle bug in either MET or the intel compiler!
Please let me know if you have any questions.
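A minimal sketch of that change follows, assuming the top-level METv3.0.1
Makefile defines CXX_FLAGS and FC_FLAGS roughly as shown (the -O2 is only a
placeholder; keep whatever options are already set there and simply append
-g):

   # top-level METv3.0.1 Makefile: append -g to the existing compiler flags
   CXX_FLAGS = -O2 -g
   FC_FLAGS  = -O2 -g

and then rebuild from a clean tree:

   make clean
   make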

Thanks,

Paul


On 12/09/2011 02:12 PM, Tim Melino via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>
> Thanks, Paul, for all your help on this. I installed everything with
gfortran
> and gcc, which seems to work. I will switch back to intel when the
issue
> is resolved.
>
> - Tim
>
> On Fri, Dec 9, 2011 at 4:04 PM, Paul Oldenburg via RT
<met_help at ucar.edu>wrote:
>
>> Tim,
>>
>> I was able to reproduce the error you reported when running
METv3.0.1
>> compiled with intel compilers.  This may take us a
>> little time to sort out.  Thanks for reporting this issue, and
we'll let
>> you know when we have a solution for you.
>>
>> Thanks,
>>
>> Paul
>>
>>
>> On 12/09/2011 12:49 PM, Tim Melino via RT wrote:
>>>
>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>
>>> Ok,
>>> Here are some of the specifications and I will install the latest
patch
>>> now.
>>>
>>> netCDF version 4.1.1
>>> INTEL-11.1.072 Compilers
>>>
>>>
>>> - Tim
>>>
>>>
>>> On Fri, Dec 9, 2011 at 1:49 PM, Paul Oldenburg via RT
<met_help at ucar.edu
>>> wrote:
>>>
>>>> Tim,
>>>>
>>>> I'm still not able to reproduce the error that you reported.
Have you
>>>> applied all of the latest patches to METv3.0.1?
>>>> The latest patch tarball and instructions on how to apply it can
be
>> found
>>>> here:
>>>>
>>
http://www.dtcenter.org/met/users/support/known_issues/METv3.0.1/index.php
>> .
>>>>  Can you tell me what version of NetCDF you
>>>> linked MET against?  What family of compilers did you use to
compile MET
>>>> (e.g. GNU/PGI/intel)?  I think we are down to a
>>>> configuration/environment problem at this point.  Sorry for the
trouble.
>>>>
>>>> Paul
>>>>
>>>>
>>>> On 12/09/2011 11:38 AM, Tim Melino via RT wrote:
>>>>>
>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>>>
>>>>> Paul,
>>>>> I put everything into a tar file and uploaded it.
>>>>>
>>>>> - Tim
>>>>>
>>>>>
>>>>> On Fri, Dec 9, 2011 at 11:38 AM, Paul Oldenburg via RT <
>>>> met_help at ucar.edu>wrote:
>>>>>
>>>>>> Tim,
>>>>>>
>>>>>> We are not able to reproduce the error that you are reporting.
Are
>> you
>>>>>> using the same exact data and config files that
>>>>>> you sent me and I tested with?  In any case, can you create a
tar
>>>> archive
>>>>>> of all the files involved in the point_stat
>>>>>> command that throws the error and put it on the FTP site?  I
will need
>>>> to
>>>>>> be able to reproduce this error, otherwise it
>>>>>> will be difficult for me to diagnose the problem.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Paul
>>>>>>
>>>>>>
>>>>>> On 12/09/2011 08:16 AM, Tim Melino via RT wrote:
>>>>>>>
>>>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928
>
>>>>>>>
>>>>>>> Paul,
>>>>>>> I tried running again using your configuration settings, but
while
>>>>>> running
>>>>>>> pointstat I am still receiving errors. The error comes up as
the
>>>>>> following
>>>>>>> ....
>>>>>>>
>>>>>>> [wind at conus1 METv3.0.1]$ $MET_BASE/bin/point_stat wrf.nc
>>>>>>> ndas.t12z.nc PointStatConfig -outdir . -v 99
>>>>>>> GSL_RNG_TYPE=mt19937
>>>>>>> GSL_RNG_SEED=18446744071864509006
>>>>>>> Forecast File: wrf.nc
>>>>>>> Climatology File: none
>>>>>>> Configuration File: PointStatConfig
>>>>>>> Observation File: ndas.t12z.nc
>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>
--------------------------------------------------------------------------------
>>>>>>>
>>>>>>> Reading records for TT(0,0,*,*).
>>>>>>>
>>>>>>>
>>>>>>>   LongArray::operator[](int) -> range check error ... 4
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> - Tim
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Dec 8, 2011 at 5:10 PM, Paul Oldenburg via RT <
>>>> met_help at ucar.edu
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Tim,
>>>>>>>>
>>>>>>>> I ran the following pb2nc and point_stat commands using the
attached
>>>>>>>> config files to generate point verification data
>>>>>>>> with your PrepBUFR obs and p_interp model data.  Note that
MET_BASE
>> is
>>>>>> set
>>>>>>>> to the base folder of an instance of
>>>>>>>> METv3.0.1.  I pulled both config files, with slight
modifications,
>>>> from
>>>>>>>> $MET_BASE/scripts/config.
>>>>>>>>
>>>>>>>> $MET_BASE/bin/pb2nc ndas.t12z.prepbufr.tm12.nr ndas.t12z.nc PB2NCConfig_G212 -v 99
>>>>>>>>
>>>>>>>> $MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc PointStatConfig -outdir . -v 99
>>>>>>>>
>>>>>>>> In PointStatConfig, you will see the following settings.  The
>>>> fcst_field
>>>>>>>> setting format is due to the fact that fields
>>>>>>>> in wrf.nc are four dimensional, with the last two dimensions
being
>>>> the
>>>>>>>> spatial (x,y) dimensions.  The obs_field
>>>>>>>> specifies surface temperature using a GRIB-style format,
because
>> pb2nc
>>>>>>>> indexes fields in its output by GRIB code.  You
>>>>>>>> should follow a similar paradigm to verify additional fields
beyond
>>>>>>>> temperature.
>>>>>>>>
>>>>>>>> fcst_field[] = [ "TT(0,0,*,*)" ];
>>>>>>>> obs_field[]  = [ "TMP/Z2" ];
>>>>>>>>
>>>>>>>> fcst_thresh[] = [ "le273" ];
>>>>>>>> obs_thresh[]  = [];
>>>>>>>>
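To extend that paradigm to winds, and assuming (to be checked against the
ncdump of wrf.nc) that the p_interp output stores the pressure-level wind
components under names such as UU and VV with the same dimension ordering as
TT, a 500 hPa wind entry might look like the sketch below.  The level index
(5 here, mirroring the example already in the config comments) must be chosen
to match whichever entry of the file's pressure dimension corresponds to
500 hPa, and, per the NOTE in the configuration file, listing the U-component
immediately followed by the V-component at the same level is what produces
the vector (VL1L2) output:

   fcst_field[]  = [ "UU(0,5,*,*)", "VV(0,5,*,*)" ];
   obs_field[]   = [ "UGRD/P500",   "VGRD/P500"   ];

   //
   // placeholder thresholds; one threshold entry is required per field
   //
   fcst_thresh[] = [ "ge2.5", "ge2.5" ];
   obs_thresh[]  = [];

Upper-air observations like these come from a message type such as ADPUPA
rather than ADPSFC, so message_type[] would need to change accordingly.  For
the 10 m winds mentioned in the initial request, the same pattern applies
once the corresponding surface wind variables (if present) are identified in
the p_interp file, or GRIB-style entries such as "UGRD/Z10" and "VGRD/Z10"
would apply if the WRF output is run through UPP as suggested earlier; in
either case, keep in mind the earlier caution about comparing grid-relative
p_interp winds to the earth-relative wind directions in the PrepBUFR
observations.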
>>>>>>>> If you have any questions or problems, please let me know.
>>>>>>>>
>>>>>>>> Good luck,
>>>>>>>>
>>>>>>>> Paul
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 12/08/2011 02:28 PM, Tim Melino via RT wrote:
>>>>>>>>>
>>>>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>>>>>>>
>>>>>>>>> I just put the file on the server that I have been using. As
far as
>>>>>>>> running
>>>>>>>>> the UPP software, that is not really possible at the moment.
I do
>> not
>>>>>>>> have
>>>>>>>>> any of that software installed or configured as I have never
had a
>>>>>> reason
>>>>>>>>> to use it .
>>>>>>>>>
>>>>>>>>> - Tim
>>>>>>>>>
>>>>>>>>> On Thu, Dec 8, 2011 at 3:42 PM, Paul Oldenburg via RT <
>>>>>> met_help at ucar.edu
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Tim,
>>>>>>>>>>
>>>>>>>>>> Can you please put the input PrepBUFR file that you pass to
pb2nc
>> on
>>>>>> the
>>>>>>>>>> FTP site?  When I look at the contents of
>>>>>>>>>> out.nc, it does not appear that there are any observations
in
>> that
>>>>>>>> file.
>>>>>>>>>>  I would like to run pb2nc myself to see what
>>>>>>>>>> is going on.
>>>>>>>>>>
>>>>>>>>>> I made an incorrect assumption in my earlier emails that
you were
>>>>>> trying
>>>>>>>>>> to verify model data in GRIB format.  Now that
>>>>>>>>>> I have your data in hand, I see that it is p_interp output,
as you
>>>>>>>>>> mentioned in your initial email.  MET support for
>>>>>>>>>> p_interp is not as robust as for GRIB.  In particular,
>> grid-relative
>>>>>>>> wind
>>>>>>>>>> directions in p_interp data files should not
>>>>>>>>>> be compared to lat-long relative wind directions in the
PrepBUFR
>>>> obs.
>>>>>>>>>>  Would it be possible for you to run your WRF
>>>>>>>>>> output through the Unified Post Processor (UPP -
>>>>>>>>>> http://www.dtcenter.org/wrf-nmm/users/overview/upp_overview.php)
>>>>>>>>>> instead of or in addition to p_interp?  That would simplify
MET
>>>>>>>>>> verification tasks.  Please let me know if you have any
>>>>>>>>>> questions.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>>
>>>>>>>>>> Paul
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 12/08/2011 11:07 AM, Tim Melino via RT wrote:
>>>>>>>>>>>
>>>>>>>>>>> Thu Dec 08 11:07:04 2011: Request 51928 was acted upon.
>>>>>>>>>>> Transaction: Ticket created by tmelino at meso.com
>>>>>>>>>>>        Queue: met_help
>>>>>>>>>>>      Subject: Re: METV3 Issue
>>>>>>>>>>>        Owner: Nobody
>>>>>>>>>>>   Requestors: tmelino at meso.com
>>>>>>>>>>>       Status: new
>>>>>>>>>>>  Ticket <URL:
>>>>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Ok,
>>>>>>>>>>>
>>>>>>>>>>> The data should be there now. With out.nc being the obs and
>>>>>>>>>>> wrf.nc being the forecast
>>>>>>>>>>>
>>>>>>>>>>> - Tim
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino
<tmelino at meso.com>
>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>> I have recently been doing some work with WRF and am
trying to
>> add
>>>>>> the
>>>>>>>>>> the
>>>>>>>>>>>> model evaluation tools to our standard model verification
>> system.
>>>>  I
>>>>>>>>>>>> started the process by running the pressure interpolation
>> program
>>>>>> on a
>>>>>>>>>>>> single wrfout file, which appeared to finish correctly. I
have
>>>>>>>> attached
>>>>>>>>>> an
>>>>>>>>>>>> ncdump of the file header to this email it is called
>>>>>>>>>>>> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I
downloaded a
>>>> single
>>>>>>>>>> prebufr
>>>>>>>>>>>> file from an NCEP repository for the time centered on the
>> forecast
>>>>>>>>>> period
>>>>>>>>>>>> and ran PB2NC and this also appeared to finish correctly
and
>>>> output
>>>>>> a
>>>>>>>>>>>> single netcdf file, the header information is also
attached
>>>>>>>> (PB2NC.txt).
>>>>>>>>>>>> Then I attempted to run the point stat utility on these
two
>> files
>>>>>> but
>>>>>>>>>> the
>>>>>>>>>>>> program errors out telling me that there are more
forecast field
>>>>>> that
>>>>>>>>>>>> observational fields "ERROR:
PointStatConfInfo::process_config()
>>>> ->
>>>>>>>> The
>>>>>>>>>>>> number fcst_thresh entries provided must match the number
of
>>>> fields
>>>>>>>>>>>> provided in fcst_field.". I ran the following command
from the
>>>>>>>> terminal
>>>>>>>>>> to
>>>>>>>>>>>> run point stat "bin/point_stat
>> wrfout_d02_2011-12-07_00:00:00_PLEV
>>>>>>>>>> out.nc
>>>>>>>>>>>> PointStatConfig".  I am not sure what the problem is I
have red
>>>> the
>>>>>>>>>>>> documentation and it appears to be setup correctly but I
am not
>>>>>>>>>> completely
>>>>>>>>>>>> sure as I have never used this software before.  What
should
>> these
>>>>>>>>>> namelist
>>>>>>>>>>>> fields look like using 2 netcdf files (1.Forecast 1.Obs),
trying
>>>> to
>>>>>>>>>> verify
>>>>>>>>>>>> 10 meter winds? I appreciate your help!
>>>>>>>>>>>>
>>>>>>>>>>>> Also ... I ran the test all scripts after compilation ,
and the
>>>> code
>>>>>>>>>>>> completed successfully with no errors.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks ,
>>>>>>>>>>>> Tim
>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>>>> //
>>>>>>>> // Default pb2nc configuration file
>>>>>>>> //
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Stratify the observation data in the PrepBufr files in the
>>>> following
>>>>>>>> // ways:
>>>>>>>> //  (1) by message type: supply a list of PrepBufr message
types
>>>>>>>> //      to retain (i.e. AIRCFT)
>>>>>>>> //  (2) by station id: supply a list of observation stations
to
>> retain
>>>>>>>> //  (3) by valid time: supply starting and ending times in
form
>>>>>>>> //      YYYY-MM-DD HH:MM:SS UTC
>>>>>>>> //  (4) by location: supply either an NCEP masking grid, a
masking
>>>>>>>> //      lat/lon polygon or a file to a mask lat/lon polygon
>>>>>>>> //  (5) by elevation: supply min/max elevation values
>>>>>>>> //  (6) by report type (typ): supply a list of report types
to
>> retain
>>>>>>>> //  (7) by instrument type (itp): supply a list of instrument
type
>> to
>>>>>>>> //      retain
>>>>>>>> //  (8) by vertical level: supply min/max vertical levels
>>>>>>>> //  (9) by variable type: supply a list of variable types to
retain
>>>>>>>> //      P, Q, T, Z, U, V
>>>>>>>> // (11) by quality mark: supply a quality mark threshold
>>>>>>>> // (12) Flag to retain values for all quality marks, or just
the
>> first
>>>>>>>> //      quality mark (highest)
>>>>>>>> // (13) by data level category: supply a list of category
types to
>>>>>>>> //      retain.
>>>>>>>> //
>>>>>>>> //      0 - Surface level (mass reports only)
>>>>>>>> //      1 - Mandatory level (upper-air profile reports)
>>>>>>>> //      2 - Significant temperature level (upper-air profile
>> reports)
>>>>>>>> //      2 - Significant temperature and winds-by-pressure
level
>>>>>>>> //          (future combined mass and wind upper-air reports)
>>>>>>>> //      3 - Winds-by-pressure level (upper-air profile
reports)
>>>>>>>> //      4 - Winds-by-height level (upper-air profile reports)
>>>>>>>> //      5 - Tropopause level (upper-air profile reports)
>>>>>>>> //      6 - Reports on a single level
>>>>>>>> //          (e.g., aircraft, satellite-wind, surface wind,
>>>>>>>> //           precipitable water retrievals, etc.)
>>>>>>>> //      7 - Auxiliary levels generated via interpolation from
>> spanning
>>>>>>>> levels
>>>>>>>> //          (upper-air profile reports)
>>>>>>>> //
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of PrepBufr message type
strings
>> to
>>>>>>>> retain.
>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>> // List of valid message types:
>>>>>>>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
>>>>>>>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
>>>>>>>> //    SFCSHP SPSSMI SYNDAT VADWND
>>>>>>>> //    ANYAIR (= AIRCAR, AIRCFT)
>>>>>>>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
>>>>>>>> //    ONLYSF (= ADPSFC, SFCSHP)
>>>>>>>> //
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>>>>>> //
>>>>>>>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
>>>>>>>> //
>>>>>>>> message_type[] = [];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of station ID strings to
retain.
>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>> //
>>>>>>>> // e.g. station_id[] = [ "KDEN" ];
>>>>>>>> //
>>>>>>>> station_id[] = [];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Beginning and ending time offset values in seconds for
>> observations
>>>>>>>> // to retain.  The valid time window for retaining
observations is
>>>>>>>> // defined in reference to the observation time.  So
observations
>> with
>>>>>>>> // a valid time falling in the window [obs_time+beg_ds,
>>>> obs_time+end_ds]
>>>>>>>> // will be retained.
>>>>>>>> //
>>>>>>>> beg_ds = -1800;
>>>>>>>> end_ds =  1800;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify the name of a single grid to be used in masking
the data.
>>>>>>>> // An empty string indicates that no grid should be used.
The
>>>> standard
>>>>>>>> // NCEP grids are named "GNNN" where NNN indicates the three
digit
>>>> grid
>>>>>>>> number.
>>>>>>>> //
>>>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>>>>>>>> //
>>>>>>>> // e.g. mask_grid = "G212";
>>>>>>>> //
>>>>>>>> mask_grid = "G212";
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a single ASCII file containing a lat/lon polygon.
>>>>>>>> // Latitude in degrees north and longitude in degrees east.
>>>>>>>> // By default, the first and last polygon points are
connected.
>>>>>>>> //
>>>>>>>> // The lat/lon polygon file should contain a name for the
polygon
>>>>>>>> // followed by a space-separated list of lat/lon points:
>>>>>>>> //    "name lat1 lon1 lat2 lon2... latn lonn"
>>>>>>>> //
>>>>>>>> // MET_BASE may be used in the path for the lat/lon polygon
file.
>>>>>>>> //
>>>>>>>> // e.g. mask_poly = "MET_BASE/data/poly/EAST.poly";
>>>>>>>> //
>>>>>>>> mask_poly = "";
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Beginning and ending elevation values in meters for
observations
>>>>>>>> // to retain.
>>>>>>>> //
>>>>>>>> beg_elev = -1000;
>>>>>>>> end_elev = 100000;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of PrepBufr report type
values to
>>>>>> retain.
>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>> //
>>>>>>>> //
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_4.htm
>>>>>>>> //
>>>>>>>> // e.g. pb_report_type[] = [ 120, 133 ];
>>>>>>>> //
>>>>>>>> pb_report_type[] = [];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of input report type values
to
>>>> retain.
>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>> //
>>>>>>>> //
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_6.htm
>>>>>>>> //
>>>>>>>> // e.g. in_report_type[] = [ 11, 22, 23 ];
>>>>>>>> //
>>>>>>>> in_report_type[] = [];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of instrument type values
to
>> retain.
>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>> //
>>>>>>>> // e.g. instrument_type[] = [ 52, 87 ];
>>>>>>>> //
>>>>>>>> instrument_type[] = [];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Beginning and ending vertical levels to retain.
>>>>>>>> //
>>>>>>>> beg_level = 1;
>>>>>>>> end_level = 255;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of strings containing grib
codes
>> or
>>>>>>>> // corresponding grib code abbreviations to retain or be
derived
>> from
>>>>>>>> // the available observations.
>>>>>>>> //
>>>>>>>> // Grib Codes to be RETAINED:
>>>>>>>> //    SPFH or 51 for Specific Humidity in kg/kg
>>>>>>>> //    TMP  or 11 for Temperature in K
>>>>>>>> //    HGT  or 7  for Height in meters
>>>>>>>> //    UGRD or 33 for the East-West component of the wind in
m/s
>>>>>>>> //    VGRD or 34 for the North-South component of the wind in
m/s
>>>>>>>> //
>>>>>>>> // Grib Codes to be DERIVED:
>>>>>>>> //    DPT   or 17 for Dewpoint Temperature in K
>>>>>>>> //    WIND  or 32 for Wind Speed in m/s
>>>>>>>> //    RH    or 52 for Relative Humidity in %
>>>>>>>> //    MIXR  or 53 for Humidity Mixing Ratio in kg/kg
>>>>>>>> //    PRMSL or  2 for Pressure Reduced to Mean Sea Level in
Pa
>>>>>>>> //
>>>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>>>>> //
>>>>>>>> // e.g. obs_grib_code[] = [ "TMP", "UGRD", "VGRD", "WIND" ];
>>>>>>>> //
>>>>>>>> obs_grib_code[] = [ "SPFH", "TMP",  "HGT",  "UGRD", "VGRD",
>>>>>>>>                    "DPT",  "WIND", "RH",   "MIXR" ];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Quality mark threshold to indicate which observations to
retain.
>>>>>>>> // Observations with a quality mark equal to or LESS THAN
this
>>>> threshold
>>>>>>>> // will be retained, while observations with a quality mark
GREATER
>>>> THAN
>>>>>>>> // this threshold will be discarded.
>>>>>>>> //
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_7.htm
>>>>>>>> //
>>>>>>>> quality_mark_thresh = 2;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Flag to indicate whether observations should be drawn from
the
>> top
>>>>>>>> // of the event stack (most quality controlled) or the bottom
of the
>>>>>>>> // event stack (most raw).  A value of 1 indicates that the
top of
>> the
>>>>>>>> // event stack should be used while a value of zero indicates
that
>> the
>>>>>>>> // bottom should be used.
>>>>>>>> //
>>>>>>>> event_stack_flag = 1;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of data level category values
to
>>>> retain,
>>>>>>>> // where a value of:
>>>>>>>> //    0 = Surface level (mass reports only)
>>>>>>>> //    1 = Mandatory level (upper-air profile reports)
>>>>>>>> //    2 = Significant temperature level (upper-air profile
reports)
>>>>>>>> //    2 = Significant temperature and winds-by-pressure level
>>>>>>>> //        (future combined mass and wind upper-air reports)
>>>>>>>> //    3 = Winds-by-pressure level (upper-air profile reports)
>>>>>>>> //    4 = Winds-by-height level (upper-air profile reports)
>>>>>>>> //    5 = Tropopause level (upper-air profile reports)
>>>>>>>> //    6 = Reports on a single level
>>>>>>>> //        (e.g., aircraft, satellite-wind, surface wind,
>>>>>>>> //         precipitable water retrievals, etc.)
>>>>>>>> //    7 = Auxiliary levels generated via interpolation from
spanning
>>>>>> levels
>>>>>>>> //        (upper-air profile reports)
>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>> //
>>>>>>>> //
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>>>>>> //
>>>>>>>> // e.g. level_category[] = [ 0, 1 ];
>>>>>>>> //
>>>>>>>> level_category[] = [];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Directory where temp files should be written by the PB2NC
tool
>>>>>>>> //
>>>>>>>> tmp_dir = "/tmp";
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Indicate a version number for the contents of this
configuration
>>>>>> file.
>>>>>>>> // The value should generally not be modified.
>>>>>>>> //
>>>>>>>> version = "V3.0";
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>>>> //
>>>>>>>> // Default point_stat configuration file
>>>>>>>> //
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a name to designate the model being verified.
This name
>>>>>> will be
>>>>>>>> // written to the second column of the ASCII output
generated.
>>>>>>>> //
>>>>>>>> model = "WRF";
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Beginning and ending time offset values in seconds for
>> observations
>>>>>>>> // to be used.  These time offsets are defined in reference
to the
>>>>>>>> // forecast valid time, v.  Observations with a valid time
falling
>> in
>>>>>> the
>>>>>>>> // window [v+beg_ds, v+end_ds] will be used.
>>>>>>>> // These selections are overridden by the command line
arguments
>>>>>>>> // -obs_valid_beg and -obs_valid_end.
>>>>>>>> //
>>>>>>>> beg_ds = -1800;
>>>>>>>> end_ds =  1800;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of fields to be verified.
The
>>>>>> forecast
>>>>>>>> and
>>>>>>>> // observation fields may be specified separately.  If the
obs_field
>>>>>>>> parameter
>>>>>>>> // is left blank, it will default to the contents of
fcst_field.
>>>>>>>> //
>>>>>>>> // Each field is specified as a GRIB code or abbreviation
followed
>> by
>>>> an
>>>>>>>> // accumulation or vertical level indicator for GRIB files or
as a
>>>>>>>> variable name
>>>>>>>> // followed by a list of dimensions for NetCDF files output
from
>>>>>> p_interp
>>>>>>>> or MET.
>>>>>>>> //
>>>>>>>> // Specifying verification fields for GRIB files:
>>>>>>>> //    GC/ANNN for accumulation interval NNN
>>>>>>>> //    GC/ZNNN for vertical level NNN
>>>>>>>> //    GC/ZNNN-NNN for a range of vertical levels (MSL or AGL)
>>>>>>>> //    GC/PNNN for pressure level NNN in hPa
>>>>>>>> //    GC/PNNN-NNN for a range of pressure levels in hPa
>>>>>>>> //    GC/LNNN for a generic level type
>>>>>>>> //    GC/RNNN for a specific GRIB record number
>>>>>>>> //    Where GC is the number of or abbreviation for the grib
code
>>>>>>>> //    to be verified.
>>>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>>>>> //
>>>>>>>> // Specifying verification fields for NetCDF files:
>>>>>>>> //    var_name(i,...,j,*,*) for a single field
>>>>>>>> //    var_name(i-j,*,*) for a range of fields
>>>>>>>> //    Where var_name is the name of the NetCDF variable,
>>>>>>>> //    and i,...,j specifies fixed dimension values,
>>>>>>>> //    and i-j specifies a range of values for a single
dimension,
>>>>>>>> //    and *,* specifies the two dimensions for the gridded
field.
>>>>>>>> //
>>>>>>>> //    NOTE: To verify winds as vectors rather than scalars,
>>>>>>>> //          specify UGRD (or 33) followed by VGRD (or 34)
with the
>>>>>>>> //          same level values.
>>>>>>>> //
>>>>>>>> //    NOTE: To process a probability field, add "/PROB", such
as
>>>>>>>> "POP/Z0/PROB".
>>>>>>>> //
>>>>>>>> // e.g. fcst_field[] = [ "SPFH/P500", "TMP/P500" ]; for a
GRIB input
>>>>>>>> // e.g. fcst_field[] = [ "QVAPOR(0,5,*,*)", "TT(0,5,*,*)" ];
for
>>>> NetCDF
>>>>>>>> input
>>>>>>>> //
>>>>>>>>
>>>>>>>> fcst_field[] = [ "TT(0,0,*,*)" ];
>>>>>>>> obs_field[]  = [ "TMP/Z2" ];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of groups of thresholds to
be
>>>> applied
>>>>>> to
>>>>>>>> the
>>>>>>>> // fields listed above.  Thresholds for the forecast and
observation
>>>>>> fields
>>>>>>>> // may be specified separately.  If the obs_thresh parameter
is left
>>>>>> blank,
>>>>>>>> // it will default to the contents of fcst_thresh.
>>>>>>>> //
>>>>>>>> // At least one threshold must be provided for each field
listed
>>>> above.
>>>>>>>>  The
>>>>>>>> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match,
>> as
>>>>>> must
>>>>>>>> // lengths of the "obs_field" and "obs_thresh" arrays.  To
apply
>>>>>> multiple
>>>>>>>> // thresholds to a field, separate the threshold values with
a
>> space.
>>>>>>>> //
>>>>>>>> // Each threshold must be preceded by a two letter indicator
for the
>>>>>> type
>>>>>>>> of
>>>>>>>> // thresholding to be performed:
>>>>>>>> //    'lt' for less than     'le' for less than or equal to
>>>>>>>> //    'eq' for equal to      'ne' for not equal to
>>>>>>>> //    'gt' for greater than  'ge' for greater than or equal
to
>>>>>>>> //
>>>>>>>> // NOTE: Thresholds for probabilities must begin with 0.0,
end with
>>>> 1.0,
>>>>>>>> //       and be preceded by "ge".
>>>>>>>> //
>>>>>>>> // e.g. fcst_thresh[] = [ "gt80", "gt273" ];
>>>>>>>> //
>>>>>>>> fcst_thresh[] = [ "le273" ];
>>>>>>>> obs_thresh[]  = [];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of thresholds to be used
when
>>>>>> computing
>>>>>>>> // VL1L2 and VAL1L2 partial sums for winds.  The thresholds
are
>>>> applied
>>>>>> to
>>>>>>>> the
>>>>>>>> // wind speed values derived from each U/V pair.  Only those
U/V
>> pairs
>>>>>>>> which meet
>>>>>>>> // the wind speed threshold criteria are retained.  If the
>>>>>> obs_wind_thresh
>>>>>>>> // parameter is left blank, it will default to the contents
of
>>>>>>>> fcst_wind_thresh.
>>>>>>>> //
>>>>>>>> // To apply multiple wind speed thresholds, separate the
threshold
>>>>>> values
>>>>>>>> with a
>>>>>>>> // space.  Use "NA" to indicate that no wind speed threshold
should
>> be
>>>>>>>> applied.
>>>>>>>> //
>>>>>>>> // Each threshold must be preceded by a two letter indicator
for the
>>>>>> type
>>>>>>>> of
>>>>>>>> // thresholding to be performed:
>>>>>>>> //    'lt' for less than     'le' for less than or equal to
>>>>>>>> //    'eq' for equal to      'ne' for not equal to
>>>>>>>> //    'gt' for greater than  'ge' for greater than or equal
to
>>>>>>>> //    'NA' for no threshold
>>>>>>>> //
>>>>>>>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
>>>>>>>> //
>>>>>>>> fcst_wind_thresh[] = [ "NA" ];
>>>>>>>> obs_wind_thresh[]  = [];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of PrepBufr message types
with
>> which
>>>>>>>> // to perform the verification.  Statistics will be computed
>>>> separately
>>>>>>>> // for each message type specified.  At least one PrepBufr
message
>>>> type
>>>>>>>> // must be provided.
>>>>>>>> // List of valid message types:
>>>>>>>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
>>>>>>>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
>>>>>>>> //    SFCSHP SPSSMI SYNDAT VADWND
>>>>>>>> //    ANYAIR (= AIRCAR, AIRCFT)
>>>>>>>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
>>>>>>>> //    ONLYSF (= ADPSFC, SFCSHP)
>>>>>>>> //
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>>>>>> //
>>>>>>>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
>>>>>>>> //
>>>>>>>> message_type[] = [ "ADPSFC" ];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of grids to be used in
masking the
>>>>>> data
>>>>>>>> over
>>>>>>>> // which to perform scoring.  An empty list indicates that no
>> masking
>>>>>> grid
>>>>>>>> // should be performed.  The standard NCEP grids are named
"GNNN"
>>>> where
>>>>>> NNN
>>>>>>>> // indicates the three digit grid number.  Enter "FULL" to
score
>> over
>>>>>> the
>>>>>>>> // entire domain.
>>>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>>>>>>>> //
>>>>>>>> // e.g. mask_grid[] = [ "FULL" ];
>>>>>>>> //
>>>>>>>> mask_grid[] = [ "FULL" ];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of masking regions to be
applied.
>>>>>>>> // An empty list indicates that no additional masks should be
used.
>>>>>>>> // The masking regions may be defined in one of 4 ways:
>>>>>>>> //
>>>>>>>> // (1) An ASCII file containing a lat/lon polygon.
>>>>>>>> //     Latitude in degrees north and longitude in degrees
east.
>>>>>>>> //     By default, the first and last polygon points are
connected.
>>>>>>>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of
n
>> points:
>>>>>>>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
>>>>>>>> //
>>>>>>>> // (2) The NetCDF output of the gen_poly_mask tool.
>>>>>>>> //
>>>>>>>> // (3) A NetCDF data file, followed by the name of the NetCDF
>> variable
>>>>>>>> //     to be used, and optionally, a threshold to be applied
to the
>>>>>> field.
>>>>>>>> //     e.g. "sample.nc var_name gt0.00"
>>>>>>>> //
>>>>>>>> // (4) A GRIB data file, followed by a description of the
field
>>>>>>>> //     to be used, and optionally, a threshold to be applied
to the
>>>>>> field.
>>>>>>>> //     e.g. "sample.grb APCP/A3 gt0.00"
>>>>>>>> //
>>>>>>>> // Any NetCDF or GRIB file used must have the same grid
dimensions
>> as
>>>>>> the
>>>>>>>> // data being verified.
>>>>>>>> //
>>>>>>>> // MET_BASE may be used in the path for the files above.
>>>>>>>> //
>>>>>>>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
>>>>>>>> //                      "poly_mask.ncf",
>>>>>>>> //                      "sample.nc APCP",
>>>>>>>> //                      "sample.grb HGT/Z0 gt100.0" ];
>>>>>>>> //
>>>>>>>> mask_poly[] = [];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify the name of an ASCII file containing a space-
separated
>> list
>>>>>> of
>>>>>>>> // station ID's at which to perform verification.  Each
station ID
>>>>>>>> specified
>>>>>>>> // is treated as an individual masking region.
>>>>>>>> //
>>>>>>>> // An empty list file name indicates that no station ID masks
should
>>>> be
>>>>>>>> used.
>>>>>>>> //
>>>>>>>> // MET_BASE may be used in the path for the station ID mask
file
>> name.
>>>>>>>> //
>>>>>>>> // e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
>>>>>>>> //
>>>>>>>> mask_sid = "";
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of values for alpha to be
used
>> when
>>>>>>>> computing
>>>>>>>> // confidence intervals.  Values of alpha must be between 0
and 1.
>>>>>>>> //
>>>>>>>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
>>>>>>>> //
>>>>>>>> ci_alpha[] = [ 0.05 ];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify the method to be used for computing bootstrap
confidence
>>>>>>>> intervals.
>>>>>>>> // The value for this is interpreted as follows:
>>>>>>>> //    (0) Use the BCa interval method (computationally
intensive)
>>>>>>>> //    (1) Use the percentile interval method
>>>>>>>> //
>>>>>>>> boot_interval = 1;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a proportion between 0 and 1 to define the
replicate
>> sample
>>>>>> size
>>>>>>>> // to be used when computing percentile intervals.  The
replicate
>>>> sample
>>>>>>>> // size is set to boot_rep_prop * n, where n is the number of
raw
>> data
>>>>>>>> points.
>>>>>>>> //
>>>>>>>> // e.g. boot_rep_prop = 0.80;
>>>>>>>> //
>>>>>>>> boot_rep_prop = 1.0;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify the number of times each set of matched pair data
should
>> be
>>>>>>>> // resampled when computing bootstrap confidence intervals.
A value
>>>> of
>>>>>>>> // zero disables the computation of bootstrap confidence
intervals.
>>>>>>>> //
>>>>>>>> // e.g. n_boot_rep = 1000;
>>>>>>>> //
>>>>>>>> n_boot_rep = 1000;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify the name of the random number generator to be
used.  See
>>>> the
>>>>>> MET
>>>>>>>> // Users Guide for a list of possible random number
generators.
>>>>>>>> //
>>>>>>>> boot_rng = "mt19937";
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify the seed value to be used when computing bootstrap
>>>> confidence
>>>>>>>> // intervals.  If left unspecified, the seed will change for
each
>> run
>>>>>> and
>>>>>>>> // the computed bootstrap confidence intervals will not be
>>>> reproducible.
>>>>>>>> //
>>>>>>>> boot_seed = "";
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of interpolation method(s)
to be
>>>> used
>>>>>>>> // for comparing the forecast grid to the observation points.
>>  String
>>>>>>>> values
>>>>>>>> // are interpreted as follows:
>>>>>>>> //    MIN     = Minimum in the neighborhood
>>>>>>>> //    MAX     = Maximum in the neighborhood
>>>>>>>> //    MEDIAN  = Median in the neighborhood
>>>>>>>> //    UW_MEAN = Unweighted mean in the neighborhood
>>>>>>>> //    DW_MEAN = Distance-weighted mean in the neighborhood
>>>>>>>> //    LS_FIT  = Least-squares fit in the neighborhood
>>>>>>>> //    BILIN   = Bilinear interpolation using the 4 closest
points
>>>>>>>> //
>>>>>>>> // In all cases, vertical interpolation is performed in the
natural
>>>> log
>>>>>>>> // of pressure of the levels above and below the observation.
>>>>>>>> //
>>>>>>>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
>>>>>>>> //
>>>>>>>> interp_method[] = [ "MEDIAN", "DW_MEAN" ];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify a comma-separated list of box widths to be used by
the
>>>>>>>> // interpolation techniques listed above.  A value of 1
indicates
>> that
>>>>>>>> // the nearest neighbor approach should be used.  For a value
of n
>>>>>>>> // greater than 1, the n*n grid points closest to the
observation
>>>> define
>>>>>>>> // the neighborhood.
>>>>>>>> //
>>>>>>>> // e.g. interp_width = [ 1, 3, 5 ];
>>>>>>>> //
>>>>>>>> interp_width[] = [ 1, 3 ];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // When interpolating, compute a ratio of the number of valid
data
>>>>>> points
>>>>>>>> // to the total number of points in the neighborhood.  If
that ratio
>>>> is
>>>>>>>> // less than this threshold, do not include the observation.
This
>>>>>>>> // threshold must be between 0 and 1.  Setting this threshold
to 1
>>>> will
>>>>>>>> // require that each observation be surrounded by n*n valid
forecast
>>>>>>>> // points.
>>>>>>>> //
>>>>>>>> // e.g. interp_thresh = 1.0;
>>>>>>>> //
>>>>>>>> interp_thresh = 1.0;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify flags to indicate the type of data to be output:
>>>>>>>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Forecast Rate (F_RATE),
>>>>>>>> //           Hit Rate (H_RATE),
>>>>>>>> //           Observation Rate (O_RATE)
>>>>>>>> //
>>>>>>>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Forecast Yes and Observation Yes Count (FY_OY),
>>>>>>>> //           Forecast Yes and Observation No Count (FY_ON),
>>>>>>>> //           Forecast No and Observation Yes Count (FN_OY),
>>>>>>>> //           Forecast No and Observation No Count (FN_ON)
>>>>>>>> //
>>>>>>>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Base Rate (BASER),
>>>>>>>> //           Forecast Mean (FMEAN),
>>>>>>>> //           Accuracy (ACC),
>>>>>>>> //           Frequency Bias (FBIAS),
>>>>>>>> //           Probability of Detecting Yes (PODY),
>>>>>>>> //           Probability of Detecting No (PODN),
>>>>>>>> //           Probability of False Detection (POFD),
>>>>>>>> //           False Alarm Ratio (FAR),
>>>>>>>> //           Critical Success Index (CSI),
>>>>>>>> //           Gilbert Skill Score (GSS),
>>>>>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>>>>>> //           Heidke Skill Score (HSS),
>>>>>>>> //           Odds Ratio (ODDS),
>>>>>>>> //           NOTE: All statistics listed above contain
parametric
>>>> and/or
>>>>>>>> //                 non-parametric confidence interval limits.
>>>>>>>> //
>>>>>>>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency
>>>> Table
>>>>>>>> Counts:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Number of Categories (N_CAT),
>>>>>>>> //           Contingency Table Count columns repeated
N_CAT*N_CAT
>>>> times
>>>>>>>> //
>>>>>>>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency
>>>> Table
>>>>>>>> Scores:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Number of Categories (N_CAT),
>>>>>>>> //           Accuracy (ACC),
>>>>>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>>>>>> //           Heidke Skill Score (HSS),
>>>>>>>> //           Gerrity Score (GER),
>>>>>>>> //           NOTE: All statistics listed above contain
parametric
>>>> and/or
>>>>>>>> //                 non-parametric confidence interval limits.
>>>>>>>> //
>>>>>>>> //    (6) STAT and CNT Text Files, Statistics of Continuous
>> Variables:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Forecast Mean (FBAR),
>>>>>>>> //           Forecast Standard Deviation (FSTDEV),
>>>>>>>> //           Observation Mean (OBAR),
>>>>>>>> //           Observation Standard Deviation (OSTDEV),
>>>>>>>> //           Pearson's Correlation Coefficient (PR_CORR),
>>>>>>>> //           Spearman's Rank Correlation Coefficient
(SP_CORR),
>>>>>>>> //           Kendall Tau Rank Correlation Coefficient
(KT_CORR),
>>>>>>>> //           Number of ranks compared (RANKS),
>>>>>>>> //           Number of tied ranks in the forecast field
>> (FRANK_TIES),
>>>>>>>> //           Number of tied ranks in the observation field
>>>> (ORANK_TIES),
>>>>>>>> //           Mean Error (ME),
>>>>>>>> //           Standard Deviation of the Error (ESTDEV),
>>>>>>>> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
>>>>>>>> //           Mean Absolute Error (MAE),
>>>>>>>> //           Mean Squared Error (MSE),
>>>>>>>> //           Bias-Corrected Mean Squared Error (BCMSE),
>>>>>>>> //           Root Mean Squared Error (RMSE),
>>>>>>>> //           Percentiles of the Error (E10, E25, E50, E75,
E90)
>>>>>>>> //           NOTE: Most statistics listed above contain
parametric
>>>>>> and/or
>>>>>>>> //                 non-parametric confidence interval limits.
>>>>>>>> //
>>>>>>>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Forecast Mean (FBAR),
>>>>>>>> //              = mean(f)
>>>>>>>> //           Observation Mean (OBAR),
>>>>>>>> //              = mean(o)
>>>>>>>> //           Forecast*Observation Product Mean (FOBAR),
>>>>>>>> //              = mean(f*o)
>>>>>>>> //           Forecast Squared Mean (FFBAR),
>>>>>>>> //              = mean(f^2)
>>>>>>>> //           Observation Squared Mean (OOBAR)
>>>>>>>> //              = mean(o^2)
>>>>>>>> //
>>>>>>>> //    (8) STAT and SAL1L2 Text Files, Scalar Anomaly Partial
Sums:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Forecast Anomaly Mean (FABAR),
>>>>>>>> //              = mean(f-c)
>>>>>>>> //           Observation Anomaly Mean (OABAR),
>>>>>>>> //              = mean(o-c)
>>>>>>>> //           Product of Forecast and Observation Anomalies
Mean
>>>>>> (FOABAR),
>>>>>>>> //              = mean((f-c)*(o-c))
>>>>>>>> //           Forecast Anomaly Squared Mean (FFABAR),
>>>>>>>> //              = mean((f-c)^2)
>>>>>>>> //           Observation Anomaly Squared Mean (OOABAR)
>>>>>>>> //              = mean((o-c)^2)
>>>>>>>> //
>>>>>>>> //    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           U-Forecast Mean (UFBAR),
>>>>>>>> //              = mean(uf)
>>>>>>>> //           V-Forecast Mean (VFBAR),
>>>>>>>> //              = mean(vf)
>>>>>>>> //           U-Observation Mean (UOBAR),
>>>>>>>> //              = mean(uo)
>>>>>>>> //           V-Observation Mean (VOBAR),
>>>>>>>> //              = mean(vo)
>>>>>>>> //           U-Product Plus V-Product (UVFOBAR),
>>>>>>>> //              = mean(uf*uo+vf*vo)
>>>>>>>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
>>>>>>>> //              = mean(uf^2+vf^2)
>>>>>>>> //           U-Observation Squared Plus V-Observation Squared
>>>> (UVOOBAR)
>>>>>>>> //              = mean(uo^2+vo^2)
>>>>>>>> //
>>>>>>>> //   (10) STAT and VAL1L2 Text Files, Vector Anomaly Partial
Sums:
>>>>>>>> //           U-Forecast Anomaly Mean (UFABAR),
>>>>>>>> //              = mean(uf-uc)
>>>>>>>> //           V-Forecast Anomaly Mean (VFABAR),
>>>>>>>> //              = mean(vf-vc)
>>>>>>>> //           U-Observation Anomaly Mean (UOABAR),
>>>>>>>> //              = mean(uo-uc)
>>>>>>>> //           V-Observation Anomaly Mean (VOABAR),
>>>>>>>> //              = mean(vo-vc)
>>>>>>>> //           U-Anomaly Product Plus V-Anomaly Product
(UVFOABAR),
>>>>>>>> //              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
>>>>>>>> //           U-Forecast Anomaly Squared Plus V-Forecast
Anomaly
>>>> Squared
>>>>>>>> (UVFFABAR),
>>>>>>>> //              = mean((uf-uc)^2+(vf-vc)^2)
>>>>>>>> //           U-Observation Anomaly Squared Plus V-Observation
>> Anomaly
>>>>>>>> Squared (UVOOABAR)
>>>>>>>> //              = mean((uo-uc)^2+(vo-vc)^2)
>>>>>>>> //
>>>>>>>> //   (11) STAT and PCT Text Files, Nx2 Probability
Contingency Table
>>>>>>>> Counts:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>>>> //           Probability Threshold Value (THRESH_i),
>>>>>>>> //           Row Observation Yes Count (OY_i),
>>>>>>>> //           Row Observation No Count (ON_i),
>>>>>>>> //           NOTE: Previous 3 columns repeated for each row
in the
>>>>>> table.
>>>>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>>>>> //
>>>>>>>> //   (12) STAT and PSTD Text Files, Nx2 Probability
Contingency
>> Table
>>>>>>>> Scores:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>>>> //           Base Rate (BASER) with confidence interval
limits,
>>>>>>>> //           Reliability (RELIABILITY),
>>>>>>>> //           Resolution (RESOLUTION),
>>>>>>>> //           Uncertainty (UNCERTAINTY),
>>>>>>>> //           Area Under the ROC Curve (ROC_AUC),
>>>>>>>> //           Brier Score (BRIER) with confidence interval
limits,
>>>>>>>> //           Probability Threshold Value (THRESH_i)
>>>>>>>> //           NOTE: Previous column repeated for each
probability
>>>>>> threshold.
>>>>>>>> //
>>>>>>>> //   (13) STAT and PJC Text Files, Joint/Continuous
Statistics of
>>>>>>>> //                                 Probabilistic Variables:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>>>> //           Probability Threshold Value (THRESH_i),
>>>>>>>> //           Observation Yes Count Divided by Total
(OY_TP_i),
>>>>>>>> //           Observation No Count Divided by Total (ON_TP_i),
>>>>>>>> //           Calibration (CALIBRATION_i),
>>>>>>>> //           Refinement (REFINEMENT_i),
>>>>>>>> //           Likelihood (LIKELIHOOD_i),
>>>>>>>> //           Base Rate (BASER_i),
>>>>>>>> //           NOTE: Previous 7 columns repeated for each row
in the
>>>>>> table.
>>>>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>>>>> //
>>>>>>>> //   (14) STAT and PRC Text Files, ROC Curve Points for
>>>>>>>> //                                 Probabilistic Variables:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>>>> //           Probability Threshold Value (THRESH_i),
>>>>>>>> //           Probability of Detecting Yes (PODY_i),
>>>>>>>> //           Probability of False Detection (POFD_i),
>>>>>>>> //           NOTE: Previous 3 columns repeated for each row
in the
>>>>>> table.
>>>>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>>>>> //
>>>>>>>> //   (15) STAT and MPR Text Files, Matched Pair Data:
>>>>>>>> //           Total (TOTAL),
>>>>>>>> //           Index (INDEX),
>>>>>>>> //           Observation Station ID (OBS_SID),
>>>>>>>> //           Observation Latitude (OBS_LAT),
>>>>>>>> //           Observation Longitude (OBS_LON),
>>>>>>>> //           Observation Level (OBS_LVL),
>>>>>>>> //           Observation Elevation (OBS_ELV),
>>>>>>>> //           Forecast Value (FCST),
>>>>>>>> //           Observation Value (OBS),
>>>>>>>> //           Climatological Value (CLIMO)
>>>>>>>> //
>>>>>>>> //   In the expressions above, f are forecast values, o are
observed
>>>>>>>> values,
>>>>>>>> //   and c are climatological values.
>>>>>>>> //
>>>>>>>> // Values for these flags are interpreted as follows:
>>>>>>>> //    (0) Do not generate output of this type
>>>>>>>> //    (1) Write output to a STAT file
>>>>>>>> //    (2) Write output to a STAT file and a text file
>>>>>>>> //
>>>>>>>> output_flag[] = [ 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1
];
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
>>>>>> Correlation
>>>>>>>> // Coefficients should be computed.  Computing them over
large
>>>> datasets
>>>>>> is
>>>>>>>> // computationally intensive and slows down the runtime
execution
>>>>>>>> significantly.
>>>>>>>> //    (0) Do not compute these correlation coefficients
>>>>>>>> //    (1) Compute these correlation coefficients
>>>>>>>> //
>>>>>>>> rank_corr_flag = 1;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Specify the GRIB Table 2 parameter table version number to
be
>> used
>>>>>>>> // for interpreting GRIB codes.
>>>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>>>>> //
>>>>>>>> grib_ptv = 2;
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Directory where temporary files should be written.
>>>>>>>> //
>>>>>>>> tmp_dir = "/tmp";
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Prefix to be used for the output file names.
>>>>>>>> //
>>>>>>>> output_prefix = "";
>>>>>>>>
>>>>>>>> //
>>>>>>>> // Indicate a version number for the contents of this
configuration
>>>>>> file.
>>>>>>>> // The value should generally not be modified.
>>>>>>>> //
>>>>>>>> version = "V3.0.1";
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>>>
>>
>>
>>


------------------------------------------------
Subject: Re: METV3 Issue
From: Tim Melino
Time: Mon Dec 12 12:59:06 2011

Hi Paul,
I have everything running now and it seems to be working OK.  Just one
final question for you: is there a way to verify 10 meter winds rather than
the individual U and V components?  I saw in the documentation that using
WIND in the config file should do this, but it doesn't seem to work.

- Tim

On Fri, Dec 9, 2011 at 4:46 PM, Paul Oldenburg via RT
<met_help at ucar.edu>wrote:

> Tim,
>
> We did some testing and found that when MET is compiled with the intel
> compilers using the -g (debug) flag, point_stat runs with no error.  You
> can apply this by adding the -g flag to the CXX_FLAGS and FC_FLAGS in
> the top-level MET Makefile and then doing a clean remake.  We can't
> explain why this is the case, but we will do more testing and let you
> know if we turn up anything.  It seems like you turned up a fairly
> subtle bug in either MET or the intel compiler!  Please let me know if
> you have any questions.
>
> Thanks,
>
> Paul
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51928] Re: METV3 Issue
From: Paul Oldenburg
Time: Mon Dec 12 13:32:06 2011

Tim,

MET does not support deriving wind speed (WIND) from p_interp UU and
VV fields at this time.  There are not many MET
users who have p_interp data, so there has not been very much demand
for this derivation to date.  MET will perform this
derivation for GRIB UGRD and VGRD, but that won't help you unless you
use the WRF UPP.

One possible way to verify wind speed in p_interp data would be to
modify the NetCDF model data files, adding a derived
wind speed field that is calculated from UU and VV.  If you choose to
do this, I recommend the R package ncdf.  Also, I
know of a NetCDF package for Java as well, so there are a couple of
options.  Sorry I don't have an easy answer.  Please
let me know if you have any other questions.
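
For reference, a minimal sketch of that approach using the Python
netCDF4 package (one more option besides R's ncdf).  It assumes UU and
VV in the p_interp file are stored on the same unstaggered grid with
identical dimensions; the file name, new variable name, and attributes
below are only placeholders:

from netCDF4 import Dataset
import numpy as np

# Open the p_interp output file for in-place modification.
nc = Dataset("wrf.nc", "a")

uu = nc.variables["UU"]    # east-west wind component
vv = nc.variables["VV"]    # north-south wind component

# Create a new variable with the same type and dimensions as UU.
wspd = nc.createVariable("WSPD", uu.dtype, uu.dimensions)
wspd.units = "m s-1"
wspd.description = "wind speed derived from UU and VV"

# Wind speed = sqrt(u^2 + v^2), computed over the whole array.
wspd[:] = np.sqrt(uu[:]**2 + vv[:]**2)

nc.close()

The derived variable could then be listed in fcst_field using the same
NetCDF syntax as the temperature example (e.g. "WSPD(0,0,*,*)") and
matched against the WIND observations that pb2nc derives.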

Paul


On 12/12/2011 12:59 PM, Tim Melino via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>
> Hi Paul,
> I have everything running now and it seems to be working ok. Just
one final
> question for you. Is there a way to verify 10 meter winds and not
the
> individual U and V components? I saw in the documentation that if you
used WIND
> in the option file it should do this, but it doesn't seem to work.
>
> - Tim
>
> On Fri, Dec 9, 2011 at 4:46 PM, Paul Oldenburg via RT
<met_help at ucar.edu> wrote:
>
>> Tim,
>>
>> We did some testing and found that when MET is compiled with the
intel
>> compilers using the -g (debug) flag, point_stat
>> runs with no error.  You can apply this by adding the -g flag to
the
>> CXX_FLAGS and FC_FLAGS in the top-level MET
>> Makefile and then doing a clean remake.  We can't explain why this
is the
>> case, but we will do more testing and let you
>> know if we turn up anything.  It seems like you turned up a fairly
subtle
>> bug in either MET or the intel compiler!
>> Please let me know if you have any questions.
>>
>> Thanks,
>>
>> Paul
>>
>>
>> On 12/09/2011 02:12 PM, Tim Melino via RT wrote:
>>>
>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>
>>> Thanks, Paul, for all your help on this. I installed everything with
>> gfortran
>>> and gcc, which seems to work. I will switch back to intel when
the issue
>>> is resolved.
>>>
>>> - Tim
>>>
>>> On Fri, Dec 9, 2011 at 4:04 PM, Paul Oldenburg via RT
<met_help at ucar.edu
>>> wrote:
>>>
>>>> Tim,
>>>>
>>>> I was able to reproduce the error you reported when running
METv3.0.1
>>>> compiled with intel compilers.  This may take us a
>>>> little time to sort out.  Thanks for reporting this issue, and
we'll let
>>>> you know when we have a solution for you.
>>>>
>>>> Thanks,
>>>>
>>>> Paul
>>>>
>>>>
>>>> On 12/09/2011 12:49 PM, Tim Melino via RT wrote:
>>>>>
>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>>>
>>>>> Ok,
>>>>> Here are some of the specifications and I will install the
latest patch
>>>>> now.
>>>>>
>>>>> netCDF version 4.1.1
>>>>> INTEL-11.1.072 Compilers
>>>>>
>>>>>
>>>>> - Tim
>>>>>
>>>>>
>>>>> On Fri, Dec 9, 2011 at 1:49 PM, Paul Oldenburg via RT <
>> met_help at ucar.edu
>>>>> wrote:
>>>>>
>>>>>> Tim,
>>>>>>
>>>>>> I'm still not able to reproduce the error that you reported.
Have you
>>>>>> applied all of the latest patches to METv3.0.1?
>>>>>> The latest patch tarball and instructions on how to apply it
can be
>>>> found
>>>>>> here:
>>>>>>
>>>>
>>
http://www.dtcenter.org/met/users/support/known_issues/METv3.0.1/index.php
>>>> .
>>>>>>  Can you tell me what version of NetCDF you
>>>>>> linked MET against?  What family of compilers did you use to
compile
>> MET
>>>>>> (e.g. GNU/PGI/intel)?  I think we are down to a
>>>>>> configuration/environment problem at this point.  Sorry for the
>> trouble.
>>>>>>
>>>>>> Paul
>>>>>>
>>>>>>
>>>>>> On 12/09/2011 11:38 AM, Tim Melino via RT wrote:
>>>>>>>
>>>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928
>
>>>>>>>
>>>>>>> Paul,
>>>>>>> I put everything into a tar file and uploaded it.
>>>>>>>
>>>>>>> - Tim
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Dec 9, 2011 at 11:38 AM, Paul Oldenburg via RT <
>>>>>> met_help at ucar.edu> wrote:
>>>>>>>
>>>>>>>> Tim,
>>>>>>>>
>>>>>>>> We are not able to reproduce the error that you are
reporting.  Are
>>>> you
>>>>>>>> using the same exact data and config files that
>>>>>>>> you sent me and I tested with?  In any case, can you create a
tar
>>>>>> archive
>>>>>>>> of all the files involved in the point_stat
>>>>>>>> command that throws the error and put it on the FTP site?  I
will
>> need
>>>>>> to
>>>>>>>> be able to reproduce this error, otherwise it
>>>>>>>> will be difficult for me to diagnose the problem.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Paul
>>>>>>>>
>>>>>>>>
>>>>>>>> On 12/09/2011 08:16 AM, Tim Melino via RT wrote:
>>>>>>>>>
>>>>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>>>>>>>
>>>>>>>>> Paul,
>>>>>>>>> I tried running again using your configuration settings, but
while
>>>>>>>> running
>>>>>>>>> pointstat I am still receiving errors. The error comes up as
the
>>>>>>>> following
>>>>>>>>> ....
>>>>>>>>>
>>>>>>>>> [wind at conus1 METv3.0.1]$ $MET_BASE/bin/point_stat wrf.nc
>>>>>>>>> ndas.t12z.nc PointStatConfig -outdir . -v 99
>>>>>>>>> GSL_RNG_TYPE=mt19937
>>>>>>>>> GSL_RNG_SEED=18446744071864509006
>>>>>>>>> Forecast File: wrf.nc
>>>>>>>>> Climatology File: none
>>>>>>>>> Configuration File: PointStatConfig
>>>>>>>>> Observation File: ndas.t12z.nc
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
--------------------------------------------------------------------------------
>>>>>>>>>
>>>>>>>>> Reading records for TT(0,0,*,*).
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>   LongArray::operator[](int) -> range check error ... 4
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> - Tim
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Dec 8, 2011 at 5:10 PM, Paul Oldenburg via RT <
>>>>>> met_help at ucar.edu
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Tim,
>>>>>>>>>>
>>>>>>>>>> I ran the following pb2nc and point_stat commands using the
>> attached
>>>>>>>>>> config files to generate point verification data
>>>>>>>>>> with your PrepBUFR obs and p_interp model data.  Note that
>> MET_BASE
>>>> is
>>>>>>>> set
>>>>>>>>>> to the base folder of an instance of
>>>>>>>>>> METv3.0.1.  I pulled both config files, with slight
modifications,
>>>>>> from
>>>>>>>>>> $MET_BASE/scripts/config.
>>>>>>>>>>
>>>>>>>>>> $MET_BASE/bin/pb2nc
>>>>>> ndas.t12z.prepbufr.tm12.nr ndas.t12z.nc PB2NCConfig_G212 -v 99
>>>>>>>>>>
>>>>>>>>>> $MET_BASE/bin/point_stat wrf.nc ndas.t12z.nc
PointStatConfig
>>>> -outdir
>>>>>> .
>>>>>>>> -v
>>>>>>>>>> 99
>>>>>>>>>>
>>>>>>>>>> In PointStatConfig, you will see the following settings.
The
>>>>>> fcst_field
>>>>>>>>>> setting format is due to the fact that fields
>>>>>>>>>> in wrf.nc are four dimensional, with the last two
dimensions
>> being
>>>>>> the
>>>>>>>>>> spatial (x,y) dimensions.  The obs_field
>>>>>>>>>> specifies surface temperature using a GRIB-style format,
because
>>>> pb2nc
>>>>>>>>>> indexes fields in its output by GRIB code.  You
>>>>>>>>>> should follow a similar paradigm to verify additional
fields
>> beyond
>>>>>>>>>> temperature.
>>>>>>>>>>
>>>>>>>>>> fcst_field[] = [ "TT(0,0,*,*)" ];
>>>>>>>>>> obs_field[]  = [ "TMP/Z2" ];
>>>>>>>>>>
>>>>>>>>>> fcst_thresh[] = [ "le273" ];
>>>>>>>>>> obs_thresh[]  = [];
>>>>>>>>>>
>>>>>>>>>> If you have any questions or problems, please let me know.
>>>>>>>>>>
>>>>>>>>>> Good luck,
>>>>>>>>>>
>>>>>>>>>> Paul
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 12/08/2011 02:28 PM, Tim Melino via RT wrote:
>>>>>>>>>>>
>>>>>>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928 >
>>>>>>>>>>>
>>>>>>>>>>> I just put the file on the server that I have been using.
As far
>> as
>>>>>>>>>> running
>>>>>>>>>>> the UPP software, that is not really possible at the
moment. I do
>>>> not
>>>>>>>>>> have
>>>>>>>>>>> any of that software installed or configured as I have
never had
>> a
>>>>>>>> reason
>>>>>>>>>>> to use it.
>>>>>>>>>>>
>>>>>>>>>>> - Tim
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Dec 8, 2011 at 3:42 PM, Paul Oldenburg via RT <
>>>>>>>> met_help at ucar.edu
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Tim,
>>>>>>>>>>>>
>>>>>>>>>>>> Can you please put the input PrepBUFR file that you pass
to
>> pb2nc
>>>> on
>>>>>>>> the
>>>>>>>>>>>> FTP site?  When I look at the contents of
>>>>>>>>>>>> out.nc, it does not appear that there are any
observations in
>>>> that
>>>>>>>>>> file.
>>>>>>>>>>>>  I would like to run pb2nc myself to see what
>>>>>>>>>>>> is going on.
>>>>>>>>>>>>
>>>>>>>>>>>> I made an incorrect assumption in my earlier emails that
you
>> were
>>>>>>>> trying
>>>>>>>>>>>> to verify model data in GRIB format.  Now that
>>>>>>>>>>>> I have your data in hand, I see that it is p_interp
output, as
>> you
>>>>>>>>>>>> mentioned in your initial email.  MET support for
>>>>>>>>>>>> p_interp is not as robust as for GRIB.  In particular,
>>>> grid-relative
>>>>>>>>>> wind
>>>>>>>>>>>> directions in p_interp data files should not
>>>>>>>>>>>> be compared to lat-long relative wind directions in the
PrepBUFR
>>>>>> obs.
>>>>>>>>>>>>  Would it be possible for you to run your WRF
>>>>>>>>>>>> output through the Unified Post Processor (UPP -
>>>>>>>>>>>> http://www.dtcenter.org/wrf-
nmm/users/overview/upp_overview.php
>> )
>>>>>>>>>>>> instead of or in addition to p_interp?  That would
simplify MET
>>>>>>>>>>>> verification tasks.  Please let me know if you have any
>>>>>>>>>>>> questions.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>
>>>>>>>>>>>> Paul
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 12/08/2011 11:07 AM, Tim Melino via RT wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thu Dec 08 11:07:04 2011: Request 51928 was acted upon.
>>>>>>>>>>>>> Transaction: Ticket created by tmelino at meso.com
>>>>>>>>>>>>>        Queue: met_help
>>>>>>>>>>>>>      Subject: Re: METV3 Issue
>>>>>>>>>>>>>        Owner: Nobody
>>>>>>>>>>>>>   Requestors: tmelino at meso.com
>>>>>>>>>>>>>       Status: new
>>>>>>>>>>>>>  Ticket <URL:
>>>>>>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=51928>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ok,
>>>>>>>>>>>>>
>>>>>>>>>>>>> The data should be there now. With out.nc being the obs
and
>>>>>>>> wrf.nc being
>>>>>>>>>>>>> the forecast
>>>>>>>>>>>>>
>>>>>>>>>>>>> - Tim
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Dec 8, 2011 at 9:52 AM, Tim Melino
<tmelino at meso.com>
>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>> I have recently been doing some work with WRF and am
trying to
>>>> add
>>>>>>>> the
>>>>>>>>>>>> the
>>>>>>>>>>>>>> model evaluation tools to our standard model
verification
>>>> system.
>>>>>>  I
>>>>>>>>>>>>>> started the process by running the pressure
interpolation
>>>> program
>>>>>>>> on a
>>>>>>>>>>>>>> single wrfout file, which appeared to finish correctly.
I have
>>>>>>>>>> attached
>>>>>>>>>>>> an
>>>>>>>>>>>>>> ncdump of the file header to this email it is called
>>>>>>>>>>>>>> wrfout_d02_2011-12-07_00:00:00_PLEV.txt. Then I
downloaded a
>>>>>> single
>>>>>>>>>>>> prebufr
>>>>>>>>>>>>>> file from an NCEP repository for the time centered on
the
>>>> forecast
>>>>>>>>>>>> period
>>>>>>>>>>>>>> and ran PB2NC and this also appeared to finish
correctly and
>>>>>> output
>>>>>>>> a
>>>>>>>>>>>>>> single netcdf file, the header information is also
attached
>>>>>>>>>> (PB2NC.txt).
>>>>>>>>>>>>>> Then I attempted to run the point stat utility on these
two
>>>> files
>>>>>>>> but
>>>>>>>>>>>> the
>>>>>>>>>>>>>> program errors out telling me that there are more
forecast
>> field
>>>>>>>> that
>>>>>>>>>>>>>> observational fields "ERROR:
>> PointStatConfInfo::process_config()
>>>>>> ->
>>>>>>>>>> The
>>>>>>>>>>>>>> number fcst_thresh entries provided must match the
number of
>>>>>> fields
>>>>>>>>>>>>>> provided in fcst_field.". I ran the following command
from the
>>>>>>>>>> terminal
>>>>>>>>>>>> to
>>>>>>>>>>>>>> run point stat "bin/point_stat
>>>> wrfout_d02_2011-12-07_00:00:00_PLEV
>>>>>>>>>>>> out.nc
>>>>>>>>>>>>>> PointStatConfig".  I am not sure what the problem is I
have
>> red
>>>>>> the
>>>>>>>>>>>>>> documentation and it appears to be setup correctly but
I am
>> not
>>>>>>>>>>>> completely
>>>>>>>>>>>>>> sure as I have never used this software before.  What
should
>>>> these
>>>>>>>>>>>> namelist
>>>>>>>>>>>>>> fields look like using 2 netcdf files (1.Forecast
1.Obs),
>> trying
>>>>>> to
>>>>>>>>>>>> verify
>>>>>>>>>>>>>> 10 meter winds? I appreciate your help!
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Also ... I ran the test all scripts after compilation ,
and
>> the
>>>>>> code
>>>>>>>>>>>>>> completed successfully with no errors.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks ,
>>>>>>>>>>>>>> Tim
>>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>>>>>> //
>>>>>>>>>> // Default pb2nc configuration file
>>>>>>>>>> //
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Stratify the observation data in the PrepBufr files in
the
>>>>>> following
>>>>>>>>>> // ways:
>>>>>>>>>> //  (1) by message type: supply a list of PrepBufr message
types
>>>>>>>>>> //      to retain (i.e. AIRCFT)
>>>>>>>>>> //  (2) by station id: supply a list of observation
stations to
>>>> retain
>>>>>>>>>> //  (3) by valid time: supply starting and ending times in
form
>>>>>>>>>> //      YYYY-MM-DD HH:MM:SS UTC
>>>>>>>>>> //  (4) by location: supply either an NCEP masking grid, a
masking
>>>>>>>>>> //      lat/lon polygon, or a file containing a lat/lon polygon mask
>>>>>>>>>> //  (5) by elevation: supply min/max elevation values
>>>>>>>>>> //  (6) by report type (typ): supply a list of report types
to
>>>> retain
>>>>>>>>>> //  (7) by instrument type (itp): supply a list of
instrument type
>>>> to
>>>>>>>>>> //      retain
>>>>>>>>>> //  (8) by vertical level: supply min/max vertical levels
>>>>>>>>>> //  (9) by variable type: supply a list of variable types
to
>> retain
>>>>>>>>>> //      P, Q, T, Z, U, V
>>>>>>>>>> // (11) by quality mark: supply a quality mark threshold
>>>>>>>>>> // (12) Flag to retain values for all quality marks, or
just the
>>>> first
>>>>>>>>>> //      quality mark (highest)
>>>>>>>>>> // (13) by data level category: supply a list of category
types to
>>>>>>>>>> //      retain.
>>>>>>>>>> //
>>>>>>>>>> //      0 - Surface level (mass reports only)
>>>>>>>>>> //      1 - Mandatory level (upper-air profile reports)
>>>>>>>>>> //      2 - Significant temperature level (upper-air
profile
>>>> reports)
>>>>>>>>>> //      2 - Significant temperature and winds-by-pressure
level
>>>>>>>>>> //          (future combined mass and wind upper-air
reports)
>>>>>>>>>> //      3 - Winds-by-pressure level (upper-air profile
reports)
>>>>>>>>>> //      4 - Winds-by-height level (upper-air profile
reports)
>>>>>>>>>> //      5 - Tropopause level (upper-air profile reports)
>>>>>>>>>> //      6 - Reports on a single level
>>>>>>>>>> //          (e.g., aircraft, satellite-wind, surface wind,
>>>>>>>>>> //           precipitable water retrievals, etc.)
>>>>>>>>>> //      7 - Auxiliary levels generated via interpolation
from
>>>> spanning
>>>>>>>>>> levels
>>>>>>>>>> //          (upper-air profile reports)
>>>>>>>>>> //
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of PrepBufr message type
strings
>>>> to
>>>>>>>>>> retain.
>>>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>>>> // List of valid message types:
>>>>>>>>>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
>>>>>>>>>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
>>>>>>>>>> //    SFCSHP SPSSMI SYNDAT VADWND
>>>>>>>>>> //    ANYAIR (= AIRCAR, AIRCFT)
>>>>>>>>>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
>>>>>>>>>> //    ONLYSF (= ADPSFC, SFCSHP)
>>>>>>>>>> //
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>>>>>>>> //
>>>>>>>>>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
>>>>>>>>>> //
>>>>>>>>>> message_type[] = [];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of station ID strings to
retain.
>>>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>>>> //
>>>>>>>>>> // e.g. station_id[] = [ "KDEN" ];
>>>>>>>>>> //
>>>>>>>>>> station_id[] = [];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Beginning and ending time offset values in seconds for
>>>> observations
>>>>>>>>>> // to retain.  The valid time window for retaining
observations is
>>>>>>>>>> // defined in reference to the observation time.  So
observations
>>>> with
>>>>>>>>>> // a valid time falling in the window [obs_time+beg_ds,
>>>>>> obs_time+end_ds]
>>>>>>>>>> // will be retained.
>>>>>>>>>> //
>>>>>>>>>> beg_ds = -1800;
>>>>>>>>>> end_ds =  1800;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify the name of a single grid to be used in masking
the
>> data.
>>>>>>>>>> // An empty string indicates that no grid should be used.
The
>>>>>> standard
>>>>>>>>>> // NCEP grids are named "GNNN" where NNN indicates the
three digit
>>>>>> grid
>>>>>>>>>> number.
>>>>>>>>>> //
>>>>>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>>>>>>>>>> //
>>>>>>>>>> // e.g. mask_grid = "G212";
>>>>>>>>>> //
>>>>>>>>>> mask_grid = "G212";
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a single ASCII file containing a lat/lon
polygon.
>>>>>>>>>> // Latitude in degrees north and longitude in degrees east.
>>>>>>>>>> // By default, the first and last polygon points are
connected.
>>>>>>>>>> //
>>>>>>>>>> // The lat/lon polygon file should contain a name for the
polygon
>>>>>>>>>> // followed by a space-separated list of lat/lon points:
>>>>>>>>>> //    "name lat1 lon1 lat2 lon2... latn lonn"
>>>>>>>>>> //
>>>>>>>>>> // MET_BASE may be used in the path for the lat/lon polygon
file.
>>>>>>>>>> //
>>>>>>>>>> // e.g. mask_poly = "MET_BASE/data/poly/EAST.poly";
>>>>>>>>>> //
>>>>>>>>>> mask_poly = "";
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Beginning and ending elevation values in meters for
>> observations
>>>>>>>>>> // to retain.
>>>>>>>>>> //
>>>>>>>>>> beg_elev = -1000;
>>>>>>>>>> end_elev = 100000;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of PrepBufr report type
values
>> to
>>>>>>>> retain.
>>>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>>>> //
>>>>>>>>>> //
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_4.htm
>>>>>>>>>> //
>>>>>>>>>> // e.g. pb_report_type[] = [ 120, 133 ];
>>>>>>>>>> //
>>>>>>>>>> pb_report_type[] = [];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of input report type
values to
>>>>>> retain.
>>>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>>>> //
>>>>>>>>>> //
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_6.htm
>>>>>>>>>> //
>>>>>>>>>> // e.g. in_report_type[] = [ 11, 22, 23 ];
>>>>>>>>>> //
>>>>>>>>>> in_report_type[] = [];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of instrument type values
to
>>>> retain.
>>>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>>>> //
>>>>>>>>>> // e.g. instrument_type[] = [ 52, 87 ];
>>>>>>>>>> //
>>>>>>>>>> instrument_type[] = [];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Beginning and ending vertical levels to retain.
>>>>>>>>>> //
>>>>>>>>>> beg_level = 1;
>>>>>>>>>> end_level = 255;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of strings containing
grib codes
>>>> or
>>>>>>>>>> // corresponding grib code abbreviations to retain or be
derived
>>>> from
>>>>>>>>>> // the available observations.
>>>>>>>>>> //
>>>>>>>>>> // Grib Codes to be RETAINED:
>>>>>>>>>> //    SPFH or 51 for Specific Humidity in kg/kg
>>>>>>>>>> //    TMP  or 11 for Temperature in K
>>>>>>>>>> //    HGT  or 7  for Height in meters
>>>>>>>>>> //    UGRD or 33 for the East-West component of the wind in
m/s
>>>>>>>>>> //    VGRD or 34 for the North-South component of the wind
in m/s
>>>>>>>>>> //
>>>>>>>>>> // Grib Codes to be DERIVED:
>>>>>>>>>> //    DPT   or 17 for Dewpoint Temperature in K
>>>>>>>>>> //    WIND  or 32 for Wind Speed in m/s
>>>>>>>>>> //    RH    or 52 for Relative Humidity in %
>>>>>>>>>> //    MIXR  or 53 for Humidity Mixing Ratio in kg/kg
>>>>>>>>>> //    PRMSL or  2 for Pressure Reduced to Mean Sea Level in
Pa
>>>>>>>>>> //
>>>>>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>>>>>>> //
>>>>>>>>>> // e.g. obs_grib_code[] = [ "TMP", "UGRD", "VGRD", "WIND"
];
>>>>>>>>>> //
>>>>>>>>>> obs_grib_code[] = [ "SPFH", "TMP",  "HGT",  "UGRD", "VGRD",
>>>>>>>>>>                    "DPT",  "WIND", "RH",   "MIXR" ];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Quality mark threshold to indicate which observations to
>> retain.
>>>>>>>>>> // Observations with a quality mark equal to or LESS THAN
this
>>>>>> threshold
>>>>>>>>>> // will be retained, while observations with a quality mark
>> GREATER
>>>>>> THAN
>>>>>>>>>> // this threshold will be discarded.
>>>>>>>>>> //
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_7.htm
>>>>>>>>>> //
>>>>>>>>>> quality_mark_thresh = 2;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Flag to indicate whether observations should be drawn
from the
>>>> top
>>>>>>>>>> // of the event stack (most quality controlled) or the
bottom of
>> the
>>>>>>>>>> // event stack (most raw).  A value of 1 indicates that the
top of
>>>> the
>>>>>>>>>> // event stack should be used while a value of zero
indicates that
>>>> the
>>>>>>>>>> // bottom should be used.
>>>>>>>>>> //
>>>>>>>>>> event_stack_flag = 1;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of data level category
values to
>>>>>> retain,
>>>>>>>>>> // where a value of:
>>>>>>>>>> //    0 = Surface level (mass reports only)
>>>>>>>>>> //    1 = Mandatory level (upper-air profile reports)
>>>>>>>>>> //    2 = Significant temperature level (upper-air profile
>> reports)
>>>>>>>>>> //    2 = Significant temperature and winds-by-pressure
level
>>>>>>>>>> //        (future combined mass and wind upper-air reports)
>>>>>>>>>> //    3 = Winds-by-pressure level (upper-air profile
reports)
>>>>>>>>>> //    4 = Winds-by-height level (upper-air profile reports)
>>>>>>>>>> //    5 = Tropopause level (upper-air profile reports)
>>>>>>>>>> //    6 = Reports on a single level
>>>>>>>>>> //        (e.g., aircraft, satellite-wind, surface wind,
>>>>>>>>>> //         precipitable water retrievals, etc.)
>>>>>>>>>> //    7 = Auxiliary levels generated via interpolation from
>> spanning
>>>>>>>> levels
>>>>>>>>>> //        (upper-air profile reports)
>>>>>>>>>> // An empty list indicates that all should be retained.
>>>>>>>>>> //
>>>>>>>>>> //
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>>>>>>>> //
>>>>>>>>>> // e.g. level_category[] = [ 0, 1 ];
>>>>>>>>>> //
>>>>>>>>>> level_category[] = [];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Directory where temp files should be written by the
PB2NC tool
>>>>>>>>>> //
>>>>>>>>>> tmp_dir = "/tmp";
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Indicate a version number for the contents of this
>> configuration
>>>>>>>> file.
>>>>>>>>>> // The value should generally not be modified.
>>>>>>>>>> //
>>>>>>>>>> version = "V3.0";
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>>>>>> //
>>>>>>>>>> // Default point_stat configuration file
>>>>>>>>>> //
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
////////////////////////////////////////////////////////////////////////////////
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a name to designate the model being verified.
This
>> name
>>>>>>>> will be
>>>>>>>>>> // written to the second column of the ASCII output
generated.
>>>>>>>>>> //
>>>>>>>>>> model = "WRF";
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Beginning and ending time offset values in seconds for
>>>> observations
>>>>>>>>>> // to be used.  These time offsets are defined in reference
to the
>>>>>>>>>> // forecast valid time, v.  Observations with a valid time
falling
>>>> in
>>>>>>>> the
>>>>>>>>>> // window [v+beg_ds, v+end_ds] will be used.
>>>>>>>>>> // These selections are overridden by the command line
arguments
>>>>>>>>>> // -obs_valid_beg and -obs_valid_end.
>>>>>>>>>> //
>>>>>>>>>> beg_ds = -1800;
>>>>>>>>>> end_ds =  1800;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of fields to be verified.
The
>>>>>>>> forecast
>>>>>>>>>> and
>>>>>>>>>> // observation fields may be specified separately.  If the
>> obs_field
>>>>>>>>>> parameter
>>>>>>>>>> // is left blank, it will default to the contents of
fcst_field.
>>>>>>>>>> //
>>>>>>>>>> // Each field is specified as a GRIB code or abbreviation
followed
>>>> by
>>>>>> an
>>>>>>>>>> // accumulation or vertical level indicator for GRIB files
or as a
>>>>>>>>>> variable name
>>>>>>>>>> // followed by a list of dimensions for NetCDF files output
from
>>>>>>>> p_interp
>>>>>>>>>> or MET.
>>>>>>>>>> //
>>>>>>>>>> // Specifying verification fields for GRIB files:
>>>>>>>>>> //    GC/ANNN for accumulation interval NNN
>>>>>>>>>> //    GC/ZNNN for vertical level NNN
>>>>>>>>>> //    GC/ZNNN-NNN for a range of vertical levels (MSL or
AGL)
>>>>>>>>>> //    GC/PNNN for pressure level NNN in hPa
>>>>>>>>>> //    GC/PNNN-NNN for a range of pressure levels in hPa
>>>>>>>>>> //    GC/LNNN for a generic level type
>>>>>>>>>> //    GC/RNNN for a specific GRIB record number
>>>>>>>>>> //    Where GC is the number of or abbreviation for the
grib code
>>>>>>>>>> //    to be verified.
>>>>>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>>>>>>> //
>>>>>>>>>> // Specifying verification fields for NetCDF files:
>>>>>>>>>> //    var_name(i,...,j,*,*) for a single field
>>>>>>>>>> //    var_name(i-j,*,*) for a range of fields
>>>>>>>>>> //    Where var_name is the name of the NetCDF variable,
>>>>>>>>>> //    and i,...,j specifies fixed dimension values,
>>>>>>>>>> //    and i-j specifies a range of values for a single
dimension,
>>>>>>>>>> //    and *,* specifies the two dimensions for the gridded
field.
>>>>>>>>>> //
>>>>>>>>>> //    NOTE: To verify winds as vectors rather than scalars,
>>>>>>>>>> //          specify UGRD (or 33) followed by VGRD (or 34)
with the
>>>>>>>>>> //          same level values.
>>>>>>>>>> //
>>>>>>>>>> //    NOTE: To process a probability field, add "/PROB",
such as
>>>>>>>>>> "POP/Z0/PROB".
>>>>>>>>>> //
>>>>>>>>>> // e.g. fcst_field[] = [ "SPFH/P500", "TMP/P500" ]; for a
GRIB
>> input
>>>>>>>>>> // e.g. fcst_field[] = [ "QVAPOR(0,5,*,*)", "TT(0,5,*,*)"
]; for
>>>>>> NetCDF
>>>>>>>>>> input
>>>>>>>>>> //
>>>>>>>>>>
>>>>>>>>>> fcst_field[] = [ "TT(0,0,*,*)" ];
>>>>>>>>>> obs_field[]  = [ "TMP/Z2" ];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of groups of thresholds
to be
>>>>>> applied
>>>>>>>> to
>>>>>>>>>> the
>>>>>>>>>> // fields listed above.  Thresholds for the forecast and
>> observation
>>>>>>>> fields
>>>>>>>>>> // may be specified separately.  If the obs_thresh
parameter is
>> left
>>>>>>>> blank,
>>>>>>>>>> // it will default to the contents of fcst_thresh.
>>>>>>>>>> //
>>>>>>>>>> // At least one threshold must be provided for each field
listed
>>>>>> above.
>>>>>>>>>>  The
>>>>>>>>>> // lengths of the "fcst_field" and "fcst_thresh" arrays
must
>> match,
>>>> as
>>>>>>>> must
>>>>>>>>>> // lengths of the "obs_field" and "obs_thresh" arrays.  To
apply
>>>>>>>> multiple
>>>>>>>>>> // thresholds to a field, separate the threshold values
with a
>>>> space.
>>>>>>>>>> //
>>>>>>>>>> // Each threshold must be preceded by a two letter
indicator for
>> the
>>>>>>>> type
>>>>>>>>>> of
>>>>>>>>>> // thresholding to be performed:
>>>>>>>>>> //    'lt' for less than     'le' for less than or equal to
>>>>>>>>>> //    'eq' for equal to      'ne' for not equal to
>>>>>>>>>> //    'gt' for greater than  'ge' for greater than or equal
to
>>>>>>>>>> //
>>>>>>>>>> // NOTE: Thresholds for probabilities must begin with 0.0,
end
>> with
>>>>>> 1.0,
>>>>>>>>>> //       and be preceded by "ge".
>>>>>>>>>> //
>>>>>>>>>> // e.g. fcst_thresh[] = [ "gt80", "gt273" ];
>>>>>>>>>> //
>>>>>>>>>> fcst_thresh[] = [ "le273" ];
>>>>>>>>>> obs_thresh[]  = [];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of thresholds to be used
when
>>>>>>>> computing
>>>>>>>>>> // VL1L2 and VAL1L2 partial sums for winds.  The thresholds
are
>>>>>> applied
>>>>>>>> to
>>>>>>>>>> the
>>>>>>>>>> // wind speed values derived from each U/V pair.  Only
those U/V
>>>> pairs
>>>>>>>>>> which meet
>>>>>>>>>> // the wind speed threshold criteria are retained.  If the
>>>>>>>> obs_wind_thresh
>>>>>>>>>> // parameter is left blank, it will default to the contents
of
>>>>>>>>>> fcst_wind_thresh.
>>>>>>>>>> //
>>>>>>>>>> // To apply multiple wind speed thresholds, separate the
threshold
>>>>>>>> values
>>>>>>>>>> with a
>>>>>>>>>> // space.  Use "NA" to indicate that no wind speed
threshold
>> should
>>>> be
>>>>>>>>>> applied.
>>>>>>>>>> //
>>>>>>>>>> // Each threshold must be preceded by a two letter
indicator for
>> the
>>>>>>>> type
>>>>>>>>>> of
>>>>>>>>>> // thresholding to be performed:
>>>>>>>>>> //    'lt' for less than     'le' for less than or equal to
>>>>>>>>>> //    'eq' for equal to      'ne' for not equal to
>>>>>>>>>> //    'gt' for greater than  'ge' for greater than or equal
to
>>>>>>>>>> //    'NA' for no threshold
>>>>>>>>>> //
>>>>>>>>>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
>>>>>>>>>> //
>>>>>>>>>> fcst_wind_thresh[] = [ "NA" ];
>>>>>>>>>> obs_wind_thresh[]  = [];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of PrepBufr message types
with
>>>> which
>>>>>>>>>> // to perform the verification.  Statistics will be
computed
>>>>>> separately
>>>>>>>>>> // for each message type specified.  At least one PrepBufr
message
>>>>>> type
>>>>>>>>>> // must be provided.
>>>>>>>>>> // List of valid message types:
>>>>>>>>>> //    ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW
>>>>>>>>>> //    MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG
>>>>>>>>>> //    SFCSHP SPSSMI SYNDAT VADWND
>>>>>>>>>> //    ANYAIR (= AIRCAR, AIRCFT)
>>>>>>>>>> //    ANYSFC (= ADPSFC, SFCSHP, ADPUPA, PROFLR)
>>>>>>>>>> //    ONLYSF (= ADPSFC, SFCSHP)
>>>>>>>>>> //
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/table_1.htm
>>>>>>>>>> //
>>>>>>>>>> // e.g. message_type[] = [ "ADPUPA", "AIRCAR" ];
>>>>>>>>>> //
>>>>>>>>>> message_type[] = [ "ADPSFC" ];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of grids to be used in
masking
>> the
>>>>>>>> data
>>>>>>>>>> over
>>>>>>>>>> // which to perform scoring.  An empty list indicates that
no
>>>> masking
>>>>>>>> grid
>>>>>>>>>> // should be performed.  The standard NCEP grids are named
"GNNN"
>>>>>> where
>>>>>>>> NNN
>>>>>>>>>> // indicates the three digit grid number.  Enter "FULL" to
score
>>>> over
>>>>>>>> the
>>>>>>>>>> // entire domain.
>>>>>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>>>>>>>>>> //
>>>>>>>>>> // e.g. mask_grid[] = [ "FULL" ];
>>>>>>>>>> //
>>>>>>>>>> mask_grid[] = [ "FULL" ];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of masking regions to be
>> applied.
>>>>>>>>>> // An empty list indicates that no additional masks should
be
>> used.
>>>>>>>>>> // The masking regions may be defined in one of 4 ways:
>>>>>>>>>> //
>>>>>>>>>> // (1) An ASCII file containing a lat/lon polygon.
>>>>>>>>>> //     Latitude in degrees north and longitude in degrees
east.
>>>>>>>>>> //     By default, the first and last polygon points are
>> connected.
>>>>>>>>>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists
of n
>>>> points:
>>>>>>>>>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
>>>>>>>>>> //
>>>>>>>>>> // (2) The NetCDF output of the gen_poly_mask tool.
>>>>>>>>>> //
>>>>>>>>>> // (3) A NetCDF data file, followed by the name of the
NetCDF
>>>> variable
>>>>>>>>>> //     to be used, and optionally, a threshold to be
applied to
>> the
>>>>>>>> field.
>>>>>>>>>> //     e.g. "sample.nc var_name gt0.00"
>>>>>>>>>> //
>>>>>>>>>> // (4) A GRIB data file, followed by a description of the
field
>>>>>>>>>> //     to be used, and optionally, a threshold to be
applied to
>> the
>>>>>>>> field.
>>>>>>>>>> //     e.g. "sample.grb APCP/A3 gt0.00"
>>>>>>>>>> //
>>>>>>>>>> // Any NetCDF or GRIB file used must have the same grid
dimensions
>>>> as
>>>>>>>> the
>>>>>>>>>> // data being verified.
>>>>>>>>>> //
>>>>>>>>>> // MET_BASE may be used in the path for the files above.
>>>>>>>>>> //
>>>>>>>>>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
>>>>>>>>>> //                      "poly_mask.ncf",
>>>>>>>>>> //                      "sample.nc APCP",
>>>>>>>>>> //                      "sample.grb HGT/Z0 gt100.0" ];
>>>>>>>>>> //
>>>>>>>>>> mask_poly[] = [];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify the name of an ASCII file containing a space-
separated
>>>> list
>>>>>>>> of
>>>>>>>>>> // station ID's at which to perform verification.  Each
station ID
>>>>>>>>>> specified
>>>>>>>>>> // is treated as an individual masking region.
>>>>>>>>>> //
>>>>>>>>>> // An empty list file name indicates that no station ID
masks
>> should
>>>>>> be
>>>>>>>>>> used.
>>>>>>>>>> //
>>>>>>>>>> // MET_BASE may be used in the path for the station ID mask
file
>>>> name.
>>>>>>>>>> //
>>>>>>>>>> // e.g. mask_sid = "MET_BASE/data/stations/CONUS.stations";
>>>>>>>>>> //
>>>>>>>>>> mask_sid = "";
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of values for alpha to be
used
>>>> when
>>>>>>>>>> computing
>>>>>>>>>> // confidence intervals.  Values of alpha must be between 0
and 1.
>>>>>>>>>> //
>>>>>>>>>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
>>>>>>>>>> //
>>>>>>>>>> ci_alpha[] = [ 0.05 ];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify the method to be used for computing bootstrap
>> confidence
>>>>>>>>>> intervals.
>>>>>>>>>> // The value for this is interpreted as follows:
>>>>>>>>>> //    (0) Use the BCa interval method (computationally
intensive)
>>>>>>>>>> //    (1) Use the percentile interval method
>>>>>>>>>> //
>>>>>>>>>> boot_interval = 1;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a proportion between 0 and 1 to define the
replicate
>>>> sample
>>>>>>>> size
>>>>>>>>>> // to be used when computing percentile intervals.  The
replicate
>>>>>> sample
>>>>>>>>>> // size is set to boot_rep_prop * n, where n is the number
of raw
>>>> data
>>>>>>>>>> points.
>>>>>>>>>> //
>>>>>>>>>> // e.g. boot_rep_prop = 0.80;
>>>>>>>>>> //
>>>>>>>>>> boot_rep_prop = 1.0;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify the number of times each set of matched pair
data
>> should
>>>> be
>>>>>>>>>> // resampled when computing bootstrap confidence intervals.
A
>> value
>>>>>> of
>>>>>>>>>> // zero disables the computation of bootstrap confidence
>> intervals.
>>>>>>>>>> //
>>>>>>>>>> // e.g. n_boot_rep = 1000;
>>>>>>>>>> //
>>>>>>>>>> n_boot_rep = 1000;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify the name of the random number generator to be
used.
>>  See
>>>>>> the
>>>>>>>> MET
>>>>>>>>>> // Users Guide for a list of possible random number
generators.
>>>>>>>>>> //
>>>>>>>>>> boot_rng = "mt19937";
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify the seed value to be used when computing
bootstrap
>>>>>> confidence
>>>>>>>>>> // intervals.  If left unspecified, the seed will change
for each
>>>> run
>>>>>>>> and
>>>>>>>>>> // the computed bootstrap confidence intervals will not be
>>>>>> reproducible.
>>>>>>>>>> //
>>>>>>>>>> boot_seed = "";
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of interpolation
method(s) to be
>>>>>> used
>>>>>>>>>> // for comparing the forecast grid to the observation
points.
>>>>  String
>>>>>>>>>> values
>>>>>>>>>> // are interpreted as follows:
>>>>>>>>>> //    MIN     = Minimum in the neighborhood
>>>>>>>>>> //    MAX     = Maximum in the neighborhood
>>>>>>>>>> //    MEDIAN  = Median in the neighborhood
>>>>>>>>>> //    UW_MEAN = Unweighted mean in the neighborhood
>>>>>>>>>> //    DW_MEAN = Distance-weighted mean in the neighborhood
>>>>>>>>>> //    LS_FIT  = Least-squares fit in the neighborhood
>>>>>>>>>> //    BILIN   = Bilinear interpolation using the 4 closest
points
>>>>>>>>>> //
>>>>>>>>>> // In all cases, vertical interpolation is performed in the
>> natural
>>>>>> log
>>>>>>>>>> // of pressure of the levels above and below the
observation.
>>>>>>>>>> //
>>>>>>>>>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
>>>>>>>>>> //
>>>>>>>>>> interp_method[] = [ "MEDIAN", "DW_MEAN" ];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify a comma-separated list of box widths to be used
by the
>>>>>>>>>> // interpolation techniques listed above.  A value of 1
indicates
>>>> that
>>>>>>>>>> // the nearest neighbor approach should be used.  For a
value of n
>>>>>>>>>> // greater than 1, the n*n grid points closest to the
observation
>>>>>> define
>>>>>>>>>> // the neighborhood.
>>>>>>>>>> //
>>>>>>>>>> // e.g. interp_width = [ 1, 3, 5 ];
>>>>>>>>>> //
>>>>>>>>>> interp_width[] = [ 1, 3 ];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // When interpolating, compute a ratio of the number of
valid data
>>>>>>>> points
>>>>>>>>>> // to the total number of points in the neighborhood.  If
that
>> ratio
>>>>>> is
>>>>>>>>>> // less than this threshold, do not include the
observation.  This
>>>>>>>>>> // threshold must be between 0 and 1.  Setting this
threshold to 1
>>>>>> will
>>>>>>>>>> // require that each observation be surrounded by n*n valid
>> forecast
>>>>>>>>>> // points.
>>>>>>>>>> //
>>>>>>>>>> // e.g. interp_thresh = 1.0;
>>>>>>>>>> //
>>>>>>>>>> interp_thresh = 1.0;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify flags to indicate the type of data to be output:
>>>>>>>>>> //    (1) STAT and FHO Text Files, Forecast, Hit,
Observation
>> Rates:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Forecast Rate (F_RATE),
>>>>>>>>>> //           Hit Rate (H_RATE),
>>>>>>>>>> //           Observation Rate (O_RATE)
>>>>>>>>>> //
>>>>>>>>>> //    (2) STAT and CTC Text Files, Contingency Table
Counts:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Forecast Yes and Observation Yes Count
(FY_OY),
>>>>>>>>>> //           Forecast Yes and Observation No Count (FY_ON),
>>>>>>>>>> //           Forecast No and Observation Yes Count (FN_OY),
>>>>>>>>>> //           Forecast No and Observation No Count (FN_ON)
>>>>>>>>>> //
>>>>>>>>>> //    (3) STAT and CTS Text Files, Contingency Table
Scores:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Base Rate (BASER),
>>>>>>>>>> //           Forecast Mean (FMEAN),
>>>>>>>>>> //           Accuracy (ACC),
>>>>>>>>>> //           Frequency Bias (FBIAS),
>>>>>>>>>> //           Probability of Detecting Yes (PODY),
>>>>>>>>>> //           Probability of Detecting No (PODN),
>>>>>>>>>> //           Probability of False Detection (POFD),
>>>>>>>>>> //           False Alarm Ratio (FAR),
>>>>>>>>>> //           Critical Success Index (CSI),
>>>>>>>>>> //           Gilbert Skill Score (GSS),
>>>>>>>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>>>>>>>> //           Heidke Skill Score (HSS),
>>>>>>>>>> //           Odds Ratio (ODDS),
>>>>>>>>>> //           NOTE: All statistics listed above contain
parametric
>>>>>> and/or
>>>>>>>>>> //                 non-parametric confidence interval
limits.
>>>>>>>>>> //
>>>>>>>>>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency
>>>>>> Table
>>>>>>>>>> Counts:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Number of Categories (N_CAT),
>>>>>>>>>> //           Contingency Table Count columns repeated
N_CAT*N_CAT
>>>>>> times
>>>>>>>>>> //
>>>>>>>>>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency
>>>>>> Table
>>>>>>>>>> Scores:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Number of Categories (N_CAT),
>>>>>>>>>> //           Accuracy (ACC),
>>>>>>>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>>>>>>>> //           Heidke Skill Score (HSS),
>>>>>>>>>> //           Gerrity Score (GER),
>>>>>>>>>> //           NOTE: All statistics listed above contain
parametric
>>>>>> and/or
>>>>>>>>>> //                 non-parametric confidence interval
limits.
>>>>>>>>>> //
>>>>>>>>>> //    (6) STAT and CNT Text Files, Statistics of Continuous
>>>> Variables:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Forecast Mean (FBAR),
>>>>>>>>>> //           Forecast Standard Deviation (FSTDEV),
>>>>>>>>>> //           Observation Mean (OBAR),
>>>>>>>>>> //           Observation Standard Deviation (OSTDEV),
>>>>>>>>>> //           Pearson's Correlation Coefficient (PR_CORR),
>>>>>>>>>> //           Spearman's Rank Correlation Coefficient
(SP_CORR),
>>>>>>>>>> //           Kendall Tau Rank Correlation Coefficient
(KT_CORR),
>>>>>>>>>> //           Number of ranks compared (RANKS),
>>>>>>>>>> //           Number of tied ranks in the forecast field
>>>> (FRANK_TIES),
>>>>>>>>>> //           Number of tied ranks in the observation field
>>>>>> (ORANK_TIES),
>>>>>>>>>> //           Mean Error (ME),
>>>>>>>>>> //           Standard Deviation of the Error (ESTDEV),
>>>>>>>>>> //           Multiplicative Bias (MBIAS = FBAR/OBAR),
>>>>>>>>>> //           Mean Absolute Error (MAE),
>>>>>>>>>> //           Mean Squared Error (MSE),
>>>>>>>>>> //           Bias-Corrected Mean Squared Error (BCMSE),
>>>>>>>>>> //           Root Mean Squared Error (RMSE),
>>>>>>>>>> //           Percentiles of the Error (E10, E25, E50, E75,
E90)
>>>>>>>>>> //           NOTE: Most statistics listed above contain
parametric
>>>>>>>> and/or
>>>>>>>>>> //                 non-parametric confidence interval
limits.
>>>>>>>>>> //
>>>>>>>>>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Forecast Mean (FBAR),
>>>>>>>>>> //              = mean(f)
>>>>>>>>>> //           Observation Mean (OBAR),
>>>>>>>>>> //              = mean(o)
>>>>>>>>>> //           Forecast*Observation Product Mean (FOBAR),
>>>>>>>>>> //              = mean(f*o)
>>>>>>>>>> //           Forecast Squared Mean (FFBAR),
>>>>>>>>>> //              = mean(f^2)
>>>>>>>>>> //           Observation Squared Mean (OOBAR)
>>>>>>>>>> //              = mean(o^2)
>>>>>>>>>> //
>>>>>>>>>> //    (8) STAT and SAL1L2 Text Files, Scalar Anomaly
Partial Sums:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Forecast Anomaly Mean (FABAR),
>>>>>>>>>> //              = mean(f-c)
>>>>>>>>>> //           Observation Anomaly Mean (OABAR),
>>>>>>>>>> //              = mean(o-c)
>>>>>>>>>> //           Product of Forecast and Observation Anomalies
Mean
>>>>>>>> (FOABAR),
>>>>>>>>>> //              = mean((f-c)*(o-c))
>>>>>>>>>> //           Forecast Anomaly Squared Mean (FFABAR),
>>>>>>>>>> //              = mean((f-c)^2)
>>>>>>>>>> //           Observation Anomaly Squared Mean (OOABAR)
>>>>>>>>>> //              = mean((o-c)^2)
>>>>>>>>>> //
>>>>>>>>>> //    (9) STAT and VL1L2 Text Files, Vector Partial Sums:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           U-Forecast Mean (UFBAR),
>>>>>>>>>> //              = mean(uf)
>>>>>>>>>> //           V-Forecast Mean (VFBAR),
>>>>>>>>>> //              = mean(vf)
>>>>>>>>>> //           U-Observation Mean (UOBAR),
>>>>>>>>>> //              = mean(uo)
>>>>>>>>>> //           V-Observation Mean (VOBAR),
>>>>>>>>>> //              = mean(vo)
>>>>>>>>>> //           U-Product Plus V-Product (UVFOBAR),
>>>>>>>>>> //              = mean(uf*uo+vf*vo)
>>>>>>>>>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
>>>>>>>>>> //              = mean(uf^2+vf^2)
>>>>>>>>>> //           U-Observation Squared Plus V-Observation
Squared
>>>>>> (UVOOBAR)
>>>>>>>>>> //              = mean(uo^2+vo^2)
>>>>>>>>>> //
>>>>>>>>>> //   (10) STAT and VAL1L2 Text Files, Vector Anomaly
Partial Sums:
>>>>>>>>>> //           U-Forecast Anomaly Mean (UFABAR),
>>>>>>>>>> //              = mean(uf-uc)
>>>>>>>>>> //           V-Forecast Anomaly Mean (VFABAR),
>>>>>>>>>> //              = mean(vf-vc)
>>>>>>>>>> //           U-Observation Anomaly Mean (UOABAR),
>>>>>>>>>> //              = mean(uo-uc)
>>>>>>>>>> //           V-Observation Anomaly Mean (VOABAR),
>>>>>>>>>> //              = mean(vo-vc)
>>>>>>>>>> //           U-Anomaly Product Plus V-Anomaly Product
(UVFOABAR),
>>>>>>>>>> //              = mean((uf-uc)*(uo-uc)+(vf-vc)*(vo-vc))
>>>>>>>>>> //           U-Forecast Anomaly Squared Plus V-Forecast
Anomaly
>>>>>> Squared
>>>>>>>>>> (UVFFABAR),
>>>>>>>>>> //              = mean((uf-uc)^2+(vf-vc)^2)
>>>>>>>>>> //           U-Observation Anomaly Squared Plus V-
Observation
>>>> Anomaly
>>>>>>>>>> Squared (UVOOABAR)
>>>>>>>>>> //              = mean((uo-uc)^2+(vo-vc)^2)
>>>>>>>>>> //
>>>>>>>>>> //   (11) STAT and PCT Text Files, Nx2 Probability
Contingency
>> Table
>>>>>>>>>> Counts:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>>>>>> //           Probability Threshold Value (THRESH_i),
>>>>>>>>>> //           Row Observation Yes Count (OY_i),
>>>>>>>>>> //           Row Observation No Count (ON_i),
>>>>>>>>>> //           NOTE: Previous 3 columns repeated for each row
in the
>>>>>>>> table.
>>>>>>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>>>>>>> //
>>>>>>>>>> //   (12) STAT and PSTD Text Files, Nx2 Probability
Contingency
>>>> Table
>>>>>>>>>> Scores:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>>>>>> //           Base Rate (BASER) with confidence interval
limits,
>>>>>>>>>> //           Reliability (RELIABILITY),
>>>>>>>>>> //           Resolution (RESOLUTION),
>>>>>>>>>> //           Uncertainty (UNCERTAINTY),
>>>>>>>>>> //           Area Under the ROC Curve (ROC_AUC),
>>>>>>>>>> //           Brier Score (BRIER) with confidence interval
limits,
>>>>>>>>>> //           Probability Threshold Value (THRESH_i)
>>>>>>>>>> //           NOTE: Previous column repeated for each
probability
>>>>>>>> threshold.
>>>>>>>>>> //
>>>>>>>>>> //   (13) STAT and PJC Text Files, Joint/Continuous
Statistics of
>>>>>>>>>> //                                 Probabilistic Variables:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>>>>>> //           Probability Threshold Value (THRESH_i),
>>>>>>>>>> //           Observation Yes Count Divided by Total
(OY_TP_i),
>>>>>>>>>> //           Observation No Count Divided by Total
(ON_TP_i),
>>>>>>>>>> //           Calibration (CALIBRATION_i),
>>>>>>>>>> //           Refinement (REFINEMENT_i),
>>>>>>>>>> //           Likelihood (LIKELIHOOD_i),
>>>>>>>>>> //           Base Rate (BASER_i),
>>>>>>>>>> //           NOTE: Previous 7 columns repeated for each row
in the
>>>>>>>> table.
>>>>>>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>>>>>>> //
>>>>>>>>>> //   (14) STAT and PRC Text Files, ROC Curve Points for
>>>>>>>>>> //                                 Probabilistic Variables:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>>>>>>> //           Probability Threshold Value (THRESH_i),
>>>>>>>>>> //           Probability of Detecting Yes (PODY_i),
>>>>>>>>>> //           Probability of False Detection (POFD_i),
>>>>>>>>>> //           NOTE: Previous 3 columns repeated for each row
in the
>>>>>>>> table.
>>>>>>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>>>>>>> //
>>>>>>>>>> //   (15) STAT and MPR Text Files, Matched Pair Data:
>>>>>>>>>> //           Total (TOTAL),
>>>>>>>>>> //           Index (INDEX),
>>>>>>>>>> //           Observation Station ID (OBS_SID),
>>>>>>>>>> //           Observation Latitude (OBS_LAT),
>>>>>>>>>> //           Observation Longitude (OBS_LON),
>>>>>>>>>> //           Observation Level (OBS_LVL),
>>>>>>>>>> //           Observation Elevation (OBS_ELV),
>>>>>>>>>> //           Forecast Value (FCST),
>>>>>>>>>> //           Observation Value (OBS),
>>>>>>>>>> //           Climatological Value (CLIMO)
>>>>>>>>>> //
>>>>>>>>>> //   In the expressions above, f are forecast values, o are
>> observed
>>>>>>>>>> values,
>>>>>>>>>> //   and c are climatological values.
>>>>>>>>>> //
>>>>>>>>>> // Values for these flags are interpreted as follows:
>>>>>>>>>> //    (0) Do not generate output of this type
>>>>>>>>>> //    (1) Write output to a STAT file
>>>>>>>>>> //    (2) Write output to a STAT file and a text file
>>>>>>>>>> //
>>>>>>>>>> output_flag[] = [ 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0,
1 ];
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Flag to indicate whether Kendall's Tau and Spearman's
Rank
>>>>>>>> Correlation
>>>>>>>>>> // Coefficients should be computed.  Computing them over
large
>>>>>> datasets
>>>>>>>> is
>>>>>>>>>> // computationally intensive and slows down the runtime
execution
>>>>>>>>>> significantly.
>>>>>>>>>> //    (0) Do not compute these correlation coefficients
>>>>>>>>>> //    (1) Compute these correlation coefficients
>>>>>>>>>> //
>>>>>>>>>> rank_corr_flag = 1;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Specify the GRIB Table 2 parameter table version number
to be
>>>> used
>>>>>>>>>> // for interpreting GRIB codes.
>>>>>>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>>>>>>> //
>>>>>>>>>> grib_ptv = 2;
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Directory where temporary files should be written.
>>>>>>>>>> //
>>>>>>>>>> tmp_dir = "/tmp";
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Prefix to be used for the output file names.
>>>>>>>>>> //
>>>>>>>>>> output_prefix = "";
>>>>>>>>>>
>>>>>>>>>> //
>>>>>>>>>> // Indicate a version number for the contents of this
>> configuration
>>>>>>>> file.
>>>>>>>>>> // The value should generally not be modified.
>>>>>>>>>> //
>>>>>>>>>> version = "V3.0.1";
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>>>
>>
>>
>>


------------------------------------------------

