[Met_help] [rt.rap.ucar.edu #63575] History for MET QUESTION #2

John Halley Gotway via RT met_help at ucar.edu
Wed Oct 30 11:00:29 MDT 2013


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

Sorry, two MET questions in one day!

I am working on doing upper air verifications, in which I am verifying different levels of the atmosphere.  I am centering these levels on the model layer.  For example, we have a model layer at 900mb, so I evaluate the 925-875mb layer.  Also, there is a layer at 875mb, so I evaluate 900-850mb.  I have attached a point stat file of my results for a given time point.  Please look at lines 106 and 134.  This observation occurs at 890mb, so it falls into both categories (925-875 and 900-850).  I know that there will be some overlapping matched pairs and that is fine for my purposes as I can make a smooth graph of the 'average' error in that layer.  However, what concerns me is that even though line 106 and 134 contain the EXACT same matched pair, the forecast value is different.  If forecasts are vertically interpolated to the observation point (as is described in the MET users guide), how can these two forecast values be different?

Thank you
Andrew


----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: Re: [rt.rap.ucar.edu #63575] MET QUESTION #2
From: John Halley Gotway
Time: Thu Oct 24 15:21:30 2013

Andrew,

Very good question!  Since I don't have the data you're using, I
reproduced a similar test using some data that's included in the MET
tarball.  When I verified using overlapping forecast level ranges
("P900-800" and "P850-750"), I found that the points common to both
(from 850 to 800) did in fact have the same forecast value - which is
what you're expecting to happen.

So the question is why are you not seeing that?  I suspect it has to
do with how your GRIB records line up with the levels over which
you're verifying.

Try rerunning Point-Stat using verbosity level 3 (-v 3).  That'll tell
you which GRIB records are being used for each verification task.  For
example, when I ran my test case, I see this:

    DEBUG 2: Reading data for TMP/P850-750.
    DEBUG 3: MetGrib1DataFile::data_plane_array() -> Found range match for VarInfo "TMP/P850-750" in GRIB record 21 of GRIB file "/d1/johnhg/MET/MET_releases/METv4.1/data/sample_fcst/2007033000/nam.t00z.awip1236.tm00.20070330.grb".
    DEBUG 3: MetGrib1DataFile::data_plane_array() -> Found range match for VarInfo "TMP/P850-750" in GRIB record 22 of GRIB file "/d1/johnhg/MET/MET_releases/METv4.1/data/sample_fcst/2007033000/nam.t00z.awip1236.tm00.20070330.grb".
    DEBUG 3: MetGrib1DataFile::data_plane_array() -> Found range match for VarInfo "TMP/P850-750" in GRIB record 23 of GRIB file "/d1/johnhg/MET/MET_releases/METv4.1/data/sample_fcst/2007033000/nam.t00z.awip1236.tm00.20070330.grb".
    DEBUG 3: MetGrib1DataFile::data_plane_array() -> Found 3 GRIB records matching VarInfo "TMP/P850-750" in GRIB file "/d1/johnhg/MET/MET_releases/METv4.1/data/sample_fcst/2007033000/nam.t00z.awip1236.tm00.20070330.grb".

When verifying P850-750, Point-Stat finds 3 records falling in that
range: P850, P800, and P750.  For an observation at 825, the forecast
value will be computed by interpolating between P850 and P800.

When verifying P900-800, Point-Stat again finds 3 records: P900, P850,
and P800.  Again, for an observation at 825, it'll interpolate between
P850 and P800.
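
(For reference, the logging above came from simply appending the
verbosity flag to the normal Point-Stat command line; the file names
in this sketch are just placeholders:)

    point_stat fcst_file.grb obs_file.nc PointStatConfig -v 3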

But why isn't this working in your case?  I'm not sure.

Could you please send me some sample data that demonstrates this
behavior, and I'll take a look?

You can post data to our anonymous ftp site following these
instructions:
    http://www.dtcenter.org/met/users/support/met_help.php#ftp

I'll need a forecast file, observation file, and Point-Stat config
file.

Thanks,
John Halley Gotway
met_help at ucar.edu


------------------------------------------------
Subject: MET QUESTION #2
From: Andrew J.
Time: Fri Oct 25 04:21:22 2013

Okay, done.   I have uploaded three files:  the upper air obs file
that I used, the forecast file that was used, and a configuration
file.  This should produce the exact same output that I sent to you
earlier.  Thank you for your help.

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #63575] MET QUESTION #2
From: John Halley Gotway
Time: Fri Oct 25 11:08:41 2013

Andrew,

Thanks for sending me the data.  Here are some thoughts...

(1) In the config file, set 'sid = [];' instead of 'sid = "";'.  The
latter is interpreted as an empty string rather than an empty list,
which is why you're getting output for a masking region named "NA"
with 0 matched pairs.  If you change it to 'sid = [];', that "NA"
masking region will go away.
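
For example, the mask dictionary would look something like this (the
grid and poly entries are just illustrative; the sid line is the one
that matters):

    mask = {
       grid = [ "FULL" ];
       poly = [];
       sid  = [];
    };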

(2) I see that you've listed many combinations of variables and
levels.  How you have it will work fine, but you could shorten it by
listing all levels for each variable.  We made the config file
parser smart enough to handle that.  And since you're reusing the same
list of levels over and over, you could define "level" at a higher
scope.  When the code tries to look up the "level" value for
each field entry, if it can't find it, it'll keep searching higher
scopes of the config file.  So here's how you could define all these
variables and levels more succinctly:

fcst = {
    wind_thresh  = [NA];
    message_type = ["ADPUPA"];

    level = ["P1025-975", "P1000-950", "P975-925", "P950-900", "P925-875",
             "P900-850", "P875-825", "P850-800", "P825-775", "P725-675",
             "P625-575", "P525-475"];
    cat_thresh = [];
    field = [
       {name = "WIND";},
       {name = "DPT";},
       {name = "TMP";},
       {name = "UGRD";},
       {name = "VGRD";},
       {name = "RH";}
    ];
};
obs = fcst;

(3) On to the answer of your immediate question ... I looked at
temperature for that location at 890 mb and saw the following:
    FCST_VAR FCST_LEV ... LINE_TYPE TOTAL INDEX OBS_SID OBS_LAT  OBS_LON OBS_LVL   OBS_ELV   FCST      OBS       CLIMO
    TMP      P925-875 ... MPR       1     1     1       48.83000 9.20000 890.00000 890.00000 276.40844 278.14999 NA
    TMP      P900-850 ... MPR       1     1     1       48.83000 9.20000 890.00000 890.00000 276.70428 278.14999 NA

So the observation value is the same (278.149), but the forecast
values differ (276.408 vs 276.704) for the two different level
selections.

Looking at the DEBUG level 3 logging info, for the first verification
task (TMP/P925-875), Point-Stat found 2 GRIB records falling in that
pressure range: 187 and 197.
And for the second verification task (TMP/P900-850), Point-Stat found
2 GRIB records falling in that pressure range: 177 and 187.

Using wgrib, I see that GRIB records 177, 187, and 197 contain TMP at
850, 900, and 925 mb respectively:
    [johnhg at rambler]% wgrib wrf_mideuro4x4_2013010800_f012.grib | egrep -r "^177:|^187:|^197:"
    177:6545466:d=13010800:TMP:kpds5=11:kpds6=100:kpds7=850:TR=0:P1=12:P2=0:TimeU=1:850 mb:12hr fcst:NAve=0
    187:6935690:d=13010800:TMP:kpds5=11:kpds6=100:kpds7=900:TR=0:P1=12:P2=0:TimeU=1:900 mb:12hr fcst:NAve=0
    197:7332042:d=13010800:TMP:kpds5=11:kpds6=100:kpds7=925:TR=0:P1=12:P2=0:TimeU=1:925 mb:12hr fcst:NAve=0

Also using wgrib, I see that you actually do *NOT* have TMP at 875mb,
as you thought you did.
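
One quick way to double-check which levels a variable actually has is
to filter the wgrib inventory on the level string; for example, this
should come back empty, confirming the missing 875 mb record:

    wgrib wrf_mideuro4x4_2013010800_f012.grib | grep ":TMP:" | grep ":875 mb:"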

For the TMP/P925-875 verification task, Point-Stat is using TMP at 900
and 925 mb.  For the observation at 890 mb, it'll just use the
forecast value at 900 mb since there's no other level to be used
in the vertical interpolation.

For the TMP/P900-850 verification, Point-Stat is using TMP at 850 and
900 mb.  For the observation at 890 mb, it will do vertical
interpolation between the two since it has levels on both sides.
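
As a rough illustration of that second case (the weights come out
essentially the same whether the interpolation is done linearly in
pressure or in its natural log):

    w    = (890 - 850) / (900 - 850) = 0.8
    fcst = w * TMP(900 mb) + (1 - w) * TMP(850 mb)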

So that's the source of the difference we're seeing.  If the 875 level
were actually present, I would expect the forecast values to be the
same.

Thanks,
John


------------------------------------------------
Subject: MET QUESTION #2
From: Andrew J.
Time: Sun Oct 27 06:00:19 2013

John,

Thank you for the tips, that is very helpful. 

Regarding the error, I suppose I had assumed that the forecast values
were always interpolated from the two or three nearest pressure
levels, regardless of the level that was being analyzed.  This
explanation makes sense though.  We are trying to create vertical
profiles of forecast error.  In your opinion, is this method of
evaluating layers with a forecast level centered in the layer a good
way of evaluating the error?  I had assumed that this would produce an
average error centered around the forecast level...and 50 mb seemed to
be a good depth for including enough observations while not deviating
too far from the forecast level.  I recently thought that there may be
large errors with this method for example around the area where the
boundary layer decouples from the upper atmosphere...but I am not sure
of another good way to create vertical profiles of error.  Just wanted
to get your thoughts on this method when you get a chance.  Thanks!

- Andrew

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #63575] MET QUESTION #2
From: John Halley Gotway
Time: Mon Oct 28 10:46:05 2013

Andrew,

I don't have any recommendations for "best practices" on this one, but
I'll tell you what we typically do in the DTC.  Most soundings produce
observation reports at a set of mandatory levels.  So we
typically just verify at single vertical levels rather than using
ranges.  Here are the level values we typically use:
    level = [ "P1000", "P850", "P700", "P500", "P400", "P300", "P200",
"P150", "P100" ];

We only verify against observations that fall exactly on those levels
- so no vertical interpolation is involved.
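
As a sketch, that plugs into the same config structure used earlier in
this thread (the field list here is just illustrative):

    fcst = {
       message_type = ["ADPUPA"];

       level      = [ "P1000", "P850", "P700", "P500", "P400", "P300",
                      "P200", "P150", "P100" ];
       cat_thresh = [];
       field      = [ {name = "TMP";}, {name = "UGRD";}, {name = "VGRD";} ];
    };
    obs = fcst;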

However, using ranges and doing vertical interpolation is certainly a
reasonable thing to do.  Let me comment on your expectation that
Point-Stat would still pick a level above and below the observation
and do the vertical interpolation.  When you set the range of levels
in the forecast field, that tells Point-Stat how you want to "limit"
the forecast data.  Point-Stat reads the forecast data falling within
that range into memory and then processes all of the point
observations.  Doing it the other way around would be much slower.

The missing level at 875mb caused the discrepancy you saw in the
output.  If there's no forecast data on the "other side" of the
observation level, there's no way to do vertical interpolation.  But
please be aware that you do not need to set the forecast and
observation field levels to the same values.  Take a look at the
following example:

fcst = {
      wind_thresh  = [NA];

      level      = ["P1025-475", "P1025-475", "P1025-475", "P1025-475",
                    "P1025-475", "P1025-475", "P1025-475", "P1025-475",
                    "P1025-475", "P1025-475", "P1025-475", "P1025-475"];
      cat_thresh = [];
      field      = [ {name = "WIND";}, {name = "DPT";}, {name =
"TMP";}, {name = "UGRD";}, {name = "VGRD";}, {name = "RH";} ];
};
obs = {
      wind_thresh  = [NA];
      message_type = ["ADPUPA"];

      level      = ["P1025-975", "P1000-950", "P975-925", "P950-900",
                    "P925-875", "P900-850", "P875-825", "P850-800",
                    "P825-775", "P725-675", "P625-575", "P525-475"];
      cat_thresh = [];
      field      = [ {name = "WIND";}, {name = "DPT";}, {name =
"TMP";}, {name = "UGRD";}, {name = "VGRD";}, {name = "RH";} ];
};

Above, I've set the forecast levels to the same range of values 12
different times (P1025-475).  So I'm reading in the forecast data for
all available levels for each of the verification tasks.  However, in
the observation field, I'm limiting the observations to 12 different
ranges, each 50 mb wide, as you'd originally defined them.  Setting it
up this way should prevent the discrepancy you saw in the output,
since each verification task would have forecast data on both sides of
the observations.

The obvious downside to this approach is that you'll be reading a
*LOT* of forecast data into memory.  Depending on the size of your
domain/computing power, that may or may not be a problem.

Really, I'd suggest taking a careful look at your forecast levels and
defining your verification tasks accordingly.  But I just want you to
know that you can define the forecast and observation levels
independently, which might be helpful.

Thanks,
John



------------------------------------------------
Subject: MET QUESTION #2
From: Andrew J.
Time: Wed Oct 30 10:54:34 2013

Okay, I will look into all of these things and see which ones work the
best for our purposes.  Thank you as always for your timely help.

- Andrew

------------------------------------------------


More information about the Met_help mailing list