[Met_help] [rt.rap.ucar.edu #51020] History for point_stat problem

John Halley Gotway via RT met_help at ucar.edu
Tue Dec 13 09:07:08 MST 2011


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

I'm trying to run the MET verification package and am running into a 
problem when I run point_stat.  I went through the tutorial, and 
everything ran without a problem.  Now I am trying to work with actual 
data.  I've downloaded a prepbufr file from NCEP, and ran the pb2nc 
tool, and from what I can tell, the data looks good in the netcdf file 
that it creates.  However, when I run point_stat, everything seems to be 
running well, and I get the following output:

Reading records for TMP/P500.
For TMP/P500 found 1 forecast levels and 0 climatology levels.

--------------------------------------------------------------------------------

Searching 400440 observations from 150951 PrepBufr messages.

--------------------------------------------------------------------------------

Processing TMP/P500 versus TMP/P500, for observation type ADPUPA, over 
region FULL, for interpolation method UW_MEAN(1), using 0 pairs.

and the program creates all the point_stat files that it is supposed to 
create, but when I open the .stat file, there is no data in the file, 
just column names without data.  I was wondering if you had any insight 
on what I could be doing wrong.  I appreciate any help you can give.

Thanks,

Joe Pollina


----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: Re: [rt.rap.ucar.edu #51020] point_stat problem
From: John Halley Gotway
Time: Thu Oct 27 13:33:11 2011

Joe,
Please try rerunning Point-Stat using the "-v 3" command line option.
It'll give you more diagnostic info about why observations were or were
not used.  For example it may be an issue with the matching time window.
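
For reference, the full command would look something like this (the file
names here are just placeholders for your own gridded forecast, NetCDF
observation, and config files):

  point_stat wrf_fcst.grb obs.nc PointStatConfig -v 3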


Thanks,
John Halley Gotway


------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51020] point_stat problem
From: Joe Pollina
Time: Mon Oct 31 02:02:04 2011

Thanks John.  I've done as you suggested and came up with the
following information:

Processing TMP/P500 versus TMP/P500, for observation type ADPUPA, over
region FULL, for interpolation method UW_MEAN(1), using 0 pairs.
Number of matched pairs  = 0
Observations processed   = 400440
Rejected: GRIB code      = 377791
Rejected: valid time     = 22649
Rejected: bad obs value  = 0
Rejected: off the grid   = 0
Rejected: level mismatch = 0
Rejected: message type   = 0
Rejected: masking region = 0
Rejected: bad fcst value = 0

Not exactly sure what this is telling me...is it not being processed
because the valid times of the forecast file and the obs file do not
match?

Thanks again,

Joe


------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51020] point_stat problem
From: John Halley Gotway
Time: Mon Oct 31 09:27:25 2011

Joe,

Yep, that's correct.  Here's how the logic in Point-Stat works...

- When you run Point-Stat, you pass it a gridded forecast file, a NetCDF
point observation file (output of the PB2NC or ASCII2NC tools), and a
configuration file.
- Point-Stat finds the fields you've requested in the gridded data file
and extracts the valid time (let's call it 't') for the data.  The
"valid time" is the model initialization time plus the lead time.
- It then looks in the config file for the "beg_ds" and "end_ds" values
(defined in seconds) and sets the observation matching time window as
[t+beg_ds, t+end_ds] (see the example after this list).
- It reads through all point observations and skips over any whose time
does not fall in the matching time window.
- The checks it does on the observations occur in the order listed in the
output you sent me - first checking the GRIB code, then the valid time,
and so on.

For the TMP/P500 verification task, it threw out 377,791 observations
because they were for a variable other than temperature (a different
GRIB code).  Then it threw out the remaining 22,649 TMP observations
because their times didn't fall in that time window.

In a case like this, I'd suggest rerunning Point-Stat with the
addition of the following command line options:
  -obs_valid_beg 19000101_00 -obs_valid_end 21000101_00

Those command line options manually set the matching time window and
override the config file settings.  This is basically telling Point-Stat
to use any observations that occur between the years 1900 and 2100 -
which will likely include all of yours.
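
Putting it all together, the command would look something like this
(again, substitute your own file names for these placeholders):

  point_stat wrf_fcst.grb obs.nc PointStatConfig \
    -obs_valid_beg 19000101_00 -obs_valid_end 21000101_00 -v 3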

Once you start getting non-zero pairs, you may find it helpful to turn on
the MPR (matched pair) output line in the config file.  That will enable
you to see the exact observation time values in the data.  It'll help you
debug what's going on and figure out why your forecast and observation
valid times are not matching up.
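
Once MPR lines are being written, a quick way to pull them out of the
output (assuming the default output file naming) is something like:

  grep MPR point_stat_*.stat | more

Each MPR line is one forecast/observation pair, so you can look through
them directly.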

Hope that helps.

John


------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51020] point_stat problem
From: Joe Pollina
Time: Wed Nov 02 08:15:20 2011

John,

That seemed to work; I no longer have rejections for valid time, but now
I have rejections for being off the grid and for level mismatch (see
below).  I think the "off the grid" error is due to a dimension problem
with my poly mask vs. the forecast file (also see below).  The error says
that the dimensions of the masking region must match the dimensions of
the data file.  When I reran gen_poly_mask, the file was created with
dimensions of (285, 223).  How can I change the dimensions of the masking
region, which is (185, 129)?  Also, I have not figured out what the level
mismatch is all about.

I hope that my emails are not a bother to you.  I am trying to find the
meaning of the errors and their resolutions by referring to the MET
website as well as the User's Guide before I email you.

Again, many thanks for all the help you have given me,

Joe

Processing TMP/P500 versus TMP/P500, for observation type ADPUPA, over
region CONUS, for interpolation method UW_MEAN(1), using 0 pairs.
Number of matched pairs  = 0
Observations processed   = 400440
Rejected: GRIB code      = 377791
Rejected: valid time     = 0
Rejected: bad obs value  = 0
Rejected: off the grid   = 21485
Rejected: level mismatch = 1164
Rejected: message type   = 0
Rejected: masking region = 0
Rejected: bad fcst value = 0

ERROR: parse_poly_mask() -> the dimensions of the masking region (185, 129) must match the dimensions of the data (285, 223).

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51020] point_stat problem
From: John Halley Gotway
Time: Wed Nov 02 09:27:54 2011

Joe,

No worries.  I'm happy to help.

It'd probably be easiest at this point for you to just send me some
data, and I'll run it here.  Please send me:
- your gridded forecast file
- your NetCDF point observation file (output of PB2NC or ASCII2NC)
- the Point-Stat config file
- the input *AND* output of your call to gen_poly_mask
- and the version of MET you are running

We typically have people post data to our anonymous ftp site following
the instructions listed here:
   http://www.dtcenter.org/met/users/support/met_help.php#ftp

Please write me to let me know when the data is present.

Here's a brief explanation for the couple of questions you raised...
- "off the grid" means that the observations weren't used because their
lat/lons fell outside of your forecast domain.
- "level mismatch" means that the observation's level value (pressure
level, in your case) didn't match what you requested.  That means there
were 1,164 observations of temperature that it considered, but apparently
none of them had a pressure level value of exactly 500 mb.
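
For reference, the level being requested comes from the field entries in
the Point-Stat config file; a rough sketch of what that looks like is
below (the exact syntax may differ in your MET version, so treat this as
an illustration rather than a copy/paste fix).  "TMP/P500" asks for
temperature at exactly 500 mb, whereas I believe a pressure range such as
"TMP/P400-600" would match observations anywhere in that layer:

  fcst_field[] = [ "TMP/P500" ];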

Once I get your data, I'll run it here and try to figure out what's
going on.

Thanks,
John Halley Gotway


------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51020] point_stat problem
From: Joe Pollina
Time: Mon Nov 07 16:30:06 2011

John,

I have created a directory with my last name.  All the files I believe
you wanted are in there.  I also added the prepbufr file to the directory
just to make sure I didn't grab the wrong file from the NCEP site.  A lot
of the configuration for these files was based on the tutorial.  In other
words, this was going to be a live run through (after doing the tutorial
run through), trying to take the forecast file from our local WRF, and
just doing some minor verification stuff using a lot of the default
setups from the files.  I thought I had changed everything I needed, but
apparently not.

Thanks for your help.

P.S.  While trying to transfer the files over, I thought I was in my
directory, but I was actually in incoming/irap/met_help, so most of the
files are there, as well as in incoming/irap/met_help/pollina_data, so
you might want to get rid of the extra files in the met_help directory.

Joe



------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #51020] point_stat problem
From: John Halley Gotway
Time: Mon Nov 14 08:56:13 2011

Joe,

I apologize for the delay in getting back to you on this issue.  I was
swamped last week.  But I will work on your issue today and let you
know what I find.

Thanks,
John


------------------------------------------------
Subject: point_stat problem
From: John Halley Gotway
Time: Mon Nov 14 13:59:55 2011

Joe,

Thanks for sending that data for testing.  I ran the case you sent me
using the "-v 3" command line option and saw the following output:

Processing TMP/P500 versus TMP/P500, for observation type ADPUPA, over
region CONUS, for interpolation method UW_MEAN(1), using 0 pairs.
Number of matched pairs  = 0
Observations processed   = 400440
Rejected: GRIB code      = 377791
Rejected: valid time     = 0
Rejected: bad obs value  = 0
Rejected: off the grid   = 21485
Rejected: level mismatch = 1164
Rejected: message type   = 0
Rejected: masking region = 0
Rejected: bad fcst value = 0

The first thing I usually check is whether the forecast and observation
valid times match up.  But in your case, no data was thrown out for
"valid time", so that's not the problem.

Another thing to check is whether the forecast and observation data
overlap geographically.  I used an optional MET tool called
"plot_point_obs" to plot the locations of the temperature observations in
your NetCDF observation file.  I ran it twice - once plotting all TMP
observations and a second time plotting only ADPUPA observations (since
that's what you're verifying against in Point-Stat):
  plot_point_obs obs.nc obs_tmp_ALL.ps -gc 11
  plot_point_obs obs.nc obs_tmp.ps -gc 11 -msg_typ ADPUPA

The resulting images are attached.  As you can see, when filtering on
ADPUPA, there are no TMP observations over your domain, which is in
the northeast United States.

So the next question is why?  I ran the PREPBUFR file you sent through
the PB2NC tool myself and plotted the ADPUPA observations and got
similar results.

It occurs to me that the ADPUPA observations are typically just
radiosondes.  And I believe most radiosondes are launched at 00Z and/or
12Z.

When you run this same process for data that's valid at 00Z or 12Z,
what kind of matched pairs do you see?  I believe that in our own
evaluations within the DTC, we only verify the upper-air variables
at 00Z and 12Z for this very same reason.
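
As a rough sketch of what I mean (the file names here are placeholders
for whichever 00Z or 12Z cycle you pull), that process would look
something like:

  pb2nc prepbufr.00z obs_00z.nc PB2NCConfig -v 2
  point_stat wrf_fcst_valid_00z.grb obs_00z.nc PointStatConfig -v 3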

Hope that helps.

John



------------------------------------------------

