[Met_help] [rt.rap.ucar.edu #61730] History for MET Question

John Halley Gotway via RT met_help at ucar.edu
Thu Jun 6 11:35:15 MDT 2013


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------



I am sorry to keep e-mailing you, but I now understand the root of the problem and have only one question left.  As I understand it, MODE is designed to verify the gridded results of PCP-Combine applied to two GRIB files.  However, my data is essentially in rain gauge format (originally ASCII, converted via ascii2nc).  Is there any way to use MODE to compare a forecast GRIB file with the observations produced by ascii2nc?  If so, I assume this would involve some manipulation of the parameters in the NetCDF file.  How would I go about doing that?

- Andrew


________________________________
 From: Andrew J. <andrewwx at yahoo.com>
To: "met_help at ucar.edu" <met_help at ucar.edu>
Sent: 10:35 Thursday, June 6, 2013
Subject: Re: [rt.rap.ucar.edu #60133] MET Question
 




Hello, I have a quick MET question.  I am trying to use the MODE tool to compare two generated fields: (a) a field generated by the PCP-Combine tool against (b) a set of observations generated by the ascii2nc command.  I matched the PCP and observation NetCDF files in Point-Stat without a problem, using the following inputs in my Point-Stat config file:

(for 3-hourly precipitation)

fcst = {
   field = {
      name  = "APCP_03";
      level = "A3";
   }
}

obs = {
   field = {
      name  = "APCP";
      level = "A1";
   }
}


However, when I try to use these exact same files and field inputs in my MODE configuration file, I receive the error message:

DEBUG 1: Default Config File: /etc/met/data/config/MODEConfig_default
DEBUG 1: Match Config File: /home_local/aoberthaler/METv4.0/UBIMETVerify/Templates/MODEConfig
DEBUG 1: Merge Config File: /home_local/aoberthaler/METv4.0/UBIMETVerify/Templates/MODEConfig
NetCDF: Attribute not found

I have matched the forecast file against itself without a problem in MODE, so I am relatively certain that the error arises from my naming of the fields in the observation file.  But I'm not sure why my current command is not working.  I sent a different e-mail with my configuration file and obs.nc file, but as of yesterday I had received a message saying the file still had not been delivered after four hours.  Perhaps it has been delivered by now.  Please let me know if you need any more files or information.  Thank you in advance!

- Andrew


________________________________
 From: Andrew J. <andrewwx at yahoo.com>
To: "met_help at ucar.edu" <met_help at ucar.edu>
Sent: 19:44 Friday, March 8, 2013
Subject: Re: [rt.rap.ucar.edu #60133] MET Question
 


Hello,

I have a follow-up question to this e-mail...

You have suggested verifying my point observations by computing statistics over a set of matched points.

Great idea; I now have a structure set up to do that.

However, in the process I decided to test this method by verifying model fields using two different methods to see whether they produced the same results.  The methods were as follows:

1) I output CNT statistics from Point-Stat and then examined a '-job summary' of these statistics over ~20 parameters (RMSE, MAE, etc.)
2) I did not initially compute CNT statistics; instead, I output matched pairs and then used '-job aggregate_stat' to create the same CNT parameters (RMSE, MAE, etc.)

I expected these two methods to produce exactly identical outcomes.  However, while the values are similar, they differ in almost all cases, sometimes by as much as 0.5 (in the RMSE column).  These results challenged my understanding of the MET system, because I was under the impression that the CNT statistics originally calculated in the Point-Stat step were computed from the matched-pair data.  Therefore, the two methods should have produced the same result.  Is there something I am overlooking?  Or perhaps a weighting technique that differs internally between the two methods?  As always... thank you for your time.
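[One arithmetic effect that can produce exactly this kind of discrepancy, sketched here as a general phenomenon rather than a claim about MET's internals: a summary job averages already-computed statistics, weighting each case equally, while recomputing the statistic from the pooled matched pairs weights each pair equally.  The two only agree when every case contains the same number of pairs.  In pure Python:]

```python
import math

def rmse(pairs):
    """Root-mean-square error over a list of (forecast, observation) pairs."""
    return math.sqrt(sum((f - o) ** 2 for f, o in pairs) / len(pairs))

# Two illustrative verification cases with different numbers of matched pairs.
case1 = [(1.0, 0.0), (2.0, 0.0)]   # 2 pairs
case2 = [(0.5, 0.0)] * 8           # 8 pairs

# Summary-style: average the per-case RMSE values (each case weighted equally).
summary_rmse = (rmse(case1) + rmse(case2)) / 2

# Aggregate-style: pool all matched pairs, then compute one RMSE
# (each pair weighted equally).
aggregate_rmse = rmse(case1 + case2)

print(summary_rmse, aggregate_rmse)  # the two values differ
```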




________________________________
 From: John Halley Gotway via RT <met_help at ucar.edu>
To: andrewwx at yahoo.com
Sent: 16:54 Friday, February 1, 2013
Subject: Re: [rt.rap.ucar.edu #60133] MET Question
 
Andrew,

Good question.  The basic answer to your question is no.  MET was designed to verify gridded forecasts against gridded or point observations.

But there are a couple of workarounds you could consider.

Here's one approach... If you have a bunch of point forecasts for stations around the country, it shouldn't be too hard to "grid" them by creating a NetCDF file whose values are bad data everywhere except for the handful of grid points where you have computed a downscaled value.  Then you could verify that gridded forecast against point observations using the Point-Stat tool.
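[The data-values side of that first approach can be sketched in a few lines.  The grid size, station indices, and forecast values below are made-up illustrations, and -9999.0 is used as the bad-data flag (a common missing-data convention; confirm what your MET build expects):]

```python
# Fill a grid with a bad-data flag, then overwrite only the cells that
# hold a downscaled station forecast.
BAD_DATA = -9999.0
NX, NY = 5, 4                                  # hypothetical grid dimensions

# Hypothetical station forecasts keyed by (x, y) grid index.
station_fcst = {(1, 2): 3.2, (4, 0): 0.7}

grid = [[BAD_DATA] * NX for _ in range(NY)]
for (x, y), value in station_fcst.items():
    grid[y][x] = value

# Every cell is bad data except the two station cells.
n_valid = sum(v != BAD_DATA for row in grid for v in row)
print(n_valid)  # 2
```

[Writing such an array into a NetCDF file that MET will actually read also requires the grid-definition and timing attributes described in the MET User's Guide; the sketch covers only the data values.]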

Here's a second approach... Your downscaling method is basically producing a forecast value at stations for which you already know the observation value.  So really, you already have forecast and observation matched-pair data.  (Often, that's the most difficult part of verification!)  You could simply reformat that matched-pair data to look like the matched pair (MPR) output lines from the Point-Stat tool.  Then save them all in a file that ends with a ".stat" extension, and run the STAT-Analysis tool to read in those matched pairs and compute whatever verification statistics you'd like.

In this case, your STAT-Analysis job might look something like this:
    stat_analysis -lookin my_data.stat -job aggregate_stat -line_type MPR -out_line_type CNT

That'd read in all the matched pair lines and compute the corresponding continuous statistics (like RMSE, for example).  STAT-Analysis has the ability to filter your data down however you'd like and compute all the traditional types of continuous, categorical, and probabilistic statistics.

Either route will require some work on your part - either creating a gridded NetCDF file or reformatting your ASCII data.

There is a third alternative outside of MET.  If you happen to be familiar with R, you could read your forecast and observation matched-pair values into R and use the "verification" package to compute stats on them.

Hope that helps.

John Halley Gotway
met_help at ucar.edu


On 02/01/2013 04:58 AM, Andrew J. via RT wrote:
>
> Fri Feb 01 04:58:28 2013: Request 60133 was acted upon.
> Transaction: Ticket created by andrewwx at yahoo.com
>         Queue: met_help
>       Subject: MET Question
>         Owner: Nobody
>    Requestors: andrewwx at yahoo.com
>        Status: new
>   Ticket <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=60133 >
>
>
> Hello,
>
> My name is Andrew, and I am a meteorologist doing some statistical analysis with the MET system.  I would like to examine not only the statistical errors associated with model output (we'll use WRF, for example), but also the results of a downscaling algorithm that I have applied to the WRF model output.  The problem (for MET purposes at least) is that my downscaling and interpolation algorithm makes forecasts exactly at the verification station points.  In other words, my verification points are exactly the same as my forecast points, and the forecast points after downscaling and interpolation are no longer on a grid (they are dispersed throughout the country at the same locations as the stations).  Is there any way for MET to compare point forecasts (not grid forecasts) with point observations?  If not, would you have any suggestions of programs that might be able to do this for me?  Thank you in advance...
>
> Andrew
>

----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: Re: [rt.rap.ucar.edu #61730] MET Question
From: John Halley Gotway
Time: Thu Jun 06 10:33:33 2013

Andrew,

MODE is designed to compare two gridded input files.  The gridded
input files must be on the same grid and can be in GRIB1, GRIB2, or
the gridded NetCDF output of the pcp_combine tool.  It sounds like
you have a gridded forecast field and point observations.  So
unfortunately, you cannot run MODE on them.  With point observations,
you're limited to using the Point-Stat tool.  And the only "spatial"
type of option in Point-Stat is the ability to choose multiple
interpolation methods.  You could see how the model performance varies
as you increase the number of points that are used in the
interpolation.
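
[As a toy illustration of that effect, here is an unweighted neighborhood mean on a made-up field (a sketch of one common interpolation flavor, not Point-Stat's actual code); the value drawn from a precipitation peak smooths out as the interpolation width grows:]

```python
# A small made-up precipitation field with a peak at the center.
field = [
    [0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 0.0, 0.0],
    [0.0, 2.0, 10.0, 2.0, 0.0],
    [0.0, 0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0],
]

def nbr_mean(field, x, y, width):
    """Mean over the width-by-width neighborhood centered on (x, y)."""
    r = width // 2
    vals = [field[j][i]
            for j in range(y - r, y + r + 1)
            for i in range(x - r, x + r + 1)]
    return sum(vals) / len(vals)

# The interpolated value at the peak drops as more points are included.
for width in (1, 3, 5):
    print(width, nbr_mean(field, 2, 2, width))
```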

But for MODE, you really need gridded observations.  Do you have any
gridded precipitation analyses available?  The TRMM satellite provides
some precipitation analysis.

Thanks,
John Halley Gotway
met_help at ucar.edu

On 06/06/2013 04:35 AM, Andrew J. via RT wrote:
>
> Thu Jun 06 04:35:12 2013: Request 61730 was acted upon.
> Transaction: Ticket created by andrewwx at yahoo.com
>         Queue: met_help
>       Subject: MET Question
>         Owner: Nobody
>    Requestors: andrewwx at yahoo.com
>        Status: new
>   Ticket <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=61730 >

------------------------------------------------
Subject: MET Question
From: Andrew J.
Time: Thu Jun 06 10:50:34 2013

Hmmm... unfortunately my observations are only point observations, although I can check into other possibilities such as TRMM.  I was hoping I could write a workaround or hack to be able to analyze the point observations against the gridded files, but it seems like that may be quite difficult (and computationally expensive).  In theory, if I interpolated the point observations onto the same grid as the forecast files, and then formatted the output as a NetCDF file that looked like a PCP-Combine-produced file, could that work?
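
[The interpolation step being described can be sketched with inverse-distance weighting.  Everything below is an illustrative assumption (made-up stations, values, and grid); a usable precipitation analysis would need far more care, e.g. search radii, quality control, and terrain awareness:]

```python
# Interpolate scattered gauge observations onto a small grid with
# inverse-distance weighting (IDW).
stations = [(0.5, 0.5, 4.0), (3.5, 2.5, 1.0)]   # (x, y, observed precip)

def idw(x, y, power=2.0):
    """IDW estimate at grid location (x, y) from all stations."""
    num = den = 0.0
    for sx, sy, value in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value                  # exactly on a station
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den

# Fill a 5 x 4 grid; every value is a weighted blend of the two gauges.
grid = [[idw(x, y) for x in range(5)] for y in range(4)]
```

[Feeding the result to MODE would additionally require writing it out as a NetCDF file carrying the grid-definition attributes MODE expects from pcp_combine output.]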




________________________________
 Von: John Halley Gotway via RT <met_help at ucar.edu>
An: andrewwx at yahoo.com
Gesendet: 18:33 Donnerstag, 6.Juni 2013
Betreff: Re: [rt.rap.ucar.edu #61730] MET Question


Andrew,

MODE is designed to compare two gridded input files.  The gridded
input files must be on the same grid and can be in GRIB1, GRIB2, or
the gridded NetCDF output of the pcp_combine tool.  It sounds like
you have a gridded forecast field and point observations.  So
unfortunately, you cannot run MODE on them.  With point observations,
you're limited to using the Point-Stat tool.  And the only "spatial"
type of option in Point-Stat is the ability to choose multiple
interpolation methods.  You could see how the model performance varies
as you increase the number of points that are used in the
interpolation.

But for MODE, you really need gridded observations.  Do you have any
gridded precipitation analyses available?  The TRMM satellite provides
some precipitation analysis.

Thanks,
John Halley Gotway
met_help at ucar.edu

On 06/06/2013 04:35 AM, Andrew J. via RT wrote:
>
> Thu Jun 06 04:35:12 2013: Request 61730 was acted upon.
> Transaction: Ticket created by andrewwx at yahoo.com
>         Queue: met_help
>       Subject: MET Question
>         Owner: Nobody
>    Requestors: andrewwx at yahoo.com
>        Status: new
>   Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=61730 >
>
>
>
>
> I am sorry to keep e-mailing you, but I now understand the root of
the problem and have only one question left.  As I understand, MODE is
designed to verify the results of PCP-Combine results of two grib
files.  However, my data is essentially in rain gauge format
(originally ascii and converted via ascii2nc).  Is there any way to
use MODE to compare a forecast grib file with the observations
produced from ascii2nc?  If possible, I assume this would involve some
manipulation of the parameters in the netcdf file.  How would I go
about doing that?
>
> - Andrew
>
>
> ________________________________
>   Von: Andrew J. <andrewwx at yahoo.com>
> An: "met_help at ucar.edu" <met_help at ucar.edu>
> Gesendet: 10:35 Donnerstag, 6.Juni 2013
> Betreff: Re: [rt.rap.ucar.edu #60133] MET Question
>
>
>
>
>
> Hello, I have a quick MET Question.  I am trying to use the mode
tool to compare two generated fields:  a) one field generated by the
> PCP-Combine tool against b) a set of observations which were
generated
> by the ascii2nc command.   I matched the PCP and Observation NCDF
files
> in point_stat without a problem using the following inputs into my
point stat file:
>
> (for 3 hourly precipitation)
>
> fcst = {
>     field = {
>        name  = "APCP_03";
>        level = "A3";
>
>
> obs = {
>     field = {
>        name  = "APCP";
>        level = "A1";
>
>
> However, when I try to use these exact same files and field inputs
for my command in my MODE Configuration file, I
>   receive the error message:
>
> DEBUG 1: Default Config File:
/etc/met/data/config/MODEConfig_default
> DEBUG 1: Match Config File:
/home_local/aoberthaler/METv4.0/UBIMETVerify/Templates/MODEConfig
> DEBUG 1: Merge Config File:
/home_local/aoberthaler/METv4.0/UBIMETVerify/Templates/MODEConfig
> NetCDF: Attribute not found
>
> I
>   have matched the forecast file against itself without a problem in
> mode, so I am relatively certain that the error arises in my naming
of
> fields in the observation file.  But I'm not sure why my current
command
>   is not working.  I sent a different email with my configuration
file
> and obs.nc file, but as of yesterday, I had received a message
saying the file still had not been delivered in the past four hours. 
Perhaps it has been delivered now.  Please let me know if you need any
more files or
> information.  Thank you in advance!
>
> - Andrew
>
>
> ________________________________
>   Von: Andrew J. <andrewwx at yahoo.com>
> An: "met_help at ucar.edu" <met_help at ucar.edu>
> Gesendet: 19:44 Freitag, 8.März 2013
> Betreff: Re: [rt.rap.ucar.edu #60133] MET Question
>
>
>
> Hello,
>
> I have follow up question to this e-mail...
>
> You have suggested verifying
>   my point observations by computing our statistics over a set of
matched points.
>
> Great idea, I have a structure now set up to do that.
>
> However, in the process, I decided to proof this method and verify
model fields using two different methods to see if they were the
same.  The methods were as follows:
>
> 1) I output CNT statistics from point-stat data and then examined a
"-job summary" of these statistics over ~20 parameters (RMSE, MAE,
etc.)
> 2) I did not initally compute CNT statistics, but rather I output
matched pairs, and then used '-job aggregate_stat' to create the same
CNT parameters (RMSE, MAE, etc.)
>
> I expected these two methods to produce an exactly
>   identical outcome.  However, while the values are similar, they
are different in almost all cases, sometimes by as much
> as 0.5 (in the RMSE column).  These results challenged my way of
thinking about the MET system, because I was under the impression that
the CNT statistics originally calculated in the point-stat step were
taken from the matched pair data.  Therefore, these two methods should
have produced the same result.  Is there something that I am
overlooking?  Or perhaps a weighting technique that is internally
varied between the two methods?  As always...thank you for your time.
>
>
>
>
> ________________________________
>   Von: John Halley Gotway via RT <met_help at ucar.edu>
> An: andrewwx at yahoo.com
> Gesendet: 16:54 Freitag, 1.Februar 2013
> Betreff: Re: [rt.rap.ucar.edu #60133] MET Question
>
> Andrew,
>
> Good question.  The basic answer to your question is no.  MET was
designed to verify gridded forecasts against gridded or point
observations.
>
> But there are a couple of workarounds you could consider.
>
> Here's one approach... If you have a bunch of point forecasts for
stations around the country, it shouldn't be too hard to "grid" them
by creating a NetCDF file who's values are bad data everywhere
> except for the handful of grid points where you have computed a
downscaled value.  Then you could verify that gridded forecast against
point observations using the Point-Stat tool.
>
> Here's a second approach... Your downscaling method is basically
producing a forecast value at stations for which you already know the
observation value.  So really, you already have forecast and
> observation matched pair data.  (Often, that's the most difficult
part of verification!)  You could simply reformat
>   that matched pair data to look like the matched pair (MPR) output
lines from the
> Point-Stat tool.  Then just save them all in a file that ends with a
".stat" extension.  Then, run the STAT-Analysis tool too read in those
matched pairs and compute whatever verification statistics
> you'd like.
>
> In this case, you STAT-Analysis job might look something like this:
>      stat_analysis -lookin my_data.stat -job aggregate_stat
-line_type MPR -out_line_type CNT
>
> That'd read in all the matched pair lines and compute the
corresponding continuous statistics (like RMSE, for example).  STAT-
Analysis has the ability to filter your data down however you'd like
and
> compute all the traditional types of continuous, categorical, and
probabilistic statistics.
>
> Either route will require some work on your part - either creating a
gridded NetCDF file or reformatting your ASCII data.
>
> There is a third alternative
>   outside of MET.  If you happen to be familiar with R, you could
read your forecast and observation matched pair values into R and use
the "verification" package to compute
> stats on them.
>
> Hope that helps.
>
> John Halley Gotway
> met_help at ucar.edu
>
>
> On 02/01/2013 04:58 AM, Andrew J. via RT wrote:
>>
>> Fri Feb 01 04:58:28 2013: Request 60133 was acted upon.
>> Transaction: Ticket created by andrewwx at yahoo.com
>>           Queue: met_help
>>         Subject: MET Question
>>           Owner: Nobody
>>      Requestors: andrewwx at yahoo.com
>>          Status: new
>>     Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=60133 >
>>
>>
>> Hello,
>>
>> My name is Andrew, and I am a meteorologist doing some statistical
analysis with the MET system.  I would like to examine not only the
statistical errors associated with model output (we'll see WRF for
example), but also the results of a downscaling algorithm that I have
applied to the WRF model output.  The problem (for MET purposes at
least), is that my downscaling and interpolation algorithm makes
forecasts exactly at the confirmation station points.  In other words,
my confirmation points are the exact same as my forecast points, and
the forecast points after downscaling and interpolation are no
>   longer on a grid (they are dispersed throughout the country at the
same location as the station models).  Is there any way for MET to
compare point
>   forecasts (not grid forecasts) with point observations?  If not,
would you have any suggestions of programs that might be able to do
this for me?  Thank you in advance...
>>
>> Andrew
>>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #61730] MET Question
From: John Halley Gotway
Time: Thu Jun 06 11:15:41 2013

Andrew,

Sure, you could use point observations to construct a gridded field, but that is indeed no simple task.  In the United States, NCEP provides StageII and StageIV precipitation analyses, which are a combination of radar and rain gauge data, but a whole lot of effort goes into producing them.  I remember a presentation by Barbara Casati a few years back about using wavelets to construct a gridded field from point observations, but I believe she ran into a lot of challenges.

You could always compare your model output to the model analysis field
from the next cycle, but that is no longer truly verification.

Thanks,
John

On 06/06/2013 10:50 AM, Andrew J. via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=61730 >
>
> Hmmm...unfortunately my observations are only point observations,
although I can check into other possibilites such as TRMM.  I was
hoping I could write a work-around or hack to be able to analyze the
point observations against the gridded files, but it seems like that
may be quite difficult (and computationally expensive).  In theory, if
I interpolated the point observations onto the same grid as the
forecast files, then formatted the output as a NETCDF that looked like
a PCP-Combine produced file, could that work?
>
>
>
>
> ________________________________
>   Von: John Halley Gotway via RT <met_help at ucar.edu>
> An: andrewwx at yahoo.com
> Gesendet: 18:33 Donnerstag, 6.Juni 2013
> Betreff: Re: [rt.rap.ucar.edu #61730] MET Question
>
>
> Andrew,
>
> MODE is designed to compare two gridded input files.  The gridded
input files must be on the same grid and can be in GRIB1, GRIB2, or
the gridded NetCDF output of the pcp_combine tool.  It sounds like
> you have a gridded forecast field and point observations.  So
unfortunately, you cannot run MODE on them.  With point observations,
you're limited to using the Point-Stat tool.  And the only "spatial"
> type of option in Point-Stat is the ability to choose multiple
interpolation methods.  You could see how the model performance varies
as you increase the number of points that are used in the
> interpolation.
>
> But for MODE, you really need gridded observations.  Do you have any
gridded precipitation analyses available?  The TRMM satellite provides
some precipitation analysis.
>
> Thanks,
> John Halley Gotway
> met_help at ucar.edu
>
> On 06/06/2013 04:35 AM, Andrew J. via RT wrote:
>>
>> Thu Jun 06 04:35:12 2013: Request 61730 was acted upon.
>> Transaction: Ticket created by andrewwx at yahoo.com
>>           Queue: met_help
>>         Subject: MET Question
>>           Owner: Nobody
>>      Requestors: andrewwx at yahoo.com
>>          Status: new
>>     Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=61730 >
>>
>>
>>
>>
>> I am sorry to keep e-mailing you, but I now understand the root of
the problem and have only one question left.  As I understand, MODE is
designed to verify the results of PCP-Combine results of two grib
files.  However, my data is essentially in rain gauge format
(originally ascii and converted via ascii2nc).  Is there any way to
use MODE to compare a forecast grib file with the observations
produced from ascii2nc?  If possible, I assume this would involve some
manipulation of the parameters in the netcdf file.  How would I go
about doing that?
>>
>> - Andrew
>>
>>
>> ________________________________
>>     Von: Andrew J. <andrewwx at yahoo.com>
>> An: "met_help at ucar.edu" <met_help at ucar.edu>
>> Gesendet: 10:35 Donnerstag, 6.Juni 2013
>> Betreff: Re: [rt.rap.ucar.edu #60133] MET Question
>>
>>
>>
>>
>>
>> Hello, I have a quick MET Question.  I am trying to use the mode
tool to compare two generated fields:  a) one field generated by the
>> PCP-Combine tool against b) a set of observations which were
generated
>> by the ascii2nc command.   I matched the PCP and Observation NCDF
files
>> in point_stat without a problem using the following inputs into my
point stat file:
>>
>> (for 3 hourly precipitation)
>>
>> fcst = {
>>       field = {
>>          name  = "APCP_03";
>>          level = "A3";
>>
>>
>> obs = {
>>       field = {
>>          name  = "APCP";
>>          level = "A1";
>>
>>
>> However, when I try to use these exact same files and field inputs
for my command in my MODE Configuration file, I
>>     receive the error message:
>>
>> DEBUG 1: Default Config File:
/etc/met/data/config/MODEConfig_default
>> DEBUG 1: Match Config File:
/home_local/aoberthaler/METv4.0/UBIMETVerify/Templates/MODEConfig
>> DEBUG 1: Merge Config File:
/home_local/aoberthaler/METv4.0/UBIMETVerify/Templates/MODEConfig
>> NetCDF: Attribute not found
>>
>> I have matched the forecast file against itself without a problem
>> in MODE, so I am relatively certain that the error arises in my
>> naming of fields in the observation file.  But I'm not sure why my
>> current command is not working.  I sent a different email with my
>> configuration file and obs.nc file, but as of yesterday I had
>> received a message saying the file had still not been delivered
>> after four hours.  Perhaps it has been delivered now.  Please let
>> me know if you need any more files or information.  Thank you in
>> advance!
>>
>> - Andrew
>>
>>
>> ________________________________
>>     From: Andrew J. <andrewwx at yahoo.com>
>> To: "met_help at ucar.edu" <met_help at ucar.edu>
>> Sent: 19:44 Friday, 8 March 2013
>> Subject: Re: [rt.rap.ucar.edu #60133] MET Question
>>
>>
>>
>> Hello,
>>
>> I have a follow-up question to this e-mail...
>>
>> You have suggested verifying my point observations by computing
>> statistics over a set of matched points.
>>
>> Great idea, I have a structure now set up to do that.
>>
>> However, in the process, I decided to test this method by
>> verifying model fields using two different methods to see if they
>> gave the same answer.  The methods were as follows:
>>
>> 1) I output CNT statistics from point_stat and then examined a
>> "-job summary" of these statistics over ~20 parameters (RMSE, MAE,
>> etc.)
>> 2) I did not initially compute CNT statistics, but rather output
>> matched pairs and then used "-job aggregate_stat" to create the
>> same CNT parameters (RMSE, MAE, etc.)
>>
>> I expected these two methods to produce exactly identical
>> outcomes.  However, while the values are similar, they differ in
>> almost all cases, sometimes by as much as 0.5 (in the RMSE column).
>> These results challenged my understanding of the MET system,
>> because I was under the impression that the CNT statistics
>> originally calculated in the point_stat step were computed from the
>> matched pair data, so the two methods should have produced the same
>> result.  Is there something I am overlooking?  Or perhaps a
>> weighting technique that differs internally between the two
>> methods?  As always...thank you for your time.
>>
>>
>>
>>
>> ________________________________
>>     From: John Halley Gotway via RT <met_help at ucar.edu>
>> To: andrewwx at yahoo.com
>> Sent: 16:54 Friday, 1 February 2013
>> Subject: Re: [rt.rap.ucar.edu #60133] MET Question
>>
>> Andrew,
>>
>> Good question.  The basic answer to your question is no.  MET was
>> designed to verify gridded forecasts against gridded or point
>> observations.
>>
>> But there are a couple of workarounds you could consider.
>>
>> Here's one approach... If you have a bunch of point forecasts for
>> stations around the country, it shouldn't be too hard to "grid"
>> them by creating a NetCDF file whose values are bad data everywhere
>> except for the handful of grid points where you have computed a
>> downscaled value.  Then you could verify that gridded forecast
>> against point observations using the Point-Stat tool.
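[Editor's note: this "grid the point forecasts" idea can be sketched in a few lines of Python.  Everything below is a hypothetical example: the grid spec, station list, and values are made up, and -9999 is assumed as the bad-data flag, which should be confirmed against your MET setup.]

```python
# Sketch: a lat/lon grid filled with a bad-data value everywhere
# except at station locations carrying downscaled forecast values.
import numpy as np

BAD_DATA = -9999.0          # assumed bad-data flag

# Hypothetical 0.5-degree grid covering 30-50N, 100-80W
lats = np.arange(30.0, 50.0 + 0.5, 0.5)
lons = np.arange(-100.0, -80.0 + 0.5, 0.5)
grid = np.full((lats.size, lons.size), BAD_DATA)

# Hypothetical downscaled point forecasts: (lat, lon, value)
stations = [(35.2, -97.4, 12.7), (41.9, -87.6, 3.1)]

for lat, lon, value in stations:
    i = int(np.argmin(np.abs(lats - lat)))   # nearest grid row
    j = int(np.argmin(np.abs(lons - lon)))   # nearest grid column
    grid[i, j] = value

# 'grid' could then be written to a gridded NetCDF file (e.g. with
# the netCDF4 package), following the variable/attribute layout that
# MET's pcp_combine output uses.
print(np.count_nonzero(grid != BAD_DATA))   # 2 grid points carry data
```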
>>
>> Here's a second approach... Your downscaling method is basically
>> producing a forecast value at stations for which you already know
>> the observation value.  So really, you already have forecast and
>> observation matched pair data.  (Often, that's the most difficult
>> part of verification!)  You could simply reformat that matched
>> pair data to look like the matched pair (MPR) output lines from
>> the Point-Stat tool.  Then just save them all in a file that ends
>> with a ".stat" extension.  Then, run the STAT-Analysis tool to
>> read in those matched pairs and compute whatever verification
>> statistics you'd like.
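[Editor's note: the reformatting step could be sketched as below.  The column set shown is deliberately abbreviated and hypothetical; a real .stat line begins with a set of common header columns (model, valid times, variable, etc.) before the line-type-specific MPR columns, so copy the exact layout from the MPR line-type table in the MET User's Guide for your version.]

```python
# Sketch: write matched pairs as whitespace-delimited lines into a
# ".stat" file for STAT-Analysis.  The columns below are hypothetical
# placeholders, NOT the authoritative MPR layout.
pairs = [  # (obs_sid, lat, lon, fcst_value, obs_value) -- made up
    ("STN001", 47.1, 11.3, 12.7, 11.9),
    ("STN002", 48.2, 16.4, 3.1, 4.0),
]

def mpr_line(index, sid, lat, lon, fcst, obs):
    # Common header columns (VERSION, MODEL, valid times, FCST_VAR,
    # ...) are elided here; a real line must include them.
    fields = ["MPR", str(len(pairs)), str(index), sid,
              f"{lat:.4f}", f"{lon:.4f}", f"{fcst:.5f}", f"{obs:.5f}"]
    return " ".join(fields)

lines = [mpr_line(i + 1, *p) for i, p in enumerate(pairs)]
with open("my_data.stat", "w") as f:   # STAT-Analysis expects .stat
    f.write("\n".join(lines) + "\n")
```

The resulting file would then feed a STAT-Analysis aggregate_stat job via -lookin my_data.stat.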
>>
>> In this case, your STAT-Analysis job might look something like this:
>>
>>     stat_analysis -lookin my_data.stat -job aggregate_stat -line_type MPR -out_line_type CNT
>>
>> That'd read in all the matched pair lines and compute the
>> corresponding continuous statistics (like RMSE, for example).
>> STAT-Analysis has the ability to filter your data down however
>> you'd like and compute all the traditional types of continuous,
>> categorical, and probabilistic statistics.
>>
>> Either route will require some work on your part - either
>> creating a gridded NetCDF file or reformatting your ASCII data.
>>
>> There is a third alternative outside of MET.  If you happen to be
>> familiar with R, you could read your forecast and observation
>> matched pair values into R and use the "verification" package to
>> compute stats on them.
>>
>> Hope that helps.
>>
>> John Halley Gotway
>> met_help at ucar.edu
>>
>>
>> On 02/01/2013 04:58 AM, Andrew J. via RT wrote:
>>>
>>> Fri Feb 01 04:58:28 2013: Request 60133 was acted upon.
>>> Transaction: Ticket created by andrewwx at yahoo.com
>>>             Queue: met_help
>>>           Subject: MET Question
>>>             Owner: Nobody
>>>        Requestors: andrewwx at yahoo.com
>>>            Status: new
>>>       Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=60133 >
>>>
>>>
>>> Hello,
>>>
>>> My name is Andrew, and I am a meteorologist doing some
>>> statistical analysis with the MET system.  I would like to examine
>>> not only the statistical errors associated with model output
>>> (we'll use WRF as an example), but also the results of a
>>> downscaling algorithm that I have applied to the WRF model output.
>>> The problem (for MET purposes at least) is that my downscaling and
>>> interpolation algorithm makes forecasts exactly at the
>>> verification station points.  In other words, my verification
>>> points are exactly the same as my forecast points, and the
>>> forecast points after downscaling and interpolation are no longer
>>> on a grid (they are dispersed throughout the country at the same
>>> locations as the stations).  Is there any way for MET to compare
>>> point forecasts (not grid forecasts) with point observations?  If
>>> not, would you have any suggestions of programs that might be able
>>> to do this for me?  Thank you in advance...
>>>
>>> Andrew
>>>

------------------------------------------------
Subject: MET Question
From: Andrew J.
Time: Thu Jun 06 11:31:39 2013

Agh, yeah I figured it might be more trouble than simply finding a new
solution.  I think I will start looking into different options and
data sources.

Thank you for your help!

- Andrew




________________________________
 From: John Halley Gotway via RT <met_help at ucar.edu>
To: andrewwx at yahoo.com
Sent: 19:15 Thursday, 6 June 2013
Subject: Re: [rt.rap.ucar.edu #61730] MET Question


Andrew,

Sure, you could use point observations to construct a gridded field,
but that is no simple task indeed.  In the United States, NCEP
provides StageII and StageIV precipitation analyses which are a
combination of radar and rain gauge data, but there's a whole lot of
effort that goes into that.  I remember a presentation by Barbara
Casati a few years back about using wavelets to construct a
gridded field from point observations, but I believe she ran into a
lot of challenges.

You could always compare your model output to the model analysis field
from the next cycle, but that is no longer truly verification.

Thanks,
John

On 06/06/2013 10:50 AM, Andrew J. via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=61730 >
>
> Hmmm...unfortunately my observations are only point observations,
> although I can check into other possibilities such as TRMM.  I was
> hoping I could write a work-around or hack to be able to analyze
> the point observations against the gridded files, but it seems like
> that may be quite difficult (and computationally expensive).  In
> theory, if I interpolated the point observations onto the same grid
> as the forecast files, then formatted the output as a NetCDF file
> that looked like a PCP-Combine-produced file, could that work?
>
>
>
>
> ________________________________
>   From: John Halley Gotway via RT <met_help at ucar.edu>
> To: andrewwx at yahoo.com
> Sent: 18:33 Thursday, 6 June 2013
> Subject: Re: [rt.rap.ucar.edu #61730] MET Question
>
>
> Andrew,
>
> MODE is designed to compare two gridded input files.  The gridded
> input files must be on the same grid and can be in GRIB1, GRIB2, or
> the gridded NetCDF output of the pcp_combine tool.  It sounds like
> you have a gridded forecast field and point observations.  So
> unfortunately, you cannot run MODE on them.  With point
> observations, you're limited to using the Point-Stat tool.  And the
> only "spatial" type of option in Point-Stat is the ability to
> choose multiple interpolation methods.  You could see how the model
> performance varies as you increase the number of points that are
> used in the interpolation.
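[Editor's note: those interpolation options live in the interp section of the Point-Stat configuration file.  A sketch is below; the methods and widths are illustrative examples, and the syntax should be checked against the Point-Stat configuration documentation for the installed MET version.]

```
interp = {
   vld_thresh = 1.0;
   type = [
      { method = NEAREST; width = 1; },   // single nearest grid point
      { method = DW_MEAN; width = 3; },   // distance-weighted mean over 3x3
      { method = DW_MEAN; width = 5; }    // distance-weighted mean over 5x5
   ];
}
```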
>
> But for MODE, you really need gridded observations.  Do you have
> any gridded precipitation analyses available?  The TRMM satellite
> provides some precipitation analysis.
>
> Thanks,
> John Halley Gotway
> met_help at ucar.edu
>
> On 06/06/2013 04:35 AM, Andrew J. via RT wrote:
>>
>> Thu Jun 06 04:35:12 2013: Request 61730 was acted upon.
>> Transaction: Ticket created by andrewwx at yahoo.com
>>           Queue: met_help
>>         Subject: MET Question
>>           Owner: Nobody
>>      Requestors: andrewwx at yahoo.com
>>          Status: new
>>     Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=61730 >
>>
>>
>>
>>
>> I am sorry to keep e-mailing you, but I now understand the root of
>> the problem and have only one question left.  As I understand it,
>> MODE is designed to verify the PCP-Combine results of two GRIB
>> files.  However, my data is essentially in rain-gauge format
>> (originally ASCII and converted via ascii2nc).  Is there any way
>> to use MODE to compare a forecast GRIB file with the observations
>> produced by ascii2nc?  If possible, I assume this would involve
>> some manipulation of the parameters in the NetCDF file.  How would
>> I go about doing that?
>>
>> - Andrew
>>

------------------------------------------------


More information about the Met_help mailing list