[Met_help] [rt.rap.ucar.edu #65427] History for Unable to visualize Fuzzy verf.

John Halley Gotway via RT met_help at ucar.edu
Mon Jun 2 14:19:23 MDT 2014


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

Hi John. 

Some basic things about the Grid-Stat tool are not clear to me. The MET doc mentions that the FUZZY verification method is built into the MET package.

The FUZZY verification section mentions a RADIUS that can be selected, within which, if the OBS rainfall is found, it is considered a HIT.

I could not find that RADIUS (I mean, how to select this radius and intensity) in the configuration file.

Can you help me to understand?
geeta

----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy verf.
From: John Halley Gotway
Time: Thu Feb 13 10:33:46 2014

Geeta,

You are correct, the input forecast and observation files must be on
the same grid.  In Grid-Stat, there are two ways you can perform
"fuzzy" verification.

(1) The first way is by applying an interpolation method to the data.
Since the data are already on the same grid, this is really a
"smoothing" operation instead.  This is called "upscaling".
Smoother forecasts and observations tend to produce better traditional
verification scores.  So you could see how your scores (like RMSE or
GSS) improve as you smooth the data more and more.  In the
config file, you could try:

interp = {
    field      = BOTH;
    vld_thresh = 1.0;

    type = [
       { method = UW_MEAN; width  = 1; },
       { method = UW_MEAN; width  = 3; },
       { method = UW_MEAN; width  = 6; },
       { method = UW_MEAN; width  = 9; }
    ];
};

This tells Grid-Stat to compute its statistics 4 times, applying more
smoothing each time.  Typically, the more the data has been smoothed,
the better the statistics will be.
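
Just to illustrate the point (a rough Python sketch, not MET code; the
toy field sizes and noise levels are made up), you can watch RMSE
improve as both fields are smoothed more heavily:

    import numpy as np

    def uw_mean_smooth(field, width):
        # Replace each point with the un-weighted mean of the
        # width x width box around it (boxes shrink at the edges).
        ny, nx = field.shape
        half = width // 2
        out = np.empty_like(field)
        for j in range(ny):
            for i in range(nx):
                out[j, i] = field[max(0, j - half):j + half + 1,
                                  max(0, i - half):i + half + 1].mean()
        return out

    rng = np.random.default_rng(0)
    truth = rng.random((50, 50))
    obs   = truth + 0.1 * rng.standard_normal((50, 50))  # less noisy "obs"
    fcst  = truth + 0.3 * rng.standard_normal((50, 50))  # noisier "forecast"

    for width in (1, 3, 6, 9):
        f = uw_mean_smooth(fcst, width)
        o = uw_mean_smooth(obs, width)
        print(width, np.sqrt(np.mean((f - o) ** 2)))  # RMSE drops as width grows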

(2) The second way is by applying neighborhood verification methods.
The most common are the Fractions Brier Score (FBS) and Fractions
Skill Score (FSS), both contained in the NBRCNT output line
type.  Be sure to turn the NBRCNT output line on in the Grid-Stat
config file.  For neighborhood verification, you pick multiple
neighborhood sizes and look to see how the FSS changes as you increase
the neighborhood size.  As the neighborhood size increases, FSS
increases.  And you look to see how large of a neighborhood size you
need to get a "useful" (FSS > 0.5) forecast.

Here's how this method works.  You pick one or more thresholds
(cat_thresh) for your field.  Grid-Stat applies the threshold to
produce a 0/1 binary field of your data.  For each neighborhood size,
n, it places an n x n box around each grid point and counts up the
number of events within that box.  For a 3 x 3 box, if 4 of the 9
points contained an event, the value for that point is 4/9.  This is
done for every grid point in the forecast field and the observation
field.  We call the result of this process the forecast and
observation "fractional coverage" fields.  The FSS and FBS scores are
computed by comparing the forecast and observation fractional coverage
fields to each other.
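
If it helps to see the mechanics, here's a rough Python sketch of the
fractional coverage and FBS/FSS computation just described (an
illustration only, not MET's actual implementation; all names are
invented, and it assumes the domain contains some events):

    import numpy as np

    def frac_coverage(binary, n):
        # Fraction of event points in the n x n box around each point;
        # e.g. 4 events in a 3 x 3 box -> 4/9, as in the text above.
        ny, nx = binary.shape
        half = n // 2
        out = np.empty((ny, nx))
        for j in range(ny):
            for i in range(nx):
                out[j, i] = binary[max(0, j - half):j + half + 1,
                                   max(0, i - half):i + half + 1].mean()
        return out

    def fbs_fss(fcst, obs, thresh, n):
        # Threshold both fields to 0/1, build the fractional coverage
        # fields, then compare them.
        fc = frac_coverage((fcst >= thresh).astype(float), n)
        oc = frac_coverage((obs >= thresh).astype(float), n)
        fbs = np.mean((fc - oc) ** 2)
        worst = np.mean(fc ** 2) + np.mean(oc ** 2)  # FBS of a no-overlap pair
        return fbs, 1.0 - fbs / worst                # (FBS, FSS)

Looping fbs_fss over multiple thresholds and neighborhood widths gives
one (FBS, FSS) pair per combination, which is what ends up in the
NBRCNT lines.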

If you're verifying a single field using 3 different thresholds and 6
different neighborhood sizes, you'd get 18 NBRCNT lines in the output
file.

Here's an example of how you might set this up in the Grid-Stat config
file:

nbrhd = {
    vld_thresh = 1.0;
    width      = [ 3, 5, 9, 11, 13, 15 ];
    cov_thresh = [ >=0.5 ];
}

For a given threshold, you should look to see how FSS changes as you
increase the neighborhood size.

Hopefully that helps get you going.

Thanks,
John Halley Gotway
met_help at ucar.edu


On 02/12/2014 10:49 PM, Geeta Geeta via RT wrote:
>
> Wed Feb 12 22:49:14 2014: Request 65427 was acted upon.
> Transaction: Ticket created by geeta124 at hotmail.com
>         Queue: met_help
>       Subject: Unable to visualize Fuzzy verf.
>         Owner: Nobody
>    Requestors: geeta124 at hotmail.com
>        Status: new
>   Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>
>
> Hi John/ met_help.
>
> I was reading the MET doc, which mentions the FUZZY verification
> methods. I am trying to visualise what grid stat does, or how it
> functions.
> After copygb is run, the FCST and OBS are on the same grid, i.e.:
>
>    1-----------2-----------3
>    |           |           |
>    |           |           |
>    4-----------5-----------6
>
> i.e. at the Grid Points (GP) 1 to 6, you have observations and the
> model FCST. Now the MET doc (Pg 5-3) says that a SQUARE search window
> is defined around each grid point, within which the obs and the FCST
> events are counted.
> 1. I want to know HOW this SQUARE WINDOW is defined (I mean in the
> configuration file of Grid-Stat).
> 2. How can I change the size of the SQUARE window?
> 3. If my model resolution is 10 km and I am interested in synoptic
> scale phenomena, then what should the window size be? Your help is
> urgently required.
>
> geeta
>

------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Fri Feb 14 01:38:20 2014

Thanks a lot John for your inputs and clarifications.

I still have the following doubts.

1. When I run copygb, what it does is make the observation and model
FCST uniform (I mean the same GRID and RESOLUTION); only these two
parameters matter.
Are you calling that "upscaling"? So this process is not a part of
Grid-Stat; essentially copygb is doing the upscaling part?

2. There are interpolation methods in the grid-stat config file
(analogous to those in point-stat; in point-stat there are 3-4, like
nearest neighbour, mean, distance-weighted, etc.).

Why should one have interpolation ONCE again, i.e. after copygb, when
the grid fields are already alike, i.e. one GP has 2 values, one OBS
and one FCST? Is that correct?

geeta


------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Fri Feb 14 02:17:27 2014

Hi John,
I have run grid_stat. The following is the error:

bash-3.2$ ../bin/grid_stat ./fcst_nc/20110601*day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
GSL_RNG_TYPE=mt19937
GSL_RNG_SEED=18446744073321512274
Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
Observation File: ../trmm_nc_data/02june2011.nc
Configuration File: GridStatConfig_APCP_24
***WARNING***: process_scores() -> 61(*,*) not found in file:
./fcst_nc/2011060100_WRFPRS_day1_003Z.nc

--------------------------------------------------------------------------------


Please suggest.

geeta


------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy verf.
From: John Halley Gotway
Time: Fri Feb 14 09:48:08 2014

Geeta,

You run copygb to put the forecast and observation fields on exactly
the same grid, meaning the exact same resolution and number of grid
points.

I was trying to make the point that the "interpolation methods" in the
grid_stat config file could be used as a form of "upscaling".  You are
right, there is no *need* to interpolate the data since you've already
used copygb to put them on the same grid.  In grid_stat, the
interpolation options provide a way of smoothing, or upscaling, the
data.  For example, suppose you choose an interpolation option of
UW_MEAN (for un-weighted mean) and a width of 5.  For each grid point,
grid_stat will replace the value at the grid point with the average of
the 25 points in a 5x5 box around that point.  Doing that for every
point in the grid smooths the data and provides a way of upscaling.
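
Just to illustrate with toy numbers (a Python sketch, not grid_stat
internals; the data here are invented):

    import numpy as np

    field = np.arange(100.0).reshape(10, 10)  # toy 10 x 10 grid
    j, i = 5, 5                               # an interior grid point
    smoothed = field[j-2:j+3, i-2:i+3].mean() # mean of the 25 points in the 5x5 box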

The default interpolation width is 1, meaning that no smoothing is
performed.  However, you could use multiple smoothing widths and see
how your performance changes the more you smooth the data.

Does that make sense?

Regarding the runtime error you're getting, I see that you're using
input NetCDF files for the forecast and observation fields.  In the
config file, you need to specify the name and dimensions of the
NetCDF variable to be used.  Assuming the NetCDF variable is named
"APCP_24", it would look something like this:

fcst = {
    wind_thresh = [ NA ];

    field = [
       {
         name       = "APCP_24";
         level      = [ "(*,*)" ];
         cat_thresh = [ >0.0, >=5.0 ];
       }
    ];

};

If you continue to experience problems, please send me sample forecast
and observation files along with the GridStatConfig file you're using.
You can post them to our anonymous ftp site following these
instructions:
    http://www.dtcenter.org/met/users/support/met_help.php#ftp

Thanks,
John


------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Tue Feb 18 22:00:20 2014

Hi John,
Sorry for the late reply.
Please do not close this ticket.
I have to send you my FCST and OBS .nc files. The UPS at my centre is
down, so the machine is not up.
Please bear with me; I will send you the required input ASAP.

geeta


------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Wed Feb 19 22:54:44 2014

Hi John,

-bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100*day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
GSL_RNG_TYPE=mt19937
GSL_RNG_SEED=664274259
Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
Observation File: ../trmm_nc_data/02june2011.nc
Configuration File: GridStatConfig_APCP_24
***WARNING***: process_scores() -> 61(*,*) not found in file:
./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
--------------------------------------------------------------------------------
-bash-3.2$

I have this problem.
I am attaching both the NetCDF files and the configuration file for
your reference.

geeta


------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Wed Feb 19 23:00:33 2014

Hi John,
Sorry, I have put my data on your server. My dir name is
geeta124_data. Kindly check that.

geeta


------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Wed Feb 19 23:33:58 2014

Hi John,
I am bothering you with a few more questions. I hope you will bear
with me. What I gathered from the discussion with you is that we
essentially have 3 types of neighbourhood approaches (space, time and
intensity); these can be set using the config file.

1. Now I was reading about 3 approaches of FUZZY verification, which
are:
a. Multi-event contingency table. (My question is: can we define a hit
as RF between 0.1 and 2.5 in the config file? Normally we select the
threshold as ge0.1 or ge2.5, etc. Is there a provision for giving a
range in the config file?)

b. Pragmatic approach (I do not know what that is).

c. Conditional square root of ranked probability score (CSRR) (I do
not know what that is).

I do not understand these. Can you point me in the right direction or
provide some hints?

2. How can I prepare the QUILT plots (spatial scale vs threshold) for
a score?
Can the QUILT plot be prepared for any score, like HK, HSS, FBS or
FSS?


thanks
geeta


------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: John Halley Gotway
Time: Thu Feb 20 10:21:20 2014

Geeta,

I see that you're using METv3.0.  The current version is METv4.1, and
it'd be good to switch to that version when possible.  There have been
major changes to the MET configuration file format since
METv3.0, so be sure to use the default config files for METv4.1.

I ran METv3.0 grid_stat on the data files you sent and reproduced the
error message you saw:
    ***WARNING***: process_scores() -> 61(*,*) not found in file:
2011060100_WRFPRS_day1_003Z.nc

Since the input files are both NetCDF files, you need to specify the
name of the NetCDF variable that should be used.  So I modified your
config file:
    FROM: fcst_field[] = [ "61/A24" ];
    TO:   fcst_field[] = [ "APCP_24(*,*)" ];

When I reran with this change, I got this error:
    NetCDF: Attribute not found

After some digging, I found the problem to be the MET_version global
attribute in 02june2011.nc:
                 :MET_version = "V3.0.1" ;

I switched that to be consistent with the version of MET you're
running:
                 :MET_version = "V3.0" ;

And then I got this error:
ERROR: parse_poly_mask() -> the dimensions of the masking region (185, 129) must match the dimensions of the data (53, 53).

So I modified the config file to change the masking region settings:
    FROM: mask_grid[] = [ "DTC165", "DTC166" ];
          mask_poly[] = [ "MET_BASE/out/gen_poly_mask/CONUS_poly.nc",
                          "MET_BASE/data/poly/LMV.poly" ];

    TO:   mask_grid[] = [ "FULL" ];
          mask_poly[] = [];

And then it ran fine.
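
If you need to make that MET_version attribute edit yourself, the NCO
"ncatted" tool is one way to do it (a sketch, assuming NCO is installed;
adjust the file name as needed):

    ncatted -O -a MET_version,global,m,c,"V3.0" 02june2011.nc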

To summarize...
  (1) To run METv3.0 grid_stat, please set the "MET_version" global
attribute in all the gridded NetCDF files you're using to "V3.0".
  (2) Please use the attached, updated version of GridStatConfig_APCP_24.
  (3) Consider updating to METv4.1 instead.

Thanks,
John

On 02/19/2014 11:33 PM, Geeta Geeta via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>
> Hi John,
> I am bothering you with a few more questions. I hope you'll bear with me.
> So what I gathered from our discussion is that we essentially have three
> types of neighbourhood approaches (space, time and intensity), and these
> can be configured through the config file.
>
> 1. I was reading about three approaches to fuzzy verification:
> a. Multi-event contingency table. (My question is: can we define a hit as
> rainfall between 0.1 and 2.5 in the config file? Normally we select a
> threshold such as ge0.1 or ge2.5. Is there a provision for giving a range
> in the config file?)
>
> b) Pragmatic approach (I do not know what that is).
>
> c) Conditional square root of ranked probability score (CSRR). (I do not
> know what that is either.)
>
> I do not understand these. Can you lead me in the right direction or
> provide some hints?
>
> 2. How can I prepare the quilt plots (spatial scale vs. threshold) for a
> score?
> Can a quilt plot be prepared for any score, like HK, HSS, FBS or FSS?
>
>
> thanks
> geeta
>
> From: geeta124 at hotmail.com
> To: met_help at ucar.edu
> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> Date: Thu, 20 Feb 2014 11:30:25 +0530
>
>
>
>
> Hi John,
> Sorry, I have put my data on your server. My directory name is
> geeta124_data.
> Kindly check that.
>
> geeta
>
>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>> From: met_help at ucar.edu
>> To: geeta124 at hotmail.com
>> Date: Fri, 14 Feb 2014 09:48:08 -0700
>>
>> Geeta,
>>
>> You run copygb to put the forecast and observation fields on
exactly the same grid, meaning the exact same resolution and number of
grid points.
>>
>> I was trying to make the point that the "interpolation methods" in
the grid_stat config file could be used as a form of "upscaling".  You
are right, there is no *need* to interpolate the data since
>> you've already used copygb to put them on the same grid.  In
grid_stat, the interpolation options provide a way of smoothing, or
upscaling, the data.  For example, suppose you choose and
interpolation
>> option of UW_MEAN (for un-weighted mean) and width of 5.  For each
grid point, grid_stat will replace the value at the grid point with
the average of the 25 points in a 5x5 box around that point.
>> Doing that for every point in the grid smooths the data and
provides a way of upscaling.
>>
>> The default interpolation width is 1, meaning that no smoothing is
performed.  However, you could use multiple smoothing widths and see
how your performance changes the more you smooth the data.
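>>
>> If it helps, in the config file format you're using, that experiment is
>> just a sketch like the following (option names as in the default
>> grid_stat config file):
>>
>>      interp_method[] = [ "UW_MEAN" ];
>>      interp_width[]  = [ 1, 3, 5 ];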
>>
>> Does that make sense?
>>
>> Regarding the runtime error you're getting, I see that you're using
input NetCDF files for the forecast and observation fields.  In the
config file, you need to specify the name and dimensions of the
>> NetCDF variable to be used.  Assuming the NetCDF variable is named
"APCP_24", it would look something like this:
>>
>> fcst = {
>>      wind_thresh = [ NA ];
>>
>>      field = [
>>         {
>>           name       = "APCP_24";
>>           level      = [ "(*,*)" ];
>>           cat_thresh = [ >0.0, >=5.0 ];
>>         }
>>      ];
>>
>> };
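>>
>> (If your config file uses the older array-style format instead, the
>> equivalent settings would be fcst_field[] = [ "APCP_24(*,*)" ]; and
>> fcst_thresh[] = [ "gt0.0 ge5.0" ];.)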
>>
>> If you continue to experience problems, please send me sample
forecast and observation files along with the GridStatConfig file
you're using.  You can post it to our anonymous ftp site following
these
>> instructions:
>>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>
>> Thanks,
>> John
>>
>> On 02/14/2014 02:17 AM, Geeta Geeta via RT wrote:
>>>
>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>>>
>>> Hi John,
>>>    I have run grid-stat. Following is the error.
>>>
>>> bash-3.2$ ../bin/grid_stat ./fcst_nc/20110601*day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
>>> GSL_RNG_TYPE=mt19937
>>> GSL_RNG_SEED=18446744073321512274
>>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
>>> Observation File: ../trmm_nc_data/02june2011.nc
>>> Configuration File: GridStatConfig_APCP_24
>>> ***WARNING***: process_scores() -> 61(*,*) not found in file:
./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
>>>
>>>
--------------------------------------------------------------------------------
>>>
>>>
>>> Pls suggest.
>>>
>>> geeta
>>>
>>> From: geeta124 at hotmail.com
>>> To: met_help at ucar.edu
>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>>> Date: Fri, 14 Feb 2014 14:08:12 +0530
>>>
>>>
>>>
>>>
>>> Thanks a lot, John, for your inputs and clarifications.
>>>
>>> I still have the following doubts.
>>>
>>> 1. When I run copygb, what it does is make the observation and model
>>> forecast uniform (I mean the same grid and resolution). Only these two
>>> parameters are important. Are you calling that upscaling? That process
>>> is not part of grid_stat, so essentially copygb is doing the upscaling
>>> part?
>>>
>>> 2. There are interpolation methods in the grid_stat config file
>>> (analogous to those in point_stat, where there are 3-4 options such as
>>> nearest neighbor, mean, distance weighted, etc.).
>>>
>>> Why should one interpolate once again (after copygb), when the gridded
>>> fields are already alike, i.e. each grid point has two values, one OBS
>>> and one FCST? Is that correct?
>>>
>>> geeta
>>>

------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: John Halley Gotway
Time: Thu Feb 20 10:21:20 2014

////////////////////////////////////////////////////////////////////////////////
//
// Default grid_stat configuration file
//
////////////////////////////////////////////////////////////////////////////////

//
// Specify a name to designate the model being verified.  This name will be
// written to the second column of the ASCII output generated.
//
model = "WRF";

//
// Specify a comma-separated list of fields to be verified.  The forecast and
// observation fields may be specified separately.  If the obs_field parameter
// is left blank, it will default to the contents of fcst_field.
//
// Each field is specified as a GRIB code or abbreviation followed by an
// accumulation or vertical level indicator for GRIB files or as a variable name
// followed by a list of dimensions for NetCDF files output from p_interp or MET.
//
// Specifying verification fields for GRIB files:
//    GC/ANNN for accumulation interval NNN
//    GC/ZNNN for vertical level NNN
//    GC/PNNN for pressure level NNN in hPa
//    GC/PNNN-NNN for a range of pressure levels in hPa
//    GC/LNNN for a generic level type
//    GC/RNNN for a specific GRIB record number
//    Where GC is the number of or abbreviation for the grib code
//    to be verified.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
//
// Specifying verification fields for NetCDF files:
//    var_name(i,...,j,*,*) for a single field
//    Where var_name is the name of the NetCDF variable,
//    and i,...,j specifies fixed dimension values,
//    and *,* specifies the two dimensions for the gridded field.
//
//    NOTE: To verify winds as vectors rather than scalars,
//          specify UGRD (or 33) followed by VGRD (or 34) with the
//          same level values.
//
//    NOTE: To process a probability field, add "/PROB", such as "POP/Z0/PROB".
//
// e.g. fcst_field[] = [ "61/A3", "APCP/A24", "RH/L10" ]; for GRIB input
// e.g. fcst_field[] = [ "RAINC(0,*,*)", "QVAPOR(0,5,*,*)" ]; for NetCDF input
//
fcst_field[] = [ "APCP_24(*,*)" ];
obs_field[]  = [ "APCP_03(*,*)" ];
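
//
// NOTE: The names above must match the NetCDF variable names in each input
//       file (as shown by "ncdump -h"); here the forecast file provides
//       APCP_24 and the observation file provides APCP_03.
//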

//
// Specify a comma-separated list of groups of thresholds to be applied to the
// fields listed above.  Thresholds for the forecast and observation fields
// may be specified separately.  If the obs_thresh parameter is left blank,
// it will default to the content of fcst_thresh.
//
// At least one threshold must be provided for each field listed above.  The
// lengths of the "fcst_field" and "fcst_thresh" arrays must match, as must
// the lengths of the "obs_field" and "obs_thresh" arrays.  To apply multiple
// thresholds to a field, separate the threshold values with a space.
//
// Each threshold must be preceded by a two letter indicator for the type of
// thresholding to be performed:
//    'lt' for less than     'le' for less than or equal to
//    'eq' for equal to      'ne' for not equal to
//    'gt' for greater than  'ge' for greater than or equal to
//
// NOTE: Thresholds for probabilities must begin with 0.0, end with 1.0,
//       and be preceded by "ge".
//
// e.g. fcst_thresh[] = [ "gt0.0 ge5.0", "gt0.0", "lt80.0 ge80.0" ];
//
fcst_thresh[] = [ "gt0.0 gt5.0 gt10.0" ];
obs_thresh[]  = [];

//
// Specify a comma-separated list of thresholds to be used when computing
// VL1L2 partial sums for winds.  The thresholds are applied to the wind speed
// values derived from each U/V pair.  Only those U/V pairs which meet the wind
// speed threshold criteria are retained.  If the obs_wind_thresh parameter is
// left blank, it will default to the contents of fcst_wind_thresh.
//
// To apply multiple wind speed thresholds, separate the threshold values with a
// space.  Use "NA" to indicate that no wind speed threshold should be applied.
//
// Each threshold must be preceded by a two letter indicator for the type of
// thresholding to be performed:
//    'lt' for less than     'le' for less than or equal to
//    'eq' for equal to      'ne' for not equal to
//    'gt' for greater than  'ge' for greater than or equal to
//    'NA' for no threshold
//
// e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
//
fcst_wind_thresh[] = [ "NA" ];
obs_wind_thresh[]  = [];

//
// Specify a comma-separated list of grids to be used in masking the data over
// which to perform scoring.  An empty list indicates that no masking grid
// should be applied.  The standard NCEP grids are named "GNNN" where NNN
// indicates the three digit grid number.  Enter "FULL" to score over the
// entire domain.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
//
// e.g. mask_grid[] = [ "FULL" ];
//
mask_grid[] = [ "FULL" ];

//
// Specify a comma-separated list of masking regions to be applied.
// An empty list indicates that no additional masks should be used.
// The masking regions may be defined in one of 4 ways:
//
// (1) An ASCII file containing a lat/lon polygon.
//     Latitude in degrees north and longitude in degrees east.
//     By default, the first and last polygon points are connected.
//     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n points:
//          "poly_name lat1 lon1 lat2 lon2... latn lonn"
//
// (2) The NetCDF output of the gen_poly_mask tool.
//
// (3) A NetCDF data file, followed by the name of the NetCDF variable
//     to be used, and optionally, a threshold to be applied to the field.
//     e.g. "sample.nc var_name gt0.00"
//
// (4) A GRIB data file, followed by a description of the field
//     to be used, and optionally, a threshold to be applied to the field.
//     e.g. "sample.grb APCP/A3 gt0.00"
//
// Any NetCDF or GRIB file used must have the same grid dimensions as the
// data being verified.
//
// MET_BASE may be used in the path for the files above.
//
// e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
//                      "poly_mask.ncf",
//                      "sample.nc APCP",
//                      "sample.grb HGT/Z0 gt100.0" ];
//
mask_poly[] = [];

//
// Specify a comma-separated list of values for alpha to be used when computing
// confidence intervals.  Values of alpha must be between 0 and 1.
//
// e.g. ci_alpha[] = [ 0.05, 0.10 ];
//
ci_alpha[] = [ 0.10, 0.05 ];

//
// Specify the method to be used for computing bootstrap confidence intervals.
// The value for this is interpreted as follows:
//    (0) Use the BCa interval method (computationally intensive)
//    (1) Use the percentile interval method
//
boot_interval = 1;

//
// Specify a proportion between 0 and 1 to define the replicate sample size
// to be used when computing percentile intervals.  The replicate sample
// size is set to boot_rep_prop * n, where n is the number of raw data points.
//
// e.g. boot_rep_prop = 0.80;
//
boot_rep_prop = 1.0;

//
// Specify the number of times each set of matched pair data should be
// resampled when computing bootstrap confidence intervals.  A value of
// zero disables the computation of bootstrap confidence intervals.
//
// e.g. n_boot_rep = 1000;
//
n_boot_rep = 0;

//
// Specify the name of the random number generator to be used.  See the MET
// Users Guide for a list of possible random number generators.
//
boot_rng = "mt19937";

//
// Specify the seed value to be used when computing bootstrap confidence
// intervals.  If left unspecified, the seed will change for each run and
// the computed bootstrap confidence intervals will not be reproducible.
//
boot_seed = "";

//
// Specify a comma-separated list of interpolation method(s) to be used for
// smoothing the data fields prior to comparing them.  The value at each grid
// point is replaced by the measure computed over the neighborhood defined
// around the grid point.  String values are interpreted as follows:
//    MIN     = Minimum in the neighborhood
//    MAX     = Maximum in the neighborhood
//    MEDIAN  = Median in the neighborhood
//    UW_MEAN = Unweighted mean in the neighborhood
//
//    NOTE: The distance-weighted mean (DW_MEAN) is not an option here since
//          it will have no effect on a gridded field.
//
//    NOTE: The least-squares fit (LS_FIT) is not an option here since
//          it reduces to an unweighted mean on a grid.
//
// e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
//
interp_method[] = [ "UW_MEAN" ];

//
// Specify a comma-separated list of box widths to be used by the interpolation
// techniques listed above.  All values must be odd.  A value of 1 indicates
// that no smoothing should be performed.  For values greater than 1, the n*n
// grid points around each point will be used to smooth the data fields.
//
// e.g. interp_width = [ 1, 3, 5 ];
//
interp_width[] = [ 1 ];

//
// The interp_flag controls how the smoothing defined above should be applied:
// (1) Smooth only the forecast field
// (2) Smooth only the observation field
// (3) Smooth both the forecast and observation fields
//
interp_flag = 1;

//
// When smoothing, compute a ratio of the number of valid data points to
// the total number of points in the neighborhood.  If that ratio is less
// than this threshold, do not compute a smoothed forecast value.  This
// threshold must be between 0 and 1.  Setting this threshold to 1 will
// require that each observation be surrounded by n*n valid forecast
// points.
//
// e.g. interp_thresh = 1.0;
//
interp_thresh = 1.0;

//
// Specify a comma-separated list of box widths to be used to define the
// neighborhood size for the neighborhood verification methods.  All values
// must be odd.  For values greater than 1, the n*n grid points around each
// point will be used to define the neighborhood.
//
// e.g. nbr_width = [ 3, 5 ];
//
nbr_width[] = [ 3, 5 ];

//
// When applying the neighborhood verification methods, compute a ratio
// of the number of valid data points to the total number of points in
// the neighborhood.  If that ratio is less than this threshold, do not
// include it in the computations.  This threshold must be between 0
// and 1.  Setting this threshold to 1 will require that each point be
// surrounded by n*n valid forecast points.
//
// e.g. nbr_thresh = 1.0;
//
nbr_thresh = 1.0;

//
// When applying the neighborhood verification methods, apply a threshold
// to the fractional coverage values to define contingency tables from
// which to compute statistics.
//
// e.g. cov_thresh[] = [ "ge0.25", "ge0.50" ];
//
cov_thresh[] = [ "ge0.5" ];

//
// Specify flags to indicate the type of data to be output:
//
//    (1) STAT and FHO Text Files, Forecast, Hit, Observation Rates:
//           Total (TOTAL),
//           Forecast Rate (F_RATE),
//           Hit Rate (H_RATE),
//           Observation Rate (O_RATE)
//
//    (2) STAT and CTC Text Files, Contingency Table Counts:
//           Total (TOTAL),
//           Forecast Yes and Observation Yes Count (FY_OY),
//           Forecast Yes and Observation No Count (FY_ON),
//           Forecast No and Observation Yes Count (FN_OY),
//           Forecast No and Observation No Count (FN_ON)
//
//    (3) STAT and CTS Text Files, Contingency Table Scores:
//           Total (TOTAL),
//           Base Rate (BASER),
//           Forecast Mean (FMEAN),
//           Accuracy (ACC),
//           Frequency Bias (FBIAS),
//           Probability of Detecting Yes (PODY),
//           Probability of Detecting No (PODN),
//           Probability of False Detection (POFD),
//           False Alarm Ratio (FAR),
//           Critical Success Index (CSI),
//           Gilbert Skill Score (GSS),
//           Hanssen and Kuipers Discriminant (HK),
//           Heidke Skill Score (HSS),
//           Odds Ratio (ODDS),
//           NOTE: All statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//    (4) STAT and MCTC Text Files, NxN Multi-Category Contingency Table Counts:
//           Total (TOTAL),
//           Number of Categories (N_CAT),
//           Contingency Table Count columns repeated N_CAT*N_CAT times
//
//    (5) STAT and MCTS Text Files, NxN Multi-Category Contingency Table Scores:
//           Total (TOTAL),
//           Number of Categories (N_CAT),
//           Accuracy (ACC),
//           Hanssen and Kuipers Discriminant (HK),
//           Heidke Skill Score (HSS),
//           Gerrity Score (GER),
//           NOTE: All statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//    (6) STAT and CNT Text Files, Statistics of Continuous Variables:
//           Total (TOTAL),
//           Forecast Mean (FBAR),
//           Forecast Standard Deviation (FSTDEV),
//           Observation Mean (OBAR),
//           Observation Standard Deviation (OSTDEV),
//           Pearson's Correlation Coefficient (PR_CORR),
//           Spearman's Rank Correlation Coefficient (SP_CORR),
//           Kendall Tau Rank Correlation Coefficient (KT_CORR),
//           Number of ranks compared (RANKS),
//           Number of tied ranks in the forecast field (FRANK_TIES),
//           Number of tied ranks in the observation field (ORANK_TIES),
//           Mean Error (ME),
//           Standard Deviation of the Error (ESTDEV),
//           Multiplicative Bias (MBIAS = FBAR/OBAR),
//           Mean Absolute Error (MAE),
//           Mean Squared Error (MSE),
//           Bias-Corrected Mean Squared Error (BCMSE),
//           Root Mean Squared Error (RMSE),
//           Percentiles of the Error (E10, E25, E50, E75, E90)
//           NOTE: Most statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
//           Total (TOTAL),
//           Forecast Mean (FBAR),
//              = mean(f)
//           Observation Mean (OBAR),
//              = mean(o)
//           Forecast*Observation Product Mean (FOBAR),
//              = mean(f*o)
//           Forecast Squared Mean (FFBAR),
//              = mean(f^2)
//           Observation Squared Mean (OOBAR)
//              = mean(o^2)
//
//    (8) STAT and VL1L2 Text Files, Vector Partial Sums:
//           Total (TOTAL),
//           U-Forecast Mean (UFBAR),
//              = mean(uf)
//           V-Forecast Mean (VFBAR),
//              = mean(vf)
//           U-Observation Mean (UOBAR),
//              = mean(uo)
//           V-Observation Mean (VOBAR),
//              = mean(vo)
//           U-Product Plus V-Product (UVFOBAR),
//              = mean(uf*uo+vf*vo)
//           U-Forecast Squared Plus V-Forecast Squared (UVFFBAR),
//              = mean(uf^2+vf^2)
//           U-Observation Squared Plus V-Observation Squared (UVOOBAR)
//              = mean(uo^2+vo^2)
//
//    (9) STAT and PCT Text Files, Nx2 Probability Contingency Table Counts:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Row Observation Yes Count (OY_i),
//           Row Observation No Count (ON_i),
//           NOTE: Previous 3 columns repeated for each row in the table
//           Last Probability Threshold Value (THRESH_n)
//
//   (10) STAT and PSTD Text Files, Nx2 Probability Contingency Table Scores:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Base Rate (BASER) with confidence interval limits,
//           Reliability (RELIABILITY),
//           Resolution (RESOLUTION),
//           Uncertainty (UNCERTAINTY),
//           Area Under the ROC Curve (ROC_AUC),
//           Brier Score (BRIER) with confidence interval limits,
//           Probability Threshold Value (THRESH_i)
//           NOTE: Previous column repeated for each probability threshold.
//
//   (11) STAT and PJC Text Files, Joint/Continuous Statistics of
//                                 Probabilistic Variables:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Observation Yes Count Divided by Total (OY_TP_i),
//           Observation No Count Divided by Total (ON_TP_i),
//           Calibration (CALIBRATION_i),
//           Refinement (REFINEMENT_i),
//           Likelihood (LIKELIHOOD_i),
//           Base Rate (BASER_i),
//           NOTE: Previous 7 columns repeated for each row in the table
//           Last Probability Threshold Value (THRESH_n)
//
//   (12) STAT and PRC Text Files, ROC Curve Points for
//                                 Probabilistic Variables:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Probability of Detecting Yes (PODY_i),
//           Probability of False Detection (POFD_i),
//           NOTE: Previous 3 columns repeated for each row in the table
//           Last Probability Threshold Value (THRESH_n)
//
//   (13) STAT and NBRCTC Text Files, Neighborhood Methods Contingency Table Counts:
//           Total (TOTAL),
//           Forecast Yes and Observation Yes Count (FY_OY),
//           Forecast Yes and Observation No Count (FY_ON),
//           Forecast No and Observation Yes Count (FN_OY),
//           Forecast No and Observation No Count (FN_ON),
//           Fractional Threshold Value (FRAC_T)
//
//   (14) STAT and NBRCTS Text Files, Neighborhood Methods Contingency Table Scores:
//           Total (TOTAL),
//           Base Rate (BASER),
//           Forecast Mean (FMEAN),
//           Accuracy (ACC),
//           Bias (BIAS),
//           Probability of Detecting Yes (PODY),
//           Probability of Detecting No (PODN),
//           Probability of False Detection (POFD),
//           False Alarm Ratio (FAR),
//           Critical Success Index (CSI),
//           Gilbert Skill Score (GSS),
//           Hanssen and Kuipers Discriminant (HK),
//           Heidke Skill Score (HSS),
//           Odds Ratio (ODDS),
//           NOTE: Most statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//   (15) STAT and NBRCNT Text Files, Neighborhood Methods Continuous Scores:
//           Total (TOTAL),
//           Fractions Brier Score (FBS),
//           Fractions Skill Score (FSS)
//
//   (16) NetCDF File containing difference fields for each grib
//        code/mask combination.  A non-zero value indicates that
//        this NetCDF file should be produced.  A value of 0
//        indicates that it should not be produced.
//
// Values for flags (1) through (15) are interpreted as follows:
//    (0) Do not generate output of this type
//    (1) Write output to a STAT file
//    (2) Write output to a STAT file and a text file
//
output_flag[] = [ 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 1 ];
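
//
// NOTE: In the list above, flags (1) through (8) and (13) through (15) are
//       set to 2 (write both STAT and text files), flags (9) through (12)
//       are turned off, and the final entry enables the NetCDF difference
//       file.
//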

//
// Flag to indicate whether Kendall's Tau and Spearman's Rank Correlation
// Coefficients should be computed.  Computing them over large datasets is
// computationally intensive and slows down the runtime execution significantly.
//    (0) Do not compute these correlation coefficients
//    (1) Compute these correlation coefficients
//
rank_corr_flag = 0;

//
// Specify the GRIB Table 2 parameter table version number to be used
// for interpreting GRIB codes.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
//
grib_ptv = 2;

//
// Directory where temporary files should be written.
//
tmp_dir = "/tmp";

//
// Prefix to be used for the output file names.
//
output_prefix = "APCP_24";

//
// Indicate a version number for the contents of this configuration file.
// The value should generally not be modified.
//
version = "V3.0";

------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Fri Feb 21 02:33:42 2014

Thanks, John.
I have made the changes as per your config file, but the error persists.
-bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
GSL_RNG_TYPE=mt19937
GSL_RNG_SEED=18446744073358673747
Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
Observation File: ../trmm_nc_data/02june2011.nc
Configuration File: GridStatConfig_APCP_24
NetCDF: Attribute not found
-bash-3.2$


2. I have used ncdump to look at my file's attributes.
Are you referring to these attributes?
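
(For reference, I listed these with something like:
ncdump -h ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc)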

// global attributes:
                :FileOrigins = "File 2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host rmcdlh by the MET pcp_combine tool" ;
                :MET_version = "V3.0" ;
                :MET_tool = "pcp_combine" ;

geeta


------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Sun Feb 23 09:54:11 2014

Hi John,
Can you help with changing/appending the global attributes of the
NetCDF file?

Can you provide some more hints?

regards

geeta

From: geeta124 at hotmail.com
To: met_help at ucar.edu
Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy verf.
Date: Fri, 21 Feb 2014 15:03:34 +0530




Thanks John,
I have made the changes as per your config file,
but the error persists.
-bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
GSL_RNG_TYPE=mt19937
GSL_RNG_SEED=18446744073358673747
Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
Observation File: ../trmm_nc_data/02june2011.nc
Configuration File: GridStatConfig_APCP_24
NetCDF: Attribute not found
-bash-3.2$


2. I have used ncdump to see my file attributes.
Are you referring to these attributes?

// global attributes:
                :FileOrigins = "File 2011060100_WRFPRS_d01.003Z.nc
generated 20140130_092500 UTC on host rmcdlh by the MET pcp_combine
tool" ;
                :MET_version = "V3.0" ;
                :MET_tool = "pcp_combine" ;
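
For reference: one way to change a global attribute like the
MET_version shown above is with the ncatted utility from the NCO
package (this assumes NCO is installed; it is not part of MET).  A
minimal sketch, using the attribute and file name from this thread:

    # overwrite (or create) the MET_version global attribute in place
    ncatted -O -a MET_version,global,o,c,"V3.0" 02june2011.nc

Here "-a" names the attribute, "global" targets the file's global
attributes rather than a single variable, "o" selects overwrite mode,
and "c" stores the value as a character string.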


------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Mon Feb 24 17:59:24 2014

Hi John,
You had discussed upscaling (of both obs and fcst, or either one of
them), where the forecast is compared with observations that have been
averaged to coarser scales.
How is this averaging defined in the configuration file?

Please also let me know regarding the global attributes.

geeta
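
For reference, in the METv3.0 grid_stat config file quoted earlier in
this thread, that averaging is controlled by the interpolation
entries.  A sketch using only settings documented in that file:

    interp_method[] = [ "UW_MEAN" ]; // unweighted mean over each box
    interp_width[]  = [ 1, 3, 5 ];   // odd box widths; 1 = no smoothing
    interp_flag     = 3;             // 1 = fcst only, 2 = obs only, 3 = both

Since it is the observations that are averaged to coarser scales here,
interp_flag = 2 (observation field only) would match that use case.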

From: geeta124 at hotmail.com
To: met_help at ucar.edu
Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy verf.
Date: Sun, 23 Feb 2014 22:24:04 +0530




hi John,
Can you help with changing/ appending  the GLOBAL attributes of the
NETCDF file???.

Can you provide some more hints.

regards

geeta

From: geeta124 at hotmail.com
To: met_help at ucar.edu
Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy verf.
Date: Fri, 21 Feb 2014 15:03:34 +0530




thanks John,
I have made the changes as per your config file.
But the error persists.
-bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
GSL_RNG_TYPE=mt19937
GSL_RNG_SEED=18446744073358673747
Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
Observation File: ../trmm_nc_data/02june2011.nc
Configuration File: GridStatConfig_APCP_24
NetCDF: Attribute not found
-bash-3.2$


2. I have used  ncdump to see my file attributes.
Are u referring to these attributes???

// global attributes:
                :FileOrigins = "File 2011060100_WRFPRS_d01.003Z.nc
generated 20140130_092500 UTC on host rmcdlh by the MET pcp_combine
tool" ;
                :MET_version = "V3.0" ;
                :MET_tool = "pcp_combine" ;

Following is my Config file.
____________________________________________________________________
////////////////////////////////////////////////////////////////////////////////
//
// Default grid_stat configuration file
//
////////////////////////////////////////////////////////////////////////////////
//
// Specify a name to designate the model being verified.  This name
will be
// written to the second column of the ASCII output generated.
//
model = "WRF";
//
// Specify a comma-separated list of fields to be verified.  The
forecast and
// observation fields may be specified separately.  If the obs_field
parameter
// is left blank, it will default to the contents of fcst_field.
//
// Each field is specified as a GRIB code or abbreviation followed by
an
// accumulation or vertical level indicator for GRIB files or as a
variable name
// followed by a list of dimensions for NetCDF files output from
p_interp or MET.
//
// Specifying verification fields for GRIB files:
//    GC/ANNN for accumulation interval NNN
//    GC/ZNNN for vertical level NNN
//    GC/PNNN for pressure level NNN in hPa
//    GC/PNNN-NNN for a range of pressure levels in hPa
//    GC/LNNN for a generic level type
//    GC/RNNN for a specific GRIB record number
//    Where GC is the number of or abbreviation for the grib code
//    to be verified.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
//
// Specifying verification fields for NetCDF files:
//    var_name(i,...,j,*,*) for a single field
//    Where var_name is the name of the NetCDF variable,
//    and i,...,j specifies fixed dimension values,
//    and *,* specifies the two dimensions for the gridded field.
//
//    NOTE: To verify winds as vectors rather than scalars,
//          specify UGRD (or 33) followd by VGRD (or 34) with the
//          same level values.
//
//    NOTE: To process a probability field, add "/PROB", such as
"POP/Z0/PROB".
//
// e.g. fcst_field[] = [ "61/A3", "APCP/A24", "RH/L10" ]; for GRIB
input
// e.g. fcst_field[] = [ "RAINC(0,*,*)", "QVAPOR(0,5,*,*)" ]; for
NetCDF input
//
fcst_field[] = [ "APCP_24(*,*)" ];
obs_field[]  = [ "APCP_03(*,*)" ];
//
// Specify a comma-separated list of groups of thresholds to be
applied to the
// fields listed above.  Thresholds for the forecast and observation
fields
// may be specified separately.  If the obs_thresh parameter is left
blank,
// it will default to the content of fcst_thresh.
//
// At least one threshold must be provided for each field listed
above.  The
// lengths of the "fcst_field" and "fcst_thresh" arrays must match, as
must
// lengths of the "obs_field" and "obs_thresh" arrays.  To apply
multiple
// thresholds to a field, separate the threshold values with a space.
//
// Each threshold must be preceded by a two letter indicator for the
type of
// thresholding to be performed:
//    'lt' for less than     'le' for less than or equal to
//    'eq' for equal to      'ne' for not equal to
//    'gt' for greater than  'ge' for greater than or equal to
//
// NOTE: Thresholds for probabilities must begin with 0.0, end with
1.0,
//       and be preceeded by "ge".
//
// e.g. fcst_thresh[] = [ "gt0.0 ge5.0", "gt0.0", "lt80.0 ge80.0" ];
//
fcst_thresh[] = [ "gt0.0 gt5.0 gt10.0" ];
obs_thresh[]  = [];
//
// Specify a comma-separated list of thresholds to be used when
computing
// VL1L2 partial sums for winds.  The thresholds are applied to the
wind speed
// values derived from each U/V pair.  Only those U/V pairs which meet
the wind
// speed threshold criteria are retained.  If the obs_wind_thresh
parameter is
// left blank, it will default to the contents of fcst_wind_thresh.
//
// To apply multiple wind speed thresholds, separate the threshold
values with a
// space.  Use "NA" to indicate that no wind speed threshold should be
applied.
//
// Each threshold must be preceded by a two letter indicator for the
type of
// thresholding to be performed:
//    'lt' for less than     'le' for less than or equal to
//    'eq' for equal to      'ne' for not equal to
//    'gt' for greater than  'ge' for greater than or equal to
//    'NA' for no threshold
//
// e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
//
fcst_wind_thresh[] = [ "NA" ];
obs_wind_thresh[]  = [];
//
// Specify a comma-separated list of grids to be used in masking the
data over
// which to perform scoring.  An empty list indicates that no masking
grid
// should be performed.  The standard NCEP grids are named "GNNN"
where NNN
// indicates the three digit grid number.  Enter "FULL" to score over
the
// entire domain.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
//
// e.g. mask_grid[] = [ "FULL" ];
//
mask_grid[] = [ "FULL" ];
//
// Specify a comma-separated list of masking regions to be applied.
// An empty list indicates that no additional masks should be used.
// The masking regions may be defined in one of 4 ways:
//
// (1) An ASCII file containing a lat/lon polygon.
//     Latitude in degrees north and longitude in degrees east.
//     By default, the first and last polygon points are connected.
//     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n points:
//          "poly_name lat1 lon1 lat2 lon2... latn lonn"
//
// (2) The NetCDF output of the gen_poly_mask tool.
//
// (3) A NetCDF data file, followed by the name of the NetCDF variable
//     to be used, and optionally, a threshold to be applied to the
field.
//     e.g. "sample.nc var_name gt0.00"
//
// (4) A GRIB data file, followed by a description of the field
//     to be used, and optionally, a threshold to be applied to the
field.
//     e.g. "sample.grb APCP/A3 gt0.00"
//
// Any NetCDF or GRIB file used must have the same grid dimensions as
the
// data being verified.
//
// MET_BASE may be used in the path for the files above.
//
// e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
//                      "poly_mask.ncf",
//                      "sample.nc APCP",
//                      "sample.grb HGT/Z0 gt100.0" ];
//
mask_poly[] = [];
//
// Specify a comma-separated list of values for alpha to be used when
computing
// confidence intervals.  Values of alpha must be between 0 and 1.
//
// e.g. ci_alpha[] = [ 0.05, 0.10 ];
//
ci_alpha[] = [ 0.10, 0.05 ];
//
// Specify the method to be used for computing bootstrap confidence
intervals.
// The value for this is interpreted as follows:
//    (0) Use the BCa interval method (computationally intensive)
//    (1) Use the percentile interval method
//
boot_interval = 1;
//
// Specify a proportion between 0 and 1 to define the replicate sample
size
// to be used when computing percentile intervals.  The replicate
sample
// size is set to boot_rep_prop * n, where n is the number of raw data
points.
//
// e.g boot_rep_prop = 0.80;
//
boot_rep_prop = 1.0;
//
// Specify the number of times each set of matched pair data should be
// resampled when computing bootstrap confidence intervals.  A value
of
// zero disables the computation of bootstrap condifence intervals.
//
// e.g. n_boot_rep = 1000;
//
n_boot_rep = 0;
//
// Specify the name of the random number generator to be used.  See
the MET
// Users Guide for a list of possible random number generators.
//
boot_rng = "mt19937";
//
// Specify the seed value to be used when computing bootstrap
confidence
// intervals.  If left unspecified, the seed will change for each run
and
// the computed bootstrap confidence intervals will not be
reproducable.
//
boot_seed = "";
//
// Specify a comma-separated list of interpolation method(s) to be
used for
// smoothing the data fields prior to comparing them.  The value at
each grid
// point is replaced by the measure computed over the neighborhood
defined
// around the grid point.  String values are interpreted as follows:
//    MIN     = Minimum in the neighborhood
//    MAX     = Maximum in the neighborhood
//    MEDIAN  = Median in the neighborhood
//    UW_MEAN = Unweighted mean in the neighborhood
//
//    NOTE: The distance-weighted mean (DW_MEAN) is not an option here
since
//          it will have no effect on a gridded field.
//
//    NOTE: The least-squares fit (LS_FIT) is not an option here since
//          it reduces to an unweighted mean on a grid.
//
// e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
//
interp_method[] = [ "UW_MEAN" ];
//
// Specify a comma-separated list of box widths to be used by the
interpolation
// techniques listed above.  All values must be odd.  A value of 1
indicates
// that no smoothing should be performed.  For values greater than 1,
the n*n
// grid points around each point will be used to smooth the data
fields.
//
// e.g. interp_width = [ 1, 3, 5 ];
//
interp_width[] = [ 1 ];
//
// The interp_flag controls how the smoothing defined above should be
applied:
// (1) Smooth only the forecast field
// (2) Smooth only the observation field
// (3) Smooth both the forecast and observation fields
//
interp_flag = 1;
//
// When smoothing, compute a ratio of the number of valid data points
to
// the total number of points in the neighborhood.  If that ratio is
less
// than this threshold, do not compute a smoothed forecast value.
This
// threshold must be between 0 and 1.  Setting this threshold to 1
will
// require that each observation be surrounded by n*n valid forecast
// points.
//
// e.g. interp_thresh = 1.0;
//
interp_thresh = 1.0;
//
// Specify a comma-separated list of box widths to be used to define
the
// neighborhood size for the neighborhood verification methods. All
values
// must be odd.  For values greater than 1, the n*n grid points around
each
// point will be used to define the neighborhood.
//
// e.g. nbr_width = [ 3, 5 ];
//
nbr_width[] = [ 3, 5 ];
//
// When applying the neighborhood verification methods, compute a
ratio
// of the number of valid data points to the total number of points in
// the neighborhood.  If that ratio is less than this threshold, do
not
// include it in the computations.  This threshold must be between 0
// and 1.  Setting this threshold to 1 will require that each point be
// surrounded by n*n valid forecast points.
//
// e.g. nbr_thresh = 1.0;
//
nbr_thresh = 1.0;
//
// When applying the neighborhood verification methods, apply a
threshold
// to the fractional coverage values to define contingency tables from
// which to compute statistics.
//
// e.g. cov_thresh[] = [ "ge0.25", "ge0.50" ];
//
cov_thresh[] = [ "ge0.5" ];
//
// Specify flags to indicate the type of data to be output:
//
//    (1) STAT and FHO Text Files, Forecast, Hit, Observation Rates:
//           Total (TOTAL),
//           Forecast Rate (F_RATE),
//           Hit Rate (H_RATE),
//           Observation Rate (O_RATE)
//
//    (2) STAT and CTC Text Files, Contingency Table Counts:
//           Total (TOTAL),
//           Forecast Yes and Observation Yes Count (FY_OY),
//           Forecast Yes and Observation No Count (FY_ON),
//           Forecast No and Observation Yes Count (FN_OY),
//           Forecast No and Observation No Count (FN_ON)
//
//    (3) STAT and CTS Text Files, Contingency Table Scores:
//           Total (TOTAL),
//           Base Rate (BASER),
//           Forecast Mean (FMEAN),
//           Accuracy (ACC),
//           Frequency Bias (FBIAS),
//           Probability of Detecting Yes (PODY),
//           Probability of Detecting No (PODN),
//           Probability of False Detection (POFD),
//           False Alarm Ratio (FAR),
//           Critical Success Index (CSI),
//           Gilbert Skill Score (GSS),
//           Hanssen and Kuipers Discriminant (HK),
//           Heidke Skill Score (HSS),
//           Odds Ratio (ODDS),
//           NOTE: All statistics listed above contain parametric
and/or
//                 non-parametric confidence interval limits.
//
//    (4) STAT and MCTC Text Files, NxN Multi-Category Contingency
Table Counts:
//           Total (TOTAL),
//           Number of Categories (N_CAT),
//           Contingency Table Count columns repeated N_CAT*N_CAT
times
//
//    (5) STAT and MCTS Text Files, NxN Multi-Category Contingency
Table Scores:
//           Total (TOTAL),
//           Number of Categories (N_CAT),
//           Accuracy (ACC),
//           Hanssen and Kuipers Discriminant (HK),
//           Heidke Skill Score (HSS),
//           Gerrity Score (GER),
//           NOTE: All statistics listed above contain parametric
and/or
//                 non-parametric confidence interval limits.
//
//    (6) STAT and CNT Text Files, Statistics of Continuous Variables:
//           Total (TOTAL),
//           Forecast Mean (FBAR),
//           Forecast Standard Deviation (FSTDEV),
//           Observation Mean (OBAR),
//           Observation Standard Deviation (OSTDEV),
//           Pearson's Correlation Coefficient (PR_CORR),
//           Spearman's Rank Correlation Coefficient (SP_CORR),
//           Kendall Tau Rank Correlation Coefficient (KT_CORR),
//           Number of ranks compared (RANKS),
//           Number of tied ranks in the forecast field (FRANK_TIES),
//           Number of tied ranks in the observation field (ORANK_TIES),
//           Mean Error (ME),
//           Standard Deviation of the Error (ESTDEV),
//           Multiplicative Bias (MBIAS = FBAR / OBAR),
//           Mean Absolute Error (MAE),
//           Mean Squared Error (MSE),
//           Bias-Corrected Mean Squared Error (BCMSE),
//           Root Mean Squared Error (RMSE),
//           Percentiles of the Error (E10, E25, E50, E75, E90)
//           NOTE: Most statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
//           Total (TOTAL),
//           Forecast Mean (FBAR),
//              = mean(f)
//           Observation Mean (OBAR),
//              = mean(o)
//           Forecast*Observation Product Mean (FOBAR),
//              = mean(f*o)
//           Forecast Squared Mean (FFBAR),
//              = mean(f^2)
//           Observation Squared Mean (OOBAR)
//              = mean(o^2)
//
//    (8) STAT and VL1L2 Text Files, Vector Partial Sums:
//           Total (TOTAL),
//           U-Forecast Mean (UFBAR),
//              = mean(uf)
//           V-Forecast Mean (VFBAR),
//              = mean(vf)
//           U-Observation Mean (UOBAR),
//              = mean(uo)
//           V-Observation Mean (VOBAR),
//              = mean(vo)
//           U-Product Plus V-Product (UVFOBAR),
//              = mean(uf*uo+vf*vo)
//           U-Forecast Squared Plus V-Forecast Squared (UVFFBAR),
//              = mean(uf^2+vf^2)
//           U-Observation Squared Plus V-Observation Squared (UVOOBAR)
//              = mean(uo^2+vo^2)
//
//    (9) STAT and PCT Text Files, Nx2 Probability Contingency Table Counts:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Row Observation Yes Count (OY_i),
//           Row Observation No Count (ON_i),
//           NOTE: Previous 3 columns repeated for each row in the table
//           Last Probability Threshold Value (THRESH_n)
//
//   (10) STAT and PSTD Text Files, Nx2 Probability Contingency Table Scores:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Base Rate (BASER) with confidence interval limits,
//           Reliability (RELIABILITY),
//           Resolution (RESOLUTION),
//           Uncertainty (UNCERTAINTY),
//           Area Under the ROC Curve (ROC_AUC),
//           Brier Score (BRIER) with confidence interval limits,
//           Probability Threshold Value (THRESH_i)
//           NOTE: Previous column repeated for each probability threshold.
//
//   (11) STAT and PJC Text Files, Joint/Continuous Statistics of
//                                 Probabilistic Variables:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Observation Yes Count Divided by Total (OY_TP_i),
//           Observation No Count Divided by Total (ON_TP_i),
//           Calibration (CALIBRATION_i),
//           Refinement (REFINEMENT_i),
//           Likelihood (LIKELIHOOD_i),
//           Base Rate (BASER_i),
//           NOTE: Previous 7 columns repeated for each row in the table
//           Last Probability Threshold Value (THRESH_n)
//
//   (12) STAT and PRC Text Files, ROC Curve Points for
//                                 Probabilistic Variables:
//           Total (TOTAL),
//           Number of Forecast Probability Thresholds (N_THRESH),
//           Probability Threshold Value (THRESH_i),
//           Probability of Detecting Yes (PODY_i),
//           Probability of False Detection (POFD_i),
//           NOTE: Previous 3 columns repeated for each row in the table
//           Last Probability Threshold Value (THRESH_n)
//
//   (13) STAT and NBRCTC Text Files, Neighborhood Methods Contingency Table Counts:
//           Total (TOTAL),
//           Forecast Yes and Observation Yes Count (FY_OY),
//           Forecast Yes and Observation No Count (FY_ON),
//           Forecast No and Observation Yes Count (FN_OY),
//           Forecast No and Observation No Count (FN_ON),
//           Fractional Threshold Value (FRAC_T)
//
//   (14) STAT and NBRCTS Text Files, Neighborhood Methods Contingency Table Scores:
//           Total (TOTAL),
//           Base Rate (BASER),
//           Forecast Mean (FMEAN),
//           Accuracy (ACC),
//           Bias (BIAS),
//           Probability of Detecting Yes (PODY),
//           Probability of Detecting No (PODN),
//           Probability of False Detection (POFD),
//           False Alarm Ratio (FAR),
//           Critical Success Index (CSI),
//           Gilbert Skill Score (GSS),
//           Hanssen and Kuipers Discriminant (HK),
//           Heidke Skill Score (HSS),
//           Odds Ratio (ODDS),
//           NOTE: Most statistics listed above contain parametric and/or
//                 non-parametric confidence interval limits.
//
//   (15) STAT and NBRCNT Text Files, Neighborhood Methods Continuous Scores:
//           Total (TOTAL),
//           Fractions Brier Score (FBS),
//           Fractions Skill Score (FSS)
//
//   (16) NetCDF File containing difference fields for each grib
//        code/mask combination.  A non-zero value indicates that
//        this NetCDF file should be produced.  A value of 0
//        indicates that it should not be produced.
//
// Values for flags (1) through (15) are interpreted as follows:
//    (0) Do not generate output of this type
//    (1) Write output to a STAT file
//    (2) Write output to a STAT file and a text file
//
output_flag[] = [ 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 1 ];
//
// Flag to indicate whether Kendall's Tau and Spearman's Rank Correlation
// Coefficients should be computed.  Computing them over large datasets is
// computationally intensive and slows down the runtime execution
// significantly.
//    (0) Do not compute these correlation coefficients
//    (1) Compute these correlation coefficients
//
rank_corr_flag = 0;
//
// Specify the GRIB Table 2 parameter table version number to be used
// for interpreting GRIB codes.
// http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
//
grib_ptv = 2;
//
// Directory where temporary files should be written.
//
tmp_dir = "/tmp";
//
// Prefix to be used for the output file names.
//
output_prefix = "APCP_24";
//
// Indicate a version number for the contents of this configuration file.
// The value should generally not be modified.
//
version = "V3.0";


geeta


------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy verf.
From: John Halley Gotway
Time: Tue Feb 25 11:26:10 2014

Geeta,

Sorry, I was out of the office yesterday.  Looking back through this
ticket, I see that you're still getting the following error:
    NetCDF: Attribute not found

You said that you updated the NetCDF files to include the following
global attribute:
    MET_version = "V3.0" ;

If you've added this to both the forecast and observation NetCDF
files, and you're still getting this error, I'll need to see your data
files to debug more.  Please post them to our anonymous ftp site:
    http://www.dtcenter.org/met/users/support/met_help.php#ftp
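
As an aside, if you need a way to add or overwrite that global
attribute, the NCO "ncatted" utility can do it (assuming you have NCO
installed).  For example, something like this should set MET_version
in the observation file:
    ncatted -O -a MET_version,global,o,c,"V3.0" 02june2011.nc
Here "global" targets the global attributes, "o" means overwrite the
attribute (creating it if it doesn't exist), and "c" stores the value
as a character string.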

I see 2 other questions in your emails:

(1) How can you control the optional "upscaling" or "smoothing" done
by grid_stat?
     In METv3.0, that is controlled by configuration file options that
begin with "interp_".  For example, try the following:
        interp_method[] = [ "UW_MEAN" ];
        interp_width[]  = [ 1, 3, 5 ];
        interp_flag     = 3;

     For each output line you were getting before, you should now get
2 more.  Since interp_flag is set to 3, grid_stat will smooth both the
forecast and observation fields.  For interp_width = 3,
it'll smooth each data point by computing the average of the 9 points
in a 3x3 box around each grid point.  For interp_width = 5, it'll
smooth each data point by computing the average of the 25 points
in a 5x5 box around each grid point.  You can look to see how the
scores change as you do more and more smoothing.

However, computing the fractions skill score (in the NBRCNT line type)
is a common way of doing "neighborhood" or "fuzzy" verification.
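
In METv3.0, the neighborhood settings follow the same pattern as the
"interp_" options.  For example, you might try something like:
    nbr_width[]  = [ 3, 5, 9 ];
    nbr_thresh   = 1.0;
    cov_thresh[] = [ "ge0.5" ];
The NBRCNT output itself is switched on and off by the 15th entry of
the output_flag[] array, as described in the config file comments.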

(2) You also asked about plotting the station locations from
point_stat.  You have a couple of options.  The "plot_point_obs"
utility reads the NetCDF output files from the pb2nc or ascii2nc tools
and plots a red dot for each observation lat/lon it finds in the data.
It is intended to just give you a quick look at the location of the
observations to make sure that they exist where you expect.  It is not
a general-purpose or very flexible plotting tool.
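
You'd run it on the NetCDF point observation file; the basic usage
looks something like this (the file names here are just placeholders):
    plot_point_obs sample_obs.nc obs_locations.ps
where the first argument is the output of pb2nc or ascii2nc and the
second is the name of the PostScript plot to be written.
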
Alternatively, you could look at the "MPR" output line type from
point_stat.  This contains the individual matched pair values that
went into
the computation of statistics.  The MPR line type includes columns
named "OBS_LAT" and "OBS_LON" giving the point observation location
information.  You could read the lat/lon information from the MPR
line type and use whatever plotting tool you prefer to plot the
observation locations.
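
For example, since the first row of the MPR text file lists the column
names, a one-line awk sketch like this (the file name is just a
placeholder) would pull out the observation locations:
    awk 'NR==1 {for(i=1;i<=NF;i++) c[$i]=i; next} {print $c["OBS_LAT"], $c["OBS_LON"]}' point_stat_mpr.txt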

If you do post more data to the ftp site, please write me back and
I'll go grab it.

Thanks,
John


On 02/24/2014 05:59 PM, Geeta Geeta via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>
> hi john,
> you had discussed the upscaling (of both obs and fcst, or either one
> of them).  The forecast is compared with the observations, which are
> averaged to coarser scales.
> How is this averaging defined in the configuration file?
>
> Please let me know about the global attributes.
>
> geeta
>
> From: geeta124 at hotmail.com
> To: met_help at ucar.edu
> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> Date: Sun, 23 Feb 2014 22:24:04 +0530
>
>
>
>
> hi John,
> Can you help with changing/appending the GLOBAL attributes of the
> NETCDF file?
>
> Can you provide some more hints?
>
> regards
>
> geeta
>
> From: geeta124 at hotmail.com
> To: met_help at ucar.edu
> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> Date: Fri, 21 Feb 2014 15:03:34 +0530
>
>
>
>
> thanks John,
> I have made the changes as per your config file.
> But the error persists.
> -bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> GSL_RNG_TYPE=mt19937
> GSL_RNG_SEED=18446744073358673747
> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> Observation File: ../trmm_nc_data/02june2011.nc
> Configuration File: GridStatConfig_APCP_24
> NetCDF: Attribute not found
> -bash-3.2$
>
>
> 2. I have used ncdump to see my file attributes.
> Are you referring to these attributes?
>
> // global attributes:
>                  :FileOrigins = "File 2011060100_WRFPRS_d01.003Z.nc
generated 20140130_092500 UTC on host rmcdlh by the MET pcp_combine
tool" ;
>                  :MET_version = "V3.0" ;
>                  :MET_tool = "pcp_combine" ;
>
> geeta
>

------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Tue Feb 25 23:26:56 2014

thanks John,
I have NOT done anything with the NC files as yet. I was asking you
how to go about changing the attribute.

I am posting the o/p of one of the Forecast files in netcdf format
using ncdump. It shows MET_version = "V3.0", which is what you want.
bash-3.2$ ncdump -h test.nc
netcdf test {
dimensions:
        lat = 53 ;
        lon = 53 ;
variables:
        float lat(lat, lon) ;
                lat:long_name = "latitude" ;
                lat:units = "degrees_north" ;
                lat:standard_name = "latitude" ;
        float lon(lat, lon) ;
                lon:long_name = "longitude" ;
                lon:units = "degrees_east" ;
                lon:standard_name = "longitude" ;
        float APCP_24(lat, lon) ;
                APCP_24:name = "APCP" ;
                APCP_24:long_name = "Total precipitation" ;
                APCP_24:level = "A24" ;
                APCP_24:units = "kg/m^2" ;
                APCP_24:grib_code = 61 ;
                APCP_24:_FillValue = -9999.f ;
                APCP_24:init_time = "20110601_000000" ;
                APCP_24:init_time_ut = 1306886400 ;
                APCP_24:valid_time = "20110602_030000" ;
                APCP_24:valid_time_ut = 1306983600 ;
                APCP_24:accum_time = "240000" ;
                APCP_24:accum_time_sec = 86400 ;

// global attributes:
                :FileOrigins = "File 2011060100_WRFPRS_d01.003Z.nc
generated 20140130_092500 UTC on host rmcdlh by the MET pcp_combine
tool" ;
                :MET_version = "V3.0" ;
                :MET_tool = "pcp_combine" ;
                :RunCommand = "Subtraction: 2011060100_WRFPRS_d01.027
with accumulation of 270000 minus 2011060100_WRFPRS_d01.003 with
accumulation of 030000." ;
                :Projection = "LatLon" ;
                :lat_ll = "9.000000 degrees_north" ;
                :lon_ll = "74.000000 degrees_east" ;
                :delta_lat = "0.250000 degrees" ;
                :delta_lon = "0.250000 degrees" ;
                :Nlat = "53 grid_points" ;
                :Nlon = "53 grid_points" ;
}
bash-3.2$
*************************************
O/P of OBSERVATION FILE (NETCDF format) *
*************************************
bash-3.2$ ncdump -h ../trmm_nc_data/test.nc
netcdf test {
dimensions:
        lon = 53 ;
        lat = 53 ;
variables:
        double lon(lon) ;
                lon:units = "degrees_east" ;
        double lat(lat) ;
                lat:units = "degrees_north" ;
        float APCP_03(lat, lon) ;
                APCP_03:units = "kg/m^2" ;
                APCP_03:missing_value = -9999.f ;
                APCP_03:long_name = "Total precipitation" ;
                APCP_03:name = "APCP" ;
                APCP_03:level = "A3" ;
                APCP_03:grib_code = 61.f ;
                APCP_03:_FillValue = -9999.f ;
                APCP_03:init_time = "20110602_000000" ;
                APCP_03:init_time_ut = 1306972800. ;
                APCP_03:valid_time = "20110602_030000" ;
                APCP_03:valid_time_ut = 1306983600. ;
                APCP_03:accum_time = "030000" ;
                APCP_03:accum_time_sec = 10800.f ;

// global attributes:
                :FileOrigins = "File ../../../vpt/geeta/02june2011.nc
generated 20140123_163031 on host ncmr0102 by the Rscript trmm2nc.R" ;
                :MET_version = "V3.0.1" ;
                :Projection = "LatLon" ;
                :lat_ll = "9 degrees_north" ;
                :lon_ll = "74 degrees_east" ;
                :delta_lat = "0.25 degrees" ;
                :delta_lon = "0.25 degrees" ;
                :Nlat = "53 grid_points" ;
                :Nlon = "53 grid_points" ;
}


Anyway I am sending you my data once again. The directory is
geeta_data-25feb2014.
Do you suspect the location of the NetCDF files?

I shall be looking forward to hearing from you.

thanks
geeta

> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> From: met_help at ucar.edu
> To: geeta124 at hotmail.com
> Date: Tue, 25 Feb 2014 11:26:10 -0700
>
> Geeta,
>
> Sorry, I was out of the office yesterday.  Looking back through this
ticket, I see that you're still getting the following error:
>     NetCDF: Attribute not found
>
> You said that you updated the NetCDF files to include the following
global attribute:
>     MET_version = "V3.0" ;
>
> If you've added this to both the forecast and observation NetCDF
files, and you're still getting this error, I'll need to see your data
files to debug more.  Please post them to our anonymous ftp site:
>     http://www.dtcenter.org/met/users/support/met_help.php#ftp
>
> I see 2 other questions in your emails:
>
> (1) How can you control the optional "upscaling" or "smoothing" done
by grid_stat?
>      In METv3.0, that is controlled by configuration file options
that begin with "interp_".  For example, try the following:
>         interp_method[] = [ "UW_MEAN" ];
>         interp_width[]  = [ 1, 3, 5 ];
>         interp_flag     = 3;
>
>      For each output line you were getting before, you should now
get 2 more.  Since interp_flag is set to 3, grid_stat will smooth both
the forecast and observation fields.  For interp_width = 3,
> it'll smooth each data point by computing the average of the 9
points in a 3x3 box around each grid point.  For interp_width = 5,
it'll smooth each data point by computing the average of the 25 points
> in a 5x5 box around each grid point.  You can look to see how the
scores change as you do more and more smoothing.
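> So with interp_width[] = [ 1, 3, 5 ], each statistic is computed
three times: once on the raw fields (a width of 1 means no smoothing)
and once for each of the two smoothing widths.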
>
> However, computing the fractions skill score (in the NBRCNT line
type) is a common way of doing "neighborhood" or "fuzzy" verification.
>
> (2) You also asked about plotting the station location from
point_stat.  You have a couple of options.  The "plot_point_obs"
utility reads the NetCDF output files from the pb2nc or ascii2nc tools
and
> plots a red dot for each observation lat/lon it finds in the data.
It is intended to just give you a quick look at the location of the
observations to make sure that they exist where you expect.  It
> is not a general-purpose or very flexible plotting tool.
Alternatively, you could look at the "MPR" output line type from
point_stat.  This contains the individual matched pair values that
went into
> the computation of statistics.  The MPR line type includes columns
named "OBS_LAT" and "OBS_LON" giving the point observation location
information.  You could read the lat/lon information from the MPR
> line type and use whatever plotting tool you prefer to plot the
observation locations.
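> For example (a sketch, not a MET utility; the file name pattern is
just illustrative), since the MPR text file begins with a header row
of column names, you could pull those two columns out with awk:
>
>     awk 'NR==1 { for (i=1; i<=NF; i++) col[$i]=i; next }
>          { print $(col["OBS_LAT"]), $(col["OBS_LON"]) }' point_stat_*_mpr.txt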
>
> If you do post more data to the ftp site, please write me back and
I'll go grab it.
>
> Thanks,
> John
>
>
> On 02/24/2014 05:59 PM, Geeta Geeta via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >
> > hi john,
> > you had discussed the upscaling (of both obs and fcst or any
one of them). The forecast is compared with the observations which are
averaged to coarse scales.
> > How is this averaging defined in the configuration file?
> >
> > Please let me know regarding the global attributes.
> >
> > geeta
> >
> > From: geeta124 at hotmail.com
> > To: met_help at ucar.edu
> > Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> > Date: Sun, 23 Feb 2014 22:24:04 +0530
> >
> >
> >
> >
> > hi John,
> > Can you help with changing/appending the GLOBAL attributes of
the NetCDF file?
> >
> > Can you provide some more hints.
> >
> > regards
> >
> > geeta
> >
> > From: geeta124 at hotmail.com
> > To: met_help at ucar.edu
> > Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> > Date: Fri, 21 Feb 2014 15:03:34 +0530
> >
> >
> >
> >
> > thanks John,
> > I have made the changes as per your config file.
> > But the error persists.
> > -bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> > GSL_RNG_TYPE=mt19937
> > GSL_RNG_SEED=18446744073358673747
> > Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> > Observation File: ../trmm_nc_data/02june2011.nc
> > Configuration File: GridStatConfig_APCP_24
> > NetCDF: Attribute not found
> > -bash-3.2$
> >
> >
> > 2. I have used ncdump to see my file attributes.
> > Are you referring to these attributes?
> >
> > // global attributes:
> >                  :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
> >                  :MET_version = "V3.0" ;
> >                  :MET_tool = "pcp_combine" ;
> >
> > Following is my Config file.
> >
____________________________________________________________________
> >
////////////////////////////////////////////////////////////////////////////////
> > //
> > // Default grid_stat configuration file
> > //
> >
////////////////////////////////////////////////////////////////////////////////
> > //
> > // Specify a name to designate the model being verified.  This
name will be
> > // written to the second column of the ASCII output generated.
> > //
> > model = "WRF";
> > //
> > // Specify a comma-separated list of fields to be verified.  The
forecast and
> > // observation fields may be specified separately.  If the
obs_field parameter
> > // is left blank, it will default to the contents of fcst_field.
> > //
> > // Each field is specified as a GRIB code or abbreviation followed
by an
> > // accumulation or vertical level indicator for GRIB files or as a
variable name
> > // followed by a list of dimensions for NetCDF files output from
p_interp or MET.
> > //
> > // Specifying verification fields for GRIB files:
> > //    GC/ANNN for accumulation interval NNN
> > //    GC/ZNNN for vertical level NNN
> > //    GC/PNNN for pressure level NNN in hPa
> > //    GC/PNNN-NNN for a range of pressure levels in hPa
> > //    GC/LNNN for a generic level type
> > //    GC/RNNN for a specific GRIB record number
> > //    Where GC is the number of or abbreviation for the grib code
> > //    to be verified.
> > // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> > //
> > // Specifying verification fields for NetCDF files:
> > //    var_name(i,...,j,*,*) for a single field
> > //    Where var_name is the name of the NetCDF variable,
> > //    and i,...,j specifies fixed dimension values,
> > //    and *,* specifies the two dimensions for the gridded field.
> > //
> > //    NOTE: To verify winds as vectors rather than scalars,
> > //          specify UGRD (or 33) followed by VGRD (or 34) with the
> > //          same level values.
> > //
> > //    NOTE: To process a probability field, add "/PROB", such as
"POP/Z0/PROB".
> > //
> > // e.g. fcst_field[] = [ "61/A3", "APCP/A24", "RH/L10" ]; for GRIB
input
> > // e.g. fcst_field[] = [ "RAINC(0,*,*)", "QVAPOR(0,5,*,*)" ]; for
NetCDF input
> > //
> > fcst_field[] = [ "APCP_24(*,*)" ];
> > obs_field[]  = [ "APCP_03(*,*)" ];
> > //
> > // Specify a comma-separated list of groups of thresholds to be
applied to the
> > // fields listed above.  Thresholds for the forecast and
observation fields
> > // may be specified separately.  If the obs_thresh parameter is
left blank,
> > // it will default to the content of fcst_thresh.
> > //
> > // At least one threshold must be provided for each field listed
above.  The
> > // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as must
> > // lengths of the "obs_field" and "obs_thresh" arrays.  To apply
multiple
> > // thresholds to a field, separate the threshold values with a
space.
> > //
> > // Each threshold must be preceded by a two letter indicator for
the type of
> > // thresholding to be performed:
> > //    'lt' for less than     'le' for less than or equal to
> > //    'eq' for equal to      'ne' for not equal to
> > //    'gt' for greater than  'ge' for greater than or equal to
> > //
> > // NOTE: Thresholds for probabilities must begin with 0.0, end
with 1.0,
> > //       and be preceded by "ge".
> > //
> > // e.g. fcst_thresh[] = [ "gt0.0 ge5.0", "gt0.0", "lt80.0 ge80.0"
];
> > //
> > fcst_thresh[] = [ "gt0.0 gt5.0 gt10.0" ];
> > obs_thresh[]  = [];
> > //
> > // Specify a comma-separated list of thresholds to be used when
computing
> > // VL1L2 partial sums for winds.  The thresholds are applied to
the wind speed
> > // values derived from each U/V pair.  Only those U/V pairs which
meet the wind
> > // speed threshold criteria are retained.  If the obs_wind_thresh
parameter is
> > // left blank, it will default to the contents of
fcst_wind_thresh.
> > //
> > // To apply multiple wind speed thresholds, separate the threshold
values with a
> > // space.  Use "NA" to indicate that no wind speed threshold
should be applied.
> > //
> > // Each threshold must be preceded by a two letter indicator for
the type of
> > // thresholding to be performed:
> > //    'lt' for less than     'le' for less than or equal to
> > //    'eq' for equal to      'ne' for not equal to
> > //    'gt' for greater than  'ge' for greater than or equal to
> > //    'NA' for no threshold
> > //
> > // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
> > //
> > fcst_wind_thresh[] = [ "NA" ];
> > obs_wind_thresh[]  = [];
> > //
> > // Specify a comma-separated list of grids to be used in masking
the data over
> > // which to perform scoring.  An empty list indicates that no
masking grid
> > // should be applied.  The standard NCEP grids are named "GNNN"
where NNN
> > // indicates the three digit grid number.  Enter "FULL" to score
over the
> > // entire domain.
> > // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
> > //
> > // e.g. mask_grid[] = [ "FULL" ];
> > //
> > mask_grid[] = [ "FULL" ];
> > //
> > // Specify a comma-separated list of masking regions to be
applied.
> > // An empty list indicates that no additional masks should be
used.
> > // The masking regions may be defined in one of 4 ways:
> > //
> > // (1) An ASCII file containing a lat/lon polygon.
> > //     Latitude in degrees north and longitude in degrees east.
> > //     By default, the first and last polygon points are
connected.
> > //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
> > //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
> > //
> > // (2) The NetCDF output of the gen_poly_mask tool.
> > //
> > // (3) A NetCDF data file, followed by the name of the NetCDF
variable
> > //     to be used, and optionally, a threshold to be applied to
the field.
> > //     e.g. "sample.nc var_name gt0.00"
> > //
> > // (4) A GRIB data file, followed by a description of the field
> > //     to be used, and optionally, a threshold to be applied to
the field.
> > //     e.g. "sample.grb APCP/A3 gt0.00"
> > //
> > // Any NetCDF or GRIB file used must have the same grid dimensions
as the
> > // data being verified.
> > //
> > // MET_BASE may be used in the path for the files above.
> > //
> > // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
> > //                      "poly_mask.ncf",
> > //                      "sample.nc APCP",
> > //                      "sample.grb HGT/Z0 gt100.0" ];
> > //
> > mask_poly[] = [];
> > //
> > // Specify a comma-separated list of values for alpha to be used
when computing
> > // confidence intervals.  Values of alpha must be between 0 and 1.
> > //
> > // e.g. ci_alpha[] = [ 0.05, 0.10 ];
> > //
> > ci_alpha[] = [ 0.10, 0.05 ];
> > //
> > // Specify the method to be used for computing bootstrap
confidence intervals.
> > // The value for this is interpreted as follows:
> > //    (0) Use the BCa interval method (computationally intensive)
> > //    (1) Use the percentile interval method
> > //
> > boot_interval = 1;
> > //
> > // Specify a proportion between 0 and 1 to define the replicate
sample size
> > // to be used when computing percentile intervals.  The replicate
sample
> > // size is set to boot_rep_prop * n, where n is the number of raw
data points.
> > //
> > // e.g. boot_rep_prop = 0.80;
> > //
> > boot_rep_prop = 1.0;
> > //
> > // Specify the number of times each set of matched pair data
should be
> > // resampled when computing bootstrap confidence intervals.  A
value of
> > // zero disables the computation of bootstrap confidence
intervals.
> > //
> > // e.g. n_boot_rep = 1000;
> > //
> > n_boot_rep = 0;
> > //
> > // Specify the name of the random number generator to be used.
See the MET
> > // Users Guide for a list of possible random number generators.
> > //
> > boot_rng = "mt19937";
> > //
> > // Specify the seed value to be used when computing bootstrap
confidence
> > // intervals.  If left unspecified, the seed will change for each
run and
> > // the computed bootstrap confidence intervals will not be
reproducible.
> > //
> > boot_seed = "";
> > //
> > // Specify a comma-separated list of interpolation method(s) to be
used for
> > // smoothing the data fields prior to comparing them.  The value
at each grid
> > // point is replaced by the measure computed over the neighborhood
defined
> > // around the grid point.  String values are interpreted as
follows:
> > //    MIN     = Minimum in the neighborhood
> > //    MAX     = Maximum in the neighborhood
> > //    MEDIAN  = Median in the neighborhood
> > //    UW_MEAN = Unweighted mean in the neighborhood
> > //
> > //    NOTE: The distance-weighted mean (DW_MEAN) is not an option
here since
> > //          it will have no effect on a gridded field.
> > //
> > //    NOTE: The least-squares fit (LS_FIT) is not an option here
since
> > //          it reduces to an unweighted mean on a grid.
> > //
> > // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
> > //
> > interp_method[] = [ "UW_MEAN" ];
> > //
> > // Specify a comma-separated list of box widths to be used by the
interpolation
> > // techniques listed above.  All values must be odd.  A value of 1
indicates
> > // that no smoothing should be performed.  For values greater than
1, the n*n
> > // grid points around each point will be used to smooth the data
fields.
> > //
> > // e.g. interp_width = [ 1, 3, 5 ];
> > //
> > interp_width[] = [ 1 ];
> > //
> > // The interp_flag controls how the smoothing defined above should
be applied:
> > // (1) Smooth only the forecast field
> > // (2) Smooth only the observation field
> > // (3) Smooth both the forecast and observation fields
> > //
> > interp_flag = 1;
> > //
> > // When smoothing, compute a ratio of the number of valid data
points to
> > // the total number of points in the neighborhood.  If that ratio
is less
> > // than this threshold, do not compute a smoothed forecast value.
This
> > // threshold must be between 0 and 1.  Setting this threshold to 1
will
> > // require that each observation be surrounded by n*n valid
forecast
> > // points.
> > //
> > // e.g. interp_thresh = 1.0;
> > //
> > interp_thresh = 1.0;
> > //
> > // Specify a comma-separated list of box widths to be used to
define the
> > // neighborhood size for the neighborhood verification methods.
All values
> > // must be odd.  For values greater than 1, the n*n grid points
around each
> > // point will be used to define the neighborhood.
> > //
> > // e.g. nbr_width = [ 3, 5 ];
> > //
> > nbr_width[] = [ 3, 5 ];
> > //
> > // When applying the neighborhood verification methods, compute a
ratio
> > // of the number of valid data points to the total number of
points in
> > // the neighborhood.  If that ratio is less than this threshold,
do not
> > // include it in the computations.  This threshold must be between
0
> > // and 1.  Setting this threshold to 1 will require that each
point be
> > // surrounded by n*n valid forecast points.
> > //
> > // e.g. nbr_thresh = 1.0;
> > //
> > nbr_thresh = 1.0;
> > //
> > // When applying the neighborhood verification methods, apply a
threshold
> > // to the fractional coverage values to define contingency tables
from
> > // which to compute statistics.
> > //
> > // e.g. cov_thresh[] = [ "ge0.25", "ge0.50" ];
> > //
> > cov_thresh[] = [ "ge0.5" ];
> > //
> > // Specify flags to indicate the type of data to be output:
> > //
> > //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
> > //           Total (TOTAL),
> > //           Forecast Rate (F_RATE),
> > //           Hit Rate (H_RATE),
> > //           Observation Rate (O_RATE)
> > //
> > //    (2) STAT and CTC Text Files, Contingency Table Counts:
> > //           Total (TOTAL),
> > //           Forecast Yes and Observation Yes Count (FY_OY),
> > //           Forecast Yes and Observation No Count (FY_ON),
> > //           Forecast No and Observation Yes Count (FN_OY),
> > //           Forecast No and Observation No Count (FN_ON)
> > //
> > //    (3) STAT and CTS Text Files, Contingency Table Scores:
> > //           Total (TOTAL),
> > //           Base Rate (BASER),
> > //           Forecast Mean (FMEAN),
> > //           Accuracy (ACC),
> > //           Frequency Bias (FBIAS),
> > //           Probability of Detecting Yes (PODY),
> > //           Probability of Detecting No (PODN),
> > //           Probability of False Detection (POFD),
> > //           False Alarm Ratio (FAR),
> > //           Critical Success Index (CSI),
> > //           Gilbert Skill Score (GSS),
> > //           Hanssen and Kuipers Discriminant (HK),
> > //           Heidke Skill Score (HSS),
> > //           Odds Ratio (ODDS),
> > //           NOTE: All statistics listed above contain parametric
and/or
> > //                 non-parametric confidence interval limits.
> > //
> > //    (4) STAT and MCTC Text Files, NxN Multi-Category Contingency
Table Counts:
> > //           Total (TOTAL),
> > //           Number of Categories (N_CAT),
> > //           Contingency Table Count columns repeated N_CAT*N_CAT
times
> > //
> > //    (5) STAT and MCTS Text Files, NxN Multi-Category Contingency
Table Scores:
> > //           Total (TOTAL),
> > //           Number of Categories (N_CAT),
> > //           Accuracy (ACC),
> > //           Hanssen and Kuipers Discriminant (HK),
> > //           Heidke Skill Score (HSS),
> > //           Gerrity Score (GER),
> > //           NOTE: All statistics listed above contain parametric
and/or
> > //                 non-parametric confidence interval limits.
> > //
> > //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
> > //           Total (TOTAL),
> > //           Forecast Mean (FBAR),
> > //           Forecast Standard Deviation (FSTDEV),
> > //           Observation Mean (OBAR),
> > //           Observation Standard Deviation (OSTDEV),
> > //           Pearson's Correlation Coefficient (PR_CORR),
> > //           Spearman's Rank Correlation Coefficient (SP_CORR),
> > //           Kendall Tau Rank Correlation Coefficient (KT_CORR),
> > //           Number of ranks compared (RANKS),
> > //           Number of tied ranks in the forecast field
(FRANK_TIES),
> > //           Number of tied ranks in the observation field
(ORANK_TIES),
> > //           Mean Error (ME),
> > //           Standard Deviation of the Error (ESTDEV),
> > //           Multiplicative Bias (MBIAS = FBAR/OBAR),
> > //           Mean Absolute Error (MAE),
> > //           Mean Squared Error (MSE),
> > //           Bias-Corrected Mean Squared Error (BCMSE),
> > //           Root Mean Squared Error (RMSE),
> > //           Percentiles of the Error (E10, E25, E50, E75, E90)
> > //           NOTE: Most statistics listed above contain parametric
and/or
> > //                 non-parametric confidence interval limits.
> > //
> > //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
> > //           Total (TOTAL),
> > //           Forecast Mean (FBAR),
> > //              = mean(f)
> > //           Observation Mean (OBAR),
> > //              = mean(o)
> > //           Forecast*Observation Product Mean (FOBAR),
> > //              = mean(f*o)
> > //           Forecast Squared Mean (FFBAR),
> > //              = mean(f^2)
> > //           Observation Squared Mean (OOBAR)
> > //              = mean(o^2)
> > //
> > //    (8) STAT and VL1L2 Text Files, Vector Partial Sums:
> > //           Total (TOTAL),
> > //           U-Forecast Mean (UFBAR),
> > //              = mean(uf)
> > //           V-Forecast Mean (VFBAR),
> > //              = mean(vf)
> > //           U-Observation Mean (UOBAR),
> > //              = mean(uo)
> > //           V-Observation Mean (VOBAR),
> > //              = mean(vo)
> > //           U-Product Plus V-Product (UVFOBAR),
> > //              = mean(uf*uo+vf*vo)
> > //           U-Forecast Squared Plus V-Forecast Squared (UVFFBAR),
> > //              = mean(uf^2+vf^2)
> > //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
> > //              = mean(uo^2+vo^2)
> > //
> > //    (9) STAT and PCT Text Files, Nx2 Probability Contingency
Table Counts:
> > //           Total (TOTAL),
> > //           Number of Forecast Probability Thresholds (N_THRESH),
> > //           Probability Threshold Value (THRESH_i),
> > //           Row Observation Yes Count (OY_i),
> > //           Row Observation No Count (ON_i),
> > //           NOTE: Previous 3 columns repeated for each row in the
table
> > //           Last Probability Threshold Value (THRESH_n)
> > //
> > //   (10) STAT and PSTD Text Files, Nx2 Probability Contingency
Table Scores:
> > //           Total (TOTAL),
> > //           Number of Forecast Probability Thresholds (N_THRESH),
> > //           Base Rate (BASER) with confidence interval limits,
> > //           Reliability (RELIABILITY),
> > //           Resolution (RESOLUTION),
> > //           Uncertainty (UNCERTAINTY),
> > //           Area Under the ROC Curve (ROC_AUC),
> > //           Brier Score (BRIER) with confidence interval limits,
> > //           Probability Threshold Value (THRESH_i)
> > //           NOTE: Previous column repeated for each probability
threshold.
> > //
> > //   (11) STAT and PJC Text Files, Joint/Continuous Statistics of
> > //                                 Probabilistic Variables:
> > //           Total (TOTAL),
> > //           Number of Forecast Probability Thresholds (N_THRESH),
> > //           Probability Threshold Value (THRESH_i),
> > //           Observation Yes Count Divided by Total (OY_TP_i),
> > //           Observation No Count Divided by Total (ON_TP_i),
> > //           Calibration (CALIBRATION_i),
> > //           Refinement (REFINEMENT_i),
> > //           Likelihood (LIKELIHOOD_i),
> > //           Base Rate (BASER_i),
> > //           NOTE: Previous 7 columns repeated for each row in the
table
> > //           Last Probability Threshold Value (THRESH_n)
> > //
> > //   (12) STAT and PRC Text Files, ROC Curve Points for
> > //                                 Probabilistic Variables:
> > //           Total (TOTAL),
> > //           Number of Forecast Probability Thresholds (N_THRESH),
> > //           Probability Threshold Value (THRESH_i),
> > //           Probability of Detecting Yes (PODY_i),
> > //           Probability of False Detection (POFD_i),
> > //           NOTE: Previous 3 columns repeated for each row in the
table
> > //           Last Probability Threshold Value (THRESH_n)
> > //
> > //   (13) STAT and NBRCTC Text Files, Neighborhood Methods
Contingency Table Counts:
> > //           Total (TOTAL),
> > //           Forecast Yes and Observation Yes Count (FY_OY),
> > //           Forecast Yes and Observation No Count (FY_ON),
> > //           Forecast No and Observation Yes Count (FN_OY),
> > //           Forecast No and Observation No Count (FN_ON),
> > //           Fractional Threshold Value (FRAC_T)
> > //
> > //   (14) STAT and NBRCTS Text Files, Neighborhood Methods
Contingency Table Scores:
> > //           Total (TOTAL),
> > //           Base Rate (BASER),
> > //           Forecast Mean (FMEAN),
> > //           Accuracy (ACC),
> > //           Bias (BIAS),
> > //           Probability of Detecting Yes (PODY),
> > //           Probability of Detecting No (PODN),
> > //           Probability of False Detection (POFD),
> > //           False Alarm Ratio (FAR),
> > //           Critical Success Index (CSI),
> > //           Gilbert Skill Score (GSS),
> > //           Hanssen and Kuipers Discriminant (HK),
> > //           Heidke Skill Score (HSS),
> > //           Odds Ratio (ODDS),
> > //           NOTE: Most statistics listed above contain parametric
and/or
> > //                 non-parametric confidence interval limits.
> > //
> > //   (15) STAT and NBRCNT Text Files, Neighborhood Methods
Continuous Scores:
> > //           Total (TOTAL),
> > //           Fractions Brier Score (FBS),
> > //           Fractions Skill Score (FSS)
> > //
> > //   (16) NetCDF File containing difference fields for each grib
> > //        code/mask combination.  A non-zero value indicates that
> > //        this NetCDF file should be produced.  A value of 0
> > //        indicates that it should not be produced.
> > //
> > // Values for flags (1) through (15) are interpreted as follows:
> > //    (0) Do not generate output of this type
> > //    (1) Write output to a STAT file
> > //    (2) Write output to a STAT file and a text file
> > //
> > output_flag[] = [ 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 1
];
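> > //
> > // (Reading the values above against the list: line types (1)-(8)
> > // and (13)-(15) go to both STAT and text files, the probabilistic
> > // types (9)-(12) are disabled, and the final 1 requests the
> > // NetCDF difference file.)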
> > //
> > // Flag to indicate whether Kendall's Tau and Spearman's Rank
Correlation
> > // Coefficients should be computed.  Computing them over large
datasets is
> > // computationally intensive and slows down the runtime execution
significantly.
> > //    (0) Do not compute these correlation coefficients
> > //    (1) Compute these correlation coefficients
> > //
> > rank_corr_flag = 0;
> > //
> > // Specify the GRIB Table 2 parameter table version number to be
used
> > // for interpreting GRIB codes.
> > // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> > //
> > grib_ptv = 2;
> > //
> > // Directory where temporary files should be written.
> > //
> > tmp_dir = "/tmp";
> > //
> > // Prefix to be used for the output file names.
> > //
> > output_prefix = "APCP_24";
> > //
> > // Indicate a version number for the contents of this
configuration file.
> > // The value should generally not be modified.
> > //
> > version = "V3.0";
> >
> >
> > geeta
> >
> >> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >> From: met_help at ucar.edu
> >> To: geeta124 at hotmail.com
> >> Date: Thu, 20 Feb 2014 10:21:20 -0700
> >>
> >> Geeta,
> >>
> >> I see that you're using METv3.0.  The current version is METv4.1,
and it'd be good to switch to that version when possible.  There have
been major changes to the MET configuration file format since
> >> METv3.0, so be sure to use the default config files for METv4.1.
> >>
> >> I ran METv3.0 grid_stat on the data files you sent and reproduced
the error message you saw:
> >>      ***WARNING***: process_scores() -> 61(*,*) not found in
file: 2011060100_WRFPRS_day1_003Z.nc
> >>
> >> Since the input files are both NetCDF files, you need to specify
the name of the NetCDF variable that should be used.  So I modified
your config file:
> >>      FROM: fcst_field[] = [ "61/A24" ];
> >>      TO:   fcst_field[] = [ "APCP_24(*,*)" ];
> >>
> >> When I reran with this change, I got this error:
> >>      NetCDF: Attribute not found
> >>
> >> After some digging, I found the problem to be the MET_version
global attribute in 02june2011.nc:
> >>                   :MET_version = "V3.0.1" ;
> >>
> >> I switched that to be consistent with the version of MET you're
running:
> >>                   :MET_version = "V3.0" ;
> >>
> >> And then I got this error:
> >> ERROR: parse_poly_mask() -> the dimensions of the masking region
(185, 129) must match the dimensions of the data (53, 53).
> >>
> >> So I modified the config file to change the masking region
settings:
> >>      FROM: mask_grid[] = [ "DTC165", "DTC166" ];
> >>            mask_poly[] = [
"MET_BASE/out/gen_poly_mask/CONUS_poly.nc",
> >>                            "MET_BASE/data/poly/LMV.poly" ];
> >>
> >>      TO:   mask_grid[] = [ "FULL" ];
> >>            mask_poly[] = [];
> >>
> >> And then it ran fine.
> >>
> >> To summarize...
> >>    (1) To run METv3.0 grid_stat, please set the "MET_version"
global attribute in all the gridded NetCDF files you're using to
METv3.0.
> >>    (2) Please use the attached, updated version of
GridStatConfig_APCP_24.
> >>    (3) Consider updating to METv4.1 instead.
> >>
> >> Thanks,
> >> John
> >>
> >> On 02/19/2014 11:33 PM, Geeta Geeta via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>
> >>> Hi John,
> >>> I am bothering you with a few more. Hope you'll bear with me.
> >>> So what I gathered from the discussion with you is that we
essentially have 3 types of neighbourhood approaches (space, time and
intensity).  These changes can be done using the config file.
> >>>
> >>> 1. Now I was reading about 3 approaches of FUZZY verf, which are:
a. Multi event contingency Table (My question is: can we define a
hit as RF between 0.1 and 2.5 in the config file?  Normally we select
the threshold as ge0.1 or ge2.5 etc.  Is there a provision for giving
a range in the config file?).
> >>>
> >>> b) Pragmatic approach (I do not know what that is).
> >>>
> >>> c) Conditional Square root of Ranked probability score (CSRR)
(I do not know what that is).
> >>>
> >>> I do not understand these. Can you lead me in the right
direction or provide some hints?
> >>>
> >>> 2. How can I prepare the QUILT plots (Spatial scale vs
Threshold) for a score?
> >>> Can the QUILT plot be prepared for any score like HK, HSS, FBS
or FSS?
> >>>
> >>>
> >>> thanks
> >>> geeta
> >>>
> >>> From: geeta124 at hotmail.com
> >>> To: met_help at ucar.edu
> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>> Date: Thu, 20 Feb 2014 11:30:25 +0530
> >>>
> >>>
> >>>
> >>>
> >>> Hi John,
> >>> Sorry, I have put my data on your server.  My dir name is
geeta124_data.
> >>> Kindly check that.
> >>>
> >>> geeta
> >>>
> >>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>>> From: met_help at ucar.edu
> >>>> To: geeta124 at hotmail.com
> >>>> Date: Fri, 14 Feb 2014 09:48:08 -0700
> >>>>
> >>>> Geeta,
> >>>>
> >>>> You run copygb to put the forecast and observation fields on
exactly the same grid, meaning the exact same resolution and number of
grid points.
> >>>>
> >>>> I was trying to make the point that the "interpolation methods"
in the grid_stat config file could be used as a form of "upscaling".
You are right, there is no *need* to interpolate the data since
> >>>> you've already used copygb to put them on the same grid.  In
grid_stat, the interpolation options provide a way of smoothing, or
upscaling, the data.  For example, suppose you choose and
interpolation
> >>>> option of UW_MEAN (for un-weighted mean) and width of 5.  For
each grid point, grid_stat will replace the value at the grid point
with the average of the 25 points in a 5x5 box around that point.
> >>>> Doing that for every point in the grid smooths the data and
provides a way of upscaling.
> >>>>
> >>>> The default interpolation width is 1, meaning that no smoothing
is performed.  However, you could use multiple smoothing widths and
see how your performance changes the more you smooth the data.
> >>>>
> >>>> Does that make sense?
> >>>>
> >>>> Regarding the runtime error you're getting, I see that you're
using input NetCDF files for the forecast and observation fields.  In
the config file, you need to specify the name and dimensions of the
> >>>> NetCDF variable to be used.  Assuming the NetCDF variable is
named "APCP_24", it would look something like this:
> >>>>
> >>>> fcst = {
> >>>>       wind_thresh = [ NA ];
> >>>>
> >>>>       field = [
> >>>>          {
> >>>>            name       = "APCP_24";
> >>>>            level      = [ "(*,*)" ];
> >>>>            cat_thresh = [ >0.0, >=5.0 ];
> >>>>          }
> >>>>       ];
> >>>>
> >>>> };
> >>>>
> >>>> If you continue to experience problems, please send me sample
forecast and observation files along with the GridStatConfig file
you're using.  You can post it to our anonymous ftp site following
these
> >>>> instructions:
> >>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>>>
> >>>> Thanks,
> >>>> John
> >>>>
> >>>> On 02/14/2014 02:17 AM, Geeta Geeta via RT wrote:
> >>>>>
> >>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427
>
> >>>>>
> >>>>> Hi John,
> >>>>>     I have run grid-stat. Following is the error.
> >>>>>
> >>>>> bash-3.2$ ../bin/grid_stat ./fcst_nc/20110601*day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> >>>>> GSL_RNG_TYPE=mt19937
> >>>>> GSL_RNG_SEED=18446744073321512274
> >>>>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>>>> Observation File: ../trmm_nc_data/02june2011.nc
> >>>>> Configuration File: GridStatConfig_APCP_24
> >>>>> ***WARNING***: process_scores() -> 61(*,*) not found in file:
./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>>>>
> >>>>>
--------------------------------------------------------------------------------
> >>>>>
> >>>>>
> >>>>> Pls suggest.
> >>>>>
> >>>>> geeta
> >>>>>
> >>>>> From: geeta124 at hotmail.com
> >>>>> To: met_help at ucar.edu
> >>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>> Date: Fri, 14 Feb 2014 14:08:12 +0530
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> Thanks a lot John for your inputs and clarifications.
> >>>>>
> >>>>> I still have the following doubts.
> >>>>>
> >>>>> 1. when I run copygb, what it does is to make the observation
and Model FC uniform (I mean same GRID and RESOLUTION). Only these two
parameters are important.
> >>>>> Are you calling that upscaling?  So this process is not a
part of GRID-stat.  So essentially copygb is doing the upscaling part.
> >>>>>
> >>>>> 2. There are interpolation methods in the grid-stat config
file (analogous to those in point-stat; in point-stat, there are 3-4,
like nearest neighbour, mean, distance weighted, etc.).
> >>>>>
> >>>>> why should one have the interpolation ONCE again, i.e. after
copygb, when the grid fields are similar, i.e. one GP has 2 values,
one OBS and one FCST?  Is that correct?
> >>>>>
> >>>>> geeta
> >>>>>
> >>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>> From: met_help at ucar.edu
> >>>>>> To: geeta124 at hotmail.com
> >>>>>> Date: Thu, 13 Feb 2014 10:33:47 -0700
> >>>>>>
> >>>>>> Geeta,
> >>>>>>
> >>>>>> You are correct, the input forecast and observation files
must be on the same grid.  In Grid-Stat, there are two ways you can
perform "fuzzy" verification.
> >>>>>>
> >>>>>> (1) The first way is by applying an interpolation method to
the data.  Since the data are already on the same grid, this is really
a "smoothing" operation instead.  This is called "upscaling".
> >>>>>> Smoother forecasts and observations tend to produce better
traditional verification scores.  So you could see how your scores
(like RMSE or GSS) improve as you smooth the data more and more.  In
the
> >>>>>> config file, you could try:
> >>>>>>
> >>>>>> interp = {
> >>>>>>        field      = BOTH;
> >>>>>>        vld_thresh = 1.0;
> >>>>>>
> >>>>>>        type = [
> >>>>>>           { method = UW_MEAN; width  = 1; },
> >>>>>>           { method = UW_MEAN; width  = 3; },
> >>>>>>           { method = UW_MEAN; width  = 6; },
> >>>>>>           { method = UW_MEAN; width  = 9; }
> >>>>>>        ];
> >>>>>> };
> >>>>>>
> >>>>>> This tells Grid-Stat to compute its statistics 4 times,
applying more smoothing each time.  Typically, the more the data has
been smoothed, the better the statistics will be.
> >>>>>>
> >>>>>> (2) The second way is by applying neighborhood verification
methods.  The most common are the Fractions Brier Score (FBS) and
Fractions Skill Score (FSS), both contained in the NBRCNT output line
> >>>>>> type.  Be sure to turn the NBRCNT output line on in the Grid-
Stat config file.  For neighborhood verification, you pick multiple
neighborhood sizes and look to see how the FSS changes as you increase
> >>>>>> the neighborhood size.  As the neighborhood size increases,
FSS increases.  And you look to see how large of a neighborhood size
you need to get a "useful" (FSS > 0.5) forecast.
> >>>>>>
> >>>>>> Here's how this method works.  You pick one or more
thresholds (cat_thresh) for your field.  Grid-Stat applies the
threshold to produce a 0/1 binary field of your data.  For each
neighborhood size, n,
> >>>>>> it places an n x n box around each grid point and counts up
the number of events within that box.  For a 3 x 3 box, if 4 of the 9
points contained an event, the value for that point is 4/9.  This is
> >>>>>> done for every grid point in the forecast field and the
observation field.  We call the result of this process the forecast
and observation "fractional coverage" fields.  The FSS and FBS scores
are
> >>>>>> computed by comparing the forecast and observation fractional
coverage fields to each other.
> >>>>>>
> >>>>>> If you're verifying a single field using 3 different
thresholds and 6 different neighborhood sizes, you'd get 18 NBRCNT
lines in the output file.
> >>>>>>
> >>>>>> Here's an example of how you might set this up in the Grid-
Stat config file:
> >>>>>>
> >>>>>> nbrhd = {
> >>>>>>        vld_thresh = 1.0;
> >>>>>>        width      = [ 3, 5, 9, 11, 13, 15 ];
> >>>>>>        cov_thresh = [ >=0.5 ];
> >>>>>> }
> >>>>>>
> >>>>>> For a given threshold, you should look to see how FSS changes
as you increase the neighborhood size.
> >>>>>>
> >>>>>> Hopefully that helps get you going.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> John Halley Gotway
> >>>>>> met_help at ucar.edu
> >>>>>>
> >>>>>>
> >>>>>> On 02/12/2014 10:49 PM, Geeta Geeta via RT wrote:
> >>>>>>>
> >>>>>>> Wed Feb 12 22:49:14 2014: Request 65427 was acted upon.
> >>>>>>> Transaction: Ticket created by geeta124 at hotmail.com
> >>>>>>>            Queue: met_help
> >>>>>>>          Subject: Unable to visualize Fuzzy verf.
> >>>>>>>            Owner: Nobody
> >>>>>>>       Requestors: geeta124 at hotmail.com
> >>>>>>>           Status: new
> >>>>>>>      Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>>>>>
> >>>>>>>
> >>>>>>> Hi John/ met_help.
> >>>>>>>
> >>>>>>> I was reading the MET doc that mentions the FUZZY
verification methods. I am trying to visualise what grid stat does or
how it functions.
> >>>>>>> After the copygb is run, the FCST and OBS are on the same
grid, i.e.:
> >>>>>>>
> >>>>>>> 1---------2---------3
> >>>>>>> |         |         |
> >>>>>>> |         |         |
> >>>>>>> 4---------5---------6
> >>>>>>> i.e. at the Grid Points (GP) 1 to 6, you have Observations
and the model FCST.  Now the MET doc (Pg 5-3) says that a SQUARE
search window is defined around each grid point, within which the obs
and the FCST events are counted.
> >>>>>>> 1. I want to know HOW this SQUARE WINDOW is defined (I mean,
in the configuration file of Grid stat).
> >>>>>>> 2. How can I change the size of the SQUARE window?
> >>>>>>> 3. If my model resolution is 10km and I am interested in the
synoptic scale phenomenon, then what should be the window size?  Your
help is urgently required.
> >>>>>>>
> >>>>>>> geeta
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>>
> >>>>
> >>>
> >>>
> >>
> >
> >
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy verf.
From: John Halley Gotway
Time: Wed Feb 26 11:13:55 2014

Geeta,

The problem is in the observation files:

*************************************
O/P of OBSERVATION FILE (NETCDF format) *
*************************************
:MET_version = "V3.0.1" ;

If you change the "V3.0.1" to "V3.0", then METv3.0 grid_stat will be
able to process it fine.

Also, you should switch the timing variable attributes from floats to
integers:
Change from:
  APCP_03:init_time_ut = 1306972800. ;
  APCP_03:valid_time_ut = 1306983600. ;
  APCP_03:accum_time_sec = 10800.f ;
Change to:
  APCP_03:init_time_ut = 1306972800 ;
  APCP_03:valid_time_ut = 1306983600 ;
  APCP_03:accum_time_sec = 10800 ;

When you switch to METv4.1, it'll complain if those aren't integers.
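
If you'd rather not regenerate the files, one way to make both changes
in place is with the NCO "ncatted" utility (a sketch, assuming NCO is
installed; adjust the file name to match yours):

  # Overwrite (o) the global MET_version attribute as character (c) data.
  ncatted -O -a MET_version,global,o,c,"V3.0" 02june2011.nc

  # Rewrite the timing attributes on APCP_03 as integers (l) instead
  # of floats.
  ncatted -O -a init_time_ut,APCP_03,o,l,1306972800 02june2011.nc
  ncatted -O -a valid_time_ut,APCP_03,o,l,1306983600 02june2011.nc
  ncatted -O -a accum_time_sec,APCP_03,o,l,10800 02june2011.nc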

Hope that helps.

Thanks,
John


On 02/25/2014 11:26 PM, Geeta Geeta via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>
> thanks John,
> I have NOT done anything with the NC files as yet. I was asking you
how to go about changing the attribute.
>
> I am posting the o/p of one of the Forecast files in netcdf format
using ncdump. It shows MET_version = "V3.0", which is what you want.
> bash-3.2$ ncdump -h test.nc
> netcdf test {
> dimensions:
>          lat = 53 ;
>          lon = 53 ;
> variables:
>          float lat(lat, lon) ;
>                  lat:long_name = "latitude" ;
>                  lat:units = "degrees_north" ;
>                  lat:standard_name = "latitude" ;
>          float lon(lat, lon) ;
>                  lon:long_name = "longitude" ;
>                  lon:units = "degrees_east" ;
>                  lon:standard_name = "longitude" ;
>          float APCP_24(lat, lon) ;
>                  APCP_24:name = "APCP" ;
>                  APCP_24:long_name = "Total precipitation" ;
>                  APCP_24:level = "A24" ;
>                  APCP_24:units = "kg/m^2" ;
>                  APCP_24:grib_code = 61 ;
>                  APCP_24:_FillValue = -9999.f ;
>                  APCP_24:init_time = "20110601_000000" ;
>                  APCP_24:init_time_ut = 1306886400 ;
>                  APCP_24:valid_time = "20110602_030000" ;
>                  APCP_24:valid_time_ut = 1306983600 ;
>                  APCP_24:accum_time = "240000" ;
>                  APCP_24:accum_time_sec = 86400 ;
>
> // global attributes:
>                  :FileOrigins = "File 2011060100_WRFPRS_d01.003Z.nc
generated 20140130_092500 UTC on host rmcdlh by the MET pcp_combine
tool" ;
>                  :MET_version = "V3.0" ;
>                  :MET_tool = "pcp_combine" ;
>                  :RunCommand = "Subtraction:
2011060100_WRFPRS_d01.027 with accumulation of 270000 minus
2011060100_WRFPRS_d01.003 with accumulation of 030000." ;
>                  :Projection = "LatLon" ;
>                  :lat_ll = "9.000000 degrees_north" ;
>                  :lon_ll = "74.000000 degrees_east" ;
>                  :delta_lat = "0.250000 degrees" ;
>                  :delta_lon = "0.250000 degrees" ;
>                  :Nlat = "53 grid_points" ;
>                  :Nlon = "53 grid_points" ;
> }
> bash-3.2$
> *************************************
> O/P of OBSERVATION FILE (NETCDF format) *
> *************************************
> bash-3.2$ ncdump -h ../trmm_nc_data/test.nc
> netcdf test {
> dimensions:
>          lon = 53 ;
>          lat = 53 ;
> variables:
>          double lon(lon) ;
>                  lon:units = "degrees_east" ;
>          double lat(lat) ;
>                  lat:units = "degrees_north" ;
>          float APCP_03(lat, lon) ;
>                  APCP_03:units = "kg/m^2" ;
>                  APCP_03:missing_value = -9999.f ;
>                  APCP_03:long_name = "Total precipitation" ;
>                  APCP_03:name = "APCP" ;
>                  APCP_03:level = "A3" ;
>                  APCP_03:grib_code = 61.f ;
>                  APCP_03:_FillValue = -9999.f ;
>                  APCP_03:init_time = "20110602_000000" ;
>                  APCP_03:init_time_ut = 1306972800. ;
>                  APCP_03:valid_time = "20110602_030000" ;
>                  APCP_03:valid_time_ut = 1306983600. ;
>                  APCP_03:accum_time = "030000" ;
>                  APCP_03:accum_time_sec = 10800.f ;
>
> // global attributes:
>                  :FileOrigins = "File
../../../vpt/geeta/02june2011.nc generated 20140123_163031 on host
ncmr0102 by the Rscript trmm2nc.R" ;
>                  :MET_version = "V3.0.1" ;
>                  :Projection = "LatLon" ;
>                  :lat_ll = "9 degrees_north" ;
>                  :lon_ll = "74 degrees_east" ;
>                  :delta_lat = "0.25 degrees" ;
>                  :delta_lon = "0.25 degrees" ;
>                  :Nlat = "53 grid_points" ;
>                  :Nlon = "53 grid_points" ;
> }
>
>
> Anyway I am sending you my data once again. The directory is
geeta_data-25feb2014.
> Do you suspect the location of the NetCDF files?
>
> I shall be looking forward to hearing from you.
>
> thanks
> geeta
>
>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>> From: met_help at ucar.edu
>> To: geeta124 at hotmail.com
>> Date: Tue, 25 Feb 2014 11:26:10 -0700
>>
>> Geeta,
>>
>> Sorry, I was out of the office yesterday.  Looking back through
this ticket, I see that you're still getting the following error:
>>      NetCDF: Attribute not found
>>
>> You said that you updated the NetCDF files to include the following
global attribute:
>>      MET_version = "V3.0" ;
>>
>> If you've added this to both the forecast and observation NetCDF
files, and you're still getting this error, I'll need to see your data
files to debug more.  Please post them to our anonymous ftp site:
>>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>
>> I see 2 other questions in your emails:
>>
>> (1) How can you control the optional "upscaling" or "smoothing"
done by grid_stat?
>>       In METv3.0, that is controlled by configuration file options
that begin with "interp_".  For example, try the following:
>>          interp_method[] = [ "UW_MEAN" ];
>>          interp_width[]  = [ 1, 3, 5 ];
>>          interp_flag     = 3;
>>
>>       For each output line you were getting before, you should now
get 2 more.  Since interp_flag is set to 3, grid_stat will smooth both
the forecast and observation fields.  For interp_width = 3,
>> it'll smooth each data point by computing the average of the 9
points in a 3x3 box around each grid point.  For interp_width = 5,
it'll smooth each data point by computing the average of the 25 points
>> in a 5x5 box around each grid point.  You can look to see how the
scores change as you do more and more smoothing.
>>
>> However, computing the fractions skill score (in the NBRCNT line
type) is a common way of doing "neighborhood" or "fuzzy" verification.
>>
>> (2) You also asked about plotting the station location from
point_stat.  You have a couple of options.  The "plot_point_obs"
utility reads the NetCDF output files from the pb2nc or ascii2nc tools
and
>> plots a red dot for each observation lat/lon it finds in the data.
It is intended to just give you a quick look at the location of the
observations to make sure that they exist where you expect.  It
>> is not a general-purpose or very flexible plotting tool.
Alternatively, you could look at the "MPR" output line type from
point_stat.  This contains the individual matched pair values that
went into
>> the computation of statistics.  The MPR line type includes columns
named "OBS_LAT" and "OBS_LON" giving the point observation location
information.  You could read the lat/lon information from the MPR
>> line type and use whatever plotting tool you prefer to plot the
observation locations.
>>
>> If you do post more data to the ftp site, please write me back and
I'll go grab it.
>>
>> Thanks,
>> John
>>
>>
>> On 02/24/2014 05:59 PM, Geeta Geeta via RT wrote:
>>>
>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>>>
>>> hi john,
>>> you had discussed the upscaling (of both obs and fcst or any
one of them). The forecast is compared with the observations which are
averaged to coarse scales.
>>> How is this averaging defined in the configuration file?
>>>
>>> Please let me know regarding the global attributes.
>>>
>>> geeta
>>>
>>> From: geeta124 at hotmail.com
>>> To: met_help at ucar.edu
>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>>> Date: Sun, 23 Feb 2014 22:24:04 +0530
>>>
>>>
>>>
>>>
>>> hi John,
>>> Can you help with changing/appending the GLOBAL attributes of
the NetCDF file?
>>>
>>> Can you provide some more hints.
>>>
>>> regards
>>>
>>> geeta
>>>
>>> From: geeta124 at hotmail.com
>>> To: met_help at ucar.edu
>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>>> Date: Fri, 21 Feb 2014 15:03:34 +0530
>>>
>>>
>>>
>>>
>>> thanks John,
>>> I have made the changes as per your config file.
>>> But the error persists.
>>> -bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
>>> GSL_RNG_TYPE=mt19937
>>> GSL_RNG_SEED=18446744073358673747
>>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
>>> Observation File: ../trmm_nc_data/02june2011.nc
>>> Configuration File: GridStatConfig_APCP_24
>>> NetCDF: Attribute not found
>>> -bash-3.2$
>>>
>>>
>>> 2. I have used ncdump to see my file attributes.
>>> Are you referring to these attributes?
>>>
>>> // global attributes:
>>>                   :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
>>>                   :MET_version = "V3.0" ;
>>>                   :MET_tool = "pcp_combine" ;
>>>
>>> Following is my Config file.
>>>
____________________________________________________________________
>>>
////////////////////////////////////////////////////////////////////////////////
>>> //
>>> // Default grid_stat configuration file
>>> //
>>>
////////////////////////////////////////////////////////////////////////////////
>>> //
>>> // Specify a name to designate the model being verified.  This
name will be
>>> // written to the second column of the ASCII output generated.
>>> //
>>> model = "WRF";
>>> //
>>> // Specify a comma-separated list of fields to be verified.  The
forecast and
>>> // observation fields may be specified separately.  If the
obs_field parameter
>>> // is left blank, it will default to the contents of fcst_field.
>>> //
>>> // Each field is specified as a GRIB code or abbreviation followed
by an
>>> // accumulation or vertical level indicator for GRIB files or as a
variable name
>>> // followed by a list of dimensions for NetCDF files output from
p_interp or MET.
>>> //
>>> // Specifying verification fields for GRIB files:
>>> //    GC/ANNN for accumulation interval NNN
>>> //    GC/ZNNN for vertical level NNN
>>> //    GC/PNNN for pressure level NNN in hPa
>>> //    GC/PNNN-NNN for a range of pressure levels in hPa
>>> //    GC/LNNN for a generic level type
>>> //    GC/RNNN for a specific GRIB record number
>>> //    Where GC is the number of or abbreviation for the grib code
>>> //    to be verified.
>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>> //
>>> // Specifying verification fields for NetCDF files:
>>> //    var_name(i,...,j,*,*) for a single field
>>> //    Where var_name is the name of the NetCDF variable,
>>> //    and i,...,j specifies fixed dimension values,
>>> //    and *,* specifies the two dimensions for the gridded field.
>>> //
>>> //    NOTE: To verify winds as vectors rather than scalars,
>>> //          specify UGRD (or 33) followed by VGRD (or 34) with the
>>> //          same level values.
>>> //
>>> //    NOTE: To process a probability field, add "/PROB", such as
"POP/Z0/PROB".
>>> //
>>> // e.g. fcst_field[] = [ "61/A3", "APCP/A24", "RH/L10" ]; for GRIB
input
>>> // e.g. fcst_field[] = [ "RAINC(0,*,*)", "QVAPOR(0,5,*,*)" ]; for
NetCDF input
>>> //
>>> fcst_field[] = [ "APCP_24(*,*)" ];
>>> obs_field[]  = [ "APCP_03(*,*)" ];
>>> //
>>> // Specify a comma-separated list of groups of thresholds to be
applied to the
>>> // fields listed above.  Thresholds for the forecast and
observation fields
>>> // may be specified separately.  If the obs_thresh parameter is
left blank,
>>> // it will default to the content of fcst_thresh.
>>> //
>>> // At least one threshold must be provided for each field listed
above.  The
>>> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as must
>>> // lengths of the "obs_field" and "obs_thresh" arrays.  To apply
multiple
>>> // thresholds to a field, separate the threshold values with a
space.
>>> //
>>> // Each threshold must be preceded by a two letter indicator for
the type of
>>> // thresholding to be performed:
>>> //    'lt' for less than     'le' for less than or equal to
>>> //    'eq' for equal to      'ne' for not equal to
>>> //    'gt' for greater than  'ge' for greater than or equal to
>>> //
>>> // NOTE: Thresholds for probabilities must begin with 0.0, end
with 1.0,
>>> //       and be preceded by "ge".
>>> //
>>> // e.g. fcst_thresh[] = [ "gt0.0 ge5.0", "gt0.0", "lt80.0 ge80.0"
];
>>> //
>>> fcst_thresh[] = [ "gt0.0 gt5.0 gt10.0" ];
>>> obs_thresh[]  = [];
>>> //
>>> // Specify a comma-separated list of thresholds to be used when
computing
>>> // VL1L2 partial sums for winds.  The thresholds are applied to
the wind speed
>>> // values derived from each U/V pair.  Only those U/V pairs which
meet the wind
>>> // speed threshold criteria are retained.  If the obs_wind_thresh
parameter is
>>> // left blank, it will default to the contents of
fcst_wind_thresh.
>>> //
>>> // To apply multiple wind speed thresholds, separate the threshold
values with a
>>> // space.  Use "NA" to indicate that no wind speed threshold
should be applied.
>>> //
>>> // Each threshold must be preceded by a two letter indicator for
the type of
>>> // thresholding to be performed:
>>> //    'lt' for less than     'le' for less than or equal to
>>> //    'eq' for equal to      'ne' for not equal to
>>> //    'gt' for greater than  'ge' for greater than or equal to
>>> //    'NA' for no threshold
>>> //
>>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
>>> //
>>> fcst_wind_thresh[] = [ "NA" ];
>>> obs_wind_thresh[]  = [];
>>> //
>>> // Specify a comma-separated list of grids to be used in masking
the data over
>>> // which to perform scoring.  An empty list indicates that no
masking grid
>>> // should be performed.  The standard NCEP grids are named "GNNN"
where NNN
>>> // indicates the three digit grid number.  Enter "FULL" to score
over the
>>> // entire domain.
>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>>> //
>>> // e.g. mask_grid[] = [ "FULL" ];
>>> //
>>> mask_grid[] = [ "FULL" ];
>>> //
>>> // Specify a comma-separated list of masking regions to be
applied.
>>> // An empty list indicates that no additional masks should be
used.
>>> // The masking regions may be defined in one of 4 ways:
>>> //
>>> // (1) An ASCII file containing a lat/lon polygon.
>>> //     Latitude in degrees north and longitude in degrees east.
>>> //     By default, the first and last polygon points are
connected.
>>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
>>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
>>> //
>>> // (2) The NetCDF output of the gen_poly_mask tool.
>>> //
>>> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
>>> //     to be used, and optionally, a threshold to be applied to
the field.
>>> //     e.g. "sample.nc var_name gt0.00"
>>> //
>>> // (4) A GRIB data file, followed by a description of the field
>>> //     to be used, and optionally, a threshold to be applied to
the field.
>>> //     e.g. "sample.grb APCP/A3 gt0.00"
>>> //
>>> // Any NetCDF or GRIB file used must have the same grid dimensions
as the
>>> // data being verified.
>>> //
>>> // MET_BASE may be used in the path for the files above.
>>> //
>>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
>>> //                      "poly_mask.ncf",
>>> //                      "sample.nc APCP",
>>> //                      "sample.grb HGT/Z0 gt100.0" ];
>>> //
>>> mask_poly[] = [];
>>> //
>>> // Specify a comma-separated list of values for alpha to be used
when computing
>>> // confidence intervals.  Values of alpha must be between 0 and 1.
>>> //
>>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
>>> //
>>> ci_alpha[] = [ 0.10, 0.05 ];
>>> //
>>> // Specify the method to be used for computing bootstrap
confidence intervals.
>>> // The value for this is interpreted as follows:
>>> //    (0) Use the BCa interval method (computationally intensive)
>>> //    (1) Use the percentile interval method
>>> //
>>> boot_interval = 1;
>>> //
>>> // Specify a proportion between 0 and 1 to define the replicate
sample size
>>> // to be used when computing percentile intervals.  The replicate
sample
>>> // size is set to boot_rep_prop * n, where n is the number of raw
data points.
>>> //
>>> // e.g boot_rep_prop = 0.80;
>>> //
>>> boot_rep_prop = 1.0;
>>> //
>>> // Specify the number of times each set of matched pair data
should be
>>> // resampled when computing bootstrap confidence intervals.  A
value of
>>> // zero disables the computation of bootstrap confidence
intervals.
>>> //
>>> // e.g. n_boot_rep = 1000;
>>> //
>>> n_boot_rep = 0;
>>> //
>>> // Specify the name of the random number generator to be used.
See the MET
>>> // Users Guide for a list of possible random number generators.
>>> //
>>> boot_rng = "mt19937";
>>> //
>>> // Specify the seed value to be used when computing bootstrap
confidence
>>> // intervals.  If left unspecified, the seed will change for each
run and
>>> // the computed bootstrap confidence intervals will not be
reproducible.
>>> //
>>> boot_seed = "";
>>> //
>>> // Specify a comma-separated list of interpolation method(s) to be
used for
>>> // smoothing the data fields prior to comparing them.  The value
at each grid
>>> // point is replaced by the measure computed over the neighborhood
defined
>>> // around the grid point.  String values are interpreted as
follows:
>>> //    MIN     = Minimum in the neighborhood
>>> //    MAX     = Maximum in the neighborhood
>>> //    MEDIAN  = Median in the neighborhood
>>> //    UW_MEAN = Unweighted mean in the neighborhood
>>> //
>>> //    NOTE: The distance-weighted mean (DW_MEAN) is not an option
here since
>>> //          it will have no effect on a gridded field.
>>> //
>>> //    NOTE: The least-squares fit (LS_FIT) is not an option here
since
>>> //          it reduces to an unweighted mean on a grid.
>>> //
>>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
>>> //
>>> interp_method[] = [ "UW_MEAN" ];
>>> //
>>> // Specify a comma-separated list of box widths to be used by the
interpolation
>>> // techniques listed above.  All values must be odd.  A value of 1
indicates
>>> // that no smoothing should be performed.  For values greater than
1, the n*n
>>> // grid points around each point will be used to smooth the data
fields.
>>> //
>>> // e.g. interp_width = [ 1, 3, 5 ];
>>> //
>>> interp_width[] = [ 1 ];
>>> //
>>> // The interp_flag controls how the smoothing defined above should
be applied:
>>> // (1) Smooth only the forecast field
>>> // (2) Smooth only the observation field
>>> // (3) Smooth both the forecast and observation fields
>>> //
>>> interp_flag = 1;
>>> //
>>> // When smoothing, compute a ratio of the number of valid data
points to
>>> // the total number of points in the neighborhood.  If that ratio
is less
>>> // than this threshold, do not compute a smoothed forecast value.
This
>>> // threshold must be between 0 and 1.  Setting this threshold to 1
will
>>> // require that each observation be surrounded by n*n valid
forecast
>>> // points.
>>> //
>>> // e.g. interp_thresh = 1.0;
>>> //
>>> interp_thresh = 1.0;
>>> //
>>> // Specify a comma-separated list of box widths to be used to
define the
>>> // neighborhood size for the neighborhood verification methods.
All values
>>> // must be odd.  For values greater than 1, the n*n grid points
around each
>>> // point will be used to define the neighborhood.
>>> //
>>> // e.g. nbr_width = [ 3, 5 ];
>>> //
>>> nbr_width[] = [ 3, 5 ];
>>> //
>>> // When applying the neighborhood verification methods, compute a
ratio
>>> // of the number of valid data points to the total number of
points in
>>> // the neighborhood.  If that ratio is less than this threshold,
do not
>>> // include it in the computations.  This threshold must be between
0
>>> // and 1.  Setting this threshold to 1 will require that each
point be
>>> // surrounded by n*n valid forecast points.
>>> //
>>> // e.g. nbr_thresh = 1.0;
>>> //
>>> nbr_thresh = 1.0;
>>> //
>>> // When applying the neighborhood verification methods, apply a
threshold
>>> // to the fractional coverage values to define contingency tables
from
>>> // which to compute statistics.
>>> //
>>> // e.g. cov_thresh[] = [ "ge0.25", "ge0.50" ];
>>> //
>>> cov_thresh[] = [ "ge0.5" ];
>>> //
>>> // Specify flags to indicate the type of data to be output:
>>> //
>>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
>>> //           Total (TOTAL),
>>> //           Forecast Rate (F_RATE),
>>> //           Hit Rate (H_RATE),
>>> //           Observation Rate (O_RATE)
>>> //
>>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
>>> //           Total (TOTAL),
>>> //           Forecast Yes and Observation Yes Count (FY_OY),
>>> //           Forecast Yes and Observation No Count (FY_ON),
>>> //           Forecast No and Observation Yes Count (FN_OY),
>>> //           Forecast No and Observation No Count (FN_ON)
>>> //
>>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
>>> //           Total (TOTAL),
>>> //           Base Rate (BASER),
>>> //           Forecast Mean (FMEAN),
>>> //           Accuracy (ACC),
>>> //           Frequency Bias (FBIAS),
>>> //           Probability of Detecting Yes (PODY),
>>> //           Probability of Detecting No (PODN),
>>> //           Probability of False Detection (POFD),
>>> //           False Alarm Ratio (FAR),
>>> //           Critical Success Index (CSI),
>>> //           Gilbert Skill Score (GSS),
>>> //           Hanssen and Kuipers Discriminant (HK),
>>> //           Heidke Skill Score (HSS),
>>> //           Odds Ratio (ODDS),
>>> //           NOTE: All statistics listed above contain parametric
and/or
>>> //                 non-parametric confidence interval limits.
>>> //
>>> //    (4) STAT and MCTC Text Files, NxN Multi-Category Contingency
Table Counts:
>>> //           Total (TOTAL),
>>> //           Number of Categories (N_CAT),
>>> //           Contingency Table Count columns repeated N_CAT*N_CAT
times
>>> //
>>> //    (5) STAT and MCTS Text Files, NxN Multi-Category Contingency
Table Scores:
>>> //           Total (TOTAL),
>>> //           Number of Categories (N_CAT),
>>> //           Accuracy (ACC),
>>> //           Hanssen and Kuipers Discriminant (HK),
>>> //           Heidke Skill Score (HSS),
>>> //           Gerrity Score (GER),
>>> //           NOTE: All statistics listed above contain parametric
and/or
>>> //                 non-parametric confidence interval limits.
>>> //
>>> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
>>> //           Total (TOTAL),
>>> //           Forecast Mean (FBAR),
>>> //           Forecast Standard Deviation (FSTDEV),
>>> //           Observation Mean (OBAR),
>>> //           Observation Standard Deviation (OSTDEV),
>>> //           Pearson's Correlation Coefficient (PR_CORR),
>>> //           Spearman's Rank Correlation Coefficient (SP_CORR),
>>> //           Kendall Tau Rank Correlation Coefficient (KT_CORR),
>>> //           Number of ranks compared (RANKS),
>>> //           Number of tied ranks in the forecast field
(FRANK_TIES),
>>> //           Number of tied ranks in the observation field
(ORANK_TIES),
>>> //           Mean Error (ME),
>>> //           Standard Deviation of the Error (ESTDEV),
>>> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
>>> //           Mean Absolute Error (MAE),
>>> //           Mean Squared Error (MSE),
>>> //           Bias-Corrected Mean Squared Error (BCMSE),
>>> //           Root Mean Squared Error (RMSE),
>>> //           Percentiles of the Error (E10, E25, E50, E75, E90)
>>> //           NOTE: Most statistics listed above contain parametric
and/or
>>> //                 non-parametric confidence interval limits.
>>> //
>>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
>>> //           Total (TOTAL),
>>> //           Forecast Mean (FBAR),
>>> //              = mean(f)
>>> //           Observation Mean (OBAR),
>>> //              = mean(o)
>>> //           Forecast*Observation Product Mean (FOBAR),
>>> //              = mean(f*o)
>>> //           Forecast Squared Mean (FFBAR),
>>> //              = mean(f^2)
>>> //           Observation Squared Mean (OOBAR)
>>> //              = mean(o^2)
>>> //
>>> //    (8) STAT and VL1L2 Text Files, Vector Partial Sums:
>>> //           Total (TOTAL),
>>> //           U-Forecast Mean (UFBAR),
>>> //              = mean(uf)
>>> //           V-Forecast Mean (VFBAR),
>>> //              = mean(vf)
>>> //           U-Observation Mean (UOBAR),
>>> //              = mean(uo)
>>> //           V-Observation Mean (VOBAR),
>>> //              = mean(vo)
>>> //           U-Product Plus V-Product (UVFOBAR),
>>> //              = mean(uf*uo+vf*vo)
>>> //           U-Forecast Squared Plus V-Forecast Squared (UVFFBAR),
>>> //              = mean(uf^2+vf^2)
>>> //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
>>> //              = mean(uo^2+vo^2)
>>> //
>>> //    (9) STAT and PCT Text Files, Nx2 Probability Contingency
Table Counts:
>>> //           Total (TOTAL),
>>> //           Number of Forecast Probability Thresholds (N_THRESH),
>>> //           Probability Threshold Value (THRESH_i),
>>> //           Row Observation Yes Count (OY_i),
>>> //           Row Observation No Count (ON_i),
>>> //           NOTE: Previous 3 columns repeated for each row in the
table
>>> //           Last Probability Threshold Value (THRESH_n)
>>> //
>>> //   (10) STAT and PSTD Text Files, Nx2 Probability Contingency
Table Scores:
>>> //           Total (TOTAL),
>>> //           Number of Forecast Probability Thresholds (N_THRESH),
>>> //           Base Rate (BASER) with confidence interval limits,
>>> //           Reliability (RELIABILITY),
>>> //           Resolution (RESOLUTION),
>>> //           Uncertainty (UNCERTAINTY),
>>> //           Area Under the ROC Curve (ROC_AUC),
>>> //           Brier Score (BRIER) with confidence interval limits,
>>> //           Probability Threshold Value (THRESH_i)
>>> //           NOTE: Previous column repeated for each probability
threshold.
>>> //
>>> //   (11) STAT and PJC Text Files, Joint/Continuous Statistics of
>>> //                                 Probabilistic Variables:
>>> //           Total (TOTAL),
>>> //           Number of Forecast Probability Thresholds (N_THRESH),
>>> //           Probability Threshold Value (THRESH_i),
>>> //           Observation Yes Count Divided by Total (OY_TP_i),
>>> //           Observation No Count Divided by Total (ON_TP_i),
>>> //           Calibration (CALIBRATION_i),
>>> //           Refinement (REFINEMENT_i),
>>> //           Likelihood (LIKELIHOOD_i),
>>> //           Base Rate (BASER_i),
>>> //           NOTE: Previous 7 columns repeated for each row in the
table
>>> //           Last Probability Threshold Value (THRESH_n)
>>> //
>>> //   (12) STAT and PRC Text Files, ROC Curve Points for
>>> //                                 Probabilistic Variables:
>>> //           Total (TOTAL),
>>> //           Number of Forecast Probability Thresholds (N_THRESH),
>>> //           Probability Threshold Value (THRESH_i),
>>> //           Probability of Detecting Yes (PODY_i),
>>> //           Probability of False Detection (POFD_i),
>>> //           NOTE: Previous 3 columns repeated for each row in the
table
>>> //           Last Probability Threshold Value (THRESH_n)
>>> //
>>> //   (13) STAT and NBRCTC Text Files, Neighborhood Methods
Contingency Table Counts:
>>> //           Total (TOTAL),
>>> //           Forecast Yes and Observation Yes Count (FY_OY),
>>> //           Forecast Yes and Observation No Count (FY_ON),
>>> //           Forecast No and Observation Yes Count (FN_OY),
>>> //           Forecast No and Observation No Count (FN_ON),
>>> //           Fractional Threshold Value (FRAC_T)
>>> //
>>> //   (14) STAT and NBRCTS Text Files, Neighborhood Methods
Contingency Table Scores:
>>> //           Total (TOTAL),
>>> //           Base Rate (BASER),
>>> //           Forecast Mean (FMEAN),
>>> //           Accuracy (ACC),
>>> //           Bias (BIAS),
>>> //           Probability of Detecting Yes (PODY),
>>> //           Probability of Detecting No (PODN),
>>> //           Probability of False Detection (POFD),
>>> //           False Alarm Ratio (FAR),
>>> //           Critical Success Index (CSI),
>>> //           Gilbert Skill Score (GSS),
>>> //           Hanssen and Kuipers Discriminant (HK),
>>> //           Heidke Skill Score (HSS),
>>> //           Odds Ratio (ODDS),
>>> //           NOTE: Most statistics listed above contain parametric
and/or
>>> //                 non-parametric confidence interval limits.
>>> //
>>> //   (15) STAT and NBRCNT Text Files, Neighborhood Methods
Continuous Scores:
>>> //           Total (TOTAL),
>>> //           Fractions Brier Score (FBS),
>>> //           Fractions Skill Score (FSS)
>>> //
>>> //   (16) NetCDF File containing difference fields for each grib
>>> //        code/mask combination.  A non-zero value indicates that
>>> //        this NetCDF file should be produced.  A value of 0
>>> //        indicates that it should not be produced.
>>> //
>>> // Values for flags (1) through (15) are interpreted as follows:
>>> //    (0) Do not generate output of this type
>>> //    (1) Write output to a STAT file
>>> //    (2) Write output to a STAT file and a text file
>>> //
>>> output_flag[] = [ 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 1
];
>>> //
>>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
Correlation
>>> // Coefficients should be computed.  Computing them over large
datasets is
>>> // computationally intensive and slows down the runtime execution
significantly.
>>> //    (0) Do not compute these correlation coefficients
>>> //    (1) Compute these correlation coefficients
>>> //
>>> rank_corr_flag = 0;
>>> //
>>> // Specify the GRIB Table 2 parameter table version number to be
used
>>> // for interpreting GRIB codes.
>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>> //
>>> grib_ptv = 2;
>>> //
>>> // Directory where temporary files should be written.
>>> //
>>> tmp_dir = "/tmp";
>>> //
>>> // Prefix to be used for the output file names.
>>> //
>>> output_prefix = "APCP_24";
>>> //
>>> // Indicate a version number for the contents of this
configuration file.
>>> // The value should generally not be modified.
>>> //
>>> version = "V3.0";
>>>
>>>
>>> geeta
>>>
>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>>>> From: met_help at ucar.edu
>>>> To: geeta124 at hotmail.com
>>>> Date: Thu, 20 Feb 2014 10:21:20 -0700
>>>>
>>>> Geeta,
>>>>
>>>> I see that you're using METv3.0.  The current version is METv4.1,
and it'd be good to switch to that version when possible.  There have
been major changes to the MET configuration file format since
>>>> METv3.0, so be sure to use the default config files for METv4.1.
>>>>
>>>> I ran METv3.0 grid_stat on the data files you sent and reproduced
the error message you saw:
>>>>       ***WARNING***: process_scores() -> 61(*,*) not found in
file: 2011060100_WRFPRS_day1_003Z.nc
>>>>
>>>> Since the input files are both NetCDF files, you need to specify
the name of the NetCDF variable that should be used.  So I modified
your config file:
>>>>       FROM: fcst_field[] = [ "61/A24" ];
>>>>       TO:   fcst_field[] = [ "APCP_24(*,*)" ];
>>>>
>>>> When I reran with this change, I got this error:
>>>>       NetCDF: Attribute not found
>>>>
>>>> After some digging, I found the problem to be the MET_version
global attribute in 02june2011.nc:
>>>>                    :MET_version = "V3.0.1" ;
>>>>
>>>> I switched that to be consistent with the version of MET you're
running:
>>>>                    :MET_version = "V3.0" ;
>>>>
>>>> And then I got this error:
>>>> ERROR: parse_poly_mask() -> the dimensions of the masking region
(185, 129) must match the dimensions of the data (53, 53).
>>>>
>>>> So I modified the config file to change the masking region
settings:
>>>>       FROM: mask_grid[] = [ "DTC165", "DTC166" ];
>>>>             mask_poly[] = [
"MET_BASE/out/gen_poly_mask/CONUS_poly.nc",
>>>>                             "MET_BASE/data/poly/LMV.poly" ];
>>>>
>>>>       TO:   mask_grid[] = [ "FULL" ];
>>>>             mask_poly[] = [];
>>>>
>>>> And then it ran fine.
>>>>
>>>> To summarize...
>>>>     (1) To run METv3.0 grid_stat, please set the "MET_version"
global attribute in all the gridded NetCDF files you're using to
METv3.0.
>>>>     (2) Please use the attached, updated version of
GridStatConfig_APCP_24.
>>>>     (3) Consider updating to using METv4.1 instead.
>>>>
>>>> Thanks,
>>>> John
>>>>
>>>> On 02/19/2014 11:33 PM, Geeta Geeta via RT wrote:
>>>>>
>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>>>>>
>>>>> Hi John,
>>>>> I am bothering you with a few more questions. Hope you will bear
with me.
>>>>> So what I gathered from the discussion with you is that we
essentially have 3 types of neighbourhood approaches (space, time and
intensity).  These changes can be made using the config file.
>>>>>
>>>>> 1. Now I was reading about 3 approaches to FUZZY verification:
a. Multi-event contingency table (my question is: can we define a
hit as RF between 0.1 and 2.5 in the config file? Normally we select
the threshold as ge0.1 or ge2.5, etc. Is there a provision for giving
a range in the config file?).
>>>>>
>>>>> b) Pragmatic approach (I do not know what that is.)
>>>>>
>>>>> c) Conditional Square root of Ranked probability score (CSRR).
>>>>> (I do not know what that is either.)
>>>>>
>>>>> I do not understand these. Can you point me in the right
direction or provide some hints?
>>>>>
>>>>> 2. How can I prepare the QUILT plots (spatial scale vs.
threshold) for a score?
>>>>> Can the QUILT plot be prepared for any score, like HK, HSS, FBS
or FSS?
>>>>>
>>>>>
>>>>> thanks
>>>>> geeta
>>>>>
>>>>> From: geeta124 at hotmail.com
>>>>> To: met_help at ucar.edu
>>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>>>>> Date: Thu, 20 Feb 2014 11:30:25 +0530
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Hi John,
>>>>> Sorry, I have put my data on your server.  My directory name is
geeta124_data.
>>>>> Kindly check that.
>>>>>
>>>>> geeta
>>>>>
>>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>>>>>> From: met_help at ucar.edu
>>>>>> To: geeta124 at hotmail.com
>>>>>> Date: Fri, 14 Feb 2014 09:48:08 -0700
>>>>>>
>>>>>> Geeta,
>>>>>>
>>>>>> You run copygb to put the forecast and observation fields on
exactly the same grid, meaning the exact same resolution and number of
grid points.
>>>>>>
>>>>>> I was trying to make the point that the "interpolation methods"
in the grid_stat config file could be used as a form of "upscaling".
You are right, there is no *need* to interpolate the data since
>>>>>> you've already used copygb to put them on the same grid.  In
grid_stat, the interpolation options provide a way of smoothing, or
upscaling, the data.  For example, suppose you choose and
interpolation
>>>>>> option of UW_MEAN (for un-weighted mean) and width of 5.  For
each grid point, grid_stat will replace the value at the grid point
with the average of the 25 points in a 5x5 box around that point.
>>>>>> Doing that for every point in the grid smooths the data and
provides a way of upscaling.
>>>>>>
>>>>>> The default interpolation width is 1, meaning that no smoothing
is performed.  However, you could use multiple smoothing widths and
see how your performance changes the more you smooth the data.
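>>>>>>
>>>>>> As a toy illustration (this is plain numpy, not MET code), the
>>>>>> UW_MEAN smoothing for a single width could be sketched as
>>>>>> follows:
>>>>>>
>>>>>>    import numpy as np
>>>>>>
>>>>>>    def uw_mean_smooth(field, width):
>>>>>>        # replace each interior point by the unweighted mean of
>>>>>>        # the width x width box around it; width must be odd
>>>>>>        h = width // 2
>>>>>>        out = field.astype(float)
>>>>>>        for i in range(h, field.shape[0] - h):
>>>>>>            for j in range(h, field.shape[1] - h):
>>>>>>                out[i, j] = field[i-h:i+h+1, j-h:j+h+1].mean()
>>>>>>        return out
>>>>>>
>>>>>> With width = 5, out[i, j] is the average of the 25 points in the
>>>>>> 5x5 box around (i, j), which is the smoothing described above.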
>>>>>>
>>>>>> Does that make sense?
>>>>>>
>>>>>> Regarding the runtime error you're getting, I see that you're
using input NetCDF files for the forecast and observation fields.  In
the config file, you need to specify the name and dimensions of the
>>>>>> NetCDF variable to be used.  Assuming the NetCDF variable is
named "APCP_24", it would look something like this:
>>>>>>
>>>>>> fcst = {
>>>>>>        wind_thresh = [ NA ];
>>>>>>
>>>>>>        field = [
>>>>>>           {
>>>>>>             name       = "APCP_24";
>>>>>>             level      = [ "(*,*)" ];
>>>>>>             cat_thresh = [ >0.0, >=5.0 ];
>>>>>>           }
>>>>>>        ];
>>>>>>
>>>>>> };
>>>>>>
>>>>>> If you continue to experience problems, please send me sample
forecast and observation files along with the GridStatConfig file
you're using.  You can post it to our anonymous ftp site following
these
>>>>>> instructions:
>>>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>>>>>
>>>>>> Thanks,
>>>>>> John
>>>>>>
>>>>>> On 02/14/2014 02:17 AM, Geeta Geeta via RT wrote:
>>>>>>>
>>>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427
>
>>>>>>>
>>>>>>> Hi John,
>>>>>>>      I have run grid-stat. Following is the error.
>>>>>>>
>>>>>>> bash-3.2$ ../bin/grid_stat ./fcst_nc/20110601*day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
>>>>>>> GSL_RNG_TYPE=mt19937
>>>>>>> GSL_RNG_SEED=18446744073321512274
>>>>>>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
>>>>>>> Observation File: ../trmm_nc_data/02june2011.nc
>>>>>>> Configuration File: GridStatConfig_APCP_24
>>>>>>> ***WARNING***: process_scores() -> 61(*,*) not found in file:
./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
>>>>>>>
>>>>>>>
--------------------------------------------------------------------------------
>>>>>>>
>>>>>>>
>>>>>>> Please suggest.
>>>>>>>
>>>>>>> geeta
>>>>>>>
>>>>>>> From: geeta124 at hotmail.com
>>>>>>> To: met_help at ucar.edu
>>>>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
>>>>>>> Date: Fri, 14 Feb 2014 14:08:12 +0530
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Thanks a lot John for your inputs and clarifications.
>>>>>>>
>>>>>>> The following doubts still remain.
>>>>>>>
>>>>>>> 1. When I run copygb, what it does is make the observation
and model FC uniform (I mean the same GRID and RESOLUTION). Only these
two parameters are important.
>>>>>>> Are you calling that upscaling? So this process is not a
part of grid-stat; essentially copygb is doing the upscaling part.
>>>>>>>
>>>>>>> 2. There are interpolation methods in the grid-stat config
file (analogous to those in point-stat; in point-stat there are 3-4,
like nearest neighbour, mean, distance weighted, etc.).
>>>>>>>
>>>>>>> Why should one have the interpolation ONCE again, i.e. after
copygb, when the grid fields are already similar, i.e. one GP has 2
values, one OBS and one FCST?  Is that correct?
>>>>>>>
>>>>>>> geeta
>>>>>>>
>>>>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
>>>>>>>> From: met_help at ucar.edu
>>>>>>>> To: geeta124 at hotmail.com
>>>>>>>> Date: Thu, 13 Feb 2014 10:33:47 -0700
>>>>>>>>
>>>>>>>> Geeta,
>>>>>>>>
>>>>>>>> You are correct, the input forecast and observation files
must be on the same grid.  In Grid-Stat, there are two ways you can
perform "fuzzy" verification.
>>>>>>>>
>>>>>>>> (1) The first way is by applying an interpolation method to
the data.  Since the data are already on the same grid, this is really
a "smoothing" operation instead.  This is called "upscaling".
>>>>>>>> Smoother forecasts and observations tend to produce better
traditional verification scores.  So you could see how your scores
(like RMSE or GSS) improve as you smooth the data more and more.  In
the
>>>>>>>> config file, you could try:
>>>>>>>>
>>>>>>>> interp = {
>>>>>>>>         field      = BOTH;
>>>>>>>>         vld_thresh = 1.0;
>>>>>>>>
>>>>>>>>         type = [
>>>>>>>>            { method = UW_MEAN; width  = 1; },
>>>>>>>>            { method = UW_MEAN; width  = 3; },
>>>>>>>>            { method = UW_MEAN; width  = 6; },
>>>>>>>>            { method = UW_MEAN; width  = 9; }
>>>>>>>>         ];
>>>>>>>> };
>>>>>>>>
>>>>>>>> This tells Grid-Stat to compute its statistics 4 times,
applying more smoothing each time.  Typically, the more the data has
been smoothed, the better the statistics will be.
>>>>>>>>
>>>>>>>> (2) The second way is by applying neighborhood verification
methods.  The most common are the Fractions Brier Score (FBS) and
Fractions Skill Score (FSS), both contained in the NBRCNT output line
>>>>>>>> type.  Be sure to turn the NBRCNT output line on in the Grid-
Stat config file.  For neighborhood verification, you pick multiple
neighborhood sizes and look to see how the FSS changes as you increase
>>>>>>>> the neighborhood size.  As the neighborhood size increases,
FSS increases.  And you look to see how large of a neighborhood size
you need to get a "useful" (FSS > 0.5) forecast.
>>>>>>>>
>>>>>>>> Here's how this method works.  You pick one or more
thresholds (cat_thresh) for your field.  Grid-Stat applies the
threshold to produce a 0/1 binary field of your data.  For each
neighborhood size, n,
>>>>>>>> it places an n x n box around each grid point and counts up
the number of events within that box.  For a 3 x 3 box, if 4 of the 9
points contained an event, the value for that point is 4/9.  This is
>>>>>>>> done for every grid point in the forecast field and the
observation field.  We call the result of this process the forecast
and observation "fractional coverage" fields.  The FSS and FBS scores
are
>>>>>>>> computed by comparing the forecast and observation fractional
coverage fields to each other.
>>>>>>>>
>>>>>>>> If you're verifying a single field using 3 different
thresholds and 6 different neighborhood sizes, you'd get 18 NBRCNT
lines in the output file.
>>>>>>>>
>>>>>>>> Here's an example of how you might set this up in the Grid-
Stat config file:
>>>>>>>>
>>>>>>>> nbrhd = {
>>>>>>>>         vld_thresh = 1.0;
>>>>>>>>         width      = [ 3, 5, 9, 11, 13, 15 ];
>>>>>>>>         cov_thresh = [ >=0.5 ];
>>>>>>>> }
>>>>>>>>
>>>>>>>> For a given threshold, you should look to see how FSS changes
as you increase the neighborhood size.
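>>>>>>>>
>>>>>>>> As a rough illustration (plain numpy, not the MET source), the
>>>>>>>> fractional coverage fields and the FBS/FSS described above
>>>>>>>> could be computed like this:
>>>>>>>>
>>>>>>>>    import numpy as np
>>>>>>>>
>>>>>>>>    def frac_coverage(event, n):
>>>>>>>>        # fraction of the n x n box around each interior point
>>>>>>>>        # that contains an event (event is a 0/1 field, n odd)
>>>>>>>>        h = n // 2
>>>>>>>>        cov = np.zeros(event.shape)
>>>>>>>>        for i in range(h, event.shape[0] - h):
>>>>>>>>            for j in range(h, event.shape[1] - h):
>>>>>>>>                cov[i, j] = event[i-h:i+h+1, j-h:j+h+1].mean()
>>>>>>>>        return cov
>>>>>>>>
>>>>>>>>    def fbs_fss(fcst, obs, thresh, n):
>>>>>>>>        fc = frac_coverage((fcst >= thresh).astype(float), n)
>>>>>>>>        oc = frac_coverage((obs >= thresh).astype(float), n)
>>>>>>>>        fbs = np.mean((fc - oc) ** 2)   # Fractions Brier Score
>>>>>>>>        ref = np.mean(fc**2) + np.mean(oc**2)  # worst-case FBS
>>>>>>>>        return fbs, 1.0 - fbs / ref     # FBS, FSS
>>>>>>>>
>>>>>>>> For the 3 x 3 example above, a point with 4 events out of 9
>>>>>>>> gets a fractional coverage value of 4/9.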
>>>>>>>>
>>>>>>>> Hopefully that helps get you going.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> John Halley Gotway
>>>>>>>> met_help at ucar.edu
>>>>>>>>
>>>>>>>>
>>>>>>>> On 02/12/2014 10:49 PM, Geeta Geeta via RT wrote:
>>>>>>>>>
>>>>>>>>> Wed Feb 12 22:49:14 2014: Request 65427 was acted upon.
>>>>>>>>> Transaction: Ticket created by geeta124 at hotmail.com
>>>>>>>>>             Queue: met_help
>>>>>>>>>           Subject: Unable to visualize Fuzzy verf.
>>>>>>>>>             Owner: Nobody
>>>>>>>>>        Requestors: geeta124 at hotmail.com
>>>>>>>>>            Status: new
>>>>>>>>>       Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hi John/ met_help.
>>>>>>>>>
>>>>>>>>> I was reading the MET doc, which mentions the FUZZY
verification methods. I am trying to visualise what grid stat does, or
how it functions.
>>>>>>>>> After the copygb is run, the FCST and OBS are on the same
grid, i.e.:
>>>>>>>>>
>>>>>>>>>    1---------2---------3
>>>>>>>>>    |         |         |
>>>>>>>>>    4---------5---------6
>>>>>>>>>
>>>>>>>>> i.e. at the Grid Points (GP) 1 to 6, you have Observations
and the model FCST.  Now the MET doc (Pg 5-3) says that a SQUARE
search window is defined around each grid point, within which the obs
and the FCST events are counted.  1. I want to know HOW this SQUARE
WINDOW is defined (I mean in the configuration file) of Grid-Stat.
2. How can I change the size of the SQUARE window?  3. If my model
resolution is 10 km and I am interested in synoptic-scale phenomena,
then what should the window size be?  Your help is urgently required.
>>>>>>>>>
>>>>>>>>> geeta
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
>

------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Thu Feb 27 01:39:40 2014

Thanks, John,
Thanks for pointing out the error.
How can I do that?
These are the o/p I get when I do ncdump.



geeta

> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> From: met_help at ucar.edu
> To: geeta124 at hotmail.com
> Date: Wed, 26 Feb 2014 11:13:55 -0700
>
> Geeta,
>
> The problem is in the observation files:
>
> *************************************
> O/P of OBSERVATION FILE (NETCDF format) *
> *************************************
> :MET_version = "V3.0.1" ;
>
> If you change the "V3.0.1" to "V3.0", then METv3.0 grid_stat will be
able to process it fine.
>
> Also, you should switch the timing variable attributes from floats
to integers:
> Change from:
>   APCP_03:init_time_ut = 1306972800. ;
>   APCP_03:valid_time_ut = 1306983600. ;
>   APCP_03:accum_time_sec = 10800.f ;
> Change to:
>   APCP_03:init_time_ut = 1306972800 ;
>   APCP_03:valid_time_ut = 1306983600 ;
>   APCP_03:accum_time_sec = 10800 ;
>
> When you switch to METv4.1, it'll complain if those aren't integers.
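>
> For example, here is a minimal Python sketch (using the netCDF4
> module, which is not part of MET; the file name is just illustrative)
> that makes both changes in place:
>
>    import numpy as np
>    from netCDF4 import Dataset
>
>    nc = Dataset("02june2011.nc", "a")  # open for in-place changes
>    nc.MET_version = "V3.0"             # overwrite global attribute
>    var = nc.variables["APCP_03"]
>    # rewrite the timing attributes as integers instead of floats
>    var.init_time_ut   = np.int32(var.init_time_ut)
>    var.valid_time_ut  = np.int32(var.valid_time_ut)
>    var.accum_time_sec = np.int32(var.accum_time_sec)
>    nc.close()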
>
> Hope that helps.
>
> Thanks,
> John
>
>
> On 02/25/2014 11:26 PM, Geeta Geeta via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >
> > thanks John,
> > I have NOT done anything with the NC files as yet. I was asking
you how to go about changing the attribute.
> >
> > I am posting the o/p of one of the forecast files in NetCDF format
using ncdump. It shows MET_version = "V3.0", which is what you want.
> > bash-3.2$ ncdump -h test.nc
> > netcdf test {
> > dimensions:
> >          lat = 53 ;
> >          lon = 53 ;
> > variables:
> >          float lat(lat, lon) ;
> >                  lat:long_name = "latitude" ;
> >                  lat:units = "degrees_north" ;
> >                  lat:standard_name = "latitude" ;
> >          float lon(lat, lon) ;
> >                  lon:long_name = "longitude" ;
> >                  lon:units = "degrees_east" ;
> >                  lon:standard_name = "longitude" ;
> >          float APCP_24(lat, lon) ;
> >                  APCP_24:name = "APCP" ;
> >                  APCP_24:long_name = "Total precipitation" ;
> >                  APCP_24:level = "A24" ;
> >                  APCP_24:units = "kg/m^2" ;
> >                  APCP_24:grib_code = 61 ;
> >                  APCP_24:_FillValue = -9999.f ;
> >                  APCP_24:init_time = "20110601_000000" ;
> >                  APCP_24:init_time_ut = 1306886400 ;
> >                  APCP_24:valid_time = "20110602_030000" ;
> >                  APCP_24:valid_time_ut = 1306983600 ;
> >                  APCP_24:accum_time = "240000" ;
> >                  APCP_24:accum_time_sec = 86400 ;
> >
> > // global attributes:
> >                  :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
> >                  :MET_version = "V3.0" ;
> >                  :MET_tool = "pcp_combine" ;
> >                  :RunCommand = "Subtraction:
2011060100_WRFPRS_d01.027 with accumulation of 270000 minus
2011060100_WRFPRS_d01.003 with accumulation of 030000." ;
> >                  :Projection = "LatLon" ;
> >                  :lat_ll = "9.000000 degrees_north" ;
> >                  :lon_ll = "74.000000 degrees_east" ;
> >                  :delta_lat = "0.250000 degrees" ;
> >                  :delta_lon = "0.250000 degrees" ;
> >                  :Nlat = "53 grid_points" ;
> >                  :Nlon = "53 grid_points" ;
> > }
> > bash-3.2$
> > *************************************
> > O/P of OBSERVATION FILE (NETCDF format) *
> > *************************************
> > bash-3.2$ ncdump -h ../trmm_nc_data/test.nc
> > netcdf test {
> > dimensions:
> >          lon = 53 ;
> >          lat = 53 ;
> > variables:
> >          double lon(lon) ;
> >                  lon:units = "degrees_east" ;
> >          double lat(lat) ;
> >                  lat:units = "degrees_north" ;
> >          float APCP_03(lat, lon) ;
> >                  APCP_03:units = "kg/m^2" ;
> >                  APCP_03:missing_value = -9999.f ;
> >                  APCP_03:long_name = "Total precipitation" ;
> >                  APCP_03:name = "APCP" ;
> >                  APCP_03:level = "A3" ;
> >                  APCP_03:grib_code = 61.f ;
> >                  APCP_03:_FillValue = -9999.f ;
> >                  APCP_03:init_time = "20110602_000000" ;
> >                  APCP_03:init_time_ut = 1306972800. ;
> >                  APCP_03:valid_time = "20110602_030000" ;
> >                  APCP_03:valid_time_ut = 1306983600. ;
> >                  APCP_03:accum_time = "030000" ;
> >                  APCP_03:accum_time_sec = 10800.f ;
> >
> > // global attributes:
> >                  :FileOrigins = "File
../../../vpt/geeta/02june2011.nc generated 20140123_163031 on host
ncmr0102 by the Rscript trmm2nc.R" ;
> >                  :MET_version = "V3.0.1" ;
> >                  :Projection = "LatLon" ;
> >                  :lat_ll = "9 degrees_north" ;
> >                  :lon_ll = "74 degrees_east" ;
> >                  :delta_lat = "0.25 degrees" ;
> >                  :delta_lon = "0.25 degrees" ;
> >                  :Nlat = "53 grid_points" ;
> >                  :Nlon = "53 grid_points" ;
> > }
> >
> >
> > Anyway, I am sending you my data once again. The directory is
geeta_data-25feb2014.
> > Do you suspect the location of NetCDF?
> >
> > I shall look forward to hearing from you.
> >
> > thanks
> > geeta
> >
> >> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >> From: met_help at ucar.edu
> >> To: geeta124 at hotmail.com
> >> Date: Tue, 25 Feb 2014 11:26:10 -0700
> >>
> >> Geeta,
> >>
> >> Sorry, I was out of the office yesterday.  Looking back through
this ticket, I see that you're still getting the following error:
> >>      NetCDF: Attribute not found
> >>
> >> You said that you updated the NetCDF files to include the
following global attribute:
> >>      MET_version = "V3.0" ;
> >>
> >> If you've added this to both the forecast and observation NetCDF
files, and you're still getting this error, I'll need to see your data
files to debug more.  Please post them to our anonymous ftp site:
> >>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>
> >> I see 2 other questions in your emails:
> >>
> >> (1) How can you control the optional "upscaling" or "smoothing"
done by grid_stat?
> >>       In METv3.0, that is controlled by configuration file
options that begin with "interp_".  For example, try the following:
> >>          interp_method[] = [ "UW_MEAN" ];
> >>          interp_width[]  = [ 1, 3, 5 ];
> >>          interp_flag     = 3;
> >>
> >>       For each output line you were getting before, you should
now get 2 more.  Since interp_flag is set to 3, grid_stat will smooth
both the forecast and observation fields.  For interp_width = 3,
> >> it'll smooth each data point by computing the average of the 9
points in a 3x3 box around each grid point.  For interp_width = 5,
it'll smooth each data point by computing the average of the 25 points
> >> in a 5x5 box around each grid point.  You can look to see how the
scores change as you do more and more smoothing.
> >>
> >> However, computing the fractions skill score (in the NBRCNT line
type) is a common way of doing "neighborhood" or "fuzzy" verification.
> >>
> >> (2) You also asked about plotting the station location from
point_stat.  You have a couple of options.  The "plot_point_obs"
utility reads the NetCDF output files from the pb2nc or ascii2nc tools
and
> >> plots a red dot for each observation lat/lon it finds in the
data.  It is intended to just give you a quick look at the location of
the observations to make sure that they exist where you expect.  It
is not a general-purpose or very flexible plotting tool.
Alternatively, you could look at the "MPR" output line type from
point_stat.  This contains the individual matched pair values that
went into
> >> the computation of statistics.  The MPR line type includes
columns named "OBS_LAT" and "OBS_LON" giving the point observation
location information.  You could read the lat/lon information from the
MPR
> >> line type and use whatever plotting tool you prefer to plot the
observation locations.
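> >>
> >> As a quick sketch (assuming pandas and matplotlib are available;
> >> the MPR file name below is just illustrative), that could look
> >> like:
> >>
> >>    import pandas as pd
> >>    import matplotlib.pyplot as plt
> >>
> >>    # MPR output is whitespace-delimited with named header columns
> >>    mpr = pd.read_csv("point_stat_mpr.txt", delim_whitespace=True)
> >>    plt.scatter(mpr["OBS_LON"], mpr["OBS_LAT"], s=8, c="red")
> >>    plt.xlabel("longitude")
> >>    plt.ylabel("latitude")
> >>    plt.show()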
> >>
> >> If you do post more data to the ftp site, please write me back
and I'll go grab it.
> >>
> >> Thanks,
> >> John
> >>
> >>
> >> On 02/24/2014 05:59 PM, Geeta Geeta via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>
> >>> Hi John,
> >>> You had discussed upscaling (of both obs and fcst, or either one
of them). The forecast is compared with the observations, which are
averaged to coarser scales.
> >>> How is this averaging defined in the configuration file?
> >>>
> >>> Please let me know regarding the global attributes.
> >>>
> >>> geeta
> >>>
> >>> From: geeta124 at hotmail.com
> >>> To: met_help at ucar.edu
> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>> Date: Sun, 23 Feb 2014 22:24:04 +0530
> >>>
> >>>
> >>>
> >>>
> >>> hi John,
> >>> Can you help with changing/appending the GLOBAL attributes of
the NetCDF file?
> >>>
> >>> Can you provide some more hints?
> >>>
> >>> regards
> >>>
> >>> geeta
> >>>
> >>> From: geeta124 at hotmail.com
> >>> To: met_help at ucar.edu
> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>> Date: Fri, 21 Feb 2014 15:03:34 +0530
> >>>
> >>>
> >>>
> >>>
> >>> thanks John,
> >>> I have made the changes as per your config file.
> >>> But the error persists.
> >>> -bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> >>> GSL_RNG_TYPE=mt19937
> >>> GSL_RNG_SEED=18446744073358673747
> >>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>> Observation File: ../trmm_nc_data/02june2011.nc
> >>> Configuration File: GridStatConfig_APCP_24
> >>> NetCDF: Attribute not found
> >>> -bash-3.2$
> >>>
> >>>
> >>> 2. I have used ncdump to see my file attributes.
> >>> Are you referring to these attributes?
> >>>
> >>> // global attributes:
> >>>                   :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
> >>>                   :MET_version = "V3.0" ;
> >>>                   :MET_tool = "pcp_combine" ;
> >>>
> >>> Following is my Config file.
> >>>
____________________________________________________________________
> >>>
////////////////////////////////////////////////////////////////////////////////
> >>> //
> >>> // Default grid_stat configuration file
> >>> //
> >>>
////////////////////////////////////////////////////////////////////////////////
> >>> //
> >>> // Specify a name to designate the model being verified.  This
name will be
> >>> // written to the second column of the ASCII output generated.
> >>> //
> >>> model = "WRF";
> >>> //
> >>> // Specify a comma-separated list of fields to be verified.  The
forecast and
> >>> // observation fields may be specified separately.  If the
obs_field parameter
> >>> // is left blank, it will default to the contents of fcst_field.
> >>> //
> >>> // Each field is specified as a GRIB code or abbreviation
followed by an
> >>> // accumulation or vertical level indicator for GRIB files or as
a variable name
> >>> // followed by a list of dimensions for NetCDF files output from
p_interp or MET.
> >>> //
> >>> // Specifying verification fields for GRIB files:
> >>> //    GC/ANNN for accumulation interval NNN
> >>> //    GC/ZNNN for vertical level NNN
> >>> //    GC/PNNN for pressure level NNN in hPa
> >>> //    GC/PNNN-NNN for a range of pressure levels in hPa
> >>> //    GC/LNNN for a generic level type
> >>> //    GC/RNNN for a specific GRIB record number
> >>> //    Where GC is the number of or abbreviation for the grib
code
> >>> //    to be verified.
> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >>> //
> >>> // Specifying verification fields for NetCDF files:
> >>> //    var_name(i,...,j,*,*) for a single field
> >>> //    Where var_name is the name of the NetCDF variable,
> >>> //    and i,...,j specifies fixed dimension values,
> >>> //    and *,* specifies the two dimensions for the gridded
field.
> >>> //
> >>> //    NOTE: To verify winds as vectors rather than scalars,
> >>> //          specify UGRD (or 33) followed by VGRD (or 34) with
the
> >>> //          same level values.
> >>> //
> >>> //    NOTE: To process a probability field, add "/PROB", such as
"POP/Z0/PROB".
> >>> //
> >>> // e.g. fcst_field[] = [ "61/A3", "APCP/A24", "RH/L10" ]; for
GRIB input
> >>> // e.g. fcst_field[] = [ "RAINC(0,*,*)", "QVAPOR(0,5,*,*)" ];
for NetCDF input
> >>> //
> >>> fcst_field[] = [ "APCP_24(*,*)" ];
> >>> obs_field[]  = [ "APCP_03(*,*)" ];
> >>> //
> >>> // Specify a comma-separated list of groups of thresholds to be
applied to the
> >>> // fields listed above.  Thresholds for the forecast and
observation fields
> >>> // may be specified separately.  If the obs_thresh parameter is
left blank,
> >>> // it will default to the content of fcst_thresh.
> >>> //
> >>> // At least one threshold must be provided for each field listed
above.  The
> >>> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as must
> >>> // lengths of the "obs_field" and "obs_thresh" arrays.  To apply
multiple
> >>> // thresholds to a field, separate the threshold values with a
space.
> >>> //
> >>> // Each threshold must be preceded by a two letter indicator for
the type of
> >>> // thresholding to be performed:
> >>> //    'lt' for less than     'le' for less than or equal to
> >>> //    'eq' for equal to      'ne' for not equal to
> >>> //    'gt' for greater than  'ge' for greater than or equal to
> >>> //
> >>> // NOTE: Thresholds for probabilities must begin with 0.0, end
with 1.0,
> >>> //       and be preceded by "ge".
> >>> //
> >>> // e.g. fcst_thresh[] = [ "gt0.0 ge5.0", "gt0.0", "lt80.0
ge80.0" ];
> >>> //
> >>> fcst_thresh[] = [ "gt0.0 gt5.0 gt10.0" ];
> >>> obs_thresh[]  = [];
> >>> //
> >>> // Specify a comma-separated list of thresholds to be used when
computing
> >>> // VL1L2 partial sums for winds.  The thresholds are applied to
the wind speed
> >>> // values derived from each U/V pair.  Only those U/V pairs
which meet the wind
> >>> // speed threshold criteria are retained.  If the
obs_wind_thresh parameter is
> >>> // left blank, it will default to the contents of
fcst_wind_thresh.
> >>> //
> >>> // To apply multiple wind speed thresholds, separate the
threshold values with a
> >>> // space.  Use "NA" to indicate that no wind speed threshold
should be applied.
> >>> //
> >>> // Each threshold must be preceded by a two letter indicator for
the type of
> >>> // thresholding to be performed:
> >>> //    'lt' for less than     'le' for less than or equal to
> >>> //    'eq' for equal to      'ne' for not equal to
> >>> //    'gt' for greater than  'ge' for greater than or equal to
> >>> //    'NA' for no threshold
> >>> //
> >>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
> >>> //
> >>> fcst_wind_thresh[] = [ "NA" ];
> >>> obs_wind_thresh[]  = [];
> >>> //
> >>> // Specify a comma-separated list of grids to be used in masking
the data over
> >>> // which to perform scoring.  An empty list indicates that no
masking grid
> >>> // should be performed.  The standard NCEP grids are named
"GNNN" where NNN
> >>> // indicates the three digit grid number.  Enter "FULL" to score
over the
> >>> // entire domain.
> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
> >>> //
> >>> // e.g. mask_grid[] = [ "FULL" ];
> >>> //
> >>> mask_grid[] = [ "FULL" ];
> >>> //
> >>> // Specify a comma-separated list of masking regions to be
applied.
> >>> // An empty list indicates that no additional masks should be
used.
> >>> // The masking regions may be defined in one of 4 ways:
> >>> //
> >>> // (1) An ASCII file containing a lat/lon polygon.
> >>> //     Latitude in degrees north and longitude in degrees east.
> >>> //     By default, the first and last polygon points are
connected.
> >>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
> >>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
> >>> //
> >>> // (2) The NetCDF output of the gen_poly_mask tool.
> >>> //
> >>> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
> >>> //     to be used, and optionally, a threshold to be applied to
the field.
> >>> //     e.g. "sample.nc var_name gt0.00"
> >>> //
> >>> // (4) A GRIB data file, followed by a description of the field
> >>> //     to be used, and optionally, a threshold to be applied to
the field.
> >>> //     e.g. "sample.grb APCP/A3 gt0.00"
> >>> //
> >>> // Any NetCDF or GRIB file used must have the same grid
dimensions as the
> >>> // data being verified.
> >>> //
> >>> // MET_BASE may be used in the path for the files above.
> >>> //
> >>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
> >>> //                      "poly_mask.ncf",
> >>> //                      "sample.nc APCP",
> >>> //                      "sample.grb HGT/Z0 gt100.0" ];
> >>> //
> >>> mask_poly[] = [];
> >>> //
> >>> // Specify a comma-separated list of values for alpha to be used
when computing
> >>> // confidence intervals.  Values of alpha must be between 0 and
1.
> >>> //
> >>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
> >>> //
> >>> ci_alpha[] = [ 0.10, 0.05 ];
> >>> //
> >>> // Specify the method to be used for computing bootstrap
confidence intervals.
> >>> // The value for this is interpreted as follows:
> >>> //    (0) Use the BCa interval method (computationally
intensive)
> >>> //    (1) Use the percentile interval method
> >>> //
> >>> boot_interval = 1;
> >>> //
> >>> // Specify a proportion between 0 and 1 to define the replicate
sample size
> >>> // to be used when computing percentile intervals.  The
replicate sample
> >>> // size is set to boot_rep_prop * n, where n is the number of
raw data points.
> >>> //
> >>> // e.g boot_rep_prop = 0.80;
> >>> //
> >>> boot_rep_prop = 1.0;
> >>> //
> >>> // Specify the number of times each set of matched pair data
should be
> >>> // resampled when computing bootstrap confidence intervals.  A
value of
> >>> // zero disables the computation of bootstrap confidence
intervals.
> >>> //
> >>> // e.g. n_boot_rep = 1000;
> >>> //
> >>> n_boot_rep = 0;
> >>> //
> >>> // Specify the name of the random number generator to be used.
See the MET
> >>> // Users Guide for a list of possible random number generators.
> >>> //
> >>> boot_rng = "mt19937";
> >>> //
> >>> // Specify the seed value to be used when computing bootstrap
confidence
> >>> // intervals.  If left unspecified, the seed will change for
each run and
> >>> // the computed bootstrap confidence intervals will not be
reproducible.
> >>> //
> >>> boot_seed = "";
> >>> //
> >>> // Specify a comma-separated list of interpolation method(s) to
be used for
> >>> // smoothing the data fields prior to comparing them.  The value
at each grid
> >>> // point is replaced by the measure computed over the
neighborhood defined
> >>> // around the grid point.  String values are interpreted as
follows:
> >>> //    MIN     = Minimum in the neighborhood
> >>> //    MAX     = Maximum in the neighborhood
> >>> //    MEDIAN  = Median in the neighborhood
> >>> //    UW_MEAN = Unweighted mean in the neighborhood
> >>> //
> >>> //    NOTE: The distance-weighted mean (DW_MEAN) is not an
option here since
> >>> //          it will have no effect on a gridded field.
> >>> //
> >>> //    NOTE: The least-squares fit (LS_FIT) is not an option here
since
> >>> //          it reduces to an unweighted mean on a grid.
> >>> //
> >>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
> >>> //
> >>> interp_method[] = [ "UW_MEAN" ];
> >>> //
> >>> // Specify a comma-separated list of box widths to be used by
the interpolation
> >>> // techniques listed above.  All values must be odd.  A value of
1 indicates
> >>> // that no smoothing should be performed.  For values greater
than 1, the n*n
> >>> // grid points around each point will be used to smooth the data
fields.
> >>> //
> >>> // e.g. interp_width = [ 1, 3, 5 ];
> >>> //
> >>> interp_width[] = [ 1 ];
> >>> //
> >>> // The interp_flag controls how the smoothing defined above
should be applied:
> >>> // (1) Smooth only the forecast field
> >>> // (2) Smooth only the observation field
> >>> // (3) Smooth both the forecast and observation fields
> >>> //
> >>> interp_flag = 1;
> >>> //
> >>> // When smoothing, compute a ratio of the number of valid data
points to
> >>> // the total number of points in the neighborhood.  If that
ratio is less
> >>> // than this threshold, do not compute a smoothed forecast
value.  This
> >>> // threshold must be between 0 and 1.  Setting this threshold to
1 will
> >>> // require that each observation be surrounded by n*n valid
forecast
> >>> // points.
> >>> //
> >>> // e.g. interp_thresh = 1.0;
> >>> //
> >>> interp_thresh = 1.0;
> >>> //
> >>> // Specify a comma-separated list of box widths to be used to
define the
> >>> // neighborhood size for the neighborhood verification methods.
All values
> >>> // must be odd.  For values greater than 1, the n*n grid points
around each
> >>> // point will be used to define the neighborhood.
> >>> //
> >>> // e.g. nbr_width = [ 3, 5 ];
> >>> //
> >>> nbr_width[] = [ 3, 5 ];
> >>> //
> >>> // When applying the neighborhood verification methods, compute
a ratio
> >>> // of the number of valid data points to the total number of
points in
> >>> // the neighborhood.  If that ratio is less than this threshold,
do not
> >>> // include it in the computations.  This threshold must be
between 0
> >>> // and 1.  Setting this threshold to 1 will require that each
point be
> >>> // surrounded by n*n valid forecast points.
> >>> //
> >>> // e.g. nbr_thresh = 1.0;
> >>> //
> >>> nbr_thresh = 1.0;
> >>> //
> >>> // When applying the neighborhood verification methods, apply a
threshold
> >>> // to the fractional coverage values to define contingency
tables from
> >>> // which to compute statistics.
> >>> //
> >>> // e.g. cov_thresh[] = [ "ge0.25", "ge0.50" ];
> >>> //
> >>> cov_thresh[] = [ "ge0.5" ];
> >>> //
> >>> // Specify flags to indicate the type of data to be output:
> >>> //
> >>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
> >>> //           Total (TOTAL),
> >>> //           Forecast Rate (F_RATE),
> >>> //           Hit Rate (H_RATE),
> >>> //           Observation Rate (O_RATE)
> >>> //
> >>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Forecast Yes and Observation Yes Count (FY_OY),
> >>> //           Forecast Yes and Observation No Count (FY_ON),
> >>> //           Forecast No and Observation Yes Count (FN_OY),
> >>> //           Forecast No and Observation No Count (FN_ON)
> >>> //
> >>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Base Rate (BASER),
> >>> //           Forecast Mean (FMEAN),
> >>> //           Accuracy (ACC),
> >>> //           Frequency Bias (FBIAS),
> >>> //           Probability of Detecting Yes (PODY),
> >>> //           Probability of Detecting No (PODN),
> >>> //           Probability of False Detection (POFD),
> >>> //           False Alarm Ratio (FAR),
> >>> //           Critical Success Index (CSI),
> >>> //           Gilbert Skill Score (GSS),
> >>> //           Hanssen and Kuipers Discriminant (HK),
> >>> //           Heidke Skill Score (HSS),
> >>> //           Odds Ratio (ODDS),
> >>> //           NOTE: All statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Number of Categories (N_CAT),
> >>> //           Contingency Table Count columns repeated
N_CAT*N_CAT times
> >>> //
> >>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Number of Categories (N_CAT),
> >>> //           Accuracy (ACC),
> >>> //           Hanssen and Kuipers Discriminant (HK),
> >>> //           Heidke Skill Score (HSS),
> >>> //           Gerrity Score (GER),
> >>> //           NOTE: All statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
> >>> //           Total (TOTAL),
> >>> //           Forecast Mean (FBAR),
> >>> //           Forecast Standard Deviation (FSTDEV),
> >>> //           Observation Mean (OBAR),
> >>> //           Observation Standard Deviation (OSTDEV),
> >>> //           Pearson's Correlation Coefficient (PR_CORR),
> >>> //           Spearman's Rank Correlation Coefficient (SP_CORR),
> >>> //           Kendall Tau Rank Correlation Coefficient (KT_CORR),
> >>> //           Number of ranks compared (RANKS),
> >>> //           Number of tied ranks in the forecast field
(FRANK_TIES),
> >>> //           Number of tied ranks in the observation field
(ORANK_TIES),
> >>> //           Mean Error (ME),
> >>> //           Standard Deviation of the Error (ESTDEV),
> >>> //           Multiplicative Bias (MBIAS = FBAR/OBAR),
> >>> //           Mean Absolute Error (MAE),
> >>> //           Mean Squared Error (MSE),
> >>> //           Bias-Corrected Mean Squared Error (BCMSE),
> >>> //           Root Mean Squared Error (RMSE),
> >>> //           Percentiles of the Error (E10, E25, E50, E75, E90)
> >>> //           NOTE: Most statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
> >>> //           Total (TOTAL),
> >>> //           Forecast Mean (FBAR),
> >>> //              = mean(f)
> >>> //           Observation Mean (OBAR),
> >>> //              = mean(o)
> >>> //           Forecast*Observation Product Mean (FOBAR),
> >>> //              = mean(f*o)
> >>> //           Forecast Squared Mean (FFBAR),
> >>> //              = mean(f^2)
> >>> //           Observation Squared Mean (OOBAR)
> >>> //              = mean(o^2)
> >>> //
> >>> //    (8) STAT and VL1L2 Text Files, Vector Partial Sums:
> >>> //           Total (TOTAL),
> >>> //           U-Forecast Mean (UFBAR),
> >>> //              = mean(uf)
> >>> //           V-Forecast Mean (VFBAR),
> >>> //              = mean(vf)
> >>> //           U-Observation Mean (UOBAR),
> >>> //              = mean(uo)
> >>> //           V-Observation Mean (VOBAR),
> >>> //              = mean(vo)
> >>> //           U-Product Plus V-Product (UVFOBAR),
> >>> //              = mean(uf*uo+vf*vo)
> >>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
> >>> //              = mean(uf^2+vf^2)
> >>> //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
> >>> //              = mean(uo^2+vo^2)
> >>> //
> >>> //    (9) STAT and PCT Text Files, Nx2 Probability Contingency
Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Probability Threshold Value (THRESH_i),
> >>> //           Row Observation Yes Count (OY_i),
> >>> //           Row Observation No Count (ON_i),
> >>> //           NOTE: Previous 3 columns repeated for each row in
the table
> >>> //           Last Probability Threshold Value (THRESH_n)
> >>> //
> >>> //   (10) STAT and PSTD Text Files, Nx2 Probability Contingency
Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Base Rate (BASER) with confidence interval limits,
> >>> //           Reliability (RELIABILITY),
> >>> //           Resolution (RESOLUTION),
> >>> //           Uncertainty (UNCERTAINTY),
> >>> //           Area Under the ROC Curve (ROC_AUC),
> >>> //           Brier Score (BRIER) with confidence interval
limits,
> >>> //           Probability Threshold Value (THRESH_i)
> >>> //           NOTE: Previous column repeated for each probability
threshold.
> >>> //
> >>> //   (11) STAT and PJC Text Files, Joint/Continuous Statistics
of
> >>> //                                 Probabilistic Variables:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Probability Threshold Value (THRESH_i),
> >>> //           Observation Yes Count Divided by Total (OY_TP_i),
> >>> //           Observation No Count Divided by Total (ON_TP_i),
> >>> //           Calibration (CALIBRATION_i),
> >>> //           Refinement (REFINEMENT_i),
> >>> //           Likelihood (LIKELIHOOD_i),
> >>> //           Base Rate (BASER_i),
> >>> //           NOTE: Previous 7 columns repeated for each row in
the table
> >>> //           Last Probability Threshold Value (THRESH_n)
> >>> //
> >>> //   (12) STAT and PRC Text Files, ROC Curve Points for
> >>> //                                 Probabilistic Variables:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Probability Threshold Value (THRESH_i),
> >>> //           Probability of Detecting Yes (PODY_i),
> >>> //           Probability of False Detection (POFD_i),
> >>> //           NOTE: Previous 3 columns repeated for each row in
the table
> >>> //           Last Probability Threshold Value (THRESH_n)
> >>> //
> >>> //   (13) STAT and NBRCTC Text Files, Neighborhood Methods
Contingency Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Forecast Yes and Observation Yes Count (FY_OY),
> >>> //           Forecast Yes and Observation No Count (FY_ON),
> >>> //           Forecast No and Observation Yes Count (FN_OY),
> >>> //           Forecast No and Observation No Count (FN_ON),
> >>> //           Fractional Threshold Value (FRAC_T)
> >>> //
> >>> //   (14) STAT and NBRCTS Text Files, Neighborhood Methods
Contingency Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Base Rate (BASER),
> >>> //           Forecast Mean (FMEAN),
> >>> //           Accuracy (ACC),
> >>> //           Bias (BIAS),
> >>> //           Probability of Detecting Yes (PODY),
> >>> //           Probability of Detecting No (PODN),
> >>> //           Probability of False Detection (POFD),
> >>> //           False Alarm Ratio (FAR),
> >>> //           Critical Success Index (CSI),
> >>> //           Gilbert Skill Score (GSS),
> >>> //           Hanssen and Kuipers Discriminant (HK),
> >>> //           Heidke Skill Score (HSS),
> >>> //           Odds Ratio (ODDS),
> >>> //           NOTE: Most statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //   (15) STAT and NBRCNT Text Files, Neighborhood Methods
Continuous Scores:
> >>> //           Total (TOTAL),
> >>> //           Fractions Brier Score (FBS),
> >>> //           Fractions Skill Score (FSS)
> >>> //
> >>> //   (16) NetCDF File containing difference fields for each grib
> >>> //        code/mask combination.  A non-zero value indicates
that
> >>> //        this NetCDF file should be produced.  A value of 0
> >>> //        indicates that it should not be produced.
> >>> //
> >>> // Values for flags (1) through (15) are interpreted as follows:
> >>> //    (0) Do not generate output of this type
> >>> //    (1) Write output to a STAT file
> >>> //    (2) Write output to a STAT file and a text file
> >>> //
> >>> output_flag[] = [ 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 1
];
> >>> //
> >>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
Correlation
> >>> // Coefficients should be computed.  Computing them over large
datasets is
> >>> // computationally intensive and slows down the runtime
execution significantly.
> >>> //    (0) Do not compute these correlation coefficients
> >>> //    (1) Compute these correlation coefficients
> >>> //
> >>> rank_corr_flag = 0;
> >>> //
> >>> // Specify the GRIB Table 2 parameter table version number to be
used
> >>> // for interpreting GRIB codes.
> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >>> //
> >>> grib_ptv = 2;
> >>> //
> >>> // Directory where temporary files should be written.
> >>> //
> >>> tmp_dir = "/tmp";
> >>> //
> >>> // Prefix to be used for the output file names.
> >>> //
> >>> output_prefix = "APCP_24";
> >>> //
> >>> // Indicate a version number for the contents of this
configuration file.
> >>> // The value should generally not be modified.
> >>> //
> >>> version = "V3.0";
> >>>
> >>>
> >>> geeta
> >>>
> >>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>>> From: met_help at ucar.edu
> >>>> To: geeta124 at hotmail.com
> >>>> Date: Thu, 20 Feb 2014 10:21:20 -0700
> >>>>
> >>>> Geeta,
> >>>>
> >>>> I see that you're using METv3.0.  The current version is
METv4.1, and it'd be good to switch to that version when possible.
There have been major changes to the MET configuration file format
since
> >>>> METv3.0, so be sure to use the default config files for
METv4.1.
> >>>>
> >>>> I ran METv3.0 grid_stat on the data files you sent and
reproduced the error message you saw:
> >>>>       ***WARNING***: process_scores() -> 61(*,*) not found in
file: 2011060100_WRFPRS_day1_003Z.nc
> >>>>
> >>>> Since the input files are both NetCDF files, you need to
specify the name of the NetCDF variable that should be used.  So I
modified your config file:
> >>>>       FROM: fcst_field[] = [ "61/A24" ];
> >>>>       TO:   fcst_field[] = [ "APCP_24(*,*)" ];
> >>>>
> >>>> When I reran with this change, I got this error:
> >>>>       NetCDF: Attribute not found
> >>>>
> >>>> After some digging, I found the problem to be the MET_version
global attribute in 02june2011.nc:
> >>>>                    :MET_version = "V3.0.1" ;
> >>>>
> >>>> I switched that to be consistent with the version of MET you're
running:
> >>>>                    :MET_version = "V3.0" ;
> >>>>
> >>>> And then I got this error:
> >>>> ERROR: parse_poly_mask() -> the dimensions of the masking
region (185, 129) must match the dimensions of the data (53, 53).
> >>>>
> >>>> So I modified the config file to change the masking region
settings:
> >>>>       FROM: mask_grid[] = [ "DTC165", "DTC166" ];
> >>>>             mask_poly[] = [
"MET_BASE/out/gen_poly_mask/CONUS_poly.nc",
> >>>>                             "MET_BASE/data/poly/LMV.poly" ];
> >>>>
> >>>>       TO:   mask_grid[] = [ "FULL" ];
> >>>>             mask_poly[] = [];
> >>>>
> >>>> And then it ran fine.
> >>>>
> >>>> To summarize...
> >>>>     (1) To run METv3.0 grid_stat, please set the "MET_version"
global attribute in all the gridded NetCDF files you're using to
METv3.0.
> >>>>     (2) Please use the attached, updated version of
GridStatConfig_APCP_24.
> >>>>     (3) Consider updating to using METv4.1 instead.
> >>>>
> >>>> Thanks,
> >>>> John
> >>>>
> >>>> On 02/19/2014 11:33 PM, Geeta Geeta via RT wrote:
> >>>>>
> >>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427
>
> >>>>>
> >>>>> Hi John,
> >>>>> I am bothering you with a few more questions. Hope you'll bear
with me.
> >>>>> So what I gathered from the discussion with you is that we
essentially have 3 types of neighbourhood approaches (space, time and
intensity), and these changes can be made using the config file.
> >>>>>
> >>>>> 1. Now I was reading about 3 approaches of FUZZY verification,
which are:  a. Multi-event contingency table (my question is: can we
define a hit as RF between 0.1 and 2.5 in the config file? Normally we
select the threshold as ge0.1 or ge2.5 etc. Is there a provision for
giving a range in the config file?).
> >>>>>
> >>>>> b) Pragmatic approach (I do not know what that is).
> >>>>>
> >>>>> c) Conditional Square root of Ranked probability score (CSRR)
(I do not know what that is either).
> >>>>>
> >>>>> I do not understand these. Can you lead me in the right
direction or provide some hints?
> >>>>>
> >>>>> 2. How can I prepare the QUILT plots (spatial scale vs.
threshold) for a score?
> >>>>> Can the QUILT plot be prepared for any score like HK, HSS, FBS
or FSS?
> >>>>>
> >>>>>
> >>>>> thanks
> >>>>> geeta
> >>>>>
> >>>>> From: geeta124 at hotmail.com
> >>>>> To: met_help at ucar.edu
> >>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>> Date: Thu, 20 Feb 2014 11:30:25 +0530
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> Hi John,
> >>>>> Sorry, I have put my data on your server. My dir name is
geeta124_data.
> >>>>> Kindly check that.
> >>>>>
> >>>>> geeta
> >>>>>
> >>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>> From: met_help at ucar.edu
> >>>>>> To: geeta124 at hotmail.com
> >>>>>> Date: Fri, 14 Feb 2014 09:48:08 -0700
> >>>>>>
> >>>>>> Geeta,
> >>>>>>
> >>>>>> You run copygb to put the forecast and observation fields on
exactly the same grid, meaning the exact same resolution and number of
grid points.
> >>>>>>
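> >>>>>> As a sketch, a copygb command targeting the 53 x 53,
> >>>>>> 0.25-degree lat/lon grid in your files might look like the
> >>>>>> line below.  The -g grid specification here is an assumption
> >>>>>> on my part, so please check it against the copygb
> >>>>>> documentation before relying on it:
> >>>>>>
> >>>>>>    copygb -xg"255 0 53 53 9000 74000 128 22000 87000 250 250 64" \
> >>>>>>           input.grb regridded.grb
> >>>>>>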
> >>>>>> I was trying to make the point that the "interpolation
methods" in the grid_stat config file could be used as a form of
"upscaling".  You are right, there is no *need* to interpolate the
data since
> >>>>>> you've already used copygb to put them on the same grid.  In
grid_stat, the interpolation options provide a way of smoothing, or
upscaling, the data.  For example, suppose you choose an
interpolation
> >>>>>> option of UW_MEAN (for un-weighted mean) and width of 5.  For
each grid point, grid_stat will replace the value at the grid point
with the average of the 25 points in a 5x5 box around that point.
> >>>>>> Doing that for every point in the grid smooths the data and
provides a way of upscaling.
> >>>>>>
> >>>>>> The default interpolation width is 1, meaning that no
smoothing is performed.  However, you could use multiple smoothing
widths and see how your performance changes the more you smooth the
data.
> >>>>>>
> >>>>>> Does that make sense?
> >>>>>>
> >>>>>> Regarding the runtime error you're getting, I see that you're
using input NetCDF files for the forecast and observation fields.  In
the config file, you need to specify the name and dimensions of the
> >>>>>> NetCDF variable to be used.  Assuming the NetCDF variable is
named "APCP_24", it would look something like this:
> >>>>>>
> >>>>>> fcst = {
> >>>>>>        wind_thresh = [ NA ];
> >>>>>>
> >>>>>>        field = [
> >>>>>>           {
> >>>>>>             name       = "APCP_24";
> >>>>>>             level      = [ "(*,*)" ];
> >>>>>>             cat_thresh = [ >0.0, >=5.0 ];
> >>>>>>           }
> >>>>>>        ];
> >>>>>>
> >>>>>> };
> >>>>>>
> >>>>>> If you continue to experience problems, please send me sample
forecast and observation files along with the GridStatConfig file
you're using.  You can post it to our anonymous ftp site following
these
> >>>>>> instructions:
> >>>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>>>>>
> >>>>>> Thanks,
> >>>>>> John
> >>>>>>
> >>>>>> On 02/14/2014 02:17 AM, Geeta Geeta via RT wrote:
> >>>>>>>
> >>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>>>>>
> >>>>>>> Hi John,
> >>>>>>>      I have run grid-stat. Following is the error.
> >>>>>>>
> >>>>>>> bash-3.2$ ../bin/grid_stat ./fcst_nc/20110601*day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> >>>>>>> GSL_RNG_TYPE=mt19937
> >>>>>>> GSL_RNG_SEED=18446744073321512274
> >>>>>>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>>>>>> Observation File: ../trmm_nc_data/02june2011.nc
> >>>>>>> Configuration File: GridStatConfig_APCP_24
> >>>>>>> ***WARNING***: process_scores() -> 61(*,*) not found in
file: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>>>>>>
> >>>>>>>
--------------------------------------------------------------------------------
> >>>>>>>
> >>>>>>>
> >>>>>>> Please suggest.
> >>>>>>>
> >>>>>>> geeta
> >>>>>>>
> >>>>>>> From: geeta124 at hotmail.com
> >>>>>>> To: met_help at ucar.edu
> >>>>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>>> Date: Fri, 14 Feb 2014 14:08:12 +0530
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> Thanks a lot John for your inputs and clarifications.
> >>>>>>>
> >>>>>>> I still have the following doubts.
> >>>>>>>
> >>>>>>> 1. When I run copygb, what it does is make the observation
and model FCST uniform (I mean the same GRID and RESOLUTION). Only
these two parameters are important.
> >>>>>>> Are you calling that upscaling? So this process is not a
part of Grid-Stat; essentially copygb is doing the upscaling part.
> >>>>>>>
> >>>>>>> 2. There are interpolation methods in the grid-stat config
file (analogous to those in point-stat; in point-stat there are 3-4,
like nearest neighbour, mean, distance weighted, etc.).
> >>>>>>>
> >>>>>>> Why should one have the interpolation ONCE again, i.e. after
copygb, when the grid fields are already alike, i.e. one grid point
has 2 values, one OBS and one FCST? Is that correct?
> >>>>>>>
> >>>>>>> geeta
> >>>>>>>
> >>>>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>>>> From: met_help at ucar.edu
> >>>>>>>> To: geeta124 at hotmail.com
> >>>>>>>> Date: Thu, 13 Feb 2014 10:33:47 -0700
> >>>>>>>>
> >>>>>>>> [...]

------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Thu Feb 27 01:44:02 2014

John,
Actually the observed data (TRMM) was made in the NetCDF format on
some other system.
I want to know how I can convert the TRMM ASCII or binary files into
NetCDF.


Can you please suggest?

geeta
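
[Note: the FileOrigins attribute in the observation file quoted later
in this thread shows it was generated by the Rscript trmm2nc.R, a
script distributed on the MET users website for converting TRMM data
into MET-compatible NetCDF.  A sketch of its use (the input file name
is hypothetical; see the script's header for its exact arguments):

   Rscript trmm2nc.R 3B42.110602.7.bin 02june2011.nc
]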

From: geeta124 at hotmail.com
To: met_help at ucar.edu
Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy verf.
Date: Thu, 27 Feb 2014 14:09:32 +0530




Thanks, John,
Thanks for pointing out the error.
How can I do that?
These are the o/p I get when I do ncdump.



geeta

> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> From: met_help at ucar.edu
> To: geeta124 at hotmail.com
> Date: Wed, 26 Feb 2014 11:13:55 -0700
>
> Geeta,
>
> The problem is in the observation files:
>
> *************************************
> O/P of OBSERVATION FILE (NETCDF format) *
> *************************************
> :MET_version = "V3.0.1" ;
>
> If you change the "V3.0.1" to "V3.0", then METv3.0 grid_stat will be
able to process it fine.
>
> Also, you should switch the timing variable attributes from floats
to integers:
> Change from:
>   APCP_03:init_time_ut = 1306972800. ;
>   APCP_03:valid_time_ut = 1306983600. ;
>   APCP_03:accum_time_sec = 10800.f ;
> Change to:
>   APCP_03:init_time_ut = 1306972800 ;
>   APCP_03:valid_time_ut = 1306983600 ;
>   APCP_03:accum_time_sec = 10800 ;
>
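> One way to make these edits in place, assuming the NCO tool ncatted
> is installed, is a sketch like this:
>
>    ncatted -O -a MET_version,global,o,c,"V3.0" 02june2011.nc
>    ncatted -O -a init_time_ut,APCP_03,o,l,1306972800 02june2011.nc
>    ncatted -O -a valid_time_ut,APCP_03,o,l,1306983600 02june2011.nc
>    ncatted -O -a accum_time_sec,APCP_03,o,l,10800 02june2011.nc
>
> Here "o" overwrites the attribute, "c" writes it as character data,
> and "l" writes it as an integer.  Re-check the result afterwards
> with "ncdump -h 02june2011.nc".
>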
> When you switch to METv4.1, it'll complain if those aren't integers.
>
> Hope that helps.
>
> Thanks,
> John
>
>
> On 02/25/2014 11:26 PM, Geeta Geeta via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >
> > thanks John,
> > I have NOT done anything with the NC files as yet. I was asking
you how to go about changing the attribute.
> >
> > I am posting the o/p of one of the forecast files in NetCDF format
using ncdump. It shows MET_version = "V3.0", which is what you want.
> > bash-3.2$ ncdump -h test.nc
> > netcdf test {
> > dimensions:
> >          lat = 53 ;
> >          lon = 53 ;
> > variables:
> >          float lat(lat, lon) ;
> >                  lat:long_name = "latitude" ;
> >                  lat:units = "degrees_north" ;
> >                  lat:standard_name = "latitude" ;
> >          float lon(lat, lon) ;
> >                  lon:long_name = "longitude" ;
> >                  lon:units = "degrees_east" ;
> >                  lon:standard_name = "longitude" ;
> >          float APCP_24(lat, lon) ;
> >                  APCP_24:name = "APCP" ;
> >                  APCP_24:long_name = "Total precipitation" ;
> >                  APCP_24:level = "A24" ;
> >                  APCP_24:units = "kg/m^2" ;
> >                  APCP_24:grib_code = 61 ;
> >                  APCP_24:_FillValue = -9999.f ;
> >                  APCP_24:init_time = "20110601_000000" ;
> >                  APCP_24:init_time_ut = 1306886400 ;
> >                  APCP_24:valid_time = "20110602_030000" ;
> >                  APCP_24:valid_time_ut = 1306983600 ;
> >                  APCP_24:accum_time = "240000" ;
> >                  APCP_24:accum_time_sec = 86400 ;
> >
> > // global attributes:
> >                  :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
> >                  :MET_version = "V3.0" ;
> >                  :MET_tool = "pcp_combine" ;
> >                  :RunCommand = "Subtraction:
2011060100_WRFPRS_d01.027 with accumulation of 270000 minus
2011060100_WRFPRS_d01.003 with accumulation of 030000." ;
> >                  :Projection = "LatLon" ;
> >                  :lat_ll = "9.000000 degrees_north" ;
> >                  :lon_ll = "74.000000 degrees_east" ;
> >                  :delta_lat = "0.250000 degrees" ;
> >                  :delta_lon = "0.250000 degrees" ;
> >                  :Nlat = "53 grid_points" ;
> >                  :Nlon = "53 grid_points" ;
> > }
> > bash-3.2$
> > *************************************
> > O/P of OBSERVATION FILE (NETCDF format) *
> > *************************************
> > bash-3.2$ ncdump -h ../trmm_nc_data/test.nc
> > netcdf test {
> > dimensions:
> >          lon = 53 ;
> >          lat = 53 ;
> > variables:
> >          double lon(lon) ;
> >                  lon:units = "degrees_east" ;
> >          double lat(lat) ;
> >                  lat:units = "degrees_north" ;
> >          float APCP_03(lat, lon) ;
> >                  APCP_03:units = "kg/m^2" ;
> >                  APCP_03:missing_value = -9999.f ;
> >                  APCP_03:long_name = "Total precipitation" ;
> >                  APCP_03:name = "APCP" ;
> >                  APCP_03:level = "A3" ;
> >                  APCP_03:grib_code = 61.f ;
> >                  APCP_03:_FillValue = -9999.f ;
> >                  APCP_03:init_time = "20110602_000000" ;
> >                  APCP_03:init_time_ut = 1306972800. ;
> >                  APCP_03:valid_time = "20110602_030000" ;
> >                  APCP_03:valid_time_ut = 1306983600. ;
> >                  APCP_03:accum_time = "030000" ;
> >                  APCP_03:accum_time_sec = 10800.f ;
> >
> > // global attributes:
> >                  :FileOrigins = "File
../../../vpt/geeta/02june2011.nc generated 20140123_163031 on host
ncmr0102 by the Rscript trmm2nc.R" ;
> >                  :MET_version = "V3.0.1" ;
> >                  :Projection = "LatLon" ;
> >                  :lat_ll = "9 degrees_north" ;
> >                  :lon_ll = "74 degrees_east" ;
> >                  :delta_lat = "0.25 degrees" ;
> >                  :delta_lon = "0.25 degrees" ;
> >                  :Nlat = "53 grid_points" ;
> >                  :Nlon = "53 grid_points" ;
> > }
> >
> >
> > Anyway, I am sending you my data once again; the directory is
geeta_data-25feb2014.
> > Do you suspect the location of the NetCDF files?
> >
> > shall be looking forward to hearing from you.
> >
> > thanks
> > geeta
> >
> >> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >> From: met_help at ucar.edu
> >> To: geeta124 at hotmail.com
> >> Date: Tue, 25 Feb 2014 11:26:10 -0700
> >>
> >> Geeta,
> >>
> >> Sorry, I was out of the office yesterday.  Looking back through
this ticket, I see that you're still getting the following error:
> >>      NetCDF: Attribute not found
> >>
> >> You said that you updated the NetCDF files to include the
following global attribute:
> >>      MET_version = "V3.0" ;
> >>
> >> If you've added this to both the forecast and observation NetCDF
files, and you're still getting this error, I'll need to see your data
files to debug more.  Please post them to our anonymous ftp site:
> >>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>
> >> I see 2 other questions in your emails:
> >>
> >> (1) How can you control the optional "upscaling" or "smoothing"
done by grid_stat?
> >>       In METv3.0, that is controlled by configuration file
options that begin with "interp_".  For example, try the following:
> >>          interp_method[] = [ "UW_MEAN" ];
> >>          interp_width[]  = [ 1, 3, 5 ];
> >>          interp_flag     = 3;
> >>
> >>       For each output line you were getting before, you should
now get 2 more.  Since interp_flag is set to 3, grid_stat will smooth
both the forecast and observation fields.  For interp_width = 3,
> >> it'll smooth each data point by computing the average of the 9
points in a 3x3 box around each grid point.  For interp_width = 5,
it'll smooth each data point by computing the average of the 25 points
> >> in a 5x5 box around each grid point.  You can look to see how the
scores change as you do more and more smoothing.
> >>
> >> However computing the fractions skill score (in the NBRCNT line
type) is a common way of doing "neighborhood" or "fuzzy" verification.
> >>
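> >> In the METv3.0 config file quoted earlier in this thread, that is
> >> controlled by settings along these lines (a sketch; adjust the
> >> widths to suit your data):
> >>
> >>    nbr_width[]  = [ 3, 5, 9, 11 ];
> >>    cov_thresh[] = [ "ge0.5" ];
> >>
> >> together with a 2 in the 15th position of output_flag[], so that
> >> the NBRCNT lines are written to a text file.
> >>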
> >> (2) You also asked about plotting the station location from
point_stat.  You have a couple of options.  The "plot_point_obs"
utility reads the NetCDF output files from the pb2nc or ascii2nc tools
and
> >> plots a red dot for each observation lat/lon it finds in the
data.  It is intended to just give you a quick look at the location of
the observations to make sure that they exist where you expect.  It
> >> is not a general purpose or very flexible plotting tool.
Alternatively, you could look at the "MPR" output line type from
point_stat.  This contains the individual matched pair values that
went into
> >> the computation of statistics.  The MPR line type includes
columns named "OBS_LAT" and "OBS_LON" giving the point observation
location information.  You could read the lat/lon information from the
MPR
> >> line type and use whatever plotting tool you prefer to plot the
observation locations.
> >>
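> >> As a sketch, plot_point_obs is run along these lines (both file
> >> names here are hypothetical):
> >>
> >>    plot_point_obs sample_pb2nc_output.nc sample_obs.ps
> >>
> >> where the first argument is the NetCDF output of pb2nc or ascii2nc
> >> and the second is the PostScript image file to be written.
> >>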
> >> If you do post more data to the ftp site, please write me back
and I'll go grab it.
> >>
> >> Thanks,
> >> John
> >>
> >>
> >> On 02/24/2014 05:59 PM, Geeta Geeta via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>
> >>> Hi John,
> >>> You had discussed the upscaling (of both obs and fcst, or either
one of them). The forecast is compared with the observations, which
are averaged to coarser scales.
> >>> How is this averaging defined in the configuration file?
> >>>
> >>> Please let me know regarding the global attributes.
> >>>
> >>> geeta
> >>>
> >>> From: geeta124 at hotmail.com
> >>> To: met_help at ucar.edu
> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>> Date: Sun, 23 Feb 2014 22:24:04 +0530
> >>>
> >>>
> >>>
> >>>
> >>> Hi John,
> >>> Can you help with changing/appending the GLOBAL attributes of
the NetCDF file?
> >>>
> >>> Can you provide some more hints?
> >>>
> >>> regards
> >>>
> >>> geeta
> >>>
> >>> From: geeta124 at hotmail.com
> >>> To: met_help at ucar.edu
> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>> Date: Fri, 21 Feb 2014 15:03:34 +0530
> >>>
> >>>
> >>>
> >>>
> >>> thanks John,
> >>> I have made the changes as per your config file.
> >>> But the error persists.
> >>> -bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> >>> GSL_RNG_TYPE=mt19937
> >>> GSL_RNG_SEED=18446744073358673747
> >>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>> Observation File: ../trmm_nc_data/02june2011.nc
> >>> Configuration File: GridStatConfig_APCP_24
> >>> NetCDF: Attribute not found
> >>> -bash-3.2$
> >>>
> >>>
> >>> 2. I have used ncdump to see my file attributes.
> >>> Are you referring to these attributes?
> >>>
> >>> // global attributes:
> >>>                   :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
> >>>                   :MET_version = "V3.0" ;
> >>>                   :MET_tool = "pcp_combine" ;
> >>>
> >>> Following is my Config file.
> >>>
> >>> [... config file contents trimmed; identical to the copy quoted
above ...]
> >>>
> >>>
> >>> geeta
> >>>
> >>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>>> From: met_help at ucar.edu
> >>>> To: geeta124 at hotmail.com
> >>>> Date: Thu, 20 Feb 2014 10:21:20 -0700
> >>>>
> >>>> Geeta,
> >>>>
> >>>> I see that you're using METv3.0.  The current version is
METv4.1, and it'd be good to switch to that version when possible.
There have been major changes to the MET configuration file format
since
> >>>> METv3.0, so be sure to use the default config files for
METv4.1.
> >>>>
> >>>> I ran METv3.0 grid_stat on the data files you sent and
reproduced the error message you saw:
> >>>>       ***WARNING***: process_scores() -> 61(*,*) not found in
file: 2011060100_WRFPRS_day1_003Z.nc
> >>>>
> >>>> Since the input files are both NetCDF files, you need to
specify the name of the NetCDF variable that should be used.  So I
modified your config file:
> >>>>       FROM: fcst_field[] = [ "61/A24" ];
> >>>>       TO:   fcst_field[] = [ "APCP_24(*,*)" ];
> >>>>
> >>>> When I reran with this change, I got this error:
> >>>>       NetCDF: Attribute not found
> >>>>
> >>>> After some digging, I found the problem to be the MET_version
global attribute in 02june2011.nc:
> >>>>                    :MET_version = "V3.0.1" ;
> >>>>
> >>>> I switched that to be consistent with the version of MET you're
running:
> >>>>                    :MET_version = "V3.0" ;
> >>>>
> >>>> And then I got this error:
> >>>> ERROR: parse_poly_mask() -> the dimensions of the masking
region (185, 129) must match the dimensions of the data (53, 53).
> >>>>
> >>>> So I modified the config file to change the masking region
settings:
> >>>>       FROM: mask_grid[] = [ "DTC165", "DTC166" ];
> >>>>             mask_poly[] = [
"MET_BASE/out/gen_poly_mask/CONUS_poly.nc",
> >>>>                             "MET_BASE/data/poly/LMV.poly" ];
> >>>>
> >>>>       TO:   mask_grid[] = [ "FULL" ];
> >>>>             mask_poly[] = [];
> >>>>
> >>>> And then it ran fine.
> >>>>
> >>>> To summarize...
> >>>>     (1) To run METv3.0 grid_stat, please set the "MET_version"
global attribute in all the gridded NetCDF files you're using to
METv3.0.
> >>>>     (2) Please use the attached, updated version of
GridStatConfig_APCP_24.
> >>>>     (3) Consider updating to using METv4.1 instead.
> >>>>
> >>>> Thanks,
> >>>> John
> >>>>
> >>>> On 02/19/2014 11:33 PM, Geeta Geeta via RT wrote:
> >>>>>
> >>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427
>
> >>>>>
> >>>>> Hi John,
> >>>>> I am bothering you with a few more questions. Hope you'll bear
> >>>>> with me.
> >>>>> So what I gathered from the discussion with you is that we
> >>>>> essentially have 3 types of neighbourhood approaches (space, time
> >>>>> and intensity), and these can be configured in the config file.
> >>>>>
> >>>>> 1. I was reading about 3 approaches to FUZZY verification, which
> >>>>> are:  a. Multi-event contingency table (my question is: can we
> >>>>> define a hit as RF between 0.1 and 2.5 in the config file?
> >>>>> Normally we select the threshold as ge0.1 or ge2.5, etc. Is there
> >>>>> a provision for giving a range in the config file?)
> >>>>>
> >>>>> b) Pragmatic approach (do not know what that is)
> >>>>>
> >>>>> c) Conditional Square root of Ranked probability score (CSRR)
> >>>>> (do not know what that is)
> >>>>>
> >>>>> I do not understand these. Can you lead me in the right direction
> >>>>> or provide some hints?
> >>>>>
> >>>>> 2. How can I prepare the QUILT plots (spatial scale vs threshold)
> >>>>> for a score?
> >>>>> Can the QUILT plot be prepared for any score, like HK, HSS, FBS
> >>>>> or FSS?
> >>>>>
> >>>>>
> >>>>> thanks
> >>>>> geeta
> >>>>>
> >>>>> From: geeta124 at hotmail.com
> >>>>> To: met_help at ucar.edu
> >>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>> Date: Thu, 20 Feb 2014 11:30:25 +0530
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> Hi John,
> >>>>> Sorry, I have put my data on your server; my directory name is
> >>>>> geeta124_data.
> >>>>> Kindly check that.
> >>>>>
> >>>>> geeta
> >>>>>
> >>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>> From: met_help at ucar.edu
> >>>>>> To: geeta124 at hotmail.com
> >>>>>> Date: Fri, 14 Feb 2014 09:48:08 -0700
> >>>>>>
> >>>>>> Geeta,
> >>>>>>
> >>>>>> You run copygb to put the forecast and observation fields on
exactly the same grid, meaning the exact same resolution and number of
grid points.
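> >>>>>> For reference, a minimal copygb sketch (the file names and the
> >>>>>> target NCEP grid number 218 below are purely illustrative; -xg
> >>>>>> selects the output grid):
> >>>>>>
> >>>>>>    copygb -xg218 fcst.grb fcst_regrid.grb
> >>>>>>    copygb -xg218 obs.grb  obs_regrid.grb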
> >>>>>>
> >>>>>> I was trying to make the point that the "interpolation
methods" in the grid_stat config file could be used as a form of
"upscaling".  You are right, there is no *need* to interpolate the
data since
> >>>>>> you've already used copygb to put them on the same grid.  In
grid_stat, the interpolation options provide a way of smoothing, or
upscaling, the data.  For example, suppose you choose an
interpolation
> >>>>>> option of UW_MEAN (for un-weighted mean) and width of 5.  For
each grid point, grid_stat will replace the value at the grid point
with the average of the 25 points in a 5x5 box around that point.
> >>>>>> Doing that for every point in the grid smooths the data and
provides a way of upscaling.
> >>>>>>
> >>>>>> The default interpolation width is 1, meaning that no
smoothing is performed.  However, you could use multiple smoothing
widths and see how your performance changes the more you smooth the
data.
> >>>>>>
> >>>>>> Does that make sense?
> >>>>>>
> >>>>>> Regarding the runtime error you're getting, I see that you're
using input NetCDF files for the forecast and observation fields.  In
the config file, you need to specify the name and dimensions of the
> >>>>>> NetCDF variable to be used.  Assuming the NetCDF variable is
named "APCP_24", it would look something like this:
> >>>>>>
> >>>>>> fcst = {
> >>>>>>        wind_thresh = [ NA ];
> >>>>>>
> >>>>>>        field = [
> >>>>>>           {
> >>>>>>             name       = "APCP_24";
> >>>>>>             level      = [ "(*,*)" ];
> >>>>>>             cat_thresh = [ >0.0, >=5.0 ];
> >>>>>>           }
> >>>>>>        ];
> >>>>>>
> >>>>>> };
> >>>>>>
> >>>>>> If you continue to experience problems, please send me sample
forecast and observation files along with the GridStatConfig file
you're using.  You can post it to our anonymous ftp site following
these
> >>>>>> instructions:
> >>>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>>>>>
> >>>>>> Thanks,
> >>>>>> John
> >>>>>>
> >>>>>> On 02/14/2014 02:17 AM, Geeta Geeta via RT wrote:
> >>>>>>>
> >>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>>>>>
> >>>>>>> Hi John,
> >>>>>>>      I have run grid-stat. Following is the error.
> >>>>>>>
> >>>>>>> bash-3.2$ ../bin/grid_stat ./fcst_nc/20110601*day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> >>>>>>> GSL_RNG_TYPE=mt19937
> >>>>>>> GSL_RNG_SEED=18446744073321512274
> >>>>>>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>>>>>> Observation File: ../trmm_nc_data/02june2011.nc
> >>>>>>> Configuration File: GridStatConfig_APCP_24
> >>>>>>> ***WARNING***: process_scores() -> 61(*,*) not found in
file: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>>>>>>
> >>>>>>>
--------------------------------------------------------------------------------
> >>>>>>>
> >>>>>>>
> >>>>>>> Pls suggest.
> >>>>>>>
> >>>>>>> geeta
> >>>>>>>
> >>>>>>> From: geeta124 at hotmail.com
> >>>>>>> To: met_help at ucar.edu
> >>>>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>>> Date: Fri, 14 Feb 2014 14:08:12 +0530
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> Thanks a lot John for your inputs and clarifications.
> >>>>>>>
> >>>>>>> I still have the following doubts.
> >>>>>>>
> >>>>>>> 1. When I run copygb, what it does is make the observation and
> >>>>>>> model FC uniform (I mean the same GRID and RESOLUTION). Only
> >>>>>>> these two parameters are important.
> >>>>>>> Are you calling that upscaling? So this process is not a part
> >>>>>>> of Grid-Stat; essentially copygb is doing the upscaling part.
> >>>>>>>
> >>>>>>> 2. There are interpolation methods in the grid-stat config file
> >>>>>>> (analogous to those in point-stat; in point-stat there are 3-4,
> >>>>>>> like nearest neighbour, mean, distance weighted, etc.).
> >>>>>>>
> >>>>>>> Why should one have interpolation ONCE again when, after
> >>>>>>> copygb, the grid fields are similar, i.e. one GP has 2 values,
> >>>>>>> one OBS and one FCST? Is that correct?
> >>>>>>>
> >>>>>>> geeta
> >>>>>>>
> >>>>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>>>> From: met_help at ucar.edu
> >>>>>>>> To: geeta124 at hotmail.com
> >>>>>>>> Date: Thu, 13 Feb 2014 10:33:47 -0700
> >>>>>>>>
> >>>>>>>> Geeta,
> >>>>>>>>
> >>>>>>>> You are correct, the input forecast and observation files
must be on the same grid.  In Grid-Stat, there are two ways you can
perform "fuzzy" verification.
> >>>>>>>>
> >>>>>>>> (1) The first way is by applying an interpolation method to
the data.  Since the data are already on the same grid, this is really
a "smoothing" operation instead.  This is called "upscaling".
> >>>>>>>> Smoother forecasts and observations tend to produce better
traditional verification scores.  So you could see how your scores
(like RMSE or GSS) improve as you smooth the data more and more.  In
the
> >>>>>>>> config file, you could try:
> >>>>>>>>
> >>>>>>>> interp = {
> >>>>>>>>         field      = BOTH;
> >>>>>>>>         vld_thresh = 1.0;
> >>>>>>>>
> >>>>>>>>         type = [
> >>>>>>>>            { method = UW_MEAN; width  = 1; },
> >>>>>>>>            { method = UW_MEAN; width  = 3; },
> >>>>>>>>            { method = UW_MEAN; width  = 6; },
> >>>>>>>>            { method = UW_MEAN; width  = 9; }
> >>>>>>>>         ];
> >>>>>>>> };
> >>>>>>>>
> >>>>>>>> This tells Grid-Stat to compute its statistics 4 times,
applying more smoothing each time.  Typically, the more the data has
been smoothed, the better the statistics will be.
> >>>>>>>>
> >>>>>>>> (2) The second way is by applying neighborhood verification
methods.  The most common are the Fractions Brier Score (FBS) and
Fractions Skill Score (FSS), both contained in the NBRCNT output line
> >>>>>>>> type.  Be sure to turn the NBRCNT output line on in the
Grid-Stat config file.  For neighborhood verification, you pick
multiple neighborhood sizes and look to see how the FSS changes as you
increase
> >>>>>>>> the neighborhood size.  As the neighborhood size increases,
FSS increases.  And you look to see how large of a neighborhood size
you need to get a "useful" (FSS > 0.5) forecast.
> >>>>>>>>
> >>>>>>>> Here's how this method works.  You pick one or more
thresholds (cat_thresh) for your field.  Grid-Stat applies the
threshold to produce a 0/1 binary field of your data.  For each
neighborhood size, n,
> >>>>>>>> it places an n x n box around each grid point and counts up
the number of events within that box.  For a 3 x 3 box, if 4 of the 9
points contained an event, the value for that point is 4/9.  This is
done for every grid point in the forecast field and the
observation field.  We call the result of this process the forecast
and observation "fractional coverage" fields.  The FSS and FBS scores
are
> >>>>>>>> computed by comparing the forecast and observation
fractional coverage fields to each other.
> >>>>>>>>
> >>>>>>>> If you're verifying a single field using 3 different
thresholds and 6 different neighborhood sizes, you'd get 18 NBRCNT
lines in the output file.
> >>>>>>>>
> >>>>>>>> Here's an example of how you might set this up in the Grid-
Stat config file:
> >>>>>>>>
> >>>>>>>> nbrhd = {
> >>>>>>>>         vld_thresh = 1.0;
> >>>>>>>>         width      = [ 3, 5, 9, 11, 13, 15 ];
> >>>>>>>>         cov_thresh = [ >=0.5 ];
> >>>>>>>> }
> >>>>>>>>
> >>>>>>>> For a given threshold, you should look to see how FSS
changes as you increase the neighborhood size.
> >>>>>>>>
> >>>>>>>> Hopefully that helps get you going.
> >>>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>> John Halley Gotway
> >>>>>>>> met_help at ucar.edu
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On 02/12/2014 10:49 PM, Geeta Geeta via RT wrote:
> >>>>>>>>>
> >>>>>>>>> Wed Feb 12 22:49:14 2014: Request 65427 was acted upon.
> >>>>>>>>> Transaction: Ticket created by geeta124 at hotmail.com
> >>>>>>>>>             Queue: met_help
> >>>>>>>>>           Subject: Unable to visualize Fuzzy verf.
> >>>>>>>>>             Owner: Nobody
> >>>>>>>>>        Requestors: geeta124 at hotmail.com
> >>>>>>>>>            Status: new
> >>>>>>>>>       Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Hi John/ met_help.
> >>>>>>>>>
> >>>>>>>>> I was reading the MET doc, which mentions the FUZZY
> >>>>>>>>> verification methods. I am trying to visualise what grid_stat
> >>>>>>>>> does and how it functions.
> >>>>>>>>> After copygb is run, the FCST and OBS are on the same grid,
> >>>>>>>>> ie:
> >>>>>>>>>
> >>>>>>>>>    1------------------2------------------3
> >>>>>>>>>    |                  |                  |
> >>>>>>>>>    4------------------5------------------6
> >>>>>>>>>
> >>>>>>>>> ie at the Grid Points (GP) 1 to 6, you have Observations and
> >>>>>>>>> the model FCST.  Now the MET doc (Pg 5-3) says that a SQUARE
> >>>>>>>>> search window is defined around each grid point, within which
> >>>>>>>>> the obs and the FCST events are counted.
> >>>>>>>>> 1. I want to know how this SQUARE window is defined (I mean
> >>>>>>>>>    in the configuration file of Grid-Stat).
> >>>>>>>>> 2. How can I change the size of the SQUARE window?
> >>>>>>>>> 3. If my model resolution is 10 km and I am interested in
> >>>>>>>>>    synoptic-scale phenomena, then what should the window size
> >>>>>>>>>    be?  Your help is urgently required.
> >>>>>>>>>
> >>>>>>>>> geeta
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>>
> >>>>
> >>>
> >>>
> >>
> >
> >
>

------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: John Halley Gotway
Time: Thu Feb 27 13:43:25 2014

Geeta,

There are a couple of R scripts you could use to reformat TRMM data
for
use in MET.  However, I doubt either would produce a NetCDF file that
METv3.0 could read.  You'd probably need to use the latest version of
MET, METv4.1.

Look at the bottom of this page for information on the "trmm2nc.R"
script that reformats TRMM ASCII data:
    http://www.dtcenter.org/met/users/downloads/analysis_scripts.php

And I've attached another R script named "trmmbin2nc.R" that reformats
TRMM binary data.  This script reads TRMM binary data, extracts a
subset
of the data using the settings at the top of the script, and writes a
NetCDF file that the MET tools can read.
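The script is typically run with Rscript; the argument lists below are
only illustrative guesses at the call signature (check the usage notes
at the top of each script for the real arguments):

    Rscript trmm2nc.R    trmm_3hourly.ascii trmm_3hourly.nc
    Rscript trmmbin2nc.R trmm_3hourly.bin   trmm_3hourly.nc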

Hope that helps get you going.

Thanks,
John


On 2014-02-27 01:44, Geeta Geeta via RT wrote:
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>
> John,
> Actually the observed data (TRMM) was made in the NetCDF format on
> some other system.
> I want to know how I can convert the TRMM ASCII or binary files into
> NetCDF.
>
>
> Can you please suggest?
>
> geeta
>
> From: geeta124 at hotmail.com
> To: met_help at ucar.edu
> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> Date: Thu, 27 Feb 2014 14:09:32 +0530
>
>
>
>
> Thanks, John,
> thanks for pointing out the error.
> How can I do that?
> These are the outputs I get when I do ncdump.
>
>
>
> geeta
>
>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>> From: met_help at ucar.edu
>> To: geeta124 at hotmail.com
>> Date: Wed, 26 Feb 2014 11:13:55 -0700
>>
>> Geeta,
>>
>> The problem is in the observation files:
>>
>> *************************************
>> O/P of OBSERVATION FILE (NETCDF format) *
>> *************************************
>> :MET_version = "V3.0.1" ;
>>
>> If you change the "V3.0.1" to "V3.0", then METv3.0 grid_stat will
be
>> able to process it fine.
>>
>> Also, you should switch the timing variable attributes from floats
to
>> integers:
>> Change from:
>>   APCP_03:init_time_ut = 1306972800. ;
>>   APCP_03:valid_time_ut = 1306983600. ;
>>   APCP_03:accum_time_sec = 10800.f ;
>> Change to:
>>   APCP_03:init_time_ut = 1306972800 ;
>>   APCP_03:valid_time_ut = 1306983600 ;
>>   APCP_03:accum_time_sec = 10800 ;
>>
>> When you switch to METv4.1, it'll complain if those aren't
integers.
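>> If you have the NCO utilities installed, ncatted can make both edits
>> in place (a sketch, assuming NCO is available; "m,c" modifies a
>> character attribute and "m,l" rewrites an attribute as an integer):
>>
>>   ncatted -O -a MET_version,global,m,c,"V3.0" 02june2011.nc
>>   ncatted -O -a init_time_ut,APCP_03,m,l,1306972800 02june2011.nc
>>   ncatted -O -a valid_time_ut,APCP_03,m,l,1306983600 02june2011.nc
>>   ncatted -O -a accum_time_sec,APCP_03,m,l,10800 02june2011.nc
>>
>> You can confirm the result with "ncdump -h 02june2011.nc".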
>>
>> Hope that helps.
>>
>> Thanks,
>> John
>>
>>
>> On 02/25/2014 11:26 PM, Geeta Geeta via RT wrote:
>> >
>> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>> >
>> > thanks John,
>> > I have NOT done anything with the NC files as yet. I was asking
>> > you how to go about changing the attribute.
>> >
>> > I am posting the output of one of the forecast files in NetCDF
>> > format using ncdump. It shows MET_version = "V3.0", which is what
>> > you want.
>> > bash-3.2$ ncdump -h test.nc
>> > netcdf test {
>> > dimensions:
>> >          lat = 53 ;
>> >          lon = 53 ;
>> > variables:
>> >          float lat(lat, lon) ;
>> >                  lat:long_name = "latitude" ;
>> >                  lat:units = "degrees_north" ;
>> >                  lat:standard_name = "latitude" ;
>> >          float lon(lat, lon) ;
>> >                  lon:long_name = "longitude" ;
>> >                  lon:units = "degrees_east" ;
>> >                  lon:standard_name = "longitude" ;
>> >          float APCP_24(lat, lon) ;
>> >                  APCP_24:name = "APCP" ;
>> >                  APCP_24:long_name = "Total precipitation" ;
>> >                  APCP_24:level = "A24" ;
>> >                  APCP_24:units = "kg/m^2" ;
>> >                  APCP_24:grib_code = 61 ;
>> >                  APCP_24:_FillValue = -9999.f ;
>> >                  APCP_24:init_time = "20110601_000000" ;
>> >                  APCP_24:init_time_ut = 1306886400 ;
>> >                  APCP_24:valid_time = "20110602_030000" ;
>> >                  APCP_24:valid_time_ut = 1306983600 ;
>> >                  APCP_24:accum_time = "240000" ;
>> >                  APCP_24:accum_time_sec = 86400 ;
>> >
>> > // global attributes:
>> >                  :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
>> >                  :MET_version = "V3.0" ;
>> >                  :MET_tool = "pcp_combine" ;
>> >                  :RunCommand = "Subtraction:
2011060100_WRFPRS_d01.027 with accumulation of 270000 minus
2011060100_WRFPRS_d01.003 with accumulation of 030000." ;
>> >                  :Projection = "LatLon" ;
>> >                  :lat_ll = "9.000000 degrees_north" ;
>> >                  :lon_ll = "74.000000 degrees_east" ;
>> >                  :delta_lat = "0.250000 degrees" ;
>> >                  :delta_lon = "0.250000 degrees" ;
>> >                  :Nlat = "53 grid_points" ;
>> >                  :Nlon = "53 grid_points" ;
>> > }
>> > bash-3.2$
>> > *************************************
>> > O/P of OBSERVATION FILE (NETCDF format) *
>> > *************************************
>> > bash-3.2$ ncdump -h ../trmm_nc_data/test.nc
>> > netcdf test {
>> > dimensions:
>> >          lon = 53 ;
>> >          lat = 53 ;
>> > variables:
>> >          double lon(lon) ;
>> >                  lon:units = "degrees_east" ;
>> >          double lat(lat) ;
>> >                  lat:units = "degrees_north" ;
>> >          float APCP_03(lat, lon) ;
>> >                  APCP_03:units = "kg/m^2" ;
>> >                  APCP_03:missing_value = -9999.f ;
>> >                  APCP_03:long_name = "Total precipitation" ;
>> >                  APCP_03:name = "APCP" ;
>> >                  APCP_03:level = "A3" ;
>> >                  APCP_03:grib_code = 61.f ;
>> >                  APCP_03:_FillValue = -9999.f ;
>> >                  APCP_03:init_time = "20110602_000000" ;
>> >                  APCP_03:init_time_ut = 1306972800. ;
>> >                  APCP_03:valid_time = "20110602_030000" ;
>> >                  APCP_03:valid_time_ut = 1306983600. ;
>> >                  APCP_03:accum_time = "030000" ;
>> >                  APCP_03:accum_time_sec = 10800.f ;
>> >
>> > // global attributes:
>> >                  :FileOrigins = "File
../../../vpt/geeta/02june2011.nc generated 20140123_163031 on host
ncmr0102 by the Rscript trmm2nc.R" ;
>> >                  :MET_version = "V3.0.1" ;
>> >                  :Projection = "LatLon" ;
>> >                  :lat_ll = "9 degrees_north" ;
>> >                  :lon_ll = "74 degrees_east" ;
>> >                  :delta_lat = "0.25 degrees" ;
>> >                  :delta_lon = "0.25 degrees" ;
>> >                  :Nlat = "53 grid_points" ;
>> >                  :Nlon = "53 grid_points" ;
>> > }
>> >
>> >
>> > Anyway, I am sending you my data once again; the directory is
>> > geeta_data-25feb2014.
>> > Do you suspect the location of the NetCDF files?
>> >
>> > I shall be looking forward to hearing from you.
>> >
>> > thanks
>> > geeta
>> >
>> >> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>> >> From: met_help at ucar.edu
>> >> To: geeta124 at hotmail.com
>> >> Date: Tue, 25 Feb 2014 11:26:10 -0700
>> >>
>> >> Geeta,
>> >>
>> >> Sorry, I was out of the office yesterday.  Looking back through
this ticket, I see that you're still getting the following error:
>> >>      NetCDF: Attribute not found
>> >>
>> >> You said that you updated the NetCDF files to include the
following global attribute:
>> >>      MET_version = "V3.0" ;
>> >>
>> >> If you've added this to both the forecast and observation NetCDF
files, and you're still getting this error, I'll need to see your data
files to debug more.  Please post them to our anonymous ftp site:
>> >>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
>> >>
>> >> I see 2 other questions in your emails:
>> >>
>> >> (1) How can you control the optional "upscaling" or "smoothing"
done by grid_stat?
>> >>       In METv3.0, that is controlled by configuration file
options that begin with "interp_".  For example, try the following:
>> >>          interp_method[] = [ "UW_MEAN" ];
>> >>          interp_width[]  = [ 1, 3, 5 ];
>> >>          interp_flag     = 3;
>> >>
>> >>       For each output line you were getting before, you should
now get 2 more.  Since interp_flag is set to 3, grid_stat will smooth
both the forecast and observation fields.  For interp_width = 3,
>> >> it'll smooth each data point by computing the average of the 9
points in a 3x3 box around each grid point.  For interp_width = 5,
it'll smooth each data point by computing the average of the 25 points
>> >> in a 5x5 box around each grid point.  You can look to see how
the scores change as you do more and more smoothing.
>> >>
>> >> However, computing the fractions skill score (in the NBRCNT line
>> >> type) is a common way of doing "neighborhood" or "fuzzy"
>> >> verification.
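>> >> In the METv3.0 config file, the corresponding neighborhood
>> >> settings look like this (a sketch based on the defaults shown in
>> >> your config; widen nbr_width[] as needed, and make sure the 15th
>> >> entry of output_flag[] is non-zero so the NBRCNT lines are
>> >> actually written):
>> >>
>> >>    nbr_width[]  = [ 3, 5, 9 ];
>> >>    nbr_thresh   = 1.0;
>> >>    cov_thresh[] = [ "ge0.5" ];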
>> >>
>> >> (2) You also asked about plotting the station location from
point_stat.  You have a couple of options.  The "plot_point_obs"
utility reads the NetCDF output files from the pb2nc or ascii2nc tools
and
>> >> plots a red dot for each observation lat/lon it finds in the
data.  It is intended to just give you a quick look at the location of
the observations to make sure that they exist where you expect.  It
is not a general-purpose or very flexible plotting tool.
Alternatively, you could look at the "MPR" output line type from
point_stat.  This contains the individual matched pair values that
went into
>> >> the computation of statistics.  The MPR line type includes
columns named "OBS_LAT" and "OBS_LON" giving the point observation
location information.  You could read the lat/lon information from the
MPR
>> >> line type and use whatever plotting tool you prefer to plot the
observation locations.
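>> >> For a quick look at the observation locations, the usage is just
>> >> (file names here are illustrative):
>> >>
>> >>    plot_point_obs sample_obs.nc obs_locations.ps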
>> >>
>> >> If you do post more data to the ftp site, please write me back
and I'll go grab it.
>> >>
>> >> Thanks,
>> >> John
>> >>
>> >>
>> >> On 02/24/2014 05:59 PM, Geeta Geeta via RT wrote:
>> >>>
>> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>> >>>
>> >>> Hi John,
>> >>> You had discussed upscaling (of both obs and fcst, or either one
>> >>> of them), where the forecast is compared with observations that
>> >>> are averaged to coarser scales.
>> >>> How is this averaging defined in the configuration file?
>> >>>
>> >>> Please also let me know regarding the global attributes.
>> >>>
>> >>> geeta
>> >>>
>> >>> From: geeta124 at hotmail.com
>> >>> To: met_help at ucar.edu
>> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>> >>> Date: Sun, 23 Feb 2014 22:24:04 +0530
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> Hi John,
>> >>> Can you help with changing or appending the GLOBAL attributes of
>> >>> the NetCDF file?
>> >>>
>> >>> Can you provide some more hints?
>> >>>
>> >>> regards
>> >>>
>> >>> geeta
>> >>>
>> >>> From: geeta124 at hotmail.com
>> >>> To: met_help at ucar.edu
>> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>> >>> Date: Fri, 21 Feb 2014 15:03:34 +0530
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> thanks John,
>> >>> I have made the changes as per your config file.
>> >>> But the error persists.
>> >>> -bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
>> >>> GSL_RNG_TYPE=mt19937
>> >>> GSL_RNG_SEED=18446744073358673747
>> >>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
>> >>> Observation File: ../trmm_nc_data/02june2011.nc
>> >>> Configuration File: GridStatConfig_APCP_24
>> >>> NetCDF: Attribute not found
>> >>> -bash-3.2$
>> >>>
>> >>>
>> >>> 2. I have used ncdump to see my file attributes.
>> >>> Are you referring to these attributes?
>> >>>
>> >>> // global attributes:
>> >>>                   :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
>> >>>                   :MET_version = "V3.0" ;
>> >>>                   :MET_tool = "pcp_combine" ;
>> >>>
>> >>> Following is my Config file.
>> >>>
____________________________________________________________________
>> >>>
////////////////////////////////////////////////////////////////////////////////
>> >>> //
>> >>> // Default grid_stat configuration file
>> >>> //
>> >>>
////////////////////////////////////////////////////////////////////////////////
>> >>> //
>> >>> // Specify a name to designate the model being verified.  This
name will be
>> >>> // written to the second column of the ASCII output generated.
>> >>> //
>> >>> model = "WRF";
>> >>> //
>> >>> // Specify a comma-separated list of fields to be verified.
The forecast and
>> >>> // observation fields may be specified separately.  If the
obs_field parameter
>> >>> // is left blank, it will default to the contents of
fcst_field.
>> >>> //
>> >>> // Each field is specified as a GRIB code or abbreviation
followed by an
>> >>> // accumulation or vertical level indicator for GRIB files or
as a variable name
>> >>> // followed by a list of dimensions for NetCDF files output
from p_interp or MET.
>> >>> //
>> >>> // Specifying verification fields for GRIB files:
>> >>> //    GC/ANNN for accumulation interval NNN
>> >>> //    GC/ZNNN for vertical level NNN
>> >>> //    GC/PNNN for pressure level NNN in hPa
>> >>> //    GC/PNNN-NNN for a range of pressure levels in hPa
>> >>> //    GC/LNNN for a generic level type
>> >>> //    GC/RNNN for a specific GRIB record number
>> >>> //    Where GC is the number of or abbreviation for the grib
code
>> >>> //    to be verified.
>> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>> >>> //
>> >>> // Specifying verification fields for NetCDF files:
>> >>> //    var_name(i,...,j,*,*) for a single field
>> >>> //    Where var_name is the name of the NetCDF variable,
>> >>> //    and i,...,j specifies fixed dimension values,
>> >>> //    and *,* specifies the two dimensions for the gridded
field.
>> >>> //
>> >>> //    NOTE: To verify winds as vectors rather than scalars,
>> >>> //          specify UGRD (or 33) followed by VGRD (or 34) with
the
>> >>> //          same level values.
>> >>> //
>> >>> //    NOTE: To process a probability field, add "/PROB", such
as "POP/Z0/PROB".
>> >>> //
>> >>> // e.g. fcst_field[] = [ "61/A3", "APCP/A24", "RH/L10" ]; for
GRIB input
>> >>> // e.g. fcst_field[] = [ "RAINC(0,*,*)", "QVAPOR(0,5,*,*)" ];
for NetCDF input
>> >>> //
>> >>> fcst_field[] = [ "APCP_24(*,*)" ];
>> >>> obs_field[]  = [ "APCP_03(*,*)" ];
>> >>> //
>> >>> // Specify a comma-separated list of groups of thresholds to be
applied to the
>> >>> // fields listed above.  Thresholds for the forecast and
observation fields
>> >>> // may be specified separately.  If the obs_thresh parameter is
left blank,
>> >>> // it will default to the content of fcst_thresh.
>> >>> //
>> >>> // At least one threshold must be provided for each field
listed above.  The
>> >>> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as must
>> >>> // lengths of the "obs_field" and "obs_thresh" arrays.  To
apply multiple
>> >>> // thresholds to a field, separate the threshold values with a
space.
>> >>> //
>> >>> // Each threshold must be preceded by a two letter indicator
for the type of
>> >>> // thresholding to be performed:
>> >>> //    'lt' for less than     'le' for less than or equal to
>> >>> //    'eq' for equal to      'ne' for not equal to
>> >>> //    'gt' for greater than  'ge' for greater than or equal to
>> >>> //
>> >>> // NOTE: Thresholds for probabilities must begin with 0.0, end
with 1.0,
>> >>> //       and be preceded by "ge".
>> >>> //
>> >>> // e.g. fcst_thresh[] = [ "gt0.0 ge5.0", "gt0.0", "lt80.0
ge80.0" ];
>> >>> //
>> >>> fcst_thresh[] = [ "gt0.0 gt5.0 gt10.0" ];
>> >>> obs_thresh[]  = [];
>> >>> //
>> >>> // Specify a comma-separated list of thresholds to be used when
computing
>> >>> // VL1L2 partial sums for winds.  The thresholds are applied to
the wind speed
>> >>> // values derived from each U/V pair.  Only those U/V pairs
which meet the wind
>> >>> // speed threshold criteria are retained.  If the
obs_wind_thresh parameter is
>> >>> // left blank, it will default to the contents of
fcst_wind_thresh.
>> >>> //
>> >>> // To apply multiple wind speed thresholds, separate the
threshold values with a
>> >>> // space.  Use "NA" to indicate that no wind speed threshold
should be applied.
>> >>> //
>> >>> // Each threshold must be preceded by a two letter indicator
for the type of
>> >>> // thresholding to be performed:
>> >>> //    'lt' for less than     'le' for less than or equal to
>> >>> //    'eq' for equal to      'ne' for not equal to
>> >>> //    'gt' for greater than  'ge' for greater than or equal to
>> >>> //    'NA' for no threshold
>> >>> //
>> >>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
>> >>> //
>> >>> fcst_wind_thresh[] = [ "NA" ];
>> >>> obs_wind_thresh[]  = [];
>> >>> //
>> >>> // Specify a comma-separated list of grids to be used in
masking the data over
>> >>> // which to perform scoring.  An empty list indicates that no
masking grid
>> >>> // should be performed.  The standard NCEP grids are named
"GNNN" where NNN
>> >>> // indicates the three digit grid number.  Enter "FULL" to
score over the
>> >>> // entire domain.
>> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>> >>> //
>> >>> // e.g. mask_grid[] = [ "FULL" ];
>> >>> //
>> >>> mask_grid[] = [ "FULL" ];
>> >>> //
>> >>> // Specify a comma-separated list of masking regions to be
applied.
>> >>> // An empty list indicates that no additional masks should be
used.
>> >>> // The masking regions may be defined in one of 4 ways:
>> >>> //
>> >>> // (1) An ASCII file containing a lat/lon polygon.
>> >>> //     Latitude in degrees north and longitude in degrees east.
>> >>> //     By default, the first and last polygon points are
connected.
>> >>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
>> >>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
>> >>> //
>> >>> // (2) The NetCDF output of the gen_poly_mask tool.
>> >>> //
>> >>> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
>> >>> //     to be used, and optionally, a threshold to be applied to
the field.
>> >>> //     e.g. "sample.nc var_name gt0.00"
>> >>> //
>> >>> // (4) A GRIB data file, followed by a description of the field
>> >>> //     to be used, and optionally, a threshold to be applied to
the field.
>> >>> //     e.g. "sample.grb APCP/A3 gt0.00"
>> >>> //
>> >>> // Any NetCDF or GRIB file used must have the same grid
dimensions as the
>> >>> // data being verified.
>> >>> //
>> >>> // MET_BASE may be used in the path for the files above.
>> >>> //
>> >>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
>> >>> //                      "poly_mask.ncf",
>> >>> //                      "sample.nc APCP",
>> >>> //                      "sample.grb HGT/Z0 gt100.0" ];
>> >>> //
>> >>> mask_poly[] = [];
>> >>> //
>> >>> // Specify a comma-separated list of values for alpha to be
used when computing
>> >>> // confidence intervals.  Values of alpha must be between 0 and
1.
>> >>> //
>> >>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
>> >>> //
>> >>> ci_alpha[] = [ 0.10, 0.05 ];
>> >>> //
>> >>> // Specify the method to be used for computing bootstrap
confidence intervals.
>> >>> // The value for this is interpreted as follows:
>> >>> //    (0) Use the BCa interval method (computationally
intensive)
>> >>> //    (1) Use the percentile interval method
>> >>> //
>> >>> boot_interval = 1;
>> >>> //
>> >>> // Specify a proportion between 0 and 1 to define the replicate
sample size
>> >>> // to be used when computing percentile intervals.  The
replicate sample
>> >>> // size is set to boot_rep_prop * n, where n is the number of
raw data points.
>> >>> //
>> >>> // e.g boot_rep_prop = 0.80;
>> >>> //
>> >>> boot_rep_prop = 1.0;
>> >>> //
>> >>> // Specify the number of times each set of matched pair data
should be
>> >>> // resampled when computing bootstrap confidence intervals.  A
value of
>> >>> // zero disables the computation of bootstrap confidence
intervals.
>> >>> //
>> >>> // e.g. n_boot_rep = 1000;
>> >>> //
>> >>> n_boot_rep = 0;
>> >>> //
>> >>> // Specify the name of the random number generator to be used.
See the MET
>> >>> // Users Guide for a list of possible random number generators.
>> >>> //
>> >>> boot_rng = "mt19937";
>> >>> //
>> >>> // Specify the seed value to be used when computing bootstrap
confidence
>> >>> // intervals.  If left unspecified, the seed will change for
each run and
>> >>> // the computed bootstrap confidence intervals will not be
reproducible.
>> >>> //
>> >>> boot_seed = "";
>> >>> //
>> >>> // Specify a comma-separated list of interpolation method(s) to
be used for
>> >>> // smoothing the data fields prior to comparing them.  The
value at each grid
>> >>> // point is replaced by the measure computed over the
neighborhood defined
>> >>> // around the grid point.  String values are interpreted as
follows:
>> >>> //    MIN     = Minimum in the neighborhood
>> >>> //    MAX     = Maximum in the neighborhood
>> >>> //    MEDIAN  = Median in the neighborhood
>> >>> //    UW_MEAN = Unweighted mean in the neighborhood
>> >>> //
>> >>> //    NOTE: The distance-weighted mean (DW_MEAN) is not an
option here since
>> >>> //          it will have no effect on a gridded field.
>> >>> //
>> >>> //    NOTE: The least-squares fit (LS_FIT) is not an option
here since
>> >>> //          it reduces to an unweighted mean on a grid.
>> >>> //
>> >>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
>> >>> //
>> >>> interp_method[] = [ "UW_MEAN" ];
>> >>> //
>> >>> // Specify a comma-separated list of box widths to be used by
the interpolation
>> >>> // techniques listed above.  All values must be odd.  A value
of 1 indicates
>> >>> // that no smoothing should be performed.  For values greater
than 1, the n*n
>> >>> // grid points around each point will be used to smooth the
data fields.
>> >>> //
>> >>> // e.g. interp_width = [ 1, 3, 5 ];
>> >>> //
>> >>> interp_width[] = [ 1 ];
>> >>> //
>> >>> // The interp_flag controls how the smoothing defined above
should be applied:
>> >>> // (1) Smooth only the forecast field
>> >>> // (2) Smooth only the observation field
>> >>> // (3) Smooth both the forecast and observation fields
>> >>> //
>> >>> interp_flag = 1;
>> >>> //
>> >>> // When smoothing, compute a ratio of the number of valid data
points to
>> >>> // the total number of points in the neighborhood.  If that
ratio is less
>> >>> // than this threshold, do not compute a smoothed forecast
value.  This
>> >>> // threshold must be between 0 and 1.  Setting this threshold
to 1 will
>> >>> // require that each observation be surrounded by n*n valid
forecast
>> >>> // points.
>> >>> //
>> >>> // e.g. interp_thresh = 1.0;
>> >>> //
>> >>> interp_thresh = 1.0;
>> >>> //
>> >>> // Specify a comma-separated list of box widths to be used to
define the
>> >>> // neighborhood size for the neighborhood verification methods.
All values
>> >>> // must be odd.  For values greater than 1, the n*n grid points
around each
>> >>> // point will be used to define the neighborhood.
>> >>> //
>> >>> // e.g. nbr_width = [ 3, 5 ];
>> >>> //
>> >>> nbr_width[] = [ 3, 5 ];
>> >>> //
>> >>> // When applying the neighborhood verification methods, compute
a ratio
>> >>> // of the number of valid data points to the total number of
points in
>> >>> // the neighborhood.  If that ratio is less than this
threshold, do not
>> >>> // include it in the computations.  This threshold must be
between 0
>> >>> // and 1.  Setting this threshold to 1 will require that each
point be
>> >>> // surrounded by n*n valid forecast points.
>> >>> //
>> >>> // e.g. nbr_thresh = 1.0;
>> >>> //
>> >>> nbr_thresh = 1.0;
>> >>> //
>> >>> // When applying the neighborhood verification methods, apply a
threshold
>> >>> // to the fractional coverage values to define contingency
tables from
>> >>> // which to compute statistics.
>> >>> //
>> >>> // e.g. cov_thresh[] = [ "ge0.25", "ge0.50" ];
>> >>> //
>> >>> cov_thresh[] = [ "ge0.5" ];
>> >>> //
>> >>> // Specify flags to indicate the type of data to be output:
>> >>> //
>> >>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
>> >>> //           Total (TOTAL),
>> >>> //           Forecast Rate (F_RATE),
>> >>> //           Hit Rate (H_RATE),
>> >>> //           Observation Rate (O_RATE)
>> >>> //
>> >>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
>> >>> //           Total (TOTAL),
>> >>> //           Forecast Yes and Observation Yes Count (FY_OY),
>> >>> //           Forecast Yes and Observation No Count (FY_ON),
>> >>> //           Forecast No and Observation Yes Count (FN_OY),
>> >>> //           Forecast No and Observation No Count (FN_ON)
>> >>> //
>> >>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
>> >>> //           Total (TOTAL),
>> >>> //           Base Rate (BASER),
>> >>> //           Forecast Mean (FMEAN),
>> >>> //           Accuracy (ACC),
>> >>> //           Frequency Bias (FBIAS),
>> >>> //           Probability of Detecting Yes (PODY),
>> >>> //           Probability of Detecting No (PODN),
>> >>> //           Probability of False Detection (POFD),
>> >>> //           False Alarm Ratio (FAR),
>> >>> //           Critical Success Index (CSI),
>> >>> //           Gilbert Skill Score (GSS),
>> >>> //           Hanssen and Kuipers Discriminant (HK),
>> >>> //           Heidke Skill Score (HSS),
>> >>> //           Odds Ratio (ODDS),
>> >>> //           NOTE: All statistics listed above contain
parametric and/or
>> >>> //                 non-parametric confidence interval limits.
>> >>> //
>> >>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency Table Counts:
>> >>> //           Total (TOTAL),
>> >>> //           Number of Categories (N_CAT),
>> >>> //           Contingency Table Count columns repeated
N_CAT*N_CAT times
>> >>> //
>> >>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency Table Scores:
>> >>> //           Total (TOTAL),
>> >>> //           Number of Categories (N_CAT),
>> >>> //           Accuracy (ACC),
>> >>> //           Hanssen and Kuipers Discriminant (HK),
>> >>> //           Heidke Skill Score (HSS),
>> >>> //           Gerrity Score (GER),
>> >>> //           NOTE: All statistics listed above contain
parametric and/or
>> >>> //                 non-parametric confidence interval limits.
>> >>> //
>> >>> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
>> >>> //           Total (TOTAL),
>> >>> //           Forecast Mean (FBAR),
>> >>> //           Forecast Standard Deviation (FSTDEV),
>> >>> //           Observation Mean (OBAR),
>> >>> //           Observation Standard Deviation (OSTDEV),
>> >>> //           Pearson's Correlation Coefficient (PR_CORR),
>> >>> //           Spearman's Rank Correlation Coefficient (SP_CORR),
>> >>> //           Kendall Tau Rank Correlation Coefficient
(KT_CORR),
>> >>> //           Number of ranks compared (RANKS),
>> >>> //           Number of tied ranks in the forecast field
(FRANK_TIES),
>> >>> //           Number of tied ranks in the observation field
(ORANK_TIES),
>> >>> //           Mean Error (ME),
>> >>> //           Standard Deviation of the Error (ESTDEV),
>> >>> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
>> >>> //           Mean Absolute Error (MAE),
>> >>> //           Mean Squared Error (MSE),
>> >>> //           Bias-Corrected Mean Squared Error (BCMSE),
>> >>> //           Root Mean Squared Error (RMSE),
>> >>> //           Percentiles of the Error (E10, E25, E50, E75, E90)
>> >>> //           NOTE: Most statistics listed above contain
parametric and/or
>> >>> //                 non-parametric confidence interval limits.
>> >>> //
>> >>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
>> >>> //           Total (TOTAL),
>> >>> //           Forecast Mean (FBAR),
>> >>> //              = mean(f)
>> >>> //           Observation Mean (OBAR),
>> >>> //              = mean(o)
>> >>> //           Forecast*Observation Product Mean (FOBAR),
>> >>> //              = mean(f*o)
>> >>> //           Forecast Squared Mean (FFBAR),
>> >>> //              = mean(f^2)
>> >>> //           Observation Squared Mean (OOBAR)
>> >>> //              = mean(o^2)
>> >>> //
>> >>> //    (8) STAT and VL1L2 Text Files, Vector Partial Sums:
>> >>> //           Total (TOTAL),
>> >>> //           U-Forecast Mean (UFBAR),
>> >>> //              = mean(uf)
>> >>> //           V-Forecast Mean (VFBAR),
>> >>> //              = mean(vf)
>> >>> //           U-Observation Mean (UOBAR),
>> >>> //              = mean(uo)
>> >>> //           V-Observation Mean (VOBAR),
>> >>> //              = mean(vo)
>> >>> //           U-Product Plus V-Product (UVFOBAR),
>> >>> //              = mean(uf*uo+vf*vo)
>> >>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
>> >>> //              = mean(uf^2+vf^2)
>> >>> //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
>> >>> //              = mean(uo^2+vo^2)
>> >>> //
>> >>> //    (9) STAT and PCT Text Files, Nx2 Probability Contingency
Table Counts:
>> >>> //           Total (TOTAL),
>> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>> >>> //           Probability Threshold Value (THRESH_i),
>> >>> //           Row Observation Yes Count (OY_i),
>> >>> //           Row Observation No Count (ON_i),
>> >>> //           NOTE: Previous 3 columns repeated for each row in
the table
>> >>> //           Last Probability Threshold Value (THRESH_n)
>> >>> //
>> >>> //   (10) STAT and PSTD Text Files, Nx2 Probability Contingency
Table Scores:
>> >>> //           Total (TOTAL),
>> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>> >>> //           Base Rate (BASER) with confidence interval limits,
>> >>> //           Reliability (RELIABILITY),
>> >>> //           Resolution (RESOLUTION),
>> >>> //           Uncertainty (UNCERTAINTY),
>> >>> //           Area Under the ROC Curve (ROC_AUC),
>> >>> //           Brier Score (BRIER) with confidence interval
limits,
>> >>> //           Probability Threshold Value (THRESH_i)
>> >>> //           NOTE: Previous column repeated for each
probability threshold.
>> >>> //
>> >>> //   (11) STAT and PJC Text Files, Joint/Continuous Statistics
of
>> >>> //                                 Probabilistic Variables:
>> >>> //           Total (TOTAL),
>> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>> >>> //           Probability Threshold Value (THRESH_i),
>> >>> //           Observation Yes Count Divided by Total (OY_TP_i),
>> >>> //           Observation No Count Divided by Total (ON_TP_i),
>> >>> //           Calibration (CALIBRATION_i),
>> >>> //           Refinement (REFINEMENT_i),
>> >>> //           Likelihood (LIKELIHOOD_i),
>> >>> //           Base Rate (BASER_i),
>> >>> //           NOTE: Previous 7 columns repeated for each row in
the table
>> >>> //           Last Probability Threshold Value (THRESH_n)
>> >>> //
>> >>> //   (12) STAT and PRC Text Files, ROC Curve Points for
>> >>> //                                 Probabilistic Variables:
>> >>> //           Total (TOTAL),
>> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>> >>> //           Probability Threshold Value (THRESH_i),
>> >>> //           Probability of Detecting Yes (PODY_i),
>> >>> //           Probability of False Detection (POFD_i),
>> >>> //           NOTE: Previous 3 columns repeated for each row in
the table
>> >>> //           Last Probability Threshold Value (THRESH_n)
>> >>> //
>> >>> //   (13) STAT and NBRCTC Text Files, Neighborhood Methods
Contingency Table Counts:
>> >>> //           Total (TOTAL),
>> >>> //           Forecast Yes and Observation Yes Count (FY_OY),
>> >>> //           Forecast Yes and Observation No Count (FY_ON),
>> >>> //           Forecast No and Observation Yes Count (FN_OY),
>> >>> //           Forecast No and Observation No Count (FN_ON),
>> >>> //           Fractional Threshold Value (FRAC_T)
>> >>> //
>> >>> //   (14) STAT and NBRCTS Text Files, Neighborhood Methods
Contingency Table Scores:
>> >>> //           Total (TOTAL),
>> >>> //           Base Rate (BASER),
>> >>> //           Forecast Mean (FMEAN),
>> >>> //           Accuracy (ACC),
>> >>> //           Bias (BIAS),
>> >>> //           Probability of Detecting Yes (PODY),
>> >>> //           Probability of Detecting No (PODN),
>> >>> //           Probability of False Detection (POFD),
>> >>> //           False Alarm Ratio (FAR),
>> >>> //           Critical Success Index (CSI),
>> >>> //           Gilbert Skill Score (GSS),
>> >>> //           Hanssen and Kuipers Discriminant (HK),
>> >>> //           Heidke Skill Score (HSS),
>> >>> //           Odds Ratio (ODDS),
>> >>> //           NOTE: Most statistics listed above contain
parametric and/or
>> >>> //                 non-parametric confidence interval limits.
>> >>> //
>> >>> //   (15) STAT and NBRCNT Text Files, Neighborhood Methods
Continuous Scores:
>> >>> //           Total (TOTAL),
>> >>> //           Fractions Brier Score (FBS),
>> >>> //           Fractions Skill Score (FSS)
>> >>> //
>> >>> //   (16) NetCDF File containing difference fields for each
grib
>> >>> //        code/mask combination.  A non-zero value indicates
that
>> >>> //        this NetCDF file should be produced.  A value of 0
>> >>> //        indicates that it should not be produced.
>> >>> //
>> >>> // Values for flags (1) through (15) are interpreted as
follows:
>> >>> //    (0) Do not generate output of this type
>> >>> //    (1) Write output to a STAT file
>> >>> //    (2) Write output to a STAT file and a text file
>> >>> //
>> >>> output_flag[] = [ 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2,
1 ];
>> >>> //
>> >>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
Correlation
>> >>> // Coefficients should be computed.  Computing them over large
datasets is
>> >>> // computationally intensive and slows down the runtime
execution significantly.
>> >>> //    (0) Do not compute these correlation coefficients
>> >>> //    (1) Compute these correlation coefficients
>> >>> //
>> >>> rank_corr_flag = 0;
>> >>> //
>> >>> // Specify the GRIB Table 2 parameter table version number to
be used
>> >>> // for interpreting GRIB codes.
>> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>> >>> //
>> >>> grib_ptv = 2;
>> >>> //
>> >>> // Directory where temporary files should be written.
>> >>> //
>> >>> tmp_dir = "/tmp";
>> >>> //
>> >>> // Prefix to be used for the output file names.
>> >>> //
>> >>> output_prefix = "APCP_24";
>> >>> //
>> >>> // Indicate a version number for the contents of this
configuration file.
>> >>> // The value should generally not be modified.
>> >>> //
>> >>> version = "V3.0";
>> >>>
>> >>>
>> >>> geeta
>> >>>

------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: John Halley Gotway
Time: Thu Feb 27 13:43:25 2014

# Daily TRMM binary files:
#
ftp://disc2.nascom.nasa.gov/data/TRMM/Gridded/Derived_Products/3B42_V7/Daily/YYYY/3B42_daily.YYYY.MM.DD.7.bin
# 3-Hourly TRMM binary files:
#
ftp://disc2.nascom.nasa.gov/data/TRMM/Gridded/3B42_V7/YYYYMM/3B42.YYMMDD.HHz.7.precipitation.bin

########################################################################
#
# Required libraries.
#
########################################################################

library(ncdf)

########################################################################
#
# Constants and command line arguments
#
########################################################################

hdr_len    = 5           # Each pcp file begins with 5 header lines.
hdr_file   = "trmm.hdr"  # Temporary header file.
missing    = -9999       # Missing pcp value to be used in MET.
save       = FALSE       # If set to TRUE, call save.image()
sec_per_hr = 60*60

# Native domain specification
in_res     = 0.25        # Resolution in degrees
in_lat_ll  = -50.00      # Latitude of lower-left corner
in_lon_ll  =   0.00      # Longitude of lower-left corner
in_lat_ur  =  50.00      # Latitude of upper-right corner
in_lon_ur  = 359.75      # Longitude of upper-right corner

# Output domain specification
out_res    = 0.25        # Resolution in degrees
out_lat_ll =  -25.00     # Latitude of lower-left corner
out_lon_ll = -150.00     # Longitude of lower-left corner
out_lat_ur =   60.00     # Latitude of upper-right corner
out_lon_ur =   10.00     # Longitude of upper-right corner

rescale_lon <- function (x) {
  while(x < -180) x = x + 360;
  while(x >  180) x = x - 360;
  return(x)
}
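# e.g. rescale_lon(359.75) returns -0.25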

########################################################################
#
# Handle the arguments.
#
########################################################################

# Retrieve the arguments
args = commandArgs(TRUE)

# Check the number of arguments
if(length(args) < 2) {
   cat("Usage: trmmbin2nc.R\n")
   cat("        trmm_file\n")
   cat("        nc_file\n")
   cat("        [-save]\n")
   cat("        where \"trmm_file\" is a binary TRMM files.\n")
   cat("              \"nc_file\"   is the out NetCDF file to be
written.\n")
   cat("              \"-save\"     to call save.image().\n\n")
   quit()
}

# Store the input file names
trmm_file = args[1]
nc_file   = args[2]

# Parse optional arguments
for(i in 1:length(args)) {
   if(args[i] == "-save") {
      save = TRUE
   }
}

########################################################################
#
# Parse accumulation interval and time from file name.
#
########################################################################

# Daily files contain the string "daily"
if(grepl("daily", basename(trmm_file), ignore.case=TRUE)) {
  tok   = unlist(strsplit(basename(trmm_file), '.', fixed=TRUE))
  ftime = strptime(paste(tok[2], tok[3], tok[4]), format="%Y %m %d",
tz="GMT")
  init  = as.POSIXct(ftime -  0*sec_per_hr)
  valid = as.POSIXct(ftime + 24*sec_per_hr)

# 3-hourly files contain the string "[0-9][0-9]z"
} else if(grepl("[0-9][0-9]z", basename(trmm_file), ignore.case=TRUE))
{
  tok   = unlist(strsplit(basename(trmm_file), '.', fixed=TRUE))
  ftime = strptime(paste(tok[2], tok[3]), format="%y%m%d %H",
tz="GMT")
  init  = as.POSIXct(ftime - 0*sec_per_hr)
  valid = as.POSIXct(ftime + 3*sec_per_hr)

# Fail otherwise
} else {
  cat("\n\nERROR: Can\'t figure out the accumulation interval!\n\n")
  quit(1)
}

# Compute the accumulation interval
acc_sec = as.double(valid - init, units="secs")
acc_hr  = floor(acc_sec / 3600)
acc_str = sprintf("%.2i", acc_hr)
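# e.g. a daily file yields acc_sec = 86400, acc_hr = 24, acc_str = "24"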

########################################################################
#
# Read the 1/4 degree binary TRMM data.
#
########################################################################

in_lat  = seq(in_lat_ll, in_lat_ur, in_res)
in_lon  = seq(in_lon_ll, in_lon_ur, in_res)
in_nlat = length(in_lat)
in_nlon = length(in_lon)
data    = readBin(trmm_file, "numeric", n=in_nlon*in_nlat, size = 4,
endian="big")
in_pcp  = array(data, dim=c(in_nlon, in_nlat))

# Rescale the input longitudes to the range -180 to 180
in_lon = sapply(in_lon, rescale_lon)

########################################################################
#
# Select subset of data to be written.
#
########################################################################

out_lat  = seq(out_lat_ll, out_lat_ur, out_res)
out_lon  = seq(out_lon_ll, out_lon_ur, out_res)
out_nlat = length(out_lat)
out_nlon = length(out_lon)

# Rescale the output longitudes to the range -180 to 180 and sort
out_lon = sort(sapply(out_lon, rescale_lon))

# Extract the output data
out_pcp = matrix(nrow=out_nlon, ncol=out_nlat)
out_cnt = 0
out_vld = 0
out_sum = 0

for(cur_out_lon in 1:out_nlon) {
  for(cur_out_lat in 1:out_nlat) {

    cur_in_lon = which(out_lon[cur_out_lon] == in_lon)
    cur_in_lat = which(out_lat[cur_out_lat] == in_lat)

    if(length(cur_in_lon) == 1 &&
       length(cur_in_lat) == 1) {
      out_pcp[cur_out_lon, cur_out_lat] = in_pcp[cur_in_lon,
cur_in_lat]
      out_vld = out_vld + 1
      out_sum = out_sum + out_pcp[cur_out_lon, cur_out_lat];
    }

    out_cnt = out_cnt + 1
  }
}

########################################################################
#
# Create the NetCDF output file.
#
########################################################################

# Define dimensions
dim_lat = dim.def.ncdf("lat", "degrees_north", out_lat,
                       create_dimvar=TRUE)
dim_lon = dim.def.ncdf("lon", "degrees_east",  out_lon,
                       create_dimvar=TRUE)

# Define variables
var_pcp = var.def.ncdf(paste("APCP_", acc_str, sep=''), "kg/m^2",
                       list(dim_lon, dim_lat), missing,
                       longname="Total precipitation", prec="single")

# Define file
nc = create.ncdf(nc_file, var_pcp)

# Add variable attributes
att.put.ncdf(nc, var_pcp, "name", "APCP")
att.put.ncdf(nc, var_pcp, "level", paste("A", acc_hr, sep=''))
att.put.ncdf(nc, var_pcp, "grib_code", 61)
att.put.ncdf(nc, var_pcp, "_FillValue", missing, prec="single")
att.put.ncdf(nc, var_pcp, "init_time", format(init, "%Y%m%d_%H%M%S",
tz="GMT"), prec="text")
att.put.ncdf(nc, var_pcp, "init_time_ut", as.numeric(init),
prec="int")
att.put.ncdf(nc, var_pcp, "valid_time", format(valid, "%Y%m%d_%H%M%S",
tz="GMT"), prec="text")
att.put.ncdf(nc, var_pcp, "valid_time_ut", as.numeric(valid),
prec="int")
att.put.ncdf(nc, var_pcp, "accum_time", paste(acc_str, "0000",
sep=''))
att.put.ncdf(nc, var_pcp, "accum_time_sec", acc_sec, prec="int")

# Add global attributes
cur_time = Sys.time()
att.put.ncdf(nc, 0, "FileOrigins", paste("File", nc_file, "generated",
                                         format(cur_time, "%Y%m%d_%H%M%S"),
                                         "on host", Sys.info()[4],
                                         "by the Rscript trmmbin2nc.R"))
att.put.ncdf(nc, 0, "MET_version", "V4.1")
att.put.ncdf(nc, 0, "Projection", "LatLon", prec="text")
att.put.ncdf(nc, 0, "lat_ll", paste(min(out_lat), "degrees_north"),
prec="text")
att.put.ncdf(nc, 0, "lon_ll", paste(min(out_lon), "degrees_east"),
prec="text")
att.put.ncdf(nc, 0, "delta_lat", paste(out_res, "degrees"),
prec="text")
att.put.ncdf(nc, 0, "delta_lon", paste(out_res, "degrees"),
prec="text")
att.put.ncdf(nc, 0, "Nlat", paste(out_nlat, "grid_points"),
prec="text")
att.put.ncdf(nc, 0, "Nlon", paste(out_nlon, "grid_points"),
prec="text")

# Add pcp to the file
put.var.ncdf(nc, var_pcp, out_pcp)

# Close the file
close.ncdf(nc)

# Print summary info

cat(paste("Output File:\t", nc_file, "\n", sep=""))
cat(paste("Output Domain:\t", out_nlat, " X ", out_nlon, " from (",
    out_lat_ll, ", ", out_lon_ll, ") to (",
    out_lat_ur, ", ", out_lon_ur, ") by ", out_res, " deg\n", sep=""))
cat(paste("Output Precip:\t", out_vld, " of ", out_cnt, " points
valid, ",
    round(out_sum, 2), " mm total, ",
    round(out_sum/out_vld, 2), " mm avg\n", sep=""))

########################################################################
#
# Finished.
#
########################################################################

# Optionally, save all of the pcp to an .RData file
if(save == TRUE) save.image()
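For reference, a run of this script might look like the following, with
a daily input file named per the pattern in the header comments (both
file names here are placeholders):

Rscript trmmbin2nc.R 3B42_daily.2011.06.02.7.bin 02june2011.nc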

------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Fri Feb 28 00:02:31 2014

So the solution is either to use version 3.0 only for converting the
TRMM data into the NetCDF file, or to use version 4.1. Hopefully that
will sort out the issues I am getting at present.

geeta

> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> From: met_help at ucar.edu
> To: geeta124 at hotmail.com
> Date: Wed, 26 Feb 2014 11:13:55 -0700
>
> Geeta,
>
> The problem is in the observation files:
>
> *************************************
> O/P of OBSERVATION FILE (NETCDF format) *
> *************************************
> :MET_version = "V3.0.1" ;
>
> If you change the "V3.0.1" to "V3.0", then METv3.0 grid_stat will be
able to process it fine.
>
> Also, you should switch the timing variable attributes from floats
to integers:
> Change from:
>   APCP_03:init_time_ut = 1306972800. ;
>   APCP_03:valid_time_ut = 1306983600. ;
>   APCP_03:accum_time_sec = 10800.f ;
> Change to:
>   APCP_03:init_time_ut = 1306972800 ;
>   APCP_03:valid_time_ut = 1306983600 ;
>   APCP_03:accum_time_sec = 10800 ;
>
> When you switch to METv4.1, it'll complain if those aren't integers.
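As a minimal sketch (not an official MET utility) of one way to make
those attribute edits from R, assuming the same "ncdf" package used by
the trmmbin2nc.R script above and that the file can be reopened with
write=TRUE:

library(ncdf)

# Open the observation file for in-place modification
nc  = open.ncdf("../trmm_nc_data/02june2011.nc", write=TRUE)
pcp = nc$var[["APCP_03"]]

# Set the MET_version global attribute expected by the METv3.0 tools
att.put.ncdf(nc, 0, "MET_version", "V3.0")

# Rewrite the timing attributes as integers rather than floats
att.put.ncdf(nc, pcp, "init_time_ut",   1306972800, prec="int")
att.put.ncdf(nc, pcp, "valid_time_ut",  1306983600, prec="int")
att.put.ncdf(nc, pcp, "accum_time_sec", 10800,      prec="int")

close.ncdf(nc)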
>
> Hope that helps.
>
> Thanks,
> John
>
>
> On 02/25/2014 11:26 PM, Geeta Geeta via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >
> > thanks John,
> > I have NOT done anything with the NC files as yet. I was asking
you how to go about changing the attribute.
> >
> > I am posting the o/p of one of the Forecast files in netcdf format
using ncdump. It shows MET_version = "V3.0", which is what you want.
> > bash-3.2$ ncdump -h test.nc
> > netcdf test {
> > dimensions:
> >          lat = 53 ;
> >          lon = 53 ;
> > variables:
> >          float lat(lat, lon) ;
> >                  lat:long_name = "latitude" ;
> >                  lat:units = "degrees_north" ;
> >                  lat:standard_name = "latitude" ;
> >          float lon(lat, lon) ;
> >                  lon:long_name = "longitude" ;
> >                  lon:units = "degrees_east" ;
> >                  lon:standard_name = "longitude" ;
> >          float APCP_24(lat, lon) ;
> >                  APCP_24:name = "APCP" ;
> >                  APCP_24:long_name = "Total precipitation" ;
> >                  APCP_24:level = "A24" ;
> >                  APCP_24:units = "kg/m^2" ;
> >                  APCP_24:grib_code = 61 ;
> >                  APCP_24:_FillValue = -9999.f ;
> >                  APCP_24:init_time = "20110601_000000" ;
> >                  APCP_24:init_time_ut = 1306886400 ;
> >                  APCP_24:valid_time = "20110602_030000" ;
> >                  APCP_24:valid_time_ut = 1306983600 ;
> >                  APCP_24:accum_time = "240000" ;
> >                  APCP_24:accum_time_sec = 86400 ;
> >
> > // global attributes:
> >                  :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
> >                  :MET_version = "V3.0" ;
> >                  :MET_tool = "pcp_combine" ;
> >                  :RunCommand = "Subtraction:
2011060100_WRFPRS_d01.027 with accumulation of 270000 minus
2011060100_WRFPRS_d01.003 with accumulation of 030000." ;
> >                  :Projection = "LatLon" ;
> >                  :lat_ll = "9.000000 degrees_north" ;
> >                  :lon_ll = "74.000000 degrees_east" ;
> >                  :delta_lat = "0.250000 degrees" ;
> >                  :delta_lon = "0.250000 degrees" ;
> >                  :Nlat = "53 grid_points" ;
> >                  :Nlon = "53 grid_points" ;
> > }
> > bash-3.2$
> > *************************************
> > O/P of OBSERVATION FILE (NETCDF format) *
> > *************************************
> > bash-3.2$ ncdump -h ../trmm_nc_data/test.nc
> > netcdf test {
> > dimensions:
> >          lon = 53 ;
> >          lat = 53 ;
> > variables:
> >          double lon(lon) ;
> >                  lon:units = "degrees_east" ;
> >          double lat(lat) ;
> >                  lat:units = "degrees_north" ;
> >          float APCP_03(lat, lon) ;
> >                  APCP_03:units = "kg/m^2" ;
> >                  APCP_03:missing_value = -9999.f ;
> >                  APCP_03:long_name = "Total precipitation" ;
> >                  APCP_03:name = "APCP" ;
> >                  APCP_03:level = "A3" ;
> >                  APCP_03:grib_code = 61.f ;
> >                  APCP_03:_FillValue = -9999.f ;
> >                  APCP_03:init_time = "20110602_000000" ;
> >                  APCP_03:init_time_ut = 1306972800. ;
> >                  APCP_03:valid_time = "20110602_030000" ;
> >                  APCP_03:valid_time_ut = 1306983600. ;
> >                  APCP_03:accum_time = "030000" ;
> >                  APCP_03:accum_time_sec = 10800.f ;
> >
> > // global attributes:
> >                  :FileOrigins = "File
../../../vpt/geeta/02june2011.nc generated 20140123_163031 on host
ncmr0102 by the Rscript trmm2nc.R" ;
> >                  :MET_version = "V3.0.1" ;
> >                  :Projection = "LatLon" ;
> >                  :lat_ll = "9 degrees_north" ;
> >                  :lon_ll = "74 degrees_east" ;
> >                  :delta_lat = "0.25 degrees" ;
> >                  :delta_lon = "0.25 degrees" ;
> >                  :Nlat = "53 grid_points" ;
> >                  :Nlon = "53 grid_points" ;
> > }
> >
> >
> > Anyway I am sending you my data once again. The directory is
geeta_data-25feb2014.
> > Do you suspect the location of NetCDF?
> >
> > I shall be looking forward to hearing from you.
> >
> > thanks
> > geeta
> >
> >> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >> From: met_help at ucar.edu
> >> To: geeta124 at hotmail.com
> >> Date: Tue, 25 Feb 2014 11:26:10 -0700
> >>
> >> Geeta,
> >>
> >> Sorry, I was out of the office yesterday.  Looking back through
this ticket, I see that you're still getting the following error:
> >>      NetCDF: Attribute not found
> >>
> >> You said that you updated the NetCDF files to include the
following global attribute:
> >>      MET_version = "V3.0" ;
> >>
> >> If you've added this to both the forecast and observation NetCDF
files, and you're still getting this error, I'll need to see your data
files to debug more.  Please post them to our anonymous ftp site:
> >>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>
> >> I see 2 other questions in your emails:
> >>
> >> (1) How can you control the optional "upscaling" or "smoothing"
done by grid_stat?
> >>       In METv3.0, that is controlled by configuration file
options that begin with "interp_".  For example, try the following:
> >>          interp_method[] = [ "UW_MEAN" ];
> >>          interp_width[]  = [ 1, 3, 5 ];
> >>          interp_flag     = 3;
> >>
> >>       For each output line you were getting before, you should
now get 2 more.  Since interp_flag is set to 3, grid_stat will smooth
both the forecast and observation fields.  For interp_width = 3,
> >> it'll smooth each data point by computing the average of the 9
points in a 3x3 box around each grid point.  For interp_width = 5,
it'll smooth each data point by computing the average of the 25 points
> >> in a 5x5 box around each grid point.  You can look to see how the
scores change as you do more and more smoothing.
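As a minimal R sketch (not MET code) of what this unweighted-mean
smoothing does for an odd interp_width; note that MET's interp_thresh
handling of incomplete neighborhoods is not reproduced here, and the
box is simply shrunk at the grid edges:

# Replace each grid point by the unweighted mean of the
# width x width box centered on it (illustration only).
uw_mean_smooth = function(x, width=3) {
  h   = (width - 1) / 2
  out = x
  for(i in 1:nrow(x)) {
    for(j in 1:ncol(x)) {
      rows = max(1, i-h):min(nrow(x), i+h)
      cols = max(1, j-h):min(ncol(x), j+h)
      out[i,j] = mean(x[rows,cols], na.rm=TRUE)
    }
  }
  return(out)
}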
> >>
> >> However computing the fractions skill score (in the NBRCNT line
type) is a common way of doing "neighborhood" or "fuzzy" verification.
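Continuing the sketch above (illustration only, assuming fcst and obs
are matrices on the same grid): the "fractional coverage" fields behind
FBS and FSS come from thresholding each field to 0/1 and then
box-averaging the binary field, e.g. for a ge0.1 threshold and a 3x3
neighborhood:

# Fractional coverage of events in a 3x3 neighborhood
fcst_cov = uw_mean_smooth(1*(fcst >= 0.1), width=3)
obs_cov  = uw_mean_smooth(1*(obs  >= 0.1), width=3)

# Fractions Brier Score and Fractions Skill Score
fbs = mean((fcst_cov - obs_cov)^2)
ref = mean(fcst_cov^2) + mean(obs_cov^2)   # worst-case FBS
fss = 1 - fbs/ref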
> >>
> >> (2) You also asked about plotting the station location from
point_stat.  You have a couple of options.  The "plot_point_obs"
utility reads the NetCDF output files from the pb2nc or ascii2nc tools
and
> >> plots a red dot for each observation lat/lon it finds in the
data.  It is intended to just give you a quick look at the location of
the observations to make sure that they exist where you expect.  It
> >> is not a general-purpose or very flexible plotting tool.
Alternatively, you could look at the "MPR" output line type from
point_stat.  This contains the individual matched pair values that
went into
> >> the computation of statistics.  The MPR line type includes
columns named "OBS_LAT" and "OBS_LON" giving the point observation
location information.  You could read the lat/lon information from the
MPR
> >> line type and use whatever plotting tool you prefer to plot the
observation locations.
> >>
> >> If you do post more data to the ftp site, please write me back
and I'll go grab it.
> >>
> >> Thanks,
> >> John
> >>
> >>
> >> On 02/24/2014 05:59 PM, Geeta Geeta via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>
> >>> hi john,
> >>> you had discussed the upscaling (of both obs and fcst, or
either one of them). The forecast is compared with the observations,
which are averaged to coarser scales.
> >>> How is this averaging defined in the configuration file?
> >>>
> >>> Please let me know regarding the global attributes.
> >>>
> >>> geeta
> >>>
> >>> From: geeta124 at hotmail.com
> >>> To: met_help at ucar.edu
> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>> Date: Sun, 23 Feb 2014 22:24:04 +0530
> >>>
> >>>
> >>>
> >>>
> >>> hi John,
> >>> Can you help with changing/appending the GLOBAL attributes of
the NetCDF file?
> >>>
> >>> Can you provide some more hints.
> >>>
> >>> regards
> >>>
> >>> geeta
> >>>
> >>> From: geeta124 at hotmail.com
> >>> To: met_help at ucar.edu
> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>> Date: Fri, 21 Feb 2014 15:03:34 +0530
> >>>
> >>>
> >>>
> >>>
> >>> thanks John,
> >>> I have made the changes as per your config file.
> >>> But the error persists.
> >>> -bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> >>> GSL_RNG_TYPE=mt19937
> >>> GSL_RNG_SEED=18446744073358673747
> >>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>> Observation File: ../trmm_nc_data/02june2011.nc
> >>> Configuration File: GridStatConfig_APCP_24
> >>> NetCDF: Attribute not found
> >>> -bash-3.2$
> >>>
> >>>
> >>> 2. I have used ncdump to see my file attributes.
> >>> Are you referring to these attributes?
> >>>
> >>> // global attributes:
> >>>                   :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
> >>>                   :MET_version = "V3.0" ;
> >>>                   :MET_tool = "pcp_combine" ;
> >>>
> >>> Following is my Config file.
> >>>
____________________________________________________________________
> >>>
////////////////////////////////////////////////////////////////////////////////
> >>> //
> >>> // Default grid_stat configuration file
> >>> //
> >>>
////////////////////////////////////////////////////////////////////////////////
> >>> //
> >>> // Specify a name to designate the model being verified.  This
name will be
> >>> // written to the second column of the ASCII output generated.
> >>> //
> >>> model = "WRF";
> >>> //
> >>> // Specify a comma-separated list of fields to be verified.  The
forecast and
> >>> // observation fields may be specified separately.  If the
obs_field parameter
> >>> // is left blank, it will default to the contents of fcst_field.
> >>> //
> >>> // Each field is specified as a GRIB code or abbreviation
followed by an
> >>> // accumulation or vertical level indicator for GRIB files or as
a variable name
> >>> // followed by a list of dimensions for NetCDF files output from
p_interp or MET.
> >>> //
> >>> // Specifying verification fields for GRIB files:
> >>> //    GC/ANNN for accumulation interval NNN
> >>> //    GC/ZNNN for vertical level NNN
> >>> //    GC/PNNN for pressure level NNN in hPa
> >>> //    GC/PNNN-NNN for a range of pressure levels in hPa
> >>> //    GC/LNNN for a generic level type
> >>> //    GC/RNNN for a specific GRIB record number
> >>> //    Where GC is the number of or abbreviation for the grib
code
> >>> //    to be verified.
> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >>> //
> >>> // Specifying verification fields for NetCDF files:
> >>> //    var_name(i,...,j,*,*) for a single field
> >>> //    Where var_name is the name of the NetCDF variable,
> >>> //    and i,...,j specifies fixed dimension values,
> >>> //    and *,* specifies the two dimensions for the gridded
field.
> >>> //
> >>> //    NOTE: To verify winds as vectors rather than scalars,
> >>> //          specify UGRD (or 33) followed by VGRD (or 34) with
the
> >>> //          same level values.
> >>> //
> >>> //    NOTE: To process a probability field, add "/PROB", such as
"POP/Z0/PROB".
> >>> //
> >>> // e.g. fcst_field[] = [ "61/A3", "APCP/A24", "RH/L10" ]; for
GRIB input
> >>> // e.g. fcst_field[] = [ "RAINC(0,*,*)", "QVAPOR(0,5,*,*)" ];
for NetCDF input
> >>> //
> >>> fcst_field[] = [ "APCP_24(*,*)" ];
> >>> obs_field[]  = [ "APCP_03(*,*)" ];
> >>> //
> >>> // Specify a comma-separated list of groups of thresholds to be
applied to the
> >>> // fields listed above.  Thresholds for the forecast and
observation fields
> >>> // may be specified separately.  If the obs_thresh parameter is
left blank,
> >>> // it will default to the content of fcst_thresh.
> >>> //
> >>> // At least one threshold must be provided for each field listed
above.  The
> >>> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as must
> >>> // lengths of the "obs_field" and "obs_thresh" arrays.  To apply
multiple
> >>> // thresholds to a field, separate the threshold values with a
space.
> >>> //
> >>> // Each threshold must be preceded by a two letter indicator for
the type of
> >>> // thresholding to be performed:
> >>> //    'lt' for less than     'le' for less than or equal to
> >>> //    'eq' for equal to      'ne' for not equal to
> >>> //    'gt' for greater than  'ge' for greater than or equal to
> >>> //
> >>> // NOTE: Thresholds for probabilities must begin with 0.0, end
with 1.0,
> >>> //       and be preceded by "ge".
> >>> //
> >>> // e.g. fcst_thresh[] = [ "gt0.0 ge5.0", "gt0.0", "lt80.0
ge80.0" ];
> >>> //
> >>> fcst_thresh[] = [ "gt0.0 gt5.0 gt10.0" ];
> >>> obs_thresh[]  = [];
> >>> //
> >>> // Specify a comma-separated list of thresholds to be used when
computing
> >>> // VL1L2 partial sums for winds.  The thresholds are applied to
the wind speed
> >>> // values derived from each U/V pair.  Only those U/V pairs
which meet the wind
> >>> // speed threshold criteria are retained.  If the
obs_wind_thresh parameter is
> >>> // left blank, it will default to the contents of
fcst_wind_thresh.
> >>> //
> >>> // To apply multiple wind speed thresholds, separate the
threshold values with a
> >>> // space.  Use "NA" to indicate that no wind speed threshold
should be applied.
> >>> //
> >>> // Each threshold must be preceded by a two letter indicator for
the type of
> >>> // thresholding to be performed:
> >>> //    'lt' for less than     'le' for less than or equal to
> >>> //    'eq' for equal to      'ne' for not equal to
> >>> //    'gt' for greater than  'ge' for greater than or equal to
> >>> //    'NA' for no threshold
> >>> //
> >>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
> >>> //
> >>> fcst_wind_thresh[] = [ "NA" ];
> >>> obs_wind_thresh[]  = [];
> >>> //
> >>> // Specify a comma-separated list of grids to be used in masking
the data over
> >>> // which to perform scoring.  An empty list indicates that no
masking grid
> >>> // should be performed.  The standard NCEP grids are named
"GNNN" where NNN
> >>> // indicates the three digit grid number.  Enter "FULL" to score
over the
> >>> // entire domain.
> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
> >>> //
> >>> // e.g. mask_grid[] = [ "FULL" ];
> >>> //
> >>> mask_grid[] = [ "FULL" ];
> >>> //
> >>> // Specify a comma-separated list of masking regions to be
applied.
> >>> // An empty list indicates that no additional masks should be
used.
> >>> // The masking regions may be defined in one of 4 ways:
> >>> //
> >>> // (1) An ASCII file containing a lat/lon polygon.
> >>> //     Latitude in degrees north and longitude in degrees east.
> >>> //     By default, the first and last polygon points are
connected.
> >>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
> >>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
> >>> //
> >>> // (2) The NetCDF output of the gen_poly_mask tool.
> >>> //
> >>> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
> >>> //     to be used, and optionally, a threshold to be applied to
the field.
> >>> //     e.g. "sample.nc var_name gt0.00"
> >>> //
> >>> // (4) A GRIB data file, followed by a description of the field
> >>> //     to be used, and optionally, a threshold to be applied to
the field.
> >>> //     e.g. "sample.grb APCP/A3 gt0.00"
> >>> //
> >>> // Any NetCDF or GRIB file used must have the same grid
dimensions as the
> >>> // data being verified.
> >>> //
> >>> // MET_BASE may be used in the path for the files above.
> >>> //
> >>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
> >>> //                      "poly_mask.ncf",
> >>> //                      "sample.nc APCP",
> >>> //                      "sample.grb HGT/Z0 gt100.0" ];
> >>> //
> >>> mask_poly[] = [];
> >>> //
> >>> // Specify a comma-separated list of values for alpha to be used
when computing
> >>> // confidence intervals.  Values of alpha must be between 0 and
1.
> >>> //
> >>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
> >>> //
> >>> ci_alpha[] = [ 0.10, 0.05 ];
> >>> //
> >>> // Specify the method to be used for computing bootstrap
confidence intervals.
> >>> // The value for this is interpreted as follows:
> >>> //    (0) Use the BCa interval method (computationally
intensive)
> >>> //    (1) Use the percentile interval method
> >>> //
> >>> boot_interval = 1;
> >>> //
> >>> // Specify a proportion between 0 and 1 to define the replicate
sample size
> >>> // to be used when computing percentile intervals.  The
replicate sample
> >>> // size is set to boot_rep_prop * n, where n is the number of
raw data points.
> >>> //
> >>> // e.g boot_rep_prop = 0.80;
> >>> //
> >>> boot_rep_prop = 1.0;
> >>> //
> >>> // Specify the number of times each set of matched pair data
should be
> >>> // resampled when computing bootstrap confidence intervals.  A
value of
> >>> // zero disables the computation of bootstrap confidence
intervals.
> >>> //
> >>> // e.g. n_boot_rep = 1000;
> >>> //
> >>> n_boot_rep = 0;
> >>> //
> >>> // Specify the name of the random number generator to be used.
See the MET
> >>> // Users Guide for a list of possible random number generators.
> >>> //
> >>> boot_rng = "mt19937";
> >>> //
> >>> // Specify the seed value to be used when computing bootstrap
confidence
> >>> // intervals.  If left unspecified, the seed will change for
each run and
> >>> // the computed bootstrap confidence intervals will not be
reproducible.
> >>> //
> >>> boot_seed = "";
> >>> //
> >>> // Specify a comma-separated list of interpolation method(s) to
be used for
> >>> // smoothing the data fields prior to comparing them.  The value
at each grid
> >>> // point is replaced by the measure computed over the
neighborhood defined
> >>> // around the grid point.  String values are interpreted as
follows:
> >>> //    MIN     = Minimum in the neighborhood
> >>> //    MAX     = Maximum in the neighborhood
> >>> //    MEDIAN  = Median in the neighborhood
> >>> //    UW_MEAN = Unweighted mean in the neighborhood
> >>> //
> >>> //    NOTE: The distance-weighted mean (DW_MEAN) is not an
option here since
> >>> //          it will have no effect on a gridded field.
> >>> //
> >>> //    NOTE: The least-squares fit (LS_FIT) is not an option here
since
> >>> //          it reduces to an unweighted mean on a grid.
> >>> //
> >>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
> >>> //
> >>> interp_method[] = [ "UW_MEAN" ];
> >>> //
> >>> // Specify a comma-separated list of box widths to be used by
the interpolation
> >>> // techniques listed above.  All values must be odd.  A value of
1 indicates
> >>> // that no smoothing should be performed.  For values greater
than 1, the n*n
> >>> // grid points around each point will be used to smooth the data
fields.
> >>> //
> >>> // e.g. interp_width = [ 1, 3, 5 ];
> >>> //
> >>> interp_width[] = [ 1 ];
> >>> //
> >>> // The interp_flag controls how the smoothing defined above
should be applied:
> >>> // (1) Smooth only the forecast field
> >>> // (2) Smooth only the observation field
> >>> // (3) Smooth both the forecast and observation fields
> >>> //
> >>> interp_flag = 1;
> >>> //
> >>> // When smoothing, compute a ratio of the number of valid data
points to
> >>> // the total number of points in the neighborhood.  If that
ratio is less
> >>> // than this threshold, do not compute a smoothed forecast
value.  This
> >>> // threshold must be between 0 and 1.  Setting this threshold to
1 will
> >>> // require that each observation be surrounded by n*n valid
forecast
> >>> // points.
> >>> //
> >>> // e.g. interp_thresh = 1.0;
> >>> //
> >>> interp_thresh = 1.0;
> >>> //
> >>> // Specify a comma-separated list of box widths to be used to
define the
> >>> // neighborhood size for the neighborhood verification methods.
All values
> >>> // must be odd.  For values greater than 1, the n*n grid points
around each
> >>> // point will be used to define the neighborhood.
> >>> //
> >>> // e.g. nbr_width = [ 3, 5 ];
> >>> //
> >>> nbr_width[] = [ 3, 5 ];
> >>> //
> >>> // When applying the neighborhood verification methods, compute
a ratio
> >>> // of the number of valid data points to the total number of
points in
> >>> // the neighborhood.  If that ratio is less than this threshold,
do not
> >>> // include it in the computations.  This threshold must be
between 0
> >>> // and 1.  Setting this threshold to 1 will require that each
point be
> >>> // surrounded by n*n valid forecast points.
> >>> //
> >>> // e.g. nbr_thresh = 1.0;
> >>> //
> >>> nbr_thresh = 1.0;
> >>> //
> >>> // When applying the neighborhood verification methods, apply a
threshold
> >>> // to the fractional coverage values to define contingency
tables from
> >>> // which to compute statistics.
> >>> //
> >>> // e.g. cov_thresh[] = [ "ge0.25", "ge0.50" ];
> >>> //
> >>> cov_thresh[] = [ "ge0.5" ];
> >>> //
> >>> // Specify flags to indicate the type of data to be output:
> >>> //
> >>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
> >>> //           Total (TOTAL),
> >>> //           Forecast Rate (F_RATE),
> >>> //           Hit Rate (H_RATE),
> >>> //           Observation Rate (O_RATE)
> >>> //
> >>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Forecast Yes and Observation Yes Count (FY_OY),
> >>> //           Forecast Yes and Observation No Count (FY_ON),
> >>> //           Forecast No and Observation Yes Count (FN_OY),
> >>> //           Forecast No and Observation No Count (FN_ON)
> >>> //
> >>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Base Rate (BASER),
> >>> //           Forecast Mean (FMEAN),
> >>> //           Accuracy (ACC),
> >>> //           Frequency Bias (FBIAS),
> >>> //           Probability of Detecting Yes (PODY),
> >>> //           Probability of Detecting No (PODN),
> >>> //           Probability of False Detection (POFD),
> >>> //           False Alarm Ratio (FAR),
> >>> //           Critical Success Index (CSI),
> >>> //           Gilbert Skill Score (GSS),
> >>> //           Hanssen and Kuipers Discriminant (HK),
> >>> //           Heidke Skill Score (HSS),
> >>> //           Odds Ratio (ODDS),
> >>> //           NOTE: All statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Number of Categories (N_CAT),
> >>> //           Contingency Table Count columns repeated
N_CAT*N_CAT times
> >>> //
> >>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Number of Categories (N_CAT),
> >>> //           Accuracy (ACC),
> >>> //           Hanssen and Kuipers Discriminant (HK),
> >>> //           Heidke Skill Score (HSS),
> >>> //           Gerrity Score (GER),
> >>> //           NOTE: All statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
> >>> //           Total (TOTAL),
> >>> //           Forecast Mean (FBAR),
> >>> //           Forecast Standard Deviation (FSTDEV),
> >>> //           Observation Mean (OBAR),
> >>> //           Observation Standard Deviation (OSTDEV),
> >>> //           Pearson's Correlation Coefficient (PR_CORR),
> >>> //           Spearman's Rank Correlation Coefficient (SP_CORR),
> >>> //           Kendall Tau Rank Correlation Coefficient (KT_CORR),
> >>> //           Number of ranks compared (RANKS),
> >>> //           Number of tied ranks in the forecast field
(FRANK_TIES),
> >>> //           Number of tied ranks in the observation field
(ORANK_TIES),
> >>> //           Mean Error (ME),
> >>> //           Standard Deviation of the Error (ESTDEV),
> >>> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
> >>> //           Mean Absolute Error (MAE),
> >>> //           Mean Squared Error (MSE),
> >>> //           Bias-Corrected Mean Squared Error (BCMSE),
> >>> //           Root Mean Squared Error (RMSE),
> >>> //           Percentiles of the Error (E10, E25, E50, E75, E90)
> >>> //           NOTE: Most statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
> >>> //           Total (TOTAL),
> >>> //           Forecast Mean (FBAR),
> >>> //              = mean(f)
> >>> //           Observation Mean (OBAR),
> >>> //              = mean(o)
> >>> //           Forecast*Observation Product Mean (FOBAR),
> >>> //              = mean(f*o)
> >>> //           Forecast Squared Mean (FFBAR),
> >>> //              = mean(f^2)
> >>> //           Observation Squared Mean (OOBAR)
> >>> //              = mean(o^2)
> >>> //
> >>> //    (8) STAT and VL1L2 Text Files, Vector Partial Sums:
> >>> //           Total (TOTAL),
> >>> //           U-Forecast Mean (UFBAR),
> >>> //              = mean(uf)
> >>> //           V-Forecast Mean (VFBAR),
> >>> //              = mean(vf)
> >>> //           U-Observation Mean (UOBAR),
> >>> //              = mean(uo)
> >>> //           V-Observation Mean (VOBAR),
> >>> //              = mean(vo)
> >>> //           U-Product Plus V-Product (UVFOBAR),
> >>> //              = mean(uf*uo+vf*vo)
> >>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
> >>> //              = mean(uf^2+vf^2)
> >>> //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
> >>> //              = mean(uo^2+vo^2)
> >>> //
> >>> //    (9) STAT and PCT Text Files, Nx2 Probability Contingency
Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Probability Threshold Value (THRESH_i),
> >>> //           Row Observation Yes Count (OY_i),
> >>> //           Row Observation No Count (ON_i),
> >>> //           NOTE: Previous 3 columns repeated for each row in
the table
> >>> //           Last Probability Threshold Value (THRESH_n)
> >>> //
> >>> //   (10) STAT and PSTD Text Files, Nx2 Probability Contingency
Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Base Rate (BASER) with confidence interval limits,
> >>> //           Reliability (RELIABILITY),
> >>> //           Resolution (RESOLUTION),
> >>> //           Uncertainty (UNCERTAINTY),
> >>> //           Area Under the ROC Curve (ROC_AUC),
> >>> //           Brier Score (BRIER) with confidence interval
limits,
> >>> //           Probability Threshold Value (THRESH_i)
> >>> //           NOTE: Previous column repeated for each probability
threshold.
> >>> //
> >>> //   (11) STAT and PJC Text Files, Joint/Continuous Statistics
of
> >>> //                                 Probabilistic Variables:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Probability Threshold Value (THRESH_i),
> >>> //           Observation Yes Count Divided by Total (OY_TP_i),
> >>> //           Observation No Count Divided by Total (ON_TP_i),
> >>> //           Calibration (CALIBRATION_i),
> >>> //           Refinement (REFINEMENT_i),
> >>> //           Likelihood (LIKELIHOOD_i),
> >>> //           Base Rate (BASER_i),
> >>> //           NOTE: Previous 7 columns repeated for each row in
the table
> >>> //           Last Probability Threshold Value (THRESH_n)
> >>> //
> >>> //   (12) STAT and PRC Text Files, ROC Curve Points for
> >>> //                                 Probabilistic Variables:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Probability Threshold Value (THRESH_i),
> >>> //           Probability of Detecting Yes (PODY_i),
> >>> //           Probability of False Detection (POFD_i),
> >>> //           NOTE: Previous 3 columns repeated for each row in
the table
> >>> //           Last Probability Threshold Value (THRESH_n)
> >>> //
> >>> //   (13) STAT and NBRCTC Text Files, Neighborhood Methods
Contingency Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Forecast Yes and Observation Yes Count (FY_OY),
> >>> //           Forecast Yes and Observation No Count (FY_ON),
> >>> //           Forecast No and Observation Yes Count (FN_OY),
> >>> //           Forecast No and Observation No Count (FN_ON),
> >>> //           Fractional Threshold Value (FRAC_T)
> >>> //
> >>> //   (14) STAT and NBRCTS Text Files, Neighborhood Methods
Contingency Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Base Rate (BASER),
> >>> //           Forecast Mean (FMEAN),
> >>> //           Accuracy (ACC),
> >>> //           Bias (BIAS),
> >>> //           Probability of Detecting Yes (PODY),
> >>> //           Probability of Detecting No (PODN),
> >>> //           Probability of False Detection (POFD),
> >>> //           False Alarm Ratio (FAR),
> >>> //           Critical Success Index (CSI),
> >>> //           Gilbert Skill Score (GSS),
> >>> //           Hanssen and Kuipers Discriminant (HK),
> >>> //           Heidke Skill Score (HSS),
> >>> //           Odds Ratio (ODDS),
> >>> //           NOTE: Most statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //   (15) STAT and NBRCNT Text Files, Neighborhood Methods
Continuous Scores:
> >>> //           Total (TOTAL),
> >>> //           Fractions Brier Score (FBS),
> >>> //           Fractions Skill Score (FSS)
> >>> //
> >>> //   (16) NetCDF File containing difference fields for each grib
> >>> //        code/mask combination.  A non-zero value indicates
that
> >>> //        this NetCDF file should be produced.  A value of 0
> >>> //        indicates that it should not be produced.
> >>> //
> >>> // Values for flags (1) through (15) are interpreted as follows:
> >>> //    (0) Do not generate output of this type
> >>> //    (1) Write output to a STAT file
> >>> //    (2) Write output to a STAT file and a text file
> >>> //
> >>> output_flag[] = [ 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 1
];
> >>> //
> >>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
Correlation
> >>> // Coefficients should be computed.  Computing them over large
datasets is
> >>> // computationally intensive and slows down the runtime
execution significantly.
> >>> //    (0) Do not compute these correlation coefficients
> >>> //    (1) Compute these correlation coefficients
> >>> //
> >>> rank_corr_flag = 0;
> >>> //
> >>> // Specify the GRIB Table 2 parameter table version number to be
used
> >>> // for interpreting GRIB codes.
> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >>> //
> >>> grib_ptv = 2;
> >>> //
> >>> // Directory where temporary files should be written.
> >>> //
> >>> tmp_dir = "/tmp";
> >>> //
> >>> // Prefix to be used for the output file names.
> >>> //
> >>> output_prefix = "APCP_24";
> >>> //
> >>> // Indicate a version number for the contents of this
configuration file.
> >>> // The value should generally not be modified.
> >>> //
> >>> version = "V3.0";
> >>>
> >>>
> >>> geeta
> >>>
> >>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>>> From: met_help at ucar.edu
> >>>> To: geeta124 at hotmail.com
> >>>> Date: Thu, 20 Feb 2014 10:21:20 -0700
> >>>>
> >>>> Geeta,
> >>>>
> >>>> I see that you're using METv3.0.  The current version is
METv4.1, and it'd be good to switch to that version when possible.
There have been major changes to the MET configuration file format
since
> >>>> METv3.0, so be sure to use the default config files for
METv4.1.
> >>>>
> >>>> I ran METv3.0 grid_stat on the data files you sent and
reproduced the error message you saw:
> >>>>       ***WARNING***: process_scores() -> 61(*,*) not found in
file: 2011060100_WRFPRS_day1_003Z.nc
> >>>>
> >>>> Since the input files are both NetCDF files, you need to
specify the name of the NetCDF variable that should be used.  So I
modified your config file:
> >>>>       FROM: fcst_field[] = [ "61/A24" ];
> >>>>       TO:   fcst_field[] = [ "APCP_24(*,*)" ];
> >>>>
> >>>> When I reran with this change, I got this error:
> >>>>       NetCDF: Attribute not found
> >>>>
> >>>> After some digging, I found the problem to be the MET_version
global attribute in 02june2011.nc:
> >>>>                    :MET_version = "V3.0.1" ;
> >>>>
> >>>> I switched that to be consistent with the version of MET you're
running:
> >>>>                    :MET_version = "V3.0" ;
> >>>>
> >>>> And then I got this error:
> >>>> ERROR: parse_poly_mask() -> the dimensions of the masking
region (185, 129) must match the dimensions of the data (53, 53).
> >>>>
> >>>> So I modified the config file to change the masking region
settings:
> >>>>       FROM: mask_grid[] = [ "DTC165", "DTC166" ];
> >>>>             mask_poly[] = [
"MET_BASE/out/gen_poly_mask/CONUS_poly.nc",
> >>>>                             "MET_BASE/data/poly/LMV.poly" ];
> >>>>
> >>>>       TO:   mask_grid[] = [ "FULL" ];
> >>>>             mask_poly[] = [];
> >>>>
> >>>> And then it ran fine.
> >>>>
> >>>> To summarize...
> >>>>     (1) To run METv3.0 grid_stat, please set the "MET_version"
global attribute in all the gridded NetCDF files you're using to
METv3.0.
> >>>>     (2) Please use the attached, updated version of
GridStatConfig_APCP_24.
> >>>>     (3) Consider updating to using METv4.1 instead.
> >>>>
> >>>> Thanks,
> >>>> John
> >>>>
> >>>> On 02/19/2014 11:33 PM, Geeta Geeta via RT wrote:
> >>>>>
> >>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427
>
> >>>>>
> >>>>> Hi John,
> >>>>> I am bothering you with a few more questions. Hope you'll bear
with me.
> >>>>> So what I gathered from the discussion with you is that we
essentially have 3 types of neighbourhood approaches (space, time and
intensity).  These changes can be made using the config file.
> >>>>>
> >>>>> 1. Now I was reading about 3 approaches of FUZZY verf, which
are:  a. Multi-event contingency table (My question is: can we define
a hit as RF between 0.1 and 2.5 in the config file? Normally we select
the threshold as ge0.1 or ge2.5, etc. Is there a provision for giving
a range in the config file?).
> >>>>>
> >>>>> b) Pragmatic approach (I do not know what that is).
> >>>>>
> >>>>> c) Conditional square root of Ranked probability score (CSRR)
(I do not know what that is).
> >>>>>
> >>>>> I do not understand these. Can you point me in the right
direction or provide some hints?
> >>>>>
> >>>>> 2. How can I prepare the QUILT plots (Spatial scale vs
Threshold) for a score?
> >>>>> Can the QUILT plot be prepared for any score like HK, HSS, FBS
or FSS?
> >>>>>
> >>>>>
> >>>>> thanks
> >>>>> geeta
> >>>>>
> >>>>> From: geeta124 at hotmail.com
> >>>>> To: met_help at ucar.edu
> >>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>> Date: Thu, 20 Feb 2014 11:30:25 +0530
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> Hi John,
> >>>>> Sorry, I have put my data on your server. My directory name is
geeta124_data.
> >>>>> Kindly check that.
> >>>>>
> >>>>> geeta
> >>>>>
> >>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>> From: met_help at ucar.edu
> >>>>>> To: geeta124 at hotmail.com
> >>>>>> Date: Fri, 14 Feb 2014 09:48:08 -0700
> >>>>>>
> >>>>>> Geeta,
> >>>>>>
> >>>>>> You run copygb to put the forecast and observation fields on
exactly the same grid, meaning the exact same resolution and number of
grid points.
> >>>>>>
> >>>>>> I was trying to make the point that the "interpolation
methods" in the grid_stat config file could be used as a form of
"upscaling".  You are right, there is no *need* to interpolate the
data since
> >>>>>> you've already used copygb to put them on the same grid.  In
grid_stat, the interpolation options provide a way of smoothing, or
upscaling, the data.  For example, suppose you choose and
interpolation
> >>>>>> option of UW_MEAN (for un-weighted mean) and width of 5.  For
each grid point, grid_stat will replace the value at the grid point
with the average of the 25 points in a 5x5 box around that point.
> >>>>>> Doing that for every point in the grid smooths the data and
provides a way of upscaling.
> >>>>>>
> >>>>>> The default interpolation width is 1, meaning that no
smoothing is performed.  However, you could use multiple smoothing
widths and see how your performance changes the more you smooth the
data.
> >>>>>>
> >>>>>> Does that make sense?
> >>>>>>
> >>>>>> Regarding the runtime error you're getting, I see that you're
using input NetCDF files for the forecast and observation fields.  In
the config file, you need to specify the name and dimensions of the
> >>>>>> NetCDF variable to be used.  Assuming the NetCDF variable is
named "APCP_24", it would look something like this:
> >>>>>>
> >>>>>> fcst = {
> >>>>>>        wind_thresh = [ NA ];
> >>>>>>
> >>>>>>        field = [
> >>>>>>           {
> >>>>>>             name       = "APCP_24";
> >>>>>>             level      = [ "(*,*)" ];
> >>>>>>             cat_thresh = [ >0.0, >=5.0 ];
> >>>>>>           }
> >>>>>>        ];
> >>>>>>
> >>>>>> };
> >>>>>>
> >>>>>> If you continue to experience problems, please send me sample
forecast and observation files along with the GridStatConfig file
you're using.  You can post it to our anonymous ftp site following
these
> >>>>>> instructions:
> >>>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>>>>>
> >>>>>> Thanks,
> >>>>>> John
> >>>>>>
> >>>>>> On 02/14/2014 02:17 AM, Geeta Geeta via RT wrote:
> >>>>>>>
> >>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>>>>>
> >>>>>>> Hi John,
> >>>>>>>      I have run grid-stat. Following is the error.
> >>>>>>>
> >>>>>>> bash-3.2$ ../bin/grid_stat ./fcst_nc/20110601*day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> >>>>>>> GSL_RNG_TYPE=mt19937
> >>>>>>> GSL_RNG_SEED=18446744073321512274
> >>>>>>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>>>>>> Observation File: ../trmm_nc_data/02june2011.nc
> >>>>>>> Configuration File: GridStatConfig_APCP_24
> >>>>>>> ***WARNING***: process_scores() -> 61(*,*) not found in
file: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>>>>>>
> >>>>>>>
--------------------------------------------------------------------------------
> >>>>>>>
> >>>>>>>
> >>>>>>> Please suggest.
> >>>>>>>
> >>>>>>> geeta
> >>>>>>>
> >>>>>>> From: geeta124 at hotmail.com
> >>>>>>> To: met_help at ucar.edu
> >>>>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>>> Date: Fri, 14 Feb 2014 14:08:12 +0530
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> Thanks a lot John for your inputs and clarifications.
> >>>>>>>
> >>>>>>> Still, the following doubts remain.
> >>>>>>>
> >>>>>>> 1. When I run copygb, what it does is make the observation
and Model FC uniform (I mean the same GRID and RESOLUTION). Only these
two parameters are important.
> >>>>>>> Are you calling that upscaling? So this process is not a
part of GRID-stat; essentially copygb is doing the upscaling part.
> >>>>>>>
> >>>>>>> 2. There are interpolation methods in the grid-stat config
file (analogous to those in point-stat; in point-stat there are 3-4,
like nearest neighbour, mean, distance-weighted, etc.).
> >>>>>>>
> >>>>>>> Why should one have the interpolation ONCE again, i.e. after
copygb the grid fields are similar, i.e. one GP has 2 values, one OBS
and one FCST? Is that correct?
> >>>>>>>
> >>>>>>> geeta
> >>>>>>>
> >>>>>>>> On 02/12/2014 10:49 PM, Geeta Geeta via RT wrote:
> >>>>>>>>>
> >>>>>>>>> Wed Feb 12 22:49:14 2014: Request 65427 was acted upon.
> >>>>>>>>> Transaction: Ticket created by geeta124 at hotmail.com
> >>>>>>>>>             Queue: met_help
> >>>>>>>>>           Subject: Unable to visualize Fuzzy verf.
> >>>>>>>>>             Owner: Nobody
> >>>>>>>>>        Requestors: geeta124 at hotmail.com
> >>>>>>>>>            Status: new
> >>>>>>>>>       Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Hi John/ met_help.
> >>>>>>>>>
> >>>>>>>>> I was reading MET doc that mentions about the FUZZY
verification methods. I am trying to visualise what grid stat does or
how it functions.
> >>>>>>>>> After the copygb is run, the FCST and OBS are on the same
grid. ie 1------------------2-----------------------3|
|                                |                                   |
|                                ||                         |
||                         |
|4-------------------5-----------------------6
> >>>>>>>>> ie at the Grid Points (GP) 1 to 6, U have Observations and
the model FCST.Now the MET doc (Pg 5-3) says that a SQUARE search
window is defined around each grid point, within which the obs and the
FCST events are counted.  1. I want to know HOW is this SQUARE WINDOW
is defined (I mean in the configuration file) of Grid stat.  2. How
Can I change the size of the SQUARE window. 3. If My model resolution
in 10km and I am interested in the synoptic scale phenomenon, then
what should be the window size???????????????  your help is urgently
required.
> >>>>>>>>>
> >>>>>>>>> geeta
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>>
> >>>>
> >>>
> >>>
> >>
> >
> >
>

------------------------------------------------
Subject: Unable to visualize Fuzzy verf.
From: Geeta Geeta
Time: Fri Feb 28 03:43:59 2014

John, finally, what is the remedy for my problem?

geeta

> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> From: met_help at ucar.edu
> To: geeta124 at hotmail.com
> Date: Wed, 26 Feb 2014 11:13:55 -0700
>
> Geeta,
>
> The problem is in the observation files:
>
> *************************************
> O/P of OBSERVATION FILE (NETCDF format) *
> *************************************
> :MET_version = "V3.0.1" ;
>
> If you change the "V3.0.1" to "V3.0", then METv3.0 grid_stat will be
able to process it fine.
>
> Also, you should switch the timing variable attributes from floats
to integers:
> Change from:
>   APCP_03:init_time_ut = 1306972800. ;
>   APCP_03:valid_time_ut = 1306983600. ;
>   APCP_03:accum_time_sec = 10800.f ;
> Change to:
>   APCP_03:init_time_ut = 1306972800 ;
>   APCP_03:valid_time_ut = 1306983600 ;
>   APCP_03:accum_time_sec = 10800 ;
>
> When you switch to METv4.1, it'll complain if those aren't integers.
>
> Hope that helps.
>
> Thanks,
> John
>
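For reference, one way to make both of those edits without regenerating the file is the netCDF4 Python module. This is a minimal sketch, assuming that module is installed; the file and variable names are taken from the ncdump output quoted below:

import numpy as np
from netCDF4 import Dataset

# Open the observation file discussed above for in-place editing.
with Dataset("02june2011.nc", "r+") as nc:
    nc.MET_version = "V3.0"                 # was "V3.0.1"
    v = nc.variables["APCP_03"]
    # Re-write the timing attributes as integers rather than floats.
    v.init_time_ut   = np.int32(1306972800)
    v.valid_time_ut  = np.int32(1306983600)
    v.accum_time_sec = np.int32(10800)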
>
> On 02/25/2014 11:26 PM, Geeta Geeta via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >
> > thanks John,
> > I have NOT done anything with the NC files as yet. I was asking
you how to go about changing the attribute?
> >
> > I am posting the o/p of one of the Forecast files in netcdf format
using ncdump. It shows MET_version = "V3.0", which is what you asked for.
> > bash-3.2$ ncdump -h test.nc
> > netcdf test {
> > dimensions:
> >          lat = 53 ;
> >          lon = 53 ;
> > variables:
> >          float lat(lat, lon) ;
> >                  lat:long_name = "latitude" ;
> >                  lat:units = "degrees_north" ;
> >                  lat:standard_name = "latitude" ;
> >          float lon(lat, lon) ;
> >                  lon:long_name = "longitude" ;
> >                  lon:units = "degrees_east" ;
> >                  lon:standard_name = "longitude" ;
> >          float APCP_24(lat, lon) ;
> >                  APCP_24:name = "APCP" ;
> >                  APCP_24:long_name = "Total precipitation" ;
> >                  APCP_24:level = "A24" ;
> >                  APCP_24:units = "kg/m^2" ;
> >                  APCP_24:grib_code = 61 ;
> >                  APCP_24:_FillValue = -9999.f ;
> >                  APCP_24:init_time = "20110601_000000" ;
> >                  APCP_24:init_time_ut = 1306886400 ;
> >                  APCP_24:valid_time = "20110602_030000" ;
> >                  APCP_24:valid_time_ut = 1306983600 ;
> >                  APCP_24:accum_time = "240000" ;
> >                  APCP_24:accum_time_sec = 86400 ;
> >
> > // global attributes:
> >                  :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
> >                  :MET_version = "V3.0" ;
> >                  :MET_tool = "pcp_combine" ;
> >                  :RunCommand = "Subtraction:
2011060100_WRFPRS_d01.027 with accumulation of 270000 minus
2011060100_WRFPRS_d01.003 with accumulation of 030000." ;
> >                  :Projection = "LatLon" ;
> >                  :lat_ll = "9.000000 degrees_north" ;
> >                  :lon_ll = "74.000000 degrees_east" ;
> >                  :delta_lat = "0.250000 degrees" ;
> >                  :delta_lon = "0.250000 degrees" ;
> >                  :Nlat = "53 grid_points" ;
> >                  :Nlon = "53 grid_points" ;
> > }
> > bash-3.2$
> > *************************************
> > O/P of OBSERVATION FILE (NETCDF format) *
> > *************************************
> > bash-3.2$ ncdump -h ../trmm_nc_data/test.nc
> > netcdf test {
> > dimensions:
> >          lon = 53 ;
> >          lat = 53 ;
> > variables:
> >          double lon(lon) ;
> >                  lon:units = "degrees_east" ;
> >          double lat(lat) ;
> >                  lat:units = "degrees_north" ;
> >          float APCP_03(lat, lon) ;
> >                  APCP_03:units = "kg/m^2" ;
> >                  APCP_03:missing_value = -9999.f ;
> >                  APCP_03:long_name = "Total precipitation" ;
> >                  APCP_03:name = "APCP" ;
> >                  APCP_03:level = "A3" ;
> >                  APCP_03:grib_code = 61.f ;
> >                  APCP_03:_FillValue = -9999.f ;
> >                  APCP_03:init_time = "20110602_000000" ;
> >                  APCP_03:init_time_ut = 1306972800. ;
> >                  APCP_03:valid_time = "20110602_030000" ;
> >                  APCP_03:valid_time_ut = 1306983600. ;
> >                  APCP_03:accum_time = "030000" ;
> >                  APCP_03:accum_time_sec = 10800.f ;
> >
> > // global attributes:
> >                  :FileOrigins = "File
../../../vpt/geeta/02june2011.nc generated 20140123_163031 on host
ncmr0102 by the Rscript trmm2nc.R" ;
> >                  :MET_version = "V3.0.1" ;
> >                  :Projection = "LatLon" ;
> >                  :lat_ll = "9 degrees_north" ;
> >                  :lon_ll = "74 degrees_east" ;
> >                  :delta_lat = "0.25 degrees" ;
> >                  :delta_lon = "0.25 degrees" ;
> >                  :Nlat = "53 grid_points" ;
> >                  :Nlon = "53 grid_points" ;
> > }
> >
> >
> > Anyway, I am sending you my data once again. The directory is
geeta_data-25feb2014.
> > Do you suspect the location of the NetCDF files?
> >
> > I look forward to hearing from you.
> >
> > thanks
> > geeta
> >
> >> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >> From: met_help at ucar.edu
> >> To: geeta124 at hotmail.com
> >> Date: Tue, 25 Feb 2014 11:26:10 -0700
> >>
> >> Geeta,
> >>
> >> Sorry, I was out of the office yesterday.  Looking back through
this ticket, I see that you're still getting the following error:
> >>      NetCDF: Attribute not found
> >>
> >> You said that you updated the NetCDF files to include the
following global attribute:
> >>      MET_version = "V3.0" ;
> >>
> >> If you've added this to both the forecast and observation NetCDF
files, and you're still getting this error, I'll need to see your data
files to debug more.  Please post them to our anonymous ftp site:
> >>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>
> >> I see 2 other questions in your emails:
> >>
> >> (1) How can you control the optional "upscaling" or "smoothing"
done by grid_stat?
> >>       In METv3.0, that is controlled by configuration file
options that begin with "interp_".  For example, try the following:
> >>          interp_method[] = [ "UW_MEAN" ];
> >>          interp_width[]  = [ 1, 3, 5 ];
> >>          interp_flag     = 3;
> >>
> >>       For each output line you were getting before, you should
now get 2 more.  Since interp_flag is set to 3, grid_stat will smooth
both the forecast and observation fields.  For interp_width = 3,
> >> it'll smooth each data point by computing the average of the 9
points in a 3x3 box around each grid point.  For interp_width = 5,
it'll smooth each data point by computing the average of the 25 points
> >> in a 5x5 box around each grid point.  You can look to see how the
scores change as you do more and more smoothing.
> >>
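To picture what that UW_MEAN smoothing does, here is a minimal NumPy sketch; it is illustrative only and ignores missing data, which grid_stat handles through the interp_thresh setting:

import numpy as np

def uw_mean_smooth(field, width):
    # Replace each point by the unweighted mean of the width x width
    # box around it; boxes are clipped at the edges of the grid.
    half = width // 2
    ny, nx = field.shape
    out = np.empty((ny, nx), dtype=float)
    for j in range(ny):
        for i in range(nx):
            box = field[max(j - half, 0):j + half + 1,
                        max(i - half, 0):i + half + 1]
            out[j, i] = box.mean()
    return out

# width = 1 leaves the field unchanged; width = 3 averages 9 points.
smoothed = uw_mean_smooth(np.random.rand(53, 53), 3)

Running grid_stat with interp_width[] = [ 1, 3, 5 ] is roughly equivalent to applying this with widths 1, 3, and 5 and recomputing the scores each time.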
> >> However, computing the fractions skill score (in the NBRCNT line
type) is a common way of doing "neighborhood" or "fuzzy" verification.
> >>
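In the same spirit, here is a sketch of the neighborhood fractional coverage and the FBS/FSS scores, using their standard definitions; fcst and obs are assumed to be NumPy arrays on the same grid, and the threshold is of the "gt" type:

import numpy as np

def frac_coverage(event, width):
    # Fraction of event (0/1) points in the width x width box
    # around each grid point, clipped at the edges of the grid.
    half = width // 2
    ny, nx = event.shape
    out = np.empty((ny, nx))
    for j in range(ny):
        for i in range(nx):
            box = event[max(j - half, 0):j + half + 1,
                        max(i - half, 0):i + half + 1]
            out[j, i] = box.mean()
    return out

def fbs_fss(fcst, obs, thresh, width):
    pf = frac_coverage((fcst > thresh).astype(float), width)
    po = frac_coverage((obs > thresh).astype(float), width)
    fbs = np.mean((pf - po) ** 2)                 # Fractions Brier Score
    fss = 1.0 - fbs / np.mean(pf ** 2 + po ** 2)  # Fractions Skill Score
    return fbs, fss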
> >> (2) You also asked about plotting the station location from
point_stat.  You have a couple of options.  The "plot_point_obs"
utility reads the NetCDF output files from the pb2nc or ascii2nc tools
and
> >> plots a red dot for each observation lat/lon it finds in the
data.  It is intended to just give you a quick look at the location of
the observations to make sure that they exist where you expect.  It
> >> is not a general-purpose or very flexible plotting tool.
Alternatively, you could look at the "MPR" output line type from
point_stat.  This contains the individual matched pair values that
went into
> >> the computation of statistics.  The MPR line type includes
columns named "OBS_LAT" and "OBS_LON" giving the point observation
location information.  You could read the lat/lon information from the
MPR
> >> line type and use whatever plotting tool you prefer to plot the
observation locations.
> >>
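For that second option, a short sketch of pulling the OBS_LAT and OBS_LON columns out of an MPR file and plotting them; the input file name is hypothetical, and the column positions are read from the header row rather than hard-coded:

import matplotlib.pyplot as plt

lats, lons = [], []
with open("point_stat_mpr.txt") as f:       # hypothetical file name
    header = f.readline().split()
    ilat = header.index("OBS_LAT")
    ilon = header.index("OBS_LON")
    for line in f:
        cols = line.split()
        lats.append(float(cols[ilat]))
        lons.append(float(cols[ilon]))

plt.scatter(lons, lats, s=10, color="red")
plt.xlabel("Longitude (degrees_east)")
plt.ylabel("Latitude (degrees_north)")
plt.savefig("obs_locations.png")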
> >> If you do post more data to the ftp site, please write me back
and I'll go grab it.
> >>
> >> Thanks,
> >> John
> >>
> >>
> >> On 02/24/2014 05:59 PM, Geeta Geeta via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>
> >>> Hi John,
> >>> You had discussed upscaling (of both obs and fcst, or either
one of them). The forecast is compared with the observations, which
are averaged to coarser scales.
> >>> How is this averaging defined in the configuration file?
> >>>
> >>> Please let me know regarding the global attributes.
> >>>
> >>> geeta
> >>>
> >>> From: geeta124 at hotmail.com
> >>> To: met_help at ucar.edu
> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>> Date: Sun, 23 Feb 2014 22:24:04 +0530
> >>>
> >>>
> >>>
> >>>
> >>> Hi John,
> >>> Can you help with changing/appending the GLOBAL attributes of
the NetCDF file?
> >>>
> >>> Can you provide some more hints?
> >>>
> >>> regards
> >>>
> >>> geeta
> >>>
> >>> From: geeta124 at hotmail.com
> >>> To: met_help at ucar.edu
> >>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>> Date: Fri, 21 Feb 2014 15:03:34 +0530
> >>>
> >>>
> >>>
> >>>
> >>> thanks John,
> >>> I have made the changes as per your config file.
> >>> But the error persists.
> >>> -bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> >>> GSL_RNG_TYPE=mt19937
> >>> GSL_RNG_SEED=18446744073358673747
> >>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>> Observation File: ../trmm_nc_data/02june2011.nc
> >>> Configuration File: GridStatConfig_APCP_24
> >>> NetCDF: Attribute not found
> >>> -bash-3.2$
> >>>
> >>>
> >>> 2. I have used ncdump to see my file attributes.
> >>> Are you referring to these attributes?
> >>>
> >>> // global attributes:
> >>>                   :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
> >>>                   :MET_version = "V3.0" ;
> >>>                   :MET_tool = "pcp_combine" ;
> >>>
> >>> Following is my Config file.
> >>>
____________________________________________________________________
> >>>
////////////////////////////////////////////////////////////////////////////////
> >>> //
> >>> // Default grid_stat configuration file
> >>> //
> >>>
////////////////////////////////////////////////////////////////////////////////
> >>> //
> >>> // Specify a name to designate the model being verified.  This
name will be
> >>> // written to the second column of the ASCII output generated.
> >>> //
> >>> model = "WRF";
> >>> //
> >>> // Specify a comma-separated list of fields to be verified.  The
forecast and
> >>> // observation fields may be specified separately.  If the
obs_field parameter
> >>> // is left blank, it will default to the contents of fcst_field.
> >>> //
> >>> // Each field is specified as a GRIB code or abbreviation
followed by an
> >>> // accumulation or vertical level indicator for GRIB files or as
a variable name
> >>> // followed by a list of dimensions for NetCDF files output from
p_interp or MET.
> >>> //
> >>> // Specifying verification fields for GRIB files:
> >>> //    GC/ANNN for accumulation interval NNN
> >>> //    GC/ZNNN for vertical level NNN
> >>> //    GC/PNNN for pressure level NNN in hPa
> >>> //    GC/PNNN-NNN for a range of pressure levels in hPa
> >>> //    GC/LNNN for a generic level type
> >>> //    GC/RNNN for a specific GRIB record number
> >>> //    Where GC is the number or abbreviation for the grib
code
> >>> //    to be verified.
> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >>> //
> >>> // Specifying verification fields for NetCDF files:
> >>> //    var_name(i,...,j,*,*) for a single field
> >>> //    Where var_name is the name of the NetCDF variable,
> >>> //    and i,...,j specifies fixed dimension values,
> >>> //    and *,* specifies the two dimensions for the gridded
field.
> >>> //
> >>> //    NOTE: To verify winds as vectors rather than scalars,
> >>> //          specify UGRD (or 33) followed by VGRD (or 34) with
the
> >>> //          same level values.
> >>> //
> >>> //    NOTE: To process a probability field, add "/PROB", such as
"POP/Z0/PROB".
> >>> //
> >>> // e.g. fcst_field[] = [ "61/A3", "APCP/A24", "RH/L10" ]; for
GRIB input
> >>> // e.g. fcst_field[] = [ "RAINC(0,*,*)", "QVAPOR(0,5,*,*)" ];
for NetCDF input
> >>> //
> >>> fcst_field[] = [ "APCP_24(*,*)" ];
> >>> obs_field[]  = [ "APCP_03(*,*)" ];
> >>> //
> >>> // Specify a comma-separated list of groups of thresholds to be
applied to the
> >>> // fields listed above.  Thresholds for the forecast and
observation fields
> >>> // may be specified separately.  If the obs_thresh parameter is
left blank,
> >>> // it will default to the content of fcst_thresh.
> >>> //
> >>> // At least one threshold must be provided for each field listed
above.  The
> >>> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as must
> >>> // lengths of the "obs_field" and "obs_thresh" arrays.  To apply
multiple
> >>> // thresholds to a field, separate the threshold values with a
space.
> >>> //
> >>> // Each threshold must be preceded by a two letter indicator for
the type of
> >>> // thresholding to be performed:
> >>> //    'lt' for less than     'le' for less than or equal to
> >>> //    'eq' for equal to      'ne' for not equal to
> >>> //    'gt' for greater than  'ge' for greater than or equal to
> >>> //
> >>> // NOTE: Thresholds for probabilities must begin with 0.0, end
with 1.0,
> >>> //       and be preceded by "ge".
> >>> //
> >>> // e.g. fcst_thresh[] = [ "gt0.0 ge5.0", "gt0.0", "lt80.0
ge80.0" ];
> >>> //
> >>> fcst_thresh[] = [ "gt0.0 gt5.0 gt10.0" ];
> >>> obs_thresh[]  = [];
> >>> //
> >>> // Specify a comma-separated list of thresholds to be used when
computing
> >>> // VL1L2 partial sums for winds.  The thresholds are applied to
the wind speed
> >>> // values derived from each U/V pair.  Only those U/V pairs
which meet the wind
> >>> // speed threshold criteria are retained.  If the
obs_wind_thresh parameter is
> >>> // left blank, it will default to the contents of
fcst_wind_thresh.
> >>> //
> >>> // To apply multiple wind speed thresholds, separate the
threshold values with a
> >>> // space.  Use "NA" to indicate that no wind speed threshold
should be applied.
> >>> //
> >>> // Each threshold must be preceded by a two letter indicator for
the type of
> >>> // thresholding to be performed:
> >>> //    'lt' for less than     'le' for less than or equal to
> >>> //    'eq' for equal to      'ne' for not equal to
> >>> //    'gt' for greater than  'ge' for greater than or equal to
> >>> //    'NA' for no threshold
> >>> //
> >>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
> >>> //
> >>> fcst_wind_thresh[] = [ "NA" ];
> >>> obs_wind_thresh[]  = [];
> >>> //
> >>> // Specify a comma-separated list of grids to be used in masking
the data over
> >>> // which to perform scoring.  An empty list indicates that no
masking grid
> >>> // should be performed.  The standard NCEP grids are named
"GNNN" where NNN
> >>> // indicates the three digit grid number.  Enter "FULL" to score
over the
> >>> // entire domain.
> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
> >>> //
> >>> // e.g. mask_grid[] = [ "FULL" ];
> >>> //
> >>> mask_grid[] = [ "FULL" ];
> >>> //
> >>> // Specify a comma-separated list of masking regions to be
applied.
> >>> // An empty list indicates that no additional masks should be
used.
> >>> // The masking regions may be defined in one of 4 ways:
> >>> //
> >>> // (1) An ASCII file containing a lat/lon polygon.
> >>> //     Latitude in degrees north and longitude in degrees east.
> >>> //     By default, the first and last polygon points are
connected.
> >>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
> >>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
> >>> //
> >>> // (2) The NetCDF output of the gen_poly_mask tool.
> >>> //
> >>> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
> >>> //     to be used, and optionally, a threshold to be applied to
the field.
> >>> //     e.g. "sample.nc var_name gt0.00"
> >>> //
> >>> // (4) A GRIB data file, followed by a description of the field
> >>> //     to be used, and optionally, a threshold to be applied to
the field.
> >>> //     e.g. "sample.grb APCP/A3 gt0.00"
> >>> //
> >>> // Any NetCDF or GRIB file used must have the same grid
dimensions as the
> >>> // data being verified.
> >>> //
> >>> // MET_BASE may be used in the path for the files above.
> >>> //
> >>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
> >>> //                      "poly_mask.ncf",
> >>> //                      "sample.nc APCP",
> >>> //                      "sample.grb HGT/Z0 gt100.0" ];
> >>> //
> >>> mask_poly[] = [];
> >>> //
> >>> // Specify a comma-separated list of values for alpha to be used
when computing
> >>> // confidence intervals.  Values of alpha must be between 0 and
1.
> >>> //
> >>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
> >>> //
> >>> ci_alpha[] = [ 0.10, 0.05 ];
> >>> //
> >>> // Specify the method to be used for computing bootstrap
confidence intervals.
> >>> // The value for this is interpreted as follows:
> >>> //    (0) Use the BCa interval method (computationally
intensive)
> >>> //    (1) Use the percentile interval method
> >>> //
> >>> boot_interval = 1;
> >>> //
> >>> // Specify a proportion between 0 and 1 to define the replicate
sample size
> >>> // to be used when computing percentile intervals.  The
replicate sample
> >>> // size is set to boot_rep_prop * n, where n is the number of
raw data points.
> >>> //
> >>> // e.g boot_rep_prop = 0.80;
> >>> //
> >>> boot_rep_prop = 1.0;
> >>> //
> >>> // Specify the number of times each set of matched pair data
should be
> >>> // resampled when computing bootstrap confidence intervals.  A
value of
> >>> // zero disables the computation of bootstrap confidence
intervals.
> >>> //
> >>> // e.g. n_boot_rep = 1000;
> >>> //
> >>> n_boot_rep = 0;
> >>> //
> >>> // Specify the name of the random number generator to be used.
See the MET
> >>> // Users Guide for a list of possible random number generators.
> >>> //
> >>> boot_rng = "mt19937";
> >>> //
> >>> // Specify the seed value to be used when computing bootstrap
confidence
> >>> // intervals.  If left unspecified, the seed will change for
each run and
> >>> // the computed bootstrap confidence intervals will not be
reproducible.
> >>> //
> >>> boot_seed = "";
> >>> //
> >>> // Specify a comma-separated list of interpolation method(s) to
be used for
> >>> // smoothing the data fields prior to comparing them.  The value
at each grid
> >>> // point is replaced by the measure computed over the
neighborhood defined
> >>> // around the grid point.  String values are interpreted as
follows:
> >>> //    MIN     = Minimum in the neighborhood
> >>> //    MAX     = Maximum in the neighborhood
> >>> //    MEDIAN  = Median in the neighborhood
> >>> //    UW_MEAN = Unweighted mean in the neighborhood
> >>> //
> >>> //    NOTE: The distance-weighted mean (DW_MEAN) is not an
option here since
> >>> //          it will have no effect on a gridded field.
> >>> //
> >>> //    NOTE: The least-squares fit (LS_FIT) is not an option here
since
> >>> //          it reduces to an unweighted mean on a grid.
> >>> //
> >>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
> >>> //
> >>> interp_method[] = [ "UW_MEAN" ];
> >>> //
> >>> // Specify a comma-separated list of box widths to be used by
the interpolation
> >>> // techniques listed above.  All values must be odd.  A value of
1 indicates
> >>> // that no smoothing should be performed.  For values greater
than 1, the n*n
> >>> // grid points around each point will be used to smooth the data
fields.
> >>> //
> >>> // e.g. interp_width = [ 1, 3, 5 ];
> >>> //
> >>> interp_width[] = [ 1 ];
> >>> //
> >>> // The interp_flag controls how the smoothing defined above
should be applied:
> >>> // (1) Smooth only the forecast field
> >>> // (2) Smooth only the observation field
> >>> // (3) Smooth both the forecast and observation fields
> >>> //
> >>> interp_flag = 1;
> >>> //
> >>> // When smoothing, compute a ratio of the number of valid data
points to
> >>> // the total number of points in the neighborhood.  If that
ratio is less
> >>> // than this threshold, do not compute a smoothed forecast
value.  This
> >>> // threshold must be between 0 and 1.  Setting this threshold to
1 will
> >>> // require that each observation be surrounded by n*n valid
forecast
> >>> // points.
> >>> //
> >>> // e.g. interp_thresh = 1.0;
> >>> //
> >>> interp_thresh = 1.0;
> >>> //
> >>> // Specify a comma-separated list of box widths to be used to
define the
> >>> // neighborhood size for the neighborhood verification methods.
All values
> >>> // must be odd.  For values greater than 1, the n*n grid points
around each
> >>> // point will be used to define the neighborhood.
> >>> //
> >>> // e.g. nbr_width = [ 3, 5 ];
> >>> //
> >>> nbr_width[] = [ 3, 5 ];
> >>> //
> >>> // When applying the neighborhood verification methods, compute
a ratio
> >>> // of the number of valid data points to the total number of
points in
> >>> // the neighborhood.  If that ratio is less than this threshold,
do not
> >>> // include it in the computations.  This threshold must be
between 0
> >>> // and 1.  Setting this threshold to 1 will require that each
point be
> >>> // surrounded by n*n valid forecast points.
> >>> //
> >>> // e.g. nbr_thresh = 1.0;
> >>> //
> >>> nbr_thresh = 1.0;
> >>> //
> >>> // When applying the neighborhood verification methods, apply a
threshold
> >>> // to the fractional coverage values to define contingency
tables from
> >>> // which to compute statistics.
> >>> //
> >>> // e.g. cov_thresh[] = [ "ge0.25", "ge0.50" ];
> >>> //
> >>> cov_thresh[] = [ "ge0.5" ];
> >>> //
> >>> // Specify flags to indicate the type of data to be output:
> >>> //
> >>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
> >>> //           Total (TOTAL),
> >>> //           Forecast Rate (F_RATE),
> >>> //           Hit Rate (H_RATE),
> >>> //           Observation Rate (O_RATE)
> >>> //
> >>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Forecast Yes and Observation Yes Count (FY_OY),
> >>> //           Forecast Yes and Observation No Count (FY_ON),
> >>> //           Forecast No and Observation Yes Count (FN_OY),
> >>> //           Forecast No and Observation No Count (FN_ON)
> >>> //
> >>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Base Rate (BASER),
> >>> //           Forecast Mean (FMEAN),
> >>> //           Accuracy (ACC),
> >>> //           Frequency Bias (FBIAS),
> >>> //           Probability of Detecting Yes (PODY),
> >>> //           Probability of Detecting No (PODN),
> >>> //           Probability of False Detection (POFD),
> >>> //           False Alarm Ratio (FAR),
> >>> //           Critical Success Index (CSI),
> >>> //           Gilbert Skill Score (GSS),
> >>> //           Hanssen and Kuipers Discriminant (HK),
> >>> //           Heidke Skill Score (HSS),
> >>> //           Odds Ratio (ODDS),
> >>> //           NOTE: All statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Number of Categories (N_CAT),
> >>> //           Contingency Table Count columns repeated
N_CAT*N_CAT times
> >>> //
> >>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Number of Categories (N_CAT),
> >>> //           Accuracy (ACC),
> >>> //           Hanssen and Kuipers Discriminant (HK),
> >>> //           Heidke Skill Score (HSS),
> >>> //           Gerrity Score (GER),
> >>> //           NOTE: All statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
> >>> //           Total (TOTAL),
> >>> //           Forecast Mean (FBAR),
> >>> //           Forecast Standard Deviation (FSTDEV),
> >>> //           Observation Mean (OBAR),
> >>> //           Observation Standard Deviation (OSTDEV),
> >>> //           Pearson's Correlation Coefficient (PR_CORR),
> >>> //           Spearman's Rank Correlation Coefficient (SP_CORR),
> >>> //           Kendall Tau Rank Correlation Coefficient (KT_CORR),
> >>> //           Number of ranks compared (RANKS),
> >>> //           Number of tied ranks in the forecast field
(FRANK_TIES),
> >>> //           Number of tied ranks in the observation field
(ORANK_TIES),
> >>> //           Mean Error (ME),
> >>> //           Standard Deviation of the Error (ESTDEV),
> >>> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
> >>> //           Mean Absolute Error (MAE),
> >>> //           Mean Squared Error (MSE),
> >>> //           Bias-Corrected Mean Squared Error (BCMSE),
> >>> //           Root Mean Squared Error (RMSE),
> >>> //           Percentiles of the Error (E10, E25, E50, E75, E90)
> >>> //           NOTE: Most statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
> >>> //           Total (TOTAL),
> >>> //           Forecast Mean (FBAR),
> >>> //              = mean(f)
> >>> //           Observation Mean (OBAR),
> >>> //              = mean(o)
> >>> //           Forecast*Observation Product Mean (FOBAR),
> >>> //              = mean(f*o)
> >>> //           Forecast Squared Mean (FFBAR),
> >>> //              = mean(f^2)
> >>> //           Observation Squared Mean (OOBAR)
> >>> //              = mean(o^2)
> >>> //
> >>> //    (8) STAT and VL1L2 Text Files, Vector Partial Sums:
> >>> //           Total (TOTAL),
> >>> //           U-Forecast Mean (UFBAR),
> >>> //              = mean(uf)
> >>> //           V-Forecast Mean (VFBAR),
> >>> //              = mean(vf)
> >>> //           U-Observation Mean (UOBAR),
> >>> //              = mean(uo)
> >>> //           V-Observation Mean (VOBAR),
> >>> //              = mean(vo)
> >>> //           U-Product Plus V-Product (UVFOBAR),
> >>> //              = mean(uf*uo+vf*vo)
> >>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
> >>> //              = mean(uf^2+vf^2)
> >>> //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
> >>> //              = mean(uo^2+vo^2)
> >>> //
> >>> //    (9) STAT and PCT Text Files, Nx2 Probability Contingency
Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Probability Threshold Value (THRESH_i),
> >>> //           Row Observation Yes Count (OY_i),
> >>> //           Row Observation No Count (ON_i),
> >>> //           NOTE: Previous 3 columns repeated for each row in
the table
> >>> //           Last Probability Threshold Value (THRESH_n)
> >>> //
> >>> //   (10) STAT and PSTD Text Files, Nx2 Probability Contingency
Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Base Rate (BASER) with confidence interval limits,
> >>> //           Reliability (RELIABILITY),
> >>> //           Resolution (RESOLUTION),
> >>> //           Uncertainty (UNCERTAINTY),
> >>> //           Area Under the ROC Curve (ROC_AUC),
> >>> //           Brier Score (BRIER) with confidence interval
limits,
> >>> //           Probability Threshold Value (THRESH_i)
> >>> //           NOTE: Previous column repeated for each probability
threshold.
> >>> //
> >>> //   (11) STAT and PJC Text Files, Joint/Continuous Statistics
of
> >>> //                                 Probabilistic Variables:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Probability Threshold Value (THRESH_i),
> >>> //           Observation Yes Count Divided by Total (OY_TP_i),
> >>> //           Observation No Count Divided by Total (ON_TP_i),
> >>> //           Calibration (CALIBRATION_i),
> >>> //           Refinement (REFINEMENT_i),
> >>> //           Likelihood (LIKELIHOOD_i),
> >>> //           Base Rate (BASER_i),
> >>> //           NOTE: Previous 7 columns repeated for each row in
the table
> >>> //           Last Probability Threshold Value (THRESH_n)
> >>> //
> >>> //   (12) STAT and PRC Text Files, ROC Curve Points for
> >>> //                                 Probabilistic Variables:
> >>> //           Total (TOTAL),
> >>> //           Number of Forecast Probability Thresholds
(N_THRESH),
> >>> //           Probability Threshold Value (THRESH_i),
> >>> //           Probability of Detecting Yes (PODY_i),
> >>> //           Probability of False Detection (POFD_i),
> >>> //           NOTE: Previous 3 columns repeated for each row in
the table
> >>> //           Last Probability Threshold Value (THRESH_n)
> >>> //
> >>> //   (13) STAT and NBRCTC Text Files, Neighborhood Methods
Contingency Table Counts:
> >>> //           Total (TOTAL),
> >>> //           Forecast Yes and Observation Yes Count (FY_OY),
> >>> //           Forecast Yes and Observation No Count (FY_ON),
> >>> //           Forecast No and Observation Yes Count (FN_OY),
> >>> //           Forecast No and Observation No Count (FN_ON),
> >>> //           Fractional Threshold Value (FRAC_T)
> >>> //
> >>> //   (14) STAT and NBRCTS Text Files, Neighborhood Methods
Contingency Table Scores:
> >>> //           Total (TOTAL),
> >>> //           Base Rate (BASER),
> >>> //           Forecast Mean (FMEAN),
> >>> //           Accuracy (ACC),
> >>> //           Bias (BIAS),
> >>> //           Probability of Detecting Yes (PODY),
> >>> //           Probability of Detecting No (PODN),
> >>> //           Probability of False Detection (POFD),
> >>> //           False Alarm Ratio (FAR),
> >>> //           Critical Success Index (CSI),
> >>> //           Gilbert Skill Score (GSS),
> >>> //           Hanssen and Kuipers Discriminant (HK),
> >>> //           Heidke Skill Score (HSS),
> >>> //           Odds Ratio (ODDS),
> >>> //           NOTE: Most statistics listed above contain
parametric and/or
> >>> //                 non-parametric confidence interval limits.
> >>> //
> >>> //   (15) STAT and NBRCNT Text Files, Neighborhood Methods
Continuous Scores:
> >>> //           Total (TOTAL),
> >>> //           Fractions Brier Score (FBS),
> >>> //           Fractions Skill Score (FSS)
> >>> //
> >>> //   (16) NetCDF File containing difference fields for each grib
> >>> //        code/mask combination.  A non-zero value indicates
that
> >>> //        this NetCDF file should be produced.  A value of 0
> >>> //        indicates that it should not be produced.
> >>> //
> >>> // Values for flags (1) through (15) are interpreted as follows:
> >>> //    (0) Do not generate output of this type
> >>> //    (1) Write output to a STAT file
> >>> //    (2) Write output to a STAT file and a text file
> >>> //
> >>> output_flag[] = [ 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 1
];
> >>> //
> >>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
Correlation
> >>> // Coefficients should be computed.  Computing them over large
datasets is
> >>> // computationally intensive and slows down the runtime
execution significantly.
> >>> //    (0) Do not compute these correlation coefficients
> >>> //    (1) Compute these correlation coefficients
> >>> //
> >>> rank_corr_flag = 0;
> >>> //
> >>> // Specify the GRIB Table 2 parameter table version number to be
used
> >>> // for interpreting GRIB codes.
> >>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
> >>> //
> >>> grib_ptv = 2;
> >>> //
> >>> // Directory where temporary files should be written.
> >>> //
> >>> tmp_dir = "/tmp";
> >>> //
> >>> // Prefix to be used for the output file names.
> >>> //
> >>> output_prefix = "APCP_24";
> >>> //
> >>> // Indicate a version number for the contents of this
configuration file.
> >>> // The value should generally not be modified.
> >>> //
> >>> version = "V3.0";
> >>>
> >>>
> >>> geeta
> >>>
> >>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
> >>>> From: met_help at ucar.edu
> >>>> To: geeta124 at hotmail.com
> >>>> Date: Thu, 20 Feb 2014 10:21:20 -0700
> >>>>
> >>>> Geeta,
> >>>>
> >>>> I see that you're using METv3.0.  The current version is
METv4.1, and it'd be good to switch to that version when possible.
There have been major changes to the MET configuration file format
since
> >>>> METv3.0, so be sure to use the default config files for
METv4.1.
> >>>>
> >>>> I ran METv3.0 grid_stat on the data files you sent and
reproduced the error message you saw:
> >>>>       ***WARNING***: process_scores() -> 61(*,*) not found in
file: 2011060100_WRFPRS_day1_003Z.nc
> >>>>
> >>>> Since the input files are both NetCDF files, you need to
specify the name of the NetCDF variable that should be used.  So I
modified your config file:
> >>>>       FROM: fcst_field[] = [ "61/A24" ];
> >>>>       TO:   fcst_field[] = [ "APCP_24(*,*)" ];
> >>>>
> >>>> When I reran with this change, I got this error:
> >>>>       NetCDF: Attribute not found
> >>>>
> >>>> After some digging, I found the problem to be the MET_version
global attribute in 02june2011.nc:
> >>>>                    :MET_version = "V3.0.1" ;
> >>>>
> >>>> I switched that to be consistent with the version of MET you're
running:
> >>>>                    :MET_version = "V3.0" ;
> >>>>
> >>>> And then I got this error:
> >>>> ERROR: parse_poly_mask() -> the dimensions of the masking
region (185, 129) must match the dimensions of the data (53, 53).
> >>>>
> >>>> So I modified the config file to change the masking region
settings:
> >>>>       FROM: mask_grid[] = [ "DTC165", "DTC166" ];
> >>>>             mask_poly[] = [
"MET_BASE/out/gen_poly_mask/CONUS_poly.nc",
> >>>>                             "MET_BASE/data/poly/LMV.poly" ];
> >>>>
> >>>>       TO:   mask_grid[] = [ "FULL" ];
> >>>>             mask_poly[] = [];
> >>>>
> >>>> And then it ran fine.
> >>>>
> >>>> To summarize...
> >>>>     (1) To run METv3.0 grid_stat, please set the "MET_version"
global attribute in all the gridded NetCDF files you're using to
METv3.0.
> >>>>     (2) Please use the attached, updated version of
GridStatConfig_APCP_24.
> >>>>     (3) Consider updating to METv4.1 instead.
> >>>>
> >>>> Thanks,
> >>>> John
> >>>>
> >>>> On 02/19/2014 11:33 PM, Geeta Geeta via RT wrote:
> >>>>>
> >>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427
>
> >>>>>
> >>>>> Hi John,
> >>>>> I am bothering you with a few more questions. I hope you will
bear with me.
> >>>>> So what I gathered from the discussion with you is that we
essentially have 3 types of neighbourhood approaches (space, time and
intensity). These changes can be made using the config file.
> >>>>>
> >>>>> 1. Now I was reading about 3 approaches of FUZZY verification,
which are:  a. Multi-event contingency table (my question is: can we
define a hit as RF between 0.1 and 2.5 in the config file? Normally we
select the threshold as ge0.1 or ge2.5, etc. Is there a provision for
giving a range in the config file?).
> >>>>>
> >>>>> b. Pragmatic approach (I do not know what that is).
> >>>>>
> >>>>> c. Conditional square root of ranked probability score (CSRR)
(I do not know what that is).
> >>>>>
> >>>>> I do not understand these. Can you point me in the right
direction or provide some hints?
> >>>>>
> >>>>> 2. How can I prepare the QUILT plots (spatial scale vs.
threshold) for a score (see the sketch below)?
> >>>>> Can a QUILT plot be prepared for any score, like HK, HSS, FBS
or FSS?
> >>>>>
> >>>>>
> >>>>> thanks
> >>>>> geeta
> >>>>>
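For reference, a quilt plot is simply a score such as FSS laid out on a spatial-scale versus threshold grid, so it can be built for any score that varies along both axes. Below is a minimal sketch that assembles one from a grid_stat NBRCNT file; the input file name is hypothetical, and FCST_THRESH, INTERP_PNTS, and FSS are assumed to be present in the file's header row:

from math import isqrt
import numpy as np
import matplotlib.pyplot as plt

rows = []
with open("grid_stat_nbrcnt.txt") as f:     # hypothetical file name
    header = f.readline().split()
    i_t = header.index("FCST_THRESH")
    i_p = header.index("INTERP_PNTS")       # n*n neighborhood points
    i_f = header.index("FSS")
    for line in f:
        c = line.split()
        rows.append((c[i_t], isqrt(int(c[i_p])), float(c[i_f])))

threshs = sorted({r[0] for r in rows})
widths  = sorted({r[1] for r in rows})
quilt   = np.full((len(threshs), len(widths)), np.nan)
for t, w, fss in rows:
    quilt[threshs.index(t), widths.index(w)] = fss

plt.imshow(quilt, aspect="auto", origin="lower")
plt.xticks(range(len(widths)), widths)
plt.yticks(range(len(threshs)), threshs)
plt.xlabel("Neighborhood width (grid points)")
plt.ylabel("Threshold")
plt.colorbar(label="FSS")
plt.savefig("fss_quilt.png")

The same recipe would work for a score such as HK or HSS read from the NBRCTS file instead.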
> >>>>> From: geeta124 at hotmail.com
> >>>>> To: met_help at ucar.edu
> >>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>> Date: Thu, 20 Feb 2014 11:30:25 +0530
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> Hi John,
> >>>>> Sorry, I have put my data on your server. My directory name is
geeta124_data.
> >>>>> Kindly check that.
> >>>>>
> >>>>> geeta
> >>>>>
> >>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>> From: met_help at ucar.edu
> >>>>>> To: geeta124 at hotmail.com
> >>>>>> Date: Fri, 14 Feb 2014 09:48:08 -0700
> >>>>>>
> >>>>>> Geeta,
> >>>>>>
> >>>>>> You run copygb to put the forecast and observation fields on
exactly the same grid, meaning the exact same resolution and number of
grid points.
> >>>>>>
> >>>>>> I was trying to make the point that the "interpolation
methods" in the grid_stat config file could be used as a form of
"upscaling".  You are right, there is no *need* to interpolate the
data since
> >>>>>> you've already used copygb to put them on the same grid.  In
grid_stat, the interpolation options provide a way of smoothing, or
upscaling, the data.  For example, suppose you choose an
interpolation
> >>>>>> option of UW_MEAN (for un-weighted mean) and width of 5.  For
each grid point, grid_stat will replace the value at the grid point
with the average of the 25 points in a 5x5 box around that point.
> >>>>>> Doing that for every point in the grid smooths the data and
provides a way of upscaling.
> >>>>>>
> >>>>>> The default interpolation width is 1, meaning that no
smoothing is performed.  However, you could use multiple smoothing
widths and see how your performance changes the more you smooth the
data.
> >>>>>>
> >>>>>> Does that make sense?
> >>>>>>
> >>>>>> Regarding the runtime error you're getting, I see that you're
using input NetCDF files for the forecast and observation fields.  In
the config file, you need to specify the name and dimensions of the
> >>>>>> NetCDF variable to be used.  Assuming the NetCDF variable is
named "APCP_24", it would look something like this:
> >>>>>>
> >>>>>> fcst = {
> >>>>>>        wind_thresh = [ NA ];
> >>>>>>
> >>>>>>        field = [
> >>>>>>           {
> >>>>>>             name       = "APCP_24";
> >>>>>>             level      = [ "(*,*)" ];
> >>>>>>             cat_thresh = [ >0.0, >=5.0 ];
> >>>>>>           }
> >>>>>>        ];
> >>>>>>
> >>>>>> };
> >>>>>>
> >>>>>> If you continue to experience problems, please send me sample
forecast and observation files along with the GridStatConfig file
you're using.  You can post it to our anonymous ftp site following
these
> >>>>>> instructions:
> >>>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
> >>>>>>
> >>>>>> Thanks,
> >>>>>> John
> >>>>>>
> >>>>>> On 02/14/2014 02:17 AM, Geeta Geeta via RT wrote:
> >>>>>>>
> >>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
> >>>>>>>
> >>>>>>> Hi John,
> >>>>>>>      I have run grid-stat. Following is the error.
> >>>>>>>
> >>>>>>> bash-3.2$ ../bin/grid_stat ./fcst_nc/20110601*day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
> >>>>>>> GSL_RNG_TYPE=mt19937
> >>>>>>> GSL_RNG_SEED=18446744073321512274
> >>>>>>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>>>>>> Observation File: ../trmm_nc_data/02june2011.nc
> >>>>>>> Configuration File: GridStatConfig_APCP_24
> >>>>>>> ***WARNING***: process_scores() -> 61(*,*) not found in
file: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
> >>>>>>>
> >>>>>>>
--------------------------------------------------------------------------------
> >>>>>>>
> >>>>>>>
> >>>>>>> Please suggest.
> >>>>>>>
> >>>>>>> geeta
> >>>>>>>
> >>>>>>> From: geeta124 at hotmail.com
> >>>>>>> To: met_help at ucar.edu
> >>>>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
> >>>>>>> Date: Fri, 14 Feb 2014 14:08:12 +0530
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> Thanks a lot John for your inputs and clarifications.
> >>>>>>>
> >>>>>>> I still have the following doubts.
> >>>>>>>
> >>>>>>> 1. When I run copygb, what it does is make the
observation and model FCST uniform (I mean the same GRID and
RESOLUTION). Only these two parameters are important.
> >>>>>>> Are you calling that upscaling? So this process is not a
part of grid-stat; essentially copygb is doing the upscaling part.
> >>>>>>>
> >>>>>>> 2. There are interpolation methods in the grid-stat config
file (analogous to those in point-stat; in point-stat there are 3-4,
like nearest neighbour, mean, distance weighted, etc.).
> >>>>>>>
> >>>>>>> Why should one interpolate once again when, after copygb,
the grid fields are already on the same grid, i.e. each GP has 2
values, one OBS and one FCST? Is that correct?
> >>>>>>>
> >>>>>>> geeta
> >>>>>>>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy verf.
From: John Halley Gotway
Time: Fri Feb 28 10:02:33 2014

Geeta,

I'd suggest switching to METv4.1 if possible.  It should be mostly
backward-compatible and be able to read the data you've generated for
earlier versions of MET.

John

On 02/28/2014 03:43 AM, Geeta Geeta via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>
> John, finally what is the remedy for my problem????
>
> geeta
>
>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>> From: met_help at ucar.edu
>> To: geeta124 at hotmail.com
>> Date: Wed, 26 Feb 2014 11:13:55 -0700
>>
>> Geeta,
>>
>> The problem is in the observation files:
>>
>> *************************************
>> O/P of OBSERVATION FILE (NETCDF format) *
>> *************************************
>> :MET_version = "V3.0.1" ;
>>
>> If you change the "V3.0.1" to "V3.0", then METv3.0 grid_stat will
be able to process it fine.
>>
>> Also, you should switch the timing variable attributes from floats
to integers:
>> Change from:
>>    APCP_03:init_time_ut = 1306972800. ;
>>    APCP_03:valid_time_ut = 1306983600. ;
>>    APCP_03:accum_time_sec = 10800.f ;
>> Change to:
>>    APCP_03:init_time_ut = 1306972800 ;
>>    APCP_03:valid_time_ut = 1306983600 ;
>>    APCP_03:accum_time_sec = 10800 ;
>>
>> When you switch to METv4.1, it'll complain if those aren't
integers.
>>
>> Hope that helps.
>>
>> Thanks,
>> John
>>
>>
>> On 02/25/2014 11:26 PM, Geeta Geeta via RT wrote:
>>>
>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>>>
>>> thanks John,
>>> I have NOT done anything with the NC files as yet. I was asking
you how to go about changing the attribute????
>>>
>>> I am posting the o/p of one of the Forecast files in netcdf format
using ncdump. It Shows MET="v3.0" what you desire.
>>> bash-3.2$ ncdump -h test.nc
>>> netcdf test {
>>> dimensions:
>>>           lat = 53 ;
>>>           lon = 53 ;
>>> variables:
>>>           float lat(lat, lon) ;
>>>                   lat:long_name = "latitude" ;
>>>                   lat:units = "degrees_north" ;
>>>                   lat:standard_name = "latitude" ;
>>>           float lon(lat, lon) ;
>>>                   lon:long_name = "longitude" ;
>>>                   lon:units = "degrees_east" ;
>>>                   lon:standard_name = "longitude" ;
>>>           float APCP_24(lat, lon) ;
>>>                   APCP_24:name = "APCP" ;
>>>                   APCP_24:long_name = "Total precipitation" ;
>>>                   APCP_24:level = "A24" ;
>>>                   APCP_24:units = "kg/m^2" ;
>>>                   APCP_24:grib_code = 61 ;
>>>                   APCP_24:_FillValue = -9999.f ;
>>>                   APCP_24:init_time = "20110601_000000" ;
>>>                   APCP_24:init_time_ut = 1306886400 ;
>>>                   APCP_24:valid_time = "20110602_030000" ;
>>>                   APCP_24:valid_time_ut = 1306983600 ;
>>>                   APCP_24:accum_time = "240000" ;
>>>                   APCP_24:accum_time_sec = 86400 ;
>>>
>>> // global attributes:
>>>                   :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
>>>                   :MET_version = "V3.0" ;
>>>                   :MET_tool = "pcp_combine" ;
>>>                   :RunCommand = "Subtraction:
2011060100_WRFPRS_d01.027 with accumulation of 270000 minus
2011060100_WRFPRS_d01.003 with accumulation of 030000." ;
>>>                   :Projection = "LatLon" ;
>>>                   :lat_ll = "9.000000 degrees_north" ;
>>>                   :lon_ll = "74.000000 degrees_east" ;
>>>                   :delta_lat = "0.250000 degrees" ;
>>>                   :delta_lon = "0.250000 degrees" ;
>>>                   :Nlat = "53 grid_points" ;
>>>                   :Nlon = "53 grid_points" ;
>>> }
>>> bash-3.2$
>>> *************************************
>>> O/P of OBSERVATION FILE (NETCDF format) *
>>> *************************************
>>> bash-3.2$ ncdump -h ../trmm_nc_data/test.nc
>>> netcdf test {
>>> dimensions:
>>>           lon = 53 ;
>>>           lat = 53 ;
>>> variables:
>>>           double lon(lon) ;
>>>                   lon:units = "degrees_east" ;
>>>           double lat(lat) ;
>>>                   lat:units = "degrees_north" ;
>>>           float APCP_03(lat, lon) ;
>>>                   APCP_03:units = "kg/m^2" ;
>>>                   APCP_03:missing_value = -9999.f ;
>>>                   APCP_03:long_name = "Total precipitation" ;
>>>                   APCP_03:name = "APCP" ;
>>>                   APCP_03:level = "A3" ;
>>>                   APCP_03:grib_code = 61.f ;
>>>                   APCP_03:_FillValue = -9999.f ;
>>>                   APCP_03:init_time = "20110602_000000" ;
>>>                   APCP_03:init_time_ut = 1306972800. ;
>>>                   APCP_03:valid_time = "20110602_030000" ;
>>>                   APCP_03:valid_time_ut = 1306983600. ;
>>>                   APCP_03:accum_time = "030000" ;
>>>                   APCP_03:accum_time_sec = 10800.f ;
>>>
>>> // global attributes:
>>>                   :FileOrigins = "File
../../../vpt/geeta/02june2011.nc generated 20140123_163031 on host
ncmr0102 by the Rscript trmm2nc.R" ;
>>>                   :MET_version = "V3.0.1" ;
>>>                   :Projection = "LatLon" ;
>>>                   :lat_ll = "9 degrees_north" ;
>>>                   :lon_ll = "74 degrees_east" ;
>>>                   :delta_lat = "0.25 degrees" ;
>>>                   :delta_lon = "0.25 degrees" ;
>>>                   :Nlat = "53 grid_points" ;
>>>                   :Nlon = "53 grid_points" ;
>>> }
>>>
>>>
>>> Anyway I am sending you my data once again. the directory is
geeta_data-25feb2014.
>>> DO you suspect the location of NETCDF ???????????????.
>>>
>>> shall be looking forward to hearing from you.
>>>
>>> thanks
>>> geeta
>>>
>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>>>> From: met_help at ucar.edu
>>>> To: geeta124 at hotmail.com
>>>> Date: Tue, 25 Feb 2014 11:26:10 -0700
>>>>
>>>> Geeta,
>>>>
>>>> Sorry, I was out of the office yesterday.  Looking back through
this ticket, I see that you're still getting the following error:
>>>>       NetCDF: Attribute not found
>>>>
>>>> You said that you updated the NetCDF files to include the
following global attribute:
>>>>       MET_version = "V3.0" ;
>>>>
>>>> If you've added this to both the forecast and observation NetCDF
files, and you're still getting this error, I'll need to see your data
files to debug more.  Please post them to our anonymous ftp site:
>>>>       http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>>>
>>>> I see 2 other questions in your emails:
>>>>
>>>> (1) How can you control the optional "upscaling" or "smoothing"
done by grid_stat?
>>>>        In METv3.0, that is controlled by configuration file
options that begin with "interp_".  For example, try the following:
>>>>           interp_method[] = [ "UW_MEAN" ];
>>>>           interp_width[]  = [ 1, 3, 5 ];
>>>>           interp_flag     = 3;
>>>>
>>>>        For each output line you were getting before, you should
now get 2 more.  Since interp_flag is set to 3, grid_stat will smooth
both the forecast and observation fields.  For interp_width = 3,
>>>> it'll smooth each data point by computing the average of the 9
points in a 3x3 box around each grid point.  For interp_width = 5,
it'll smooth each data point by computing the average of the 25 points
>>>> in a 5x5 box around each grid point.  You can look to see how the
scores change as you do more and more smoothing.
>>>>
>>>> However computing the fractions skill score (in the NBRCNT line
type) is a common way of doing "neighborhood" or "fuzzy" verification.
>>>>
>>>> (2) You also asked about plotting the station location from
point_stat.  You have a couple of options.  The "plot_point_obs"
utility reads the NetCDF output files from the pb2nc or ascii2nc tools
and
>>>> plots a red dot for each observation lat/lon it finds in the
data.  It is intended to just give you a quick look at the location of
the observations to make sure that they exist where you expect.  It
>>>> in not a general purpose or very flexible plotting tool.
Alternatively, you could look at the "MPR" output line type from
point_stat.  This contains the individual matched pair values that
went into
>>>> the computation of statistics.  The MPR line type includes
columns named "OBS_LAT" and "OBS_LON" giving the point observation
location information.  You could read the lat/lon information from the
MPR
>>>> line type and use whatever plotting tool you prefer to plot the
observation locations.
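>>>>
>>>> (As a minimal sketch of that last approach, assuming Python with
pandas and matplotlib is available: the _mpr.txt output of point_stat
is whitespace-delimited with a header row, so something like the
following should work.  The file name below is just a placeholder.)
>>>>
>>>>    import pandas as pd
>>>>    import matplotlib.pyplot as plt
>>>>
>>>>    # Read the whitespace-delimited MPR line type output.
>>>>    mpr = pd.read_csv("point_stat_mpr.txt", delim_whitespace=True)
>>>>
>>>>    # One dot per matched pair location.
>>>>    plt.scatter(mpr["OBS_LON"], mpr["OBS_LAT"], s=10, c="red")
>>>>    plt.xlabel("Longitude (degrees_east)")
>>>>    plt.ylabel("Latitude (degrees_north)")
>>>>    plt.show()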
>>>>
>>>> If you do post more data to the ftp site, please write me back
and I'll go grab it.
>>>>
>>>> Thanks,
>>>> John
>>>>
>>>>
>>>> On 02/24/2014 05:59 PM, Geeta Geeta via RT wrote:
>>>>>
>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>>>>>
>>>>> hi John,
>>>>> you had discussed the upscaling (of both obs and fcst, or
either one of them). The forecast is compared with the observations,
which are averaged to coarser scales.
>>>>> How is this averaging defined in the configuration file?
>>>>>
>>>>> Please also let me know regarding the global attributes.
>>>>>
>>>>> geeta
>>>>>
>>>>> From: geeta124 at hotmail.com
>>>>> To: met_help at ucar.edu
>>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>>>>> Date: Sun, 23 Feb 2014 22:24:04 +0530
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> hi John,
>>>>> Can you help with changing/appending the global attributes of
the NetCDF file?
>>>>>
>>>>> Can you provide some more hints?
>>>>>
>>>>> regards
>>>>>
>>>>> geeta
>>>>>
>>>>> From: geeta124 at hotmail.com
>>>>> To: met_help at ucar.edu
>>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>>>>> Date: Fri, 21 Feb 2014 15:03:34 +0530
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> thanks John,
>>>>> I have made the changes as per your config file.
>>>>> But the error persists.
>>>>> -bash-3.2$ ../bin/grid_stat ./fcst_nc/2011060100_WRFPRS_day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
>>>>> GSL_RNG_TYPE=mt19937
>>>>> GSL_RNG_SEED=18446744073358673747
>>>>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
>>>>> Observation File: ../trmm_nc_data/02june2011.nc
>>>>> Configuration File: GridStatConfig_APCP_24
>>>>> NetCDF: Attribute not found
>>>>> -bash-3.2$
>>>>>
>>>>>
>>>>> 2. I have used ncdump to see my file attributes.
>>>>> Are you referring to these attributes?
>>>>>
>>>>> // global attributes:
>>>>>                    :FileOrigins = "File
2011060100_WRFPRS_d01.003Z.nc generated 20140130_092500 UTC on host
rmcdlh by the MET pcp_combine tool" ;
>>>>>                    :MET_version = "V3.0" ;
>>>>>                    :MET_tool = "pcp_combine" ;
>>>>>
>>>>> Following is my Config file.
>>>>>
____________________________________________________________________
>>>>>
////////////////////////////////////////////////////////////////////////////////
>>>>> //
>>>>> // Default grid_stat configuration file
>>>>> //
>>>>>
////////////////////////////////////////////////////////////////////////////////
>>>>> //
>>>>> // Specify a name to designate the model being verified.  This
name will be
>>>>> // written to the second column of the ASCII output generated.
>>>>> //
>>>>> model = "WRF";
>>>>> //
>>>>> // Specify a comma-separated list of fields to be verified.  The
forecast and
>>>>> // observation fields may be specified separately.  If the
obs_field parameter
>>>>> // is left blank, it will default to the contents of fcst_field.
>>>>> //
>>>>> // Each field is specified as a GRIB code or abbreviation
followed by an
>>>>> // accumulation or vertical level indicator for GRIB files or as
a variable name
>>>>> // followed by a list of dimensions for NetCDF files output from
p_interp or MET.
>>>>> //
>>>>> // Specifying verification fields for GRIB files:
>>>>> //    GC/ANNN for accumulation interval NNN
>>>>> //    GC/ZNNN for vertical level NNN
>>>>> //    GC/PNNN for pressure level NNN in hPa
>>>>> //    GC/PNNN-NNN for a range of pressure levels in hPa
>>>>> //    GC/LNNN for a generic level type
>>>>> //    GC/RNNN for a specific GRIB record number
>>>>> //    Where GC is the number or abbreviation of the GRIB code
>>>>> //    to be verified.
>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>> //
>>>>> // Specifying verification fields for NetCDF files:
>>>>> //    var_name(i,...,j,*,*) for a single field
>>>>> //    Where var_name is the name of the NetCDF variable,
>>>>> //    and i,...,j specifies fixed dimension values,
>>>>> //    and *,* specifies the two dimensions for the gridded
field.
>>>>> //
>>>>> //    NOTE: To verify winds as vectors rather than scalars,
>>>>> //          specify UGRD (or 33) followed by VGRD (or 34) with
the
>>>>> //          same level values.
>>>>> //
>>>>> //    NOTE: To process a probability field, add "/PROB", such as
"POP/Z0/PROB".
>>>>> //
>>>>> // e.g. fcst_field[] = [ "61/A3", "APCP/A24", "RH/L10" ]; for
GRIB input
>>>>> // e.g. fcst_field[] = [ "RAINC(0,*,*)", "QVAPOR(0,5,*,*)" ];
for NetCDF input
>>>>> //
>>>>> fcst_field[] = [ "APCP_24(*,*)" ];
>>>>> obs_field[]  = [ "APCP_03(*,*)" ];
>>>>> //
>>>>> // Specify a comma-separated list of groups of thresholds to be
applied to the
>>>>> // fields listed above.  Thresholds for the forecast and
observation fields
>>>>> // may be specified separately.  If the obs_thresh parameter is
left blank,
>>>>> // it will default to the content of fcst_thresh.
>>>>> //
>>>>> // At least one threshold must be provided for each field listed
above.  The
>>>>> // lengths of the "fcst_field" and "fcst_thresh" arrays must
match, as must
>>>>> // lengths of the "obs_field" and "obs_thresh" arrays.  To apply
multiple
>>>>> // thresholds to a field, separate the threshold values with a
space.
>>>>> //
>>>>> // Each threshold must be preceded by a two letter indicator for
the type of
>>>>> // thresholding to be performed:
>>>>> //    'lt' for less than     'le' for less than or equal to
>>>>> //    'eq' for equal to      'ne' for not equal to
>>>>> //    'gt' for greater than  'ge' for greater than or equal to
>>>>> //
>>>>> // NOTE: Thresholds for probabilities must begin with 0.0, end
with 1.0,
>>>>> //       and be preceded by "ge".
>>>>> //
>>>>> // e.g. fcst_thresh[] = [ "gt0.0 ge5.0", "gt0.0", "lt80.0
ge80.0" ];
>>>>> //
>>>>> fcst_thresh[] = [ "gt0.0 gt5.0 gt10.0" ];
>>>>> obs_thresh[]  = [];
>>>>> //
>>>>> // Specify a comma-separated list of thresholds to be used when
computing
>>>>> // VL1L2 partial sums for winds.  The thresholds are applied to
the wind speed
>>>>> // values derived from each U/V pair.  Only those U/V pairs
which meet the wind
>>>>> // speed threshold criteria are retained.  If the
obs_wind_thresh parameter is
>>>>> // left blank, it will default to the contents of
fcst_wind_thresh.
>>>>> //
>>>>> // To apply multiple wind speed thresholds, separate the
threshold values with a
>>>>> // space.  Use "NA" to indicate that no wind speed threshold
should be applied.
>>>>> //
>>>>> // Each threshold must be preceded by a two letter indicator for
the type of
>>>>> // thresholding to be performed:
>>>>> //    'lt' for less than     'le' for less than or equal to
>>>>> //    'eq' for equal to      'ne' for not equal to
>>>>> //    'gt' for greater than  'ge' for greater than or equal to
>>>>> //    'NA' for no threshold
>>>>> //
>>>>> // e.g. fcst_wind_thresh[] = [ "NA", "ge1.0" ];
>>>>> //
>>>>> fcst_wind_thresh[] = [ "NA" ];
>>>>> obs_wind_thresh[]  = [];
>>>>> //
>>>>> // Specify a comma-separated list of grids to be used in masking
the data over
>>>>> // which to perform scoring.  An empty list indicates that no
masking grid
>>>>> // should be applied.  The standard NCEP grids are named
"GNNN" where NNN
>>>>> // indicates the three digit grid number.  Enter "FULL" to score
over the
>>>>> // entire domain.
>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/tableb.html
>>>>> //
>>>>> // e.g. mask_grid[] = [ "FULL" ];
>>>>> //
>>>>> mask_grid[] = [ "FULL" ];
>>>>> //
>>>>> // Specify a comma-separated list of masking regions to be
applied.
>>>>> // An empty list indicates that no additional masks should be
used.
>>>>> // The masking regions may be defined in one of 4 ways:
>>>>> //
>>>>> // (1) An ASCII file containing a lat/lon polygon.
>>>>> //     Latitude in degrees north and longitude in degrees east.
>>>>> //     By default, the first and last polygon points are
connected.
>>>>> //     e.g. "MET_BASE/data/poly/EAST.poly" which consists of n
points:
>>>>> //          "poly_name lat1 lon1 lat2 lon2... latn lonn"
>>>>> //
>>>>> // (2) The NetCDF output of the gen_poly_mask tool.
>>>>> //
>>>>> // (3) A NetCDF data file, followed by the name of the NetCDF
variable
>>>>> //     to be used, and optionally, a threshold to be applied to
the field.
>>>>> //     e.g. "sample.nc var_name gt0.00"
>>>>> //
>>>>> // (4) A GRIB data file, followed by a description of the field
>>>>> //     to be used, and optionally, a threshold to be applied to
the field.
>>>>> //     e.g. "sample.grb APCP/A3 gt0.00"
>>>>> //
>>>>> // Any NetCDF or GRIB file used must have the same grid
dimensions as the
>>>>> // data being verified.
>>>>> //
>>>>> // MET_BASE may be used in the path for the files above.
>>>>> //
>>>>> // e.g. mask_poly[] = [ "MET_BASE/data/poly/EAST.poly",
>>>>> //                      "poly_mask.ncf",
>>>>> //                      "sample.nc APCP",
>>>>> //                      "sample.grb HGT/Z0 gt100.0" ];
>>>>> //
>>>>> mask_poly[] = [];
>>>>> //
>>>>> // Specify a comma-separated list of values for alpha to be used
when computing
>>>>> // confidence intervals.  Values of alpha must be between 0 and
1.
>>>>> //
>>>>> // e.g. ci_alpha[] = [ 0.05, 0.10 ];
>>>>> //
>>>>> ci_alpha[] = [ 0.10, 0.05 ];
>>>>> //
>>>>> // Specify the method to be used for computing bootstrap
confidence intervals.
>>>>> // The value for this is interpreted as follows:
>>>>> //    (0) Use the BCa interval method (computationally
intensive)
>>>>> //    (1) Use the percentile interval method
>>>>> //
>>>>> boot_interval = 1;
>>>>> //
>>>>> // Specify a proportion between 0 and 1 to define the replicate
sample size
>>>>> // to be used when computing percentile intervals.  The
replicate sample
>>>>> // size is set to boot_rep_prop * n, where n is the number of
raw data points.
>>>>> //
>>>>> // e.g. boot_rep_prop = 0.80;
>>>>> //
>>>>> boot_rep_prop = 1.0;
>>>>> //
>>>>> // Specify the number of times each set of matched pair data
should be
>>>>> // resampled when computing bootstrap confidence intervals.  A
value of
>>>>> // zero disables the computation of bootstrap confidence
intervals.
>>>>> //
>>>>> // e.g. n_boot_rep = 1000;
>>>>> //
>>>>> n_boot_rep = 0;
>>>>> //
>>>>> // Specify the name of the random number generator to be used.
See the MET
>>>>> // Users Guide for a list of possible random number generators.
>>>>> //
>>>>> boot_rng = "mt19937";
>>>>> //
>>>>> // Specify the seed value to be used when computing bootstrap
confidence
>>>>> // intervals.  If left unspecified, the seed will change for
each run and
>>>>> // the computed bootstrap confidence intervals will not be
reproducible.
>>>>> //
>>>>> boot_seed = "";
>>>>> //
>>>>> // Specify a comma-separated list of interpolation method(s) to
be used for
>>>>> // smoothing the data fields prior to comparing them.  The value
at each grid
>>>>> // point is replaced by the measure computed over the
neighborhood defined
>>>>> // around the grid point.  String values are interpreted as
follows:
>>>>> //    MIN     = Minimum in the neighborhood
>>>>> //    MAX     = Maximum in the neighborhood
>>>>> //    MEDIAN  = Median in the neighborhood
>>>>> //    UW_MEAN = Unweighted mean in the neighborhood
>>>>> //
>>>>> //    NOTE: The distance-weighted mean (DW_MEAN) is not an
option here since
>>>>> //          it will have no effect on a gridded field.
>>>>> //
>>>>> //    NOTE: The least-squares fit (LS_FIT) is not an option here
since
>>>>> //          it reduces to an unweighted mean on a grid.
>>>>> //
>>>>> // e.g. interp_method[] = [ "UW_MEAN", "MEDIAN" ];
>>>>> //
>>>>> interp_method[] = [ "UW_MEAN" ];
>>>>> //
>>>>> // Specify a comma-separated list of box widths to be used by
the interpolation
>>>>> // techniques listed above.  All values must be odd.  A value of
1 indicates
>>>>> // that no smoothing should be performed.  For values greater
than 1, the n*n
>>>>> // grid points around each point will be used to smooth the data
fields.
>>>>> //
>>>>> // e.g. interp_width = [ 1, 3, 5 ];
>>>>> //
>>>>> interp_width[] = [ 1 ];
>>>>> //
>>>>> // The interp_flag controls how the smoothing defined above
should be applied:
>>>>> // (1) Smooth only the forecast field
>>>>> // (2) Smooth only the observation field
>>>>> // (3) Smooth both the forecast and observation fields
>>>>> //
>>>>> interp_flag = 1;
>>>>> //
>>>>> // When smoothing, compute a ratio of the number of valid data
points to
>>>>> // the total number of points in the neighborhood.  If that
ratio is less
>>>>> // than this threshold, do not compute a smoothed forecast
value.  This
>>>>> // threshold must be between 0 and 1.  Setting this threshold to
1 will
>>>>> // require that each observation be surrounded by n*n valid
forecast
>>>>> // points.
>>>>> //
>>>>> // e.g. interp_thresh = 1.0;
>>>>> //
>>>>> interp_thresh = 1.0;
>>>>> //
>>>>> // Specify a comma-separated list of box widths to be used to
define the
>>>>> // neighborhood size for the neighborhood verification methods.
All values
>>>>> // must be odd.  For values greater than 1, the n*n grid points
around each
>>>>> // point will be used to define the neighborhood.
>>>>> //
>>>>> // e.g. nbr_width = [ 3, 5 ];
>>>>> //
>>>>> nbr_width[] = [ 3, 5 ];
>>>>> //
>>>>> // When applying the neighborhood verification methods, compute
a ratio
>>>>> // of the number of valid data points to the total number of
points in
>>>>> // the neighborhood.  If that ratio is less than this threshold,
do not
>>>>> // include it in the computations.  This threshold must be
between 0
>>>>> // and 1.  Setting this threshold to 1 will require that each
point be
>>>>> // surrounded by n*n valid forecast points.
>>>>> //
>>>>> // e.g. nbr_thresh = 1.0;
>>>>> //
>>>>> nbr_thresh = 1.0;
>>>>> //
>>>>> // When applying the neighborhood verification methods, apply a
threshold
>>>>> // to the fractional coverage values to define contingency
tables from
>>>>> // which to compute statistics.
>>>>> //
>>>>> // e.g. cov_thresh[] = [ "ge0.25", "ge0.50" ];
>>>>> //
>>>>> cov_thresh[] = [ "ge0.5" ];
>>>>> //
>>>>> // Specify flags to indicate the type of data to be output:
>>>>> //
>>>>> //    (1) STAT and FHO Text Files, Forecast, Hit, Observation
Rates:
>>>>> //           Total (TOTAL),
>>>>> //           Forecast Rate (F_RATE),
>>>>> //           Hit Rate (H_RATE),
>>>>> //           Observation Rate (O_RATE)
>>>>> //
>>>>> //    (2) STAT and CTC Text Files, Contingency Table Counts:
>>>>> //           Total (TOTAL),
>>>>> //           Forecast Yes and Observation Yes Count (FY_OY),
>>>>> //           Forecast Yes and Observation No Count (FY_ON),
>>>>> //           Forecast No and Observation Yes Count (FN_OY),
>>>>> //           Forecast No and Observation No Count (FN_ON)
>>>>> //
>>>>> //    (3) STAT and CTS Text Files, Contingency Table Scores:
>>>>> //           Total (TOTAL),
>>>>> //           Base Rate (BASER),
>>>>> //           Forecast Mean (FMEAN),
>>>>> //           Accuracy (ACC),
>>>>> //           Frequency Bias (FBIAS),
>>>>> //           Probability of Detecting Yes (PODY),
>>>>> //           Probability of Detecting No (PODN),
>>>>> //           Probability of False Detection (POFD),
>>>>> //           False Alarm Ratio (FAR),
>>>>> //           Critical Success Index (CSI),
>>>>> //           Gilbert Skill Score (GSS),
>>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>>> //           Heidke Skill Score (HSS),
>>>>> //           Odds Ratio (ODDS),
>>>>> //           NOTE: All statistics listed above contain
parametric and/or
>>>>> //                 non-parametric confidence interval limits.
>>>>> //
>>>>> //    (4) STAT and MCTC Text Files, NxN Multi-Category
Contingency Table Counts:
>>>>> //           Total (TOTAL),
>>>>> //           Number of Categories (N_CAT),
>>>>> //           Contingency Table Count columns repeated
N_CAT*N_CAT times
>>>>> //
>>>>> //    (5) STAT and MCTS Text Files, NxN Multi-Category
Contingency Table Scores:
>>>>> //           Total (TOTAL),
>>>>> //           Number of Categories (N_CAT),
>>>>> //           Accuracy (ACC),
>>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>>> //           Heidke Skill Score (HSS),
>>>>> //           Gerrity Score (GER),
>>>>> //           NOTE: All statistics listed above contain
parametric and/or
>>>>> //                 non-parametric confidence interval limits.
>>>>> //
>>>>> //    (6) STAT and CNT Text Files, Statistics of Continuous
Variables:
>>>>> //           Total (TOTAL),
>>>>> //           Forecast Mean (FBAR),
>>>>> //           Forecast Standard Deviation (FSTDEV),
>>>>> //           Observation Mean (OBAR),
>>>>> //           Observation Standard Deviation (OSTDEV),
>>>>> //           Pearson's Correlation Coefficient (PR_CORR),
>>>>> //           Spearman's Rank Correlation Coefficient (SP_CORR),
>>>>> //           Kendall Tau Rank Correlation Coefficient (KT_CORR),
>>>>> //           Number of ranks compared (RANKS),
>>>>> //           Number of tied ranks in the forecast field
(FRANK_TIES),
>>>>> //           Number of tied ranks in the observation field
(ORANK_TIES),
>>>>> //           Mean Error (ME),
>>>>> //           Standard Deviation of the Error (ESTDEV),
>>>>> //           Multiplicative Bias (MBIAS = FBAR / OBAR),
>>>>> //           Mean Absolute Error (MAE),
>>>>> //           Mean Squared Error (MSE),
>>>>> //           Bias-Corrected Mean Squared Error (BCMSE),
>>>>> //           Root Mean Squared Error (RMSE),
>>>>> //           Percentiles of the Error (E10, E25, E50, E75, E90)
>>>>> //           NOTE: Most statistics listed above contain
parametric and/or
>>>>> //                 non-parametric confidence interval limits.
>>>>> //
>>>>> //    (7) STAT and SL1L2 Text Files, Scalar Partial Sums:
>>>>> //           Total (TOTAL),
>>>>> //           Forecast Mean (FBAR),
>>>>> //              = mean(f)
>>>>> //           Observation Mean (OBAR),
>>>>> //              = mean(o)
>>>>> //           Forecast*Observation Product Mean (FOBAR),
>>>>> //              = mean(f*o)
>>>>> //           Forecast Squared Mean (FFBAR),
>>>>> //              = mean(f^2)
>>>>> //           Observation Squared Mean (OOBAR)
>>>>> //              = mean(o^2)
>>>>> //
>>>>> //    (8) STAT and VL1L2 Text Files, Vector Partial Sums:
>>>>> //           Total (TOTAL),
>>>>> //           U-Forecast Mean (UFBAR),
>>>>> //              = mean(uf)
>>>>> //           V-Forecast Mean (VFBAR),
>>>>> //              = mean(vf)
>>>>> //           U-Observation Mean (UOBAR),
>>>>> //              = mean(uo)
>>>>> //           V-Observation Mean (VOBAR),
>>>>> //              = mean(vo)
>>>>> //           U-Product Plus V-Product (UVFOBAR),
>>>>> //              = mean(uf*uo+vf*vo)
>>>>> //           U-Forecast Squared Plus V-Forecast Squared
(UVFFBAR),
>>>>> //              = mean(uf^2+vf^2)
>>>>> //           U-Observation Squared Plus V-Observation Squared
(UVOOBAR)
>>>>> //              = mean(uo^2+vo^2)
>>>>> //
>>>>> //    (9) STAT and PCT Text Files, Nx2 Probability Contingency
Table Counts:
>>>>> //           Total (TOTAL),
>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>> //           Probability Threshold Value (THRESH_i),
>>>>> //           Row Observation Yes Count (OY_i),
>>>>> //           Row Observation No Count (ON_i),
>>>>> //           NOTE: Previous 3 columns repeated for each row in
the table
>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>> //
>>>>> //   (10) STAT and PSTD Text Files, Nx2 Probability Contingency
Table Scores:
>>>>> //           Total (TOTAL),
>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>> //           Base Rate (BASER) with confidence interval limits,
>>>>> //           Reliability (RELIABILITY),
>>>>> //           Resolution (RESOLUTION),
>>>>> //           Uncertainty (UNCERTAINTY),
>>>>> //           Area Under the ROC Curve (ROC_AUC),
>>>>> //           Brier Score (BRIER) with confidence interval
limits,
>>>>> //           Probability Threshold Value (THRESH_i)
>>>>> //           NOTE: Previous column repeated for each probability
threshold.
>>>>> //
>>>>> //   (11) STAT and PJC Text Files, Joint/Continuous Statistics
of
>>>>> //                                 Probabilistic Variables:
>>>>> //           Total (TOTAL),
>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>> //           Probability Threshold Value (THRESH_i),
>>>>> //           Observation Yes Count Divided by Total (OY_TP_i),
>>>>> //           Observation No Count Divided by Total (ON_TP_i),
>>>>> //           Calibration (CALIBRATION_i),
>>>>> //           Refinement (REFINEMENT_i),
>>>>> //           Likelihood (LIKELIHOOD_i),
>>>>> //           Base Rate (BASER_i),
>>>>> //           NOTE: Previous 7 columns repeated for each row in
the table
>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>> //
>>>>> //   (12) STAT and PRC Text Files, ROC Curve Points for
>>>>> //                                 Probabilistic Variables:
>>>>> //           Total (TOTAL),
>>>>> //           Number of Forecast Probability Thresholds
(N_THRESH),
>>>>> //           Probability Threshold Value (THRESH_i),
>>>>> //           Probability of Detecting Yes (PODY_i),
>>>>> //           Probability of False Detection (POFD_i),
>>>>> //           NOTE: Previous 3 columns repeated for each row in
the table
>>>>> //           Last Probability Threshold Value (THRESH_n)
>>>>> //
>>>>> //   (13) STAT and NBRCTC Text Files, Neighborhood Methods
Contingency Table Counts:
>>>>> //           Total (TOTAL),
>>>>> //           Forecast Yes and Observation Yes Count (FY_OY),
>>>>> //           Forecast Yes and Observation No Count (FY_ON),
>>>>> //           Forecast No and Observation Yes Count (FN_OY),
>>>>> //           Forecast No and Observation No Count (FN_ON),
>>>>> //           Fractional Threshold Value (FRAC_T)
>>>>> //
>>>>> //   (14) STAT and NBRCTS Text Files, Neighborhood Methods
Contingency Table Scores:
>>>>> //           Total (TOTAL),
>>>>> //           Base Rate (BASER),
>>>>> //           Forecast Mean (FMEAN),
>>>>> //           Accuracy (ACC),
>>>>> //           Bias (BIAS),
>>>>> //           Probability of Detecting Yes (PODY),
>>>>> //           Probability of Detecting No (PODN),
>>>>> //           Probability of False Detection (POFD),
>>>>> //           False Alarm Ratio (FAR),
>>>>> //           Critical Success Index (CSI),
>>>>> //           Gilbert Skill Score (GSS),
>>>>> //           Hanssen and Kuipers Discriminant (HK),
>>>>> //           Heidke Skill Score (HSS),
>>>>> //           Odds Ratio (ODDS),
>>>>> //           NOTE: Most statistics listed above contain
parametric and/or
>>>>> //                 non-parametric confidence interval limits.
>>>>> //
>>>>> //   (15) STAT and NBRCNT Text Files, Neighborhood Methods
Continuous Scores:
>>>>> //           Total (TOTAL),
>>>>> //           Fractions Brier Score (FBS),
>>>>> //           Fractions Skill Score (FSS)
>>>>> //
>>>>> //   (16) NetCDF File containing difference fields for each grib
>>>>> //        code/mask combination.  A non-zero value indicates
that
>>>>> //        this NetCDF file should be produced.  A value of 0
>>>>> //        indicates that it should not be produced.
>>>>> //
>>>>> // Values for flags (1) through (15) are interpreted as follows:
>>>>> //    (0) Do not generate output of this type
>>>>> //    (1) Write output to a STAT file
>>>>> //    (2) Write output to a STAT file and a text file
>>>>> //
>>>>> output_flag[] = [ 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 1
];
>>>>> //
>>>>> // Flag to indicate whether Kendall's Tau and Spearman's Rank
Correlation
>>>>> // Coefficients should be computed.  Computing them over large
datasets is
>>>>> // computationally intensive and slows down the runtime
execution significantly.
>>>>> //    (0) Do not compute these correlation coefficients
>>>>> //    (1) Compute these correlation coefficients
>>>>> //
>>>>> rank_corr_flag = 0;
>>>>> //
>>>>> // Specify the GRIB Table 2 parameter table version number to be
used
>>>>> // for interpreting GRIB codes.
>>>>> // http://www.nco.ncep.noaa.gov/pmb/docs/on388/table2.html
>>>>> //
>>>>> grib_ptv = 2;
>>>>> //
>>>>> // Directory where temporary files should be written.
>>>>> //
>>>>> tmp_dir = "/tmp";
>>>>> //
>>>>> // Prefix to be used for the output file names.
>>>>> //
>>>>> output_prefix = "APCP_24";
>>>>> //
>>>>> // Indicate a version number for the contents of this
configuration file.
>>>>> // The value should generally not be modified.
>>>>> //
>>>>> version = "V3.0";
>>>>>
>>>>>
>>>>> geeta
>>>>>
>>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize Fuzzy
verf.
>>>>>> From: met_help at ucar.edu
>>>>>> To: geeta124 at hotmail.com
>>>>>> Date: Thu, 20 Feb 2014 10:21:20 -0700
>>>>>>
>>>>>> Geeta,
>>>>>>
>>>>>> I see that you're using METv3.0.  The current version is
METv4.1, and it'd be good to switch to that version when possible.
There have been major changes to the MET configuration file format
since
>>>>>> METv3.0, so be sure to use the default config files for
METv4.1.
>>>>>>
>>>>>> I ran METv3.0 grid_stat on the data files you sent and
reproduced the error message you saw:
>>>>>>        ***WARNING***: process_scores() -> 61(*,*) not found in
file: 2011060100_WRFPRS_day1_003Z.nc
>>>>>>
>>>>>> Since the input files are both NetCDF files, you need to
specify the name of the NetCDF variable that should be used.  So I
modified your config file:
>>>>>>        FROM: fcst_field[] = [ "61/A24" ];
>>>>>>        TO:   fcst_field[] = [ "APCP_24(*,*)" ];
>>>>>>
>>>>>> When I reran with this change, I got this error:
>>>>>>        NetCDF: Attribute not found
>>>>>>
>>>>>> After some digging, I found the problem to be the MET_version
global attribute in 02june2011.nc:
>>>>>>                     :MET_version = "V3.0.1" ;
>>>>>>
>>>>>> I switched that to be consistent with the version of MET you're
running:
>>>>>>                     :MET_version = "V3.0" ;
>>>>>>
>>>>>> And then I got this error:
>>>>>> ERROR: parse_poly_mask() -> the dimensions of the masking
region (185, 129) must match the dimensions of the data (53, 53).
>>>>>>
>>>>>> So I modified the config file to change the masking region
settings:
>>>>>>        FROM: mask_grid[] = [ "DTC165", "DTC166" ];
>>>>>>              mask_poly[] = [
"MET_BASE/out/gen_poly_mask/CONUS_poly.nc",
>>>>>>                              "MET_BASE/data/poly/LMV.poly" ];
>>>>>>
>>>>>>        TO:   mask_grid[] = [ "FULL" ];
>>>>>>              mask_poly[] = [];
>>>>>>
>>>>>> And then it ran fine.
>>>>>>
>>>>>> To summarize...
>>>>>>      (1) To run METv3.0 grid_stat, please set the "MET_version"
global attribute in all the gridded NetCDF files you're using to
METv3.0.
>>>>>>      (2) Please use the attached, updated version of
GridStatConfig_APCP_24.
>>>>>>      (3) Consider updating to using METv4.1 instead.
>>>>>>
>>>>>> Thanks,
>>>>>> John
>>>>>>
>>>>>> On 02/19/2014 11:33 PM, Geeta Geeta via RT wrote:
>>>>>>>
>>>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427
>
>>>>>>>
>>>>>>> Hi John,
>>>>>>> I am bothering you with a few more questions. I hope you will
bear with me.
>>>>>>> What I gathered from the discussion with you is that we
essentially have 3 types of neighbourhood approaches (space, time and
intensity).  These choices can be made using the config file.
>>>>>>>
>>>>>>> 1. Now I was reading about 3 approaches to FUZZY verification,
which are:  a. Multi-event contingency table (My question is: can we
define a hit as rainfall between 0.1 and 2.5 in the config file?
Normally we select the threshold as ge0.1 or ge2.5, etc.  Is there a
provision for giving a range in the config file?).
>>>>>>>
>>>>>>> b) Pragmatic approach (I do not know what that is).
>>>>>>>
>>>>>>> c) Conditional Square root of Ranked probability score (CSRR)
(I do not know what that is either).
>>>>>>>
>>>>>>> I do not understand these. Can you point me in the right
direction or provide some hints?
>>>>>>>
>>>>>>> 2. How can I prepare the QUILT plots (spatial scale vs.
threshold) for a score?
>>>>>>> Can the QUILT plot be prepared for any score, like HK, HSS, FBS
or FSS?
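>>>>>>>
>>>>>>> (A quilt plot is essentially a heatmap of a score as a function
of neighbourhood width and threshold, so it can be drawn for any score
computed at each (width, threshold) pair.  A minimal matplotlib
sketch, with hypothetical FSS values standing in for scores parsed
from the NBRCNT output:)
>>>>>>>
>>>>>>>    import numpy as np
>>>>>>>    import matplotlib.pyplot as plt
>>>>>>>
>>>>>>>    # Hypothetical FSS values: rows = thresholds, cols = widths.
>>>>>>>    widths  = [3, 5, 9, 11, 13, 15]
>>>>>>>    threshs = ["gt0.0", "gt5.0", "gt10.0"]
>>>>>>>    fss = np.array([[0.42, 0.55, 0.68, 0.72, 0.75, 0.78],
>>>>>>>                    [0.31, 0.44, 0.58, 0.63, 0.67, 0.70],
>>>>>>>                    [0.20, 0.33, 0.47, 0.53, 0.58, 0.62]])
>>>>>>>
>>>>>>>    plt.pcolormesh(fss)
>>>>>>>    plt.xticks(np.arange(len(widths)) + 0.5, widths)
>>>>>>>    plt.yticks(np.arange(len(threshs)) + 0.5, threshs)
>>>>>>>    plt.xlabel("Neighborhood width (grid squares)")
>>>>>>>    plt.ylabel("Threshold")
>>>>>>>    plt.colorbar(label="FSS")
>>>>>>>    plt.show()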
>>>>>>>
>>>>>>>
>>>>>>> thanks
>>>>>>> geeta
>>>>>>>
>>>>>>> From: geeta124 at hotmail.com
>>>>>>> To: met_help at ucar.edu
>>>>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
>>>>>>> Date: Thu, 20 Feb 2014 11:30:25 +0530
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Hi John,
>>>>>>> Sorry, I have put my data on your server. My directory name is
geeta124_data.
>>>>>>> Kindly check that.
>>>>>>>
>>>>>>> geeta
>>>>>>>
>>>>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
>>>>>>>> From: met_help at ucar.edu
>>>>>>>> To: geeta124 at hotmail.com
>>>>>>>> Date: Fri, 14 Feb 2014 09:48:08 -0700
>>>>>>>>
>>>>>>>> Geeta,
>>>>>>>>
>>>>>>>> You run copygb to put the forecast and observation fields on
exactly the same grid, meaning the exact same resolution and number of
grid points.
>>>>>>>>
>>>>>>>> I was trying to make the point that the "interpolation
methods" in the grid_stat config file could be used as a form of
"upscaling".  You are right, there is no *need* to interpolate the
data since
>>>>>>>> you've already used copygb to put them on the same grid.  In
grid_stat, the interpolation options provide a way of smoothing, or
upscaling, the data.  For example, suppose you choose an
interpolation
>>>>>>>> option of UW_MEAN (for un-weighted mean) and width of 5.  For
each grid point, grid_stat will replace the value at the grid point
with the average of the 25 points in a 5x5 box around that point.
>>>>>>>> Doing that for every point in the grid smooths the data and
provides a way of upscaling.
>>>>>>>>
>>>>>>>> The default interpolation width is 1, meaning that no
smoothing is performed.  However, you could use multiple smoothing
widths and see how your performance changes the more you smooth the
data.
>>>>>>>>
>>>>>>>> Does that make sense?
>>>>>>>>
>>>>>>>> Regarding the runtime error you're getting, I see that you're
using input NetCDF files for the forecast and observation fields.  In
the config file, you need to specify the name and dimensions of the
>>>>>>>> NetCDF variable to be used.  Assuming the NetCDF variable is
named "APCP_24", it would look something like this:
>>>>>>>>
>>>>>>>> fcst = {
>>>>>>>>         wind_thresh = [ NA ];
>>>>>>>>
>>>>>>>>         field = [
>>>>>>>>            {
>>>>>>>>              name       = "APCP_24";
>>>>>>>>              level      = [ "(*,*)" ];
>>>>>>>>              cat_thresh = [ >0.0, >=5.0 ];
>>>>>>>>            }
>>>>>>>>         ];
>>>>>>>>
>>>>>>>> };
>>>>>>>>
>>>>>>>> If you continue to experience problems, please send me sample
forecast and observation files along with the GridStatConfig file
you're using.  You can post it to our anonymous ftp site following
these
>>>>>>>> instructions:
>>>>>>>>
http://www.dtcenter.org/met/users/support/met_help.php#ftp
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> John
>>>>>>>>
>>>>>>>> On 02/14/2014 02:17 AM, Geeta Geeta via RT wrote:
>>>>>>>>>
>>>>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>>>>>>>>>
>>>>>>>>> Hi John,
>>>>>>>>>       I have run grid-stat. Following is the error.
>>>>>>>>>
>>>>>>>>> bash-3.2$ ../bin/grid_stat ./fcst_nc/20110601*day1*
../trmm_nc_data/02june2011.nc GridStatConfig_APCP_24
>>>>>>>>> GSL_RNG_TYPE=mt19937
>>>>>>>>> GSL_RNG_SEED=18446744073321512274
>>>>>>>>> Forecast File: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
>>>>>>>>> Observation File: ../trmm_nc_data/02june2011.nc
>>>>>>>>> Configuration File: GridStatConfig_APCP_24
>>>>>>>>> ***WARNING***: process_scores() -> 61(*,*) not found in
file: ./fcst_nc/2011060100_WRFPRS_day1_003Z.nc
>>>>>>>>>
>>>>>>>>>
--------------------------------------------------------------------------------
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Pls suggest.
>>>>>>>>>
>>>>>>>>> geeta
>>>>>>>>>
>>>>>>>>> From: geeta124 at hotmail.com
>>>>>>>>> To: met_help at ucar.edu
>>>>>>>>> Subject: RE: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
>>>>>>>>> Date: Fri, 14 Feb 2014 14:08:12 +0530
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks a lot John for your inputs and clarifications.
>>>>>>>>>
>>>>>>>>> The following doubts remain.
>>>>>>>>>
>>>>>>>>> 1. When I run copygb, what it does is make the
observation and model FC uniform (I mean the same GRID and RESOLUTION).
Only these two parameters are important.
>>>>>>>>> Are you calling that upscaling? So this process is not a
part of GRID-stat; essentially copygb is doing the upscaling part.
>>>>>>>>>
>>>>>>>>> 2. There are interpolation methods in the grid-stat config
file (analogous to those in point-stat; in point-stat, there are 3-4,
like nearest neighbour, mean, distance weighted, etc.).
>>>>>>>>>
>>>>>>>>> Why should one have the interpolation once again, i.e. after
copygb, when the grid fields are already aligned, i.e. each GP has 2
values, one OBS and one FCST? Is that correct?
>>>>>>>>>
>>>>>>>>> geeta
>>>>>>>>>
>>>>>>>>>> Subject: Re: [rt.rap.ucar.edu #65427] Unable to visualize
Fuzzy verf.
>>>>>>>>>> From: met_help at ucar.edu
>>>>>>>>>> To: geeta124 at hotmail.com
>>>>>>>>>> Date: Thu, 13 Feb 2014 10:33:47 -0700
>>>>>>>>>>
>>>>>>>>>> Geeta,
>>>>>>>>>>
>>>>>>>>>> You are correct, the input forecast and observation files
must be on the same grid.  In Grid-Stat, there are two ways you can
perform "fuzzy" verification.
>>>>>>>>>>
>>>>>>>>>> (1) The first way is by applying an interpolation method to
the data.  Since the data are already on the same grid, this is really
a "smoothing" operation instead.  This is called "upscaling".
>>>>>>>>>> Smoother forecasts and observations tend to produce better
traditional verification scores.  So you could see how your scores
(like RMSE or GSS) improve as you smooth the data more and more.  In
the
>>>>>>>>>> config file, you could try:
>>>>>>>>>>
>>>>>>>>>> interp = {
>>>>>>>>>>          field      = BOTH;
>>>>>>>>>>          vld_thresh = 1.0;
>>>>>>>>>>
>>>>>>>>>>          type = [
>>>>>>>>>>             { method = UW_MEAN; width  = 1; },
>>>>>>>>>>             { method = UW_MEAN; width  = 3; },
>>>>>>>>>>             { method = UW_MEAN; width  = 6; },
>>>>>>>>>>             { method = UW_MEAN; width  = 9; }
>>>>>>>>>>          ];
>>>>>>>>>> };
>>>>>>>>>>
>>>>>>>>>> This tells Grid-Stat to compute its statistics 4 times,
applying more smoothing each time.  Typically, the more the data has
been smoothed, the better the statistics will be.
>>>>>>>>>>
>>>>>>>>>> (2) The second way is by applying neighborhood verification
methods.  The most common are the Fractions Brier Score (FBS) and
Fractions Skill Score (FSS), both contained in the NBRCNT output line
>>>>>>>>>> type.  Be sure to turn the NBRCNT output line on in the
Grid-Stat config file.  For neighborhood verification, you pick
multiple neighborhood sizes and look to see how the FSS changes as you
increase
>>>>>>>>>> the neighborhood size.  As the neighborhood size increases,
FSS increases.  And you look to see how large of a neighborhood size
you need to get a "useful" (FSS > 0.5) forecast.
>>>>>>>>>>
>>>>>>>>>> Here's how this method works.  You pick one or more
thresholds (cat_thresh) for your field.  Grid-Stat applies the
threshold to produce a 0/1 binary field of your data.  For each
neighborhood size, n,
>>>>>>>>>> it places an n x n box around each grid point and counts up
the number of events within that box.  For a 3 x 3 box, if 4 of the 9
points contained an event, the value for that point is 4/9.  This is
>>>>>>>>>> done for every grid point in the forecast field and the
observation field.  We call the result of this process the forecast
and observation "fractional coverage" fields.  The FSS and FBS scores
are
>>>>>>>>>> computed by comparing the forecast and observation
fractional coverage fields to each other.
>>>>>>>>>>
>>>>>>>>>> If you're verifying a single field using 3 different
thresholds and 6 different neighborhood sizes, you'd get 18 NBRCNT
lines in the output file.
>>>>>>>>>>
>>>>>>>>>> Here's an example of how you might set this up in the Grid-
Stat config file:
>>>>>>>>>>
>>>>>>>>>> nbrhd = {
>>>>>>>>>>          vld_thresh = 1.0;
>>>>>>>>>>          width      = [ 3, 5, 9, 11, 13, 15 ];
>>>>>>>>>>          cov_thresh = [ >=0.5 ];
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>> For a given threshold, you should look to see how FSS
changes as you increase the neighborhood size.
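>>>>>>>>>>
>>>>>>>>>> (A minimal Python sketch of the FBS/FSS computation from the
fractional coverage fields described above; it follows the standard
Roberts and Lean (2008) definitions rather than grid_stat's exact
internals.)
>>>>>>>>>>
>>>>>>>>>>    import numpy as np
>>>>>>>>>>    from scipy.ndimage import uniform_filter
>>>>>>>>>>
>>>>>>>>>>    def fbs_fss(fcst, obs, thresh, width):
>>>>>>>>>>        # 0/1 event fields from the category threshold.
>>>>>>>>>>        fbin = (fcst > thresh).astype(float)
>>>>>>>>>>        obin = (obs > thresh).astype(float)
>>>>>>>>>>        # Fractional coverage over each width x width box.
>>>>>>>>>>        pf = uniform_filter(fbin, size=width, mode="constant")
>>>>>>>>>>        po = uniform_filter(obin, size=width, mode="constant")
>>>>>>>>>>        fbs = np.mean((pf - po) ** 2)
>>>>>>>>>>        # Reference FBS: the worst possible score, assuming
>>>>>>>>>>        # at least one event in either field.
>>>>>>>>>>        ref = np.mean(pf ** 2) + np.mean(po ** 2)
>>>>>>>>>>        return fbs, 1.0 - fbs / ref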
>>>>>>>>>>
>>>>>>>>>> Hopefully that helps get you going.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> John Halley Gotway
>>>>>>>>>> met_help at ucar.edu
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 02/12/2014 10:49 PM, Geeta Geeta via RT wrote:
>>>>>>>>>>>
>>>>>>>>>>> Wed Feb 12 22:49:14 2014: Request 65427 was acted upon.
>>>>>>>>>>> Transaction: Ticket created by geeta124 at hotmail.com
>>>>>>>>>>>              Queue: met_help
>>>>>>>>>>>            Subject: Unable to visualize Fuzzy verf.
>>>>>>>>>>>              Owner: Nobody
>>>>>>>>>>>         Requestors: geeta124 at hotmail.com
>>>>>>>>>>>             Status: new
>>>>>>>>>>>        Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=65427 >
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Hi John/ met_help.
>>>>>>>>>>>
>>>>>>>>>>> I was reading the MET doc that mentions the FUZZY
verification methods. I am trying to visualise what grid stat does and
how it functions.
>>>>>>>>>>> After the copygb is run, the FCST and OBS are on the same
grid, i.e. grid points 1 to 6 laid out as:
>>>>>>>>>>>
>>>>>>>>>>>    1-----------2-----------3
>>>>>>>>>>>    |           |           |
>>>>>>>>>>>    |           |           |
>>>>>>>>>>>    4-----------5-----------6
>>>>>>>>>>>
>>>>>>>>>>> At the Grid Points (GP) 1 to 6, you have observations and
the model FCST.  Now the MET doc (pg 5-3) says that a SQUARE search
window is defined around each grid point, within which the obs and the
FCST events are counted.  1. I want to know HOW this SQUARE WINDOW
is defined (I mean in the configuration file of Grid stat).  2. How
can I change the size of the SQUARE window?  3. If my model resolution
is 10 km and I am interested in synoptic-scale phenomena, then what
should the window size be?  Your help is urgently required.
>>>>>>>>>>>
>>>>>>>>>>> geeta
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
>

------------------------------------------------

