[Met_help] [rt.rap.ucar.edu #86525] History for RE: grid_stat running very slow

John Halley Gotway via RT met_help at ucar.edu
Tue Jul 9 12:07:32 MDT 2019


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

Hi again MET team,

I have a pcp_combine question for you about something that may or may not be possible, given the current format of the model output data I’m trying to work with.

Brief Background: We have a single netcdf file containing individual hourly model precip for the 48 hourly output times of a forecast run.

I need to extract these hourly precip amounts, sum them over specified intervals, and have pcp_combine write the result to a temporary file.
Is there a way to do this, given that all the netcdf file contains are valid times for each hourly interval?  There is no specification of the model initialization date or forecast hours in the file.  Also, the variable name is “PCP”, not APCP.

If you think we need to extract and/or re-work the netcdf file, please let me know.  So far, I haven’t been able to figure out how to do this in pcp_combine.

Thanks for the help,
JonC

From: John Halley Gotway <johnhg at ucar.edu>
Sent: Monday, August 6, 2018 12:32 PM
To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
Cc: Julie Prestopnik <jpresto at ucar.edu>; Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov>; Tara Jensen <jensen at ucar.edu>
Subject: Re: grid_stat running very slow

Jon,

This is great info.  Thanks for letting us know about how much longer it takes to run with higher compression levels.  We can add a cautionary note to the MET User's Guide about this.

John

On Tue, Jul 31, 2018 at 10:38 AM Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov> wrote:
Hi MET Team,

Thanks for all your emails lately.  It’s all helpful info for getting to the bottom of my issue on the NASA supercomputer.
Jayanthi has built MET version 6.0 on our own SPoRT cluster, so I’m currently trying to do a sample grid_stat run for a single time on our system, since we have more control there.
I have verified that Jayanthi built our MET using Intel compilers and the -O2 optimization level.

More later then,
JonC

From: Julie Prestopnik <jpresto at ucar.edu>
Sent: Tuesday, July 31, 2018 10:43 AM
To: John Halley Gotway <johnhg at ucar.edu>
Cc: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>; Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov>; Tara Jensen <jensen at ucar.edu>
Subject: Re: grid_stat running very slow

Hi Jon.

As John mentioned, I have built MET and its supporting libraries on a number of platforms, always with the Intel family of compilers.

I just looked through the script that I use to build and install MET on these various platforms and verified that I have not added any optimization flags in building MET or any of its supporting libraries.  I don't know if there would be a difference in optimization using gnu vs. intel compilers or not.  Unfortunately, I don't have much experience in this area.

Please let me know if you have any more questions on compiling.  Also, please let us know if you try to run a job with more memory allocation and how that goes.

Thanks,
Julie


On Mon, Jul 30, 2018 at 10:28 PM John Halley Gotway <johnhg at ucar.edu> wrote:
Hi Jon,

Unfortunately I don’t have any good advice for you.  Since I’m typically in development mode, I usually compile with the -g option.  We compile with GNU in development.

I’ve cc’ed Julie Prestopnik, who’s been compiling MET on a variety of platforms (usually with Intel compilers) in case she has any advice.

I wonder if the slower run times are related to memory usage.  If your process switches over to swap space, it could run much slower.  Some supercomputers enable you to request more memory... and/or print diagnostic info about memory usage in the job log.  As a test, you could try running a job with more memory allocation to see if that speeds things up.
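
The details depend on your scheduler, but as a minimal sketch, on a SLURM-based system the request might look something like this (the memory value, time limit, and file names here are just placeholders):

#!/bin/bash
#SBATCH --mem=32G          # request more memory than the default
#SBATCH --time=00:30:00
/usr/local/met-6.0/bin/grid_stat \
   fcst_APCP_01.nc mrms_obs.grb2 GridStatConfig \
   -outdir met_out -v 3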

Thanks
John

On Mon, Jul 30, 2018 at 2:26 PM Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov> wrote:
Hi again John H-G.

I’m presently working with the NASA IT staff to determine the optimization levels for compiling MET.  They built MET with gnu/g++, and noted that the MET documentation reports that -O2 causes problems and recommends using either -O or -g.  Could you therefore please tell me which compiler you use for building MET, and any information about optimization levels, both for the supporting packages and in particular for MET?

Thanks very much,
JonC

From: John Halley Gotway <johnhg at ucar.edu>
Sent: Friday, July 27, 2018 4:55 PM
To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
Subject: Re: Verifying neighborhood precip very slow

Jon,

Thanks for sending this data.  I grabbed it and took a look.  Here's what I see.

I ran the following two commands for a single output time that you sent:

time \
/usr/local/met-6.0/bin/pcp_combine -subtract \
MET_MRMSTEST/sportlis_1506030000_wrfout_arw_d01.grb1f120000 12 \
MET_MRMSTEST/sportlis_1506030000_wrfout_arw_d01.grb1f110000 11 \
met-6.0/sportlis_1506030000_wrfout_arw_d01.grb1f120000_APCP_01.nc

time \
/usr/local/met-6.0/bin/grid_stat \
met-${CUR_MET_VERSION}/sportlis_1506030000_wrfout_arw_d01.grb1f120000_APCP_01.nc \
MET_MRMSTEST/mrms_met_2015060312.grb2 \
met-6.0/GridStatConfig \
-outdir met-6.0/met_out \
-v 3

And I did this for 4 versions of MET:

met-6.0 takes 0.46 sec for pcp_combine and 13.35 seconds for grid_stat.
met-6.1 takes 0.50 sec for pcp_combine and 17.91 seconds for grid_stat.
met-7.0 takes 0.48 sec for pcp_combine and 16.78 seconds for grid_stat.
met-8.0 takes 0.38 sec for pcp_combine and 11.35 seconds for grid_stat (this is the version under development).

When I tried rerunning with "to_grid = OBS" (i.e. regridding forecast data to the MRMS domain) it slowed down a lot.  In particular the BUDGET interpolation method is very slow.  Using NEAREST neighbor speeds it up a lot:
//
// Verification grid
//
regrid = {
   to_grid    = OBS;
   method     = NEAREST;
   width      = 2;
   vld_thresh = 0.5;
}

However, I do think it's appropriate to verify on the relatively coarse model domain instead of the very fine MRMS grid.  So setting "to_grid = FCST" makes sense to me.  I'm out of the office next week but will be back the following week.

When configured with "to_grid = FCST", are you seeing runtimes similar to what I've listed?  Or are they significantly different?

Thanks,
John

On Wed, Jul 25, 2018 at 2:27 PM Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov> wrote:
Hi John H-G:

I uploaded a tarball which hopefully contains most everything you’ll need to do some grid_stat tests with MRMS and our near-CONUS 9-km WRF output.
https://geo.nsstc.nasa.gov/SPoRT/outgoing/jlc/MET/

Thanks!
Jon

From: John Halley Gotway <johnhg at ucar.edu>
Sent: Wednesday, July 25, 2018 3:01 PM
To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
Subject: Re: Verifying neighborhood precip very slow

Jon,

OK, good to know.  Please let me know when you've uploaded those sample files.

Thanks,
John

On Wed, Jul 25, 2018 at 12:10 PM Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov> wrote:
FYI, I just ran a test with the vx_mask netcdf files, and I see no improvement in the performance of grid_stat.
-JonC

From: Case, Jonathan (MSFC-ST11)[ENSCO INC]
Sent: Wednesday, July 25, 2018 12:05 PM
To: 'John Halley Gotway' <johnhg at ucar.edu>
Subject: RE: Verifying neighborhood precip very slow

John H-G,

Let me upload for you a sample GridStatConfig file for v6.0, one of the WRF GRIB1 files I’m using, and the MRMS GRIB2 files I “re-packaged” for use in MET.
I’ll get that to you this afternoon sometime.

-JonC

From: John Halley Gotway <johnhg at ucar.edu>
Sent: Wednesday, July 25, 2018 11:46 AM
To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
Subject: Re: Verifying neighborhood precip very slow

Jon,

I thought of trying to run a simple test here to quantify these timing issues (at least in a relative sense).  I have a sample MRMS file, and so I know the projection info.  But I don't have your WRF domain ready at hand.  Can you send me (or point me to) a sample file?  Apologies if you've already sent this to me... I see some data from 2011 but I'm guessing the grid may have changed since then.

Thanks,
John

On Wed, Jul 25, 2018 at 9:10 AM Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov> wrote:
Hi again John H-G,

Since these are standard NCEP verification regions I’m sourcing within MET, are these vx_mask regions by any chance already available in netcdf format for the .poly files below?
Also, the USER.poly file I use below is auto-generated by our python scripts based on the approximate outline of the WRF model domain being verified.  So all I’m doing there is outlining the entire domain with only 5 points in the .poly file.  That shouldn’t need gen_vx_mask applied to it, right?
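
(For reference, a MET .poly file is just a region name on the first line followed by lat/lon vertex pairs, so the python script writes something like the following, with purely illustrative coordinates:

cat > USER.poly << 'EOF'
USER
24.0 -110.0
48.0 -110.0
48.0  -75.0
24.0  -75.0
24.0 -110.0
EOF
)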

Thanks for the recommendation.  I’ll let you know how much using vx_mask speeds things up.
-JonC

From: John Halley Gotway <johnhg at ucar.edu>
Sent: Tuesday, July 24, 2018 5:15 PM
To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
Cc: Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov>; Tara Jensen <jensen at ucar.edu>; Tressa Fowler <tressa at ucar.edu>
Subject: Re: Verifying neighborhood precip very slow

Jon,

Actually, I do notice one setting in your config file that's slowing things down a lot!  You're specifying masking regions using lat/lon polylines:

poly = [ "/discover/nobackup/jlcase/MET/configFiles/USER.poly", "MET_BASE/poly/NWC.poly", "MET_BASE/poly/SWC.poly", "MET_BASE/poly/GRB.poly",
              "MET_BASE/poly/NMT.poly", "MET_BASE/poly/SMT.poly", "MET_BASE/poly/SWD.poly", "MET_BASE/poly/NPL.poly", "MET_BASE/poly/SPL.poly",
              "MET_BASE/poly/MDW.poly", "MET_BASE/poly/LMV.poly", "MET_BASE/poly/GMC.poly", "MET_BASE/poly/NEC.poly", "MET_BASE/poly/APL.poly",
              "MET_BASE/poly/SEC.poly" ];

That's very slow.  For each point in the verification domain, Grid-Stat is checking to see if it's inside each of these lat/lon polylines.  For a coarse grid or a polyline that doesn't contain many points, it's hardly noticeable.  But for the dense MRMS grid, that's millions and millions of computations that we could avoid by running the gen_vx_mask tool instead.  It should speed it up considerably.

For each of these polylines, run the gen_vx_mask tool to generate a NetCDF output file.  And then replace the ".poly" file in the config file with the path to the NetCDF output of gen_vx_mask.  Then try rerunning.
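
As a rough sketch, that could be a loop over the NCEP regions like the one below.  Here "sample_fcst.grb" is a placeholder for one of your WRF GRIB1 files (it defines the grid the masks are computed on, which should match your verification grid), MET_BASE stands for the data directory of your MET installation, and the output naming is up to you.  Check the gen_vx_mask usage statement for your MET version, since newer releases also accept a "-type" argument:

for region in NWC SWC GRB NMT SMT SWD NPL SPL MDW LMV GMC NEC APL SEC; do
   gen_vx_mask \
      sample_fcst.grb \
      ${MET_BASE}/poly/${region}.poly \
      masks/${region}_mask.nc
done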

I'd be really curious to hear if and by how much that improves the runtime.

Thanks,
John


On Tue, Jul 24, 2018 at 4:05 PM John Halley Gotway <johnhg at ucar.edu> wrote:
Jon,

I just ran this same Grid-Stat test case using met-6.0 with/without the "fix".  Unfortunately, there's no improvement.  Both versions take about 43.6 seconds to run.  We must have introduced this issue when restructuring logic in met-7.0.  So unfortunately, this fix has no real impact on 6.0.

So it's back to the drawing board.  We need a much faster algorithm for computing fractional coverage fields when computing neighborhood stats.

John

On Tue, Jul 24, 2018 at 2:59 PM Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov> wrote:
Hi John,

If it’s a simple one-line change to allocate memory, as you say, could you identify where the code change needs to be made in the 6.0 version (if feasible)?
We can make the change and re-compile to see if it’s as dramatic a change as you’ve documented.

If the change can’t easily be made in v6.0, then I’ll need to consider upgrading to v7.0.  That will be a longer effort on my part, but one that we’ll likely need to make eventually….
Thx,
-JonC

From: John Halley Gotway <johnhg at ucar.edu>
Sent: Tuesday, July 24, 2018 3:55 PM
To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
Cc: Tara Jensen <jensen at ucar.edu>; Tressa Fowler <tressa at ucar.edu>; Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov>
Subject: Re: Verifying neighborhood precip very slow

Jon,

As luck would have it, working with folks at NCEP, we identified a memory allocation issue in met-7.0 and will be posting a bugfix for it later today:
   https://dtcenter.org/met/users/support/known_issues/METv7.0/index.php

This one-line change pre-allocates the required memory in one chunk rather than building it up incrementally, which is the excruciatingly slow part!  For comparison, the execution time for NCEP's Grid-Stat test case improved from 18 minutes to 56 seconds.  Additionally, the beta release for the next version of MET further improves that runtime to 27 seconds.  The latter speed-up is largely due to storing masking regions more intelligently using booleans instead of double precision values... which consume more memory and are slower to process.

As EMC moves to using MET operationally, there's a great focus on efficiency.

However, you're using met-6.0.  I could check to see if that same memory fix would apply to met-6.0... unless you tell me that you'd be able to switch to met-7.0 instead.

Thanks,
John



On Tue, Jul 24, 2018 at 1:33 PM Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov> wrote:
Folks,

Well, it turns out that there has been a very substantial speed-up, confirming that I am in fact interpolating the MRMS OBS grid to the forecast grid.
As a check, I changed the “to_grid” field to “OBS”, and grid_stat is still running 45 minutes later on the first grid comparison!!  The number of pairs over the model region is 14 million vs. 159 thousand when interpolating to the model grid.

So I have realized a dramatic speed-up, although grid_stat is still not running quite as fast as I’d like.

Thanks,
JonC

From: Case, Jonathan (MSFC-ST11)[ENSCO INC]
Sent: Tuesday, July 24, 2018 11:45 AM
To: 'John Halley Gotway' <johnhg at ucar.edu>
Cc: Tara Jensen <jensen at ucar.edu>; Tressa Fowler <tressa at ucar.edu>; Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov>
Subject: RE: Verifying neighborhood precip very slow

Hi again John H-G,

I finally got around to re-configuring my python scripts to remove the regrid_data_plane step and instead re-grid MRMS to the FCST domains in-line within GridStatConfig, as you defined in the regrid block below.  Previously, we had been re-gridding all model fields to the obs grid using regrid_data_plane prior to running grid_stat (an extra, unnecessary step, since we first developed this workflow prior to MET version 6).

I now have grid_stat running in our version 6.0 installation with this new configuration.  Unfortunately, grid_stat is still running excruciatingly, prohibitively slowly in what I feel is a fairly basic setup.
It takes about 6 minutes to get through a single run of grid_stat for one accumulation interval (just 1-h precip for now).  I need to produce batch results across multiple accumulation intervals, model grids, and experiments, so this will literally take weeks to process the numerous days of forecast runs I have.

My GridStatConfig setup is as follows (the key GridStatConfig entries are included below, just to see if I’m doing something inherently wrong):

- 1/5/10/25 mm accumulation thresholds
- 1-h APCP in the current runs, taking ~6 min each (I really need these to run on the order of seconds, not minutes)
- Verification stats generated for several poly regions (essentially all the NCEP verification regions and the entire grid)

Any suggestions for speeding up grid_stat using MRMS QPE is greatly appreciated!

Many thanks,
JonC

GridStatConfig contents:

model = "sportlis_d01";

//
// Output description to be written
// May be set separately in each "obs.field" entry
//
desc = "NA";

//
// Output observation type to be written
//
obtype = "ANALYS";

////////////////////////////////////////////////////////////////////////////////

//
// Verification grid
//
regrid = {
   to_grid    = FCST;
   method     = BUDGET;
   width      = 2;
   vld_thresh = 0.5;
}

////////////////////////////////////////////////////////////////////////////////

cat_thresh  = [ NA ];
cnt_thresh  = [ NA ];
cnt_logic   = UNION;
wind_thresh = [ NA ];
wind_logic  = UNION;

fcst = {

   field = [
      {
        name       = "APCP_01";
        level      = [ "(*,*)" ];
        cat_thresh = [ >=1, >=5, >=10, >=25 ];
      }
   ];

}

obs = {

   field = [
      {
        name       = "APCP_01";
        level      = [ "(*,*)" ];
        cat_thresh = [ >=1, >=5, >=10, >=25 ];
      }
   ];

}

climo_mean = {

   file_name = [];
   field     = [];

   regrid = {
      method     = NEAREST;
      width      = 1;
      vld_thresh = 0.5;
   }

   time_interp_method = DW_MEAN;
   match_day          = FALSE;
   time_step          = 21600;
}

mask = {
   grid = [  ];
   poly = [ "/discover/nobackup/jlcase/MET/configFiles/USER.poly", "MET_BASE/poly/NWC.poly", "MET_BASE/poly/SWC.poly", "MET_BASE/poly/GRB.poly", "MET_BASE/poly/NMT.poly", "MET_BASE/poly/SMT.po
ly", "MET_BASE/poly/SWD.poly", "MET_BASE/poly/NPL.poly", "MET_BASE/poly/SPL.poly", "MET_BASE/poly/MDW.poly", "MET_BASE/poly/LMV.poly", "MET_BASE/poly/GMC.poly", "MET_BASE/poly/NEC.poly", "MET_
BASE/poly/APL.poly", "MET_BASE/poly/SEC.poly" ];
}

ci_alpha  = [ 0.05 ];

boot = {
   interval = PCTILE;
   rep_prop = 1.0;
   n_rep    = 0;
   rng      = "mt19937";
   seed     = "";
}

interp = {
   field      = BOTH;
   vld_thresh = 1.0;
   shape      = SQUARE;

   type = [
      {
         method = NEAREST;
         width  = 1;
      }
   ];
}

nbrhd = {
   width      = [ 7 ];
   cov_thresh = [ >0.0 ];
   vld_thresh = 1.0;
}

output_flag = {
   fho    = BOTH;
   ctc    = BOTH;
   cts    = BOTH;
   mctc   = NONE;
   mcts   = NONE;
   cnt    = BOTH;
   sl1l2  = BOTH;
   sal1l2 = BOTH;
   vl1l2  = BOTH;
   val1l2 = BOTH;
   pct    = BOTH;
   pstd   = BOTH;
   pjc    = BOTH;
   prc    = BOTH;
   nbrctc = BOTH;
   nbrcts = BOTH;
   nbrcnt = BOTH;
}

//
// NetCDF matched pairs output file
//
nc_pairs_flag   = {
   latlon     = FALSE;
   raw        = FALSE;
   diff       = FALSE;
   climo      = FALSE;
   weight     = FALSE;
   nbrhd      = FALSE;
   apply_mask = FALSE;
}

////////////////////////////////////////////////////////////////////////////////

grid_weight_flag = NONE;
rank_corr_flag   = FALSE;
tmp_dir          = "/discover/nobackup/jlcase/MET/gridStatOutput/grid_stat_tmp";
output_prefix    = "sportlis_d01_APCP_01";
version          = "V6.0";


From: John Halley Gotway <johnhg at ucar.edu>
Sent: Monday, June 11, 2018 1:29 PM
To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
Cc: Tara Jensen <jensen at ucar.edu>; Tressa Fowler <tressa at ucar.edu>
Subject: Re: Verifying neighborhood precip

Jon,

Sorry for the delay in responding.  I'm in a training class all day, and they won't let us use our phones :(

And sorry for the misunderstanding.  I remember you asking about applying neighborhood methods in Grid-Stat regarding a 40-km neighborhood size.

But if that's not the case, and you're simply comparing precipitation accumulations, then it's much simpler.  I would suggest re-gridding the hi-res MRMS "observation" data to the relatively lower-res NU-WRF domain.  You'd do that in the Grid-Stat config file like this:

regrid = {
   to_grid    = FCST;
   method     = BUDGET;
   width      = 2;
   vld_thresh = 0.5;
}

The BUDGET interpolation method is generally recommended for accumulated variables, like precip.

As for when to upgrade versions, it's totally up to you.  You can see a list of the features added for each release here:
   https://dtcenter.org/met/users/support/release_notes/index.php

Probably doesn't make sense to upgrade versions until there's some new functionality available that you need/want.

Nice seeing you this week.  Hope you had a good trip back.

Thanks,
John


On Mon, Jun 11, 2018 at 11:33 AM, Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov> wrote:
Hi again,

What would be the benefit in upgrading to MET v6.1 vs. v7.0 at this point?  Should I simply stick with our v6.0 installation where I’ve done a lot of my point verification already, or is it helpful to upgrade to one of these newer versions?  Will either of these versions be backward compatible with my v6.0 results?

Thanks,
JonC

From: John Halley Gotway [mailto:johnhg at ucar.edu]
Sent: Monday, June 11, 2018 10:39 AM
To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
Cc: Tara Jensen <jensen at ucar.edu>; Tressa Fowler <tressa at ucar.edu>
Subject: Re: Verifying neighborhood precip

Jon,

Here's the same command but using met-6.0:

met-6.0/bin/regrid_data_plane \
   MRMS_GaugeCorr_QPE_24H_00.00_20180608-120000.grib2 \
   G212 \
   MRMS_GaugeCorr_QPE_24H_00.00_20180608-120000_G212.nc \
   -field 'name="GaugeCorrQPE24H"; level="Z0";' \
   -width 72 -method MAX -name GaugeCorrQPE24H_MAX_72

Note that the censor_thresh/censor_val settings aren't included... and neither is the "-shape CIRCLE" option... those were added in met-6.1.  With met-6.0, you'll get a square interpolation area instead of a circle.

As for whether or not this is appropriate... My understanding is that you have a forecast field of probabilities that are defined as the probability of some event occurring within 40km of each grid point.  The upscaling method I've suggested is a way to pre-process the observation data to make it consistent with the way the probabilistic forecast was defined.  Replace the value at each observation grid point with the maximum value within a neighborhood of radius 40km.  Once you've transformed the obs in this way, you can use them to verify the probability forecast directly.

I believe this is the same method that the HRRR-TLE group at NOAA/GSD is using to verify their neighborhood probability forecasts.

I've cc'ed Tressa Fowler on this email.  She's our resident statistician and may have an opinion on this.

Using the CIRCLE shape available in met-6.1 or met-7.0 would be preferable to using squares in met-6.0.  But perhaps that's close enough.

Thanks,
John


On Mon, Jun 11, 2018 at 9:18 AM, Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov> wrote:
Oh, I forgot to mention one thing.  Is “max” the appropriate upscaling method I should use, or is there a conservative upscaling/interpolation approach?
-JonC

From: Case, Jonathan (MSFC-ST11)[ENSCO INC]
Sent: Monday, June 11, 2018 10:16 AM
To: 'John Halley Gotway' <johnhg at ucar.edu>
Cc: Tara Jensen <jensen at ucar.edu>
Subject: RE: Verifying neighborhood precip


----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: RE: grid_stat running very slow
From: John Halley Gotway
Time: Wed Aug 08 14:12:33 2018

Jon,

The pcp_combine "-sum" option definitely will not work.  It's logic is
set
up to process GRIB1/2 files.  The pcp_combine "-add" option might
work.
You do something like this:

pcp_combine -add \
   in_file.nc 'name="PCP"; level="(0,*,*)"; file_type=NETCDF_NCCF;'  \
   in_file.nc 'name="PCP"; level="(1,*,*)"; file_type=NETCDF_NCCF;'  \
   in_file.nc 'name="PCP"; level="(2,*,*)"; file_type=NETCDF_NCCF;'  \
   out_file.nc

I'm assuming that the PCP variable has 3 dimensions and the first one is time.  This is telling pcp_combine to read data for the first, second, and third time steps, add them up, and write the output to out_file.nc.  Of course, if MET can't understand the input timing info, it won't write meaningful time info to the output.
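
To sum over an arbitrary interval, you could script the "-add" arguments rather than typing them out.  A minimal bash sketch, assuming the same in_file.nc/PCP layout as above (the variable names and loop here are just illustrative):

T0=0; N=3   # starting time index and number of hourly steps to sum
args=()
for ((i=T0; i<T0+N; i++)); do
   args+=(in_file.nc "name=\"PCP\"; level=\"($i,*,*)\"; file_type=NETCDF_NCCF;")
done
pcp_combine -add "${args[@]}" out_file.nc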

Another alternative would be using the NCO tools to slice, dice, and sum your NetCDF files.

John


------------------------------------------------
Subject: RE: [rt.rap.ucar.edu #86525] RE: grid_stat running very slow
From: Case, Jonathan[ENSCO INC]
Time: Wed Aug 08 14:52:32 2018

Thanks.  I'll give it a shot!
-JonC

-----Original Message-----
From: John Halley Gotway via RT <met_help at ucar.edu>
Sent: Wednesday, August 8, 2018 3:13 PM
To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
Cc: Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-
1 at nasa.gov>; jensen at ucar.edu; jpresto at ucar.edu
Subject: Re: [rt.rap.ucar.edu #86525] RE: grid_stat running very slow

Jon,

The pcp_combine "-sum" option definitely will not work.  It's logic is
set up to process GRIB1/2 files.  The pcp_combine "-add" option might
work.
You do something like this:

pcp_combine -add \
   in_file.nc 'name="PCP"; level="(0,*,*)"; file_type=NETCDF_NCCF;'  \
   in_file.nc 'name="PCP"; level="(1,*,*)"; file_type=NETCDF_NCCF;'  \
   in_file.nc 'name="PCP"; level="(2,*,*)"; file_type=NETCDF_NCCF;'  \
   out_file.nc

I'm assuming that the PCP variable has 3 dimensions and the first one
is time.  This is telling pcp_combine to read data for the first,
second, and third time dimension, add them up, and write the output to
out_file.nc.  Of course if MET can't understand the input timing info,
it won't writing meaning time info to the output.

Another alternative would be using the NCO tools to slice, date, and
sum your NetCDF files.

John

On Wed, Aug 8, 2018 at 2:06 PM Case, Jonathan[ENSCO INC] via RT <
met_help at ucar.edu> wrote:

>
> Wed Aug 08 14:06:01 2018: Request 86525 was acted upon.
> Transaction: Ticket created by jonathan.case-1 at nasa.gov
>        Queue: met_help
>      Subject: RE: grid_stat running very slow
>        Owner: Nobody
>   Requestors: jonathan.case-1 at nasa.gov
>       Status: new
>  Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86525
> >
>
>
> Hi again MET team,
>
> I have a pcp_combine question for you that may or may not work,
given
> the current format of model output data I’m trying to work with.
>
> Brief Background: We have a single netcdf file with individual
hourly
> model precip for 48 hours/times of a forecast run.
>
> I need to extract and sum these hourly precip over specified
intervals
> from the file and have pcp_combine output into a temporary file.
> Is there a way to do this, given that all the netcdf file has in it
> are valid times for each hourly interval?  There is no specification
> of model initialization date and forecast hours in the file.  Also,
> the variable name is “PCP”, not APCP.
>
> If you think we need to extract and/or re-work the netcdf file,
please
> let me know.  So far, I’ve not been able to figure out which way to
do
> this in pcp_combine.
>
> Thanks for the help,
> JonC
>
> From: John Halley Gotway <johnhg at ucar.edu>
> Sent: Monday, August 6, 2018 12:32 PM
> To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
> Cc: Julie Prestopnik <jpresto at ucar.edu>; Srikishen, Jayanthi
> (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov>; Tara Jensen <
> jensen at ucar.edu>
> Subject: Re: grid_stat running very slow
>
> Jon,
>
> This is great info.  Thanks for letting us know about how much
longer
> it takes to run with higher compression levels.  We can add a
> cautionary note to the MET User's Guide about this.
>
> John
>
> On Tue, Jul 31, 2018 at 10:38 AM Case, Jonathan (MSFC-ST11)[ENSCO
INC]
> < jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> Hi MET Team,
>
> Thanks for all your emails lately.  It’s all helpful info for
getting
> to the bottom of my issue on the NASA supercomputer.
> Jayanthi has built MET version 6.0 on our own SPoRT cluster, so I’m
> currently trying to do a sample grid_stat run for a single time on
our
> system, since we have more control there.
> I have verified that Jayanthi built our MET using Intel compilers
and
> the
> –O2 optimization level.
>
> More later then,
> JonC
>
> From: Julie Prestopnik <jpresto at ucar.edu<mailto:jpresto at ucar.edu>>
> Sent: Tuesday, July 31, 2018 10:43 AM
> To: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> Cc: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov
> <mailto:jonathan.case-1 at nasa.gov>>; Srikishen, Jayanthi
> (MSFC-ST11)[USRA]
> <jayanthi.srikishen-1 at nasa.gov<mailto:jayanthi.srikishen-
1 at nasa.gov>>;
> Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>
> Subject: Re: grid_stat running very slow
>
> Hi Jon.
>
> As John mentioned, I have built MET and its supporting libraries on
a
> number of platforms, always with the Intel family of compilers.
>
> I just looked through the script that I use to build and install MET
> on these various platforms and verified that I have not added any
> optimization flags in building MET or any of its supporting
libraries.
> I don't know if there would be a difference in optimization using
gnu
> vs. intel compilers or not.  Unfortunately, I don't have much
experience in this area.
>
> Please let me know if you have any more questions on compiling.
Also,
> please let us know if you try to run a job with more memory
allocation
> and how that goes.
>
> Thanks,
> Julie
>
>
> On Mon, Jul 30, 2018 at 10:28 PM John Halley Gotway <johnhg at ucar.edu
> <mailto:johnhg at ucar.edu>> wrote:
> Hi Jon,
>
> Unfortunately I don’t have any good advice for you.  Since I’m
> typically in development mode, I usually compile with the -g option.
> We compile with GNU in development.
>
> I’ve cc’ed Julie Prestopnik, who’s been compiling MET on a variety of
> platforms (usually with Intel compilers) in case she has any advice.
>
> I wonder if the slower run times are related to memory usage.  If
> your process switches over to swap space, it could run much slower.
> Some supers enable you to request more memory... and/or print
> diagnostic info about memory usage in the job log.  As a test, you
> could try running a job with more memory allocation to see if that
> speeds things up.
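>
> As a side note (a sketch, not MET-specific: batch directives vary by
> scheduler, and the amount shown is only illustrative), on a
> SLURM-based system the request might look like:
>
>    #SBATCH --mem=64G    # request 64 GB of memory for the job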
>
> Thanks
> John
>
> On Mon, Jul 30, 2018 at 2:26 PM Case, Jonathan (MSFC-ST11)[ENSCO
INC]
> < jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> Hi again John H-G.
>
> I’m presently working with the NASA IT staff to determine the
> optimization levels for compiling MET.  They built MET with gnu/g++,
> and noted the MET documentation reports that –O2 causes problems and
> to use either –O or –g.  Could you therefore please provide me with
> the compiler you use for building MET, and any information about
> optimization levels for both the supporting packages and in
> particular MET?
>
> Thanks very much,
> JonC
>
> From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> Sent: Friday, July 27, 2018 4:55 PM
> To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov
> <mailto:jonathan.case-1 at nasa.gov>>
> Subject: Re: Verifying neighborhood precip very slow
>
> Jon,
>
> Thanks for sending this data.  I grabbed it and took a look.  Here's
> what I see.
>
> I ran the following 2 commands for a single output time that you sent:
>
> time \
> /usr/local/met-6.0/bin/pcp_combine -subtract \
>    MET_MRMSTEST/sportlis_1506030000_wrfout_arw_d01.grb1f120000 12 \
>    MET_MRMSTEST/sportlis_1506030000_wrfout_arw_d01.grb1f110000 11 \
>    met-6.0/sportlis_1506030000_wrfout_arw_d01.grb1f120000_APCP_01.nc
>
> time \
> /usr/local/met-6.0/bin/grid_stat \
>    met-${CUR_MET_VERSION}/sportlis_1506030000_wrfout_arw_d01.grb1f120000_APCP_01.nc \
>    MET_MRMSTEST/mrms_met_2015060312.grb2 \
>    met-6.0/GridStatConfig \
>    -outdir met-6.0/met_out \
>    -v 3
>
> And I did this for 4 versions of MET:
>
> met-6.0 takes 0.46 seconds for pcp_combine and 13.35 seconds for grid_stat.
> met-6.1 takes 0.50 seconds for pcp_combine and 17.91 seconds for grid_stat.
> met-7.0 takes 0.48 seconds for pcp_combine and 16.78 seconds for grid_stat.
> met-8.0 takes 0.38 seconds for pcp_combine and 11.35 seconds for grid_stat
> (this is the version under development).
>
> When I tried rerunning with "to_grid = OBS" (i.e. regridding forecast
> data to the MRMS domain), it slowed down a lot.  In particular, the
> BUDGET interpolation method is very slow.  Using NEAREST neighbor
> speeds it up a lot:
> //
> // Verification grid
> //
> regrid = {
>    to_grid    = OBS;
>    method     = NEAREST;
>    width      = 2;
>    vld_thresh = 0.5;
> }
>
> However, I do think it's appropriate to verify on the relatively
> coarse model domain instead of the very fine MRMS grid.  So setting
> "to_grid = FCST" makes sense to me.  I'm out of the office next week
> but will be back the following week.
>
> When configured with "to_grid = FCST", are you seeing runtimes similar
> to what I've listed, or are they significantly different?
>
> Thanks,
> John
>
> On Wed, Jul 25, 2018 at 2:27 PM Case, Jonathan (MSFC-ST11)[ENSCO
INC]
> < jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> Hi John H-G:
>
> I uploaded a tarball which hopefully contains most everything you’ll
> need to do some grid_stat tests with MRMS and our near-CONUS 9-km
> WRF output.
> https://geo.nsstc.nasa.gov/SPoRT/outgoing/jlc/MET/
>
> Thanks!
> Jon
>
> From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> Sent: Wednesday, July 25, 2018 3:01 PM
> To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov
> <mailto:jonathan.case-1 at nasa.gov>>
> Subject: Re: Verifying neighborhood precip very slow
>
> Jon,
>
> OK, good to know.  Please let me know when you've uploaded those
> sample files.
>
> Thanks,
> John
>
> On Wed, Jul 25, 2018 at 12:10 PM Case, Jonathan (MSFC-ST11)[ENSCO
INC]
> < jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> FYI, I just ran a test with the vx_mask netcdf files, and I see no
> improvement in the performance of grid_stat.
> -JonC
>
> From: Case, Jonathan (MSFC-ST11)[ENSCO INC]
> Sent: Wednesday, July 25, 2018 12:05 PM
> To: 'John Halley Gotway' <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> Subject: RE: Verifying neighborhood precip very slow
>
> John H-G,
>
> Let me upload for you a sample GridStatConfig file for v6.0, one of
> the WRF GRIB1 files I’m using, and the MRMS GRIB2 files I
> “re-packaged” for use in MET.
> I’ll get that to you this afternoon sometime.
>
> -JonC
>
> From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> Sent: Wednesday, July 25, 2018 11:46 AM
> To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov
> <mailto:jonathan.case-1 at nasa.gov>>
> Subject: Re: Verifying neighborhood precip very slow
>
> Jon,
>
> I thought of trying to run a simple test here to quantify these
> timing issues (at least in a relative sense).  I have a sample MRMS
> file, and so I know the projection info.  But I don't have your WRF
> domain ready at hand.
> Can you send me (or point me to) a sample file?  Apologies if you've
> already sent this to me... I see some data from 2011, but I'm guessing
> the grid may have changed since then.
>
> Thanks,
> John
>
> On Wed, Jul 25, 2018 at 9:10 AM Case, Jonathan (MSFC-ST11)[ENSCO
INC]
> < jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> Hi again John H-G,
>
> Since these are standard NCEP verification regions I’m sourcing within
> MET, are these vx_mask regions by any chance already available in
> netcdf format for the .poly files below?
> Also, the USER.poly file I use below is auto-generated by our python
> scripts based on the approximate outline of the WRF model domain being
> verified.  So all I’m doing there is just outlining the entire domain
> with only 5 points in the .poly file.  That shouldn’t need
> gen_vx_mask applied to it, right?
>
> Thanks for the recommendation.  I’ll let you know how much using
> vx_mask speeds things up.
> -JonC
>
> From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> Sent: Tuesday, July 24, 2018 5:15 PM
> To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov
> <mailto:jonathan.case-1 at nasa.gov>>
> Cc: Srikishen, Jayanthi (MSFC-ST11)[USRA]
> <jayanthi.srikishen-1 at nasa.gov
> <mailto:jayanthi.srikishen-1 at nasa.gov>>; Tara Jensen
<jensen at ucar.edu <mailto:jensen at ucar.edu>>; Tressa Fowler
<tressa at ucar.edu<mailto:
> tressa at ucar.edu>>
> Subject: Re: Verifying neighborhood precip very slow
>
> Jon,
>
> Actually, I do notice one setting in your config file that's slowing
> things down a lot!  You're specifying masking regions using lat/lon
> polylines:
>
> poly = [ "/discover/nobackup/jlcase/MET/configFiles/USER.poly",
> "MET_BASE/poly/NWC.poly", "MET_BASE/poly/SWC.poly",
> "MET_BASE/poly/GRB.poly",
>               "MET_BASE/poly/NMT.poly", "MET_BASE/poly/SMT.poly",
> "MET_BASE/poly/SWD.poly", "MET_BASE/poly/NPL.poly",
> "MET_BASE/poly/SPL.poly",
>               "MET_BASE/poly/MDW.poly", "MET_BASE/poly/LMV.poly",
> "MET_BASE/poly/GMC.poly", "MET_BASE/poly/NEC.poly",
> "MET_BASE/poly/APL.poly",
>               "MET_BASE/poly/SEC.poly" ];
>
> That's very slow.  For each point in the verification domain,
> Grid-Stat is checking to see if it's inside each of these lat/lon
> polylines.  For a coarse grid or a polyline that doesn't contain many
> points, it's hardly noticeable.  But for the dense MRMS grid, that's
> millions and millions of computations that we could avoid by running
> the gen_vx_mask tool instead.  It should speed it up considerably.
>
> For each of these polylines, run the gen_vx_mask tool to generate a
> NetCDF output file.  And then replace the ".poly" file in the config
> file with the path to the NetCDF output of gen_vx_mask.  Then try
> rerunning.
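>
> As a sketch of that workflow (the file names here are placeholders;
> check the gen_vx_mask usage statement for your MET version for the
> exact arguments):
>
>    gen_vx_mask sample_fcst.grb MET_BASE/poly/NWC.poly NWC_mask.nc
>
> and then, in the config file:
>
>    poly = [ "/path/to/NWC_mask.nc", ... ];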
>
> I'd be really curious to hear if and by how much that improves the
> runtime.
>
> Thanks,
> John
>
>
> On Tue, Jul 24, 2018 at 4:05 PM John Halley Gotway <johnhg at ucar.edu
> <mailto:johnhg at ucar.edu>> wrote:
> Jon,
>
> I just ran this same Grid-Stat test case using met-6.0 with/without
> the "fix".  Unfortunately, there's no improvement.  Both versions take
> about 43.6 seconds to run.  We must have introduced this issue when
> restructuring logic in met-7.0.  So unfortunately, this fix has no
> real impact on 6.0.
>
> So it's back to the drawing board.  We need a much faster algorithm
> for computing fractional coverage fields when computing neighborhood
stats.
>
> John
>
> On Tue, Jul 24, 2018 at 2:59 PM Case, Jonathan (MSFC-ST11)[ENSCO
INC]
> < jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> Hi John,
>
> If it’s a simple 1-line change to allocate memory as you say, then
> could you identify where the code change needs to be made in the 6.0
> version (if feasible)?
> We can make the change and re-compile to see if it’s as dramatic a
> change as you’ve documented.
>
> If the change can’t easily be made in v6.0, then I’ll need to consider
> upgrading to v7.0.  That will be a longer effort on my part, but one
> that we’ll likely need to make eventually....
> Thx,
> -JonC
>
> From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> Sent: Tuesday, July 24, 2018 3:55 PM
> To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov
> <mailto:jonathan.case-1 at nasa.gov>>
> Cc: Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>; Tressa
> Fowler < tressa at ucar.edu<mailto:tressa at ucar.edu>>; Srikishen,
Jayanthi
> (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov<mailto:
> jayanthi.srikishen-1 at nasa.gov>>
> Subject: Re: Verifying neighborhood precip very slow
>
> Jon,
>
> As luck would have it, working with folks at NCEP, we identified a
> memory allocation issue in met-7.0 and will be posting a bugfix for
> it later today:
>
> https://dtcenter.org/met/users/support/known_issues/METv7.0/index.php
>
> This one-line change pre-allocates the required memory in one chunk
> rather than building it up incrementally, which is the excruciatingly
> slow part!
> For comparison, the execution time for NCEP's Grid-Stat test case
> improved from 18 minutes to 56 seconds.  Additionally, the beta
> release for the next version of MET further improves that runtime to
> 27 seconds.  The latter speed-up is largely due to storing masking
> regions more intelligently using booleans instead of double-precision
> values... which consume more memory and are slower to process.
>
> As EMC moves to using MET operationally, there's a great focus on
> efficiency.
>
> However, you're using met-6.0.  I could check to see if that same
> memory fix would apply to met-6.0... unless you tell me that you'd be
> able to switch to met-7.0 instead.
>
> Thanks,
> John
>
>
>
> On Tue, Jul 24, 2018 at 1:33 PM Case, Jonathan (MSFC-ST11)[ENSCO
INC]
> < jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> Folks,
>
> Well, it turns out that there has been a very substantial speed-up,
> confirming that I am in fact interpolating the MRMS OBS grid to the
> forecast grid.
> I changed the “to_grid” field to “OBS”, and grid_stat was still
> running 45 minutes later on the first grid comparison!!  The number of
> pairs over the model region is 14 million vs. 159 thousand when
> interpolating to the model grid.
>
> So I have realized a dramatic speed-up, although grid_stat is still
> not running quite as fast as I’d like.
>
> Thanks,
> JonC
>
> From: Case, Jonathan (MSFC-ST11)[ENSCO INC]
> Sent: Tuesday, July 24, 2018 11:45 AM
> To: 'John Halley Gotway' <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> Cc: Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>; Tressa
> Fowler < tressa at ucar.edu<mailto:tressa at ucar.edu>>; Srikishen,
Jayanthi
> (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov<mailto:
> jayanthi.srikishen-1 at nasa.gov>>
> Subject: RE: Verifying neighborhood precip very slow
>
> Hi again John H-G,
>
> I finally got around to re-configuring my python scripts to remove
> the regrid_data_plane step and to re-grid MRMS to the FCST domains
> in-line within GridStatConfig, as you defined in the regrid block
> below.
> Previously, we had been re-gridding all model fields to the obs grid
> using regrid_data_plane prior to running grid_stat (kind of an extra,
> unnecessary step, since we first developed this workflow prior to MET
> version 6).
>
> I now have grid_stat running in our version 6.0 installation with
> this new configuration.  Unfortunately, grid_stat is still running
> excruciatingly, prohibitively slowly in what I feel is a fairly basic
> setup.
> It takes about 6 minutes just to go through a single run of grid_stat
> for one accumulation interval (just 1-h precip for now).  I need to
> produce batch results across multiple accumulation intervals, model
> grids, and experiments, so this will literally take weeks to process
> the numerous days of forecast runs I have.
>
> My GridStatConfig setup is as follows (the key GridStatConfig entries
> are included below, in case I’m doing something inherently wrong):
>
> •         1/5/10/25mm accumulation thresholds
>
> •         1h APCP in the current runs taking ~6min each (I really need
> these to run on the order of seconds, not minutes)
>
> •         Verification stats generated for several poly regions:
> (essentially all the NCEP verification regions and the entire grid)
>
> Any suggestions for speeding up grid_stat using MRMS QPE are greatly
> appreciated!
>
> Many thanks,
> JonC
>
> GridStatConfig contents:
>
> model = "sportlis_d01";
>
> //
> // Output description to be written
> // May be set separately in each "obs.field" entry
> //
> desc = "NA";
>
> //
> // Output observation type to be written
> //
> obtype = "ANALYS";
>
>
>
> ////////////////////////////////////////////////////////////////////////////////
>
> //
> // Verification grid
> //
> regrid = {
>    to_grid    = FCST;
>    method     = BUDGET;
>    width      = 2;
>    vld_thresh = 0.5;
> }
>
>
>
> ////////////////////////////////////////////////////////////////////////////////
>
> cat_thresh  = [ NA ];
> cnt_thresh  = [ NA ];
> cnt_logic   = UNION;
> wind_thresh = [ NA ];
> wind_logic  = UNION;
>
> fcst = {
>
>    field = [
>       {
>         name       = "APCP_01";
>         level      = [ "(*,*)" ];
>         cat_thresh = [ >=1, >=5, >=10, >=25 ];
>       }
>    ];
>
> }
>
> obs = {
>
>    field = [
>       {
>         name       = "APCP_01";
>         level      = [ "(*,*)" ];
>         cat_thresh = [ >=1, >=5, >=10, >=25 ];
>       }
>    ];
>
> }
>
> climo_mean = {
>
>    file_name = [];
>    field     = [];
>
>    regrid = {
>       method     = NEAREST;
>       width      = 1;
>       vld_thresh = 0.5;
>    }
>
>    time_interp_method = DW_MEAN;
>    match_day          = FALSE;
>    time_step          = 21600;
> }
>
> mask = {
>    grid = [  ];
>    poly = [ "/discover/nobackup/jlcase/MET/configFiles/USER.poly",
>             "MET_BASE/poly/NWC.poly", "MET_BASE/poly/SWC.poly",
>             "MET_BASE/poly/GRB.poly", "MET_BASE/poly/NMT.poly",
>             "MET_BASE/poly/SMT.poly", "MET_BASE/poly/SWD.poly",
>             "MET_BASE/poly/NPL.poly", "MET_BASE/poly/SPL.poly",
>             "MET_BASE/poly/MDW.poly", "MET_BASE/poly/LMV.poly",
>             "MET_BASE/poly/GMC.poly", "MET_BASE/poly/NEC.poly",
>             "MET_BASE/poly/APL.poly", "MET_BASE/poly/SEC.poly" ];
> }
>
> ci_alpha  = [ 0.05 ];
>
> boot = {
>    interval = PCTILE;
>    rep_prop = 1.0;
>    n_rep    = 0;
>    rng      = "mt19937";
>    seed     = "";
> }
>
> interp = {
>    field      = BOTH;
>    vld_thresh = 1.0;
>    shape      = SQUARE;
>
>    type = [
>       {
>          method = NEAREST;
>          width  = 1;
>       }
>    ];
> }
>
> nbrhd = {
>    width      = [ 7 ];
>    cov_thresh = [ >0.0 ];
>    vld_thresh = 1.0;
> }
>
> output_flag = {
>    fho    = BOTH;
>    ctc    = BOTH;
>    cts    = BOTH;
>    mctc   = NONE;
>    mcts   = NONE;
>    cnt    = BOTH;
>    sl1l2  = BOTH;
>    sal1l2 = BOTH;
>    vl1l2  = BOTH;
>    val1l2 = BOTH;
>    pct    = BOTH;
>    pstd   = BOTH;
>    pjc    = BOTH;
>    prc    = BOTH;
>    nbrctc = BOTH;
>    nbrcts = BOTH;
>    nbrcnt = BOTH;
> }
>
> //
> // NetCDF matched pairs output file
> //
> nc_pairs_flag   = {
>    latlon     = FALSE;
>    raw        = FALSE;
>    diff       = FALSE;
>    climo      = FALSE;
>    weight     = FALSE;
>    nbrhd      = FALSE;
>    apply_mask = FALSE;
> }
>
>
>
> ////////////////////////////////////////////////////////////////////////////////
>
> grid_weight_flag = NONE;
> rank_corr_flag   = FALSE;
> tmp_dir          =
> "/discover/nobackup/jlcase/MET/gridStatOutput/grid_stat_tmp";
> output_prefix    = "sportlis_d01_APCP_01";
> version          = "V6.0";
>
>
> From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> Sent: Monday, June 11, 2018 1:29 PM
> To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov
> <mailto:jonathan.case-1 at nasa.gov>>
> Cc: Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>; Tressa
> Fowler < tressa at ucar.edu<mailto:tressa at ucar.edu>>
> Subject: Re: Verifying neighborhood precip
>
> Jon,
>
> Sorry for the delay in responding.  I'm in a training class all day,
> and they won't let us use our phones :(
>
> And sorry for the misunderstanding.  I remember you asking about
> applying neighborhood methods in Grid-Stat regarding a 40-km
> neighborhood size.
>
> But if that's not the case, and you're simply comparing precipitation
> accumulations, then it's much simpler.  I would suggest re-gridding
> the hi-res MRMS "observation" data to the relatively lower-res NU-WRF
> domain.
> You'd do that in the Grid-Stat config file like this:
>
> regrid = {
>    to_grid    = FCST;
>    method     = BUDGET;
>    width      = 2;
>    vld_thresh = 0.5;
> }
>
> The BUDGET interpolation method is generally recommended for
> accumulated variables, like precip.
>
> As for when to upgrade versions, it's totally up to you.  You can see
> a list of the features added for each release here:
>    https://dtcenter.org/met/users/support/release_notes/index.php
>
> Probably doesn't make sense to upgrade versions until there's some new
> functionality available that you need/want.
>
> Nice seeing you this week.  Hope you had a good trip back.
>
> Thanks,
> John
>
>
> On Mon, Jun 11, 2018 at 11:33 AM, Case, Jonathan (MSFC-ST11)[ENSCO
> INC] < jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>>
wrote:
> Hi again,
>
> What would be the benefit in upgrading to MET v6.1 vs. v7.0 at this
> point?  Should I simply stick with our v6.0 installation where I’ve
> done a lot of my point verification already, or is it helpful to
> upgrade to one of these newer versions?  Will either of these
> versions be backward compatible with my v6.0 results?
>
> Thanks,
> JonC
>
> From: John Halley Gotway
> [mailto:johnhg at ucar.edu<mailto:johnhg at ucar.edu>]
> Sent: Monday, June 11, 2018 10:39 AM
> To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov
> <mailto:jonathan.case-1 at nasa.gov>>
> Cc: Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>; Tressa
> Fowler < tressa at ucar.edu<mailto:tressa at ucar.edu>>
> Subject: Re: Verifying neighborhood precip
>
> Jon,
>
> Here's the same command but using met-6.0:
>
> met-6.0/bin/regrid_data_plane \
>    MRMS_GaugeCorr_QPE_24H_00.00_20180608-120000.grib2 G212 \
>    MRMS_GaugeCorr_QPE_24H_00.00_20180608-120000_G212.nc \
>    -field 'name="GaugeCorrQPE24H"; level="Z0";' \
>    -width 72 -method MAX -name GaugeCorrQPE24H_MAX_72
>
> Note that the censor_thresh/censor_val settings aren't included... and
> neither is the "-shape CIRCLE" option... those were added in met-6.1.
> With met-6.0, you'll get a square interpolation area instead of a
> circle.
>
> As for whether or not this is appropriate... My understanding is that
> you have a forecast field of probabilities that are defined as the
> probability of some event occurring within 40km of each grid point.
> The upscaling method I've suggested is a way to pre-process the
> observation data to make it consistent with the way the probabilistic
> forecast was defined.  Replace the value at each observation grid
> point with the maximum value within a neighborhood of radius 40km.
> Once you've transformed the obs in this way, you can use it to verify
> the probability forecast directly.
>
> I believe this is the same method that the HRRR-TLE group at NOAA/GSD
> is using to verify their neighborhood probability forecasts.
>
> I've cc'ed Tressa Fowler on this email.  She's our resident
> statistician and may have an opinion on this.
>
> Using the CIRCLE shape available in met-6.1 or met-7.0 would be
> preferable to using squares in met-6.0.  But perhaps that's close
enough.
>
> Thanks,
> John
>
>
> On Mon, Jun 11, 2018 at 9:18 AM, Case, Jonathan (MSFC-ST11)[ENSCO
INC]
> < jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> Oh, I forgot to mention one thing.  Is “max” the appropriate
> upscaling method I should use, or is there a conservative
> upscaling/interpolation approach?
> -JonC
>
> From: Case, Jonathan (MSFC-ST11)[ENSCO INC]
> Sent: Monday, June 11, 2018 10:16 AM
> To: 'John Halley Gotway' <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> Cc: Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>
> Subject: RE: Verifying neighborhood precip
>
>



------------------------------------------------
Subject: RE: [rt.rap.ucar.edu #86525] RE: grid_stat running very slow
From: Case, Jonathan[ENSCO INC]
Time: Wed Aug 08 15:22:51 2018

Thanks John H-G.  That appears to have worked!

Now, I'm wondering how I should set up the GridStatConfig file to work
with the resulting output.

Here's what came out of the pcp_combine command (a sample 3-h
forecast):
netcdf fcst_apcp_3h {
dimensions:
        lat = 330 ;
        lon = 430 ;
variables:
        float lat(lat) ;
                lat:long_name = "latitude" ;
                lat:units = "degrees_north" ;
                lat:standard_name = "latitude" ;
        float lon(lon) ;
                lon:long_name = "longitude" ;
                lon:units = "degrees_east" ;
                lon:standard_name = "longitude" ;
        float PCP0(lat, lon) ;
                PCP0:name = "PCP0" ;
                PCP0:long_name = "Hourly Precipitation" ;
                PCP0:level = "0,*,*" ;
                PCP0:units = "mm" ;
                PCP0:_FillValue = -9999.f ;
                PCP0:init_time = "20180329_210000" ;
                PCP0:init_time_ut = "1522357200" ;
                PCP0:valid_time = "20180329_210000" ;
                PCP0:valid_time_ut = "1522357200" ;

The actual model initialization time is 20180329_180000, but
pcp_combine merely set the init_time equal to the valid_time in the
output netcdf file.
So I'm wondering: is there a way to set the output variable name
(APCP_03?), init_time, etc. so that it will work cleanly with
grid_stat to get the correct output file naming convention, forecast
times, etc.?

Best,
JonC

-----Original Message-----
From: John Halley Gotway via RT <met_help at ucar.edu>
Sent: Wednesday, August 8, 2018 3:13 PM
To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
Cc: Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-
1 at nasa.gov>; jensen at ucar.edu; jpresto at ucar.edu
Subject: Re: [rt.rap.ucar.edu #86525] RE: grid_stat running very slow

Jon,

The pcp_combine "-sum" option definitely will not work.  Its logic is
set up to process GRIB1/2 files.  The pcp_combine "-add" option might
work.
You'd do something like this:

pcp_combine -add \
   in_file.nc 'name="PCP"; level="(0,*,*)"; file_type=NETCDF_NCCF;'  \
   in_file.nc 'name="PCP"; level="(1,*,*)"; file_type=NETCDF_NCCF;'  \
   in_file.nc 'name="PCP"; level="(2,*,*)"; file_type=NETCDF_NCCF;'  \
   out_file.nc

I'm assuming that the PCP variable has 3 dimensions and the first one
is time.  This is telling pcp_combine to read data for the first,
second, and third time indices, add them up, and write the output to
out_file.nc.  Of course, if MET can't understand the input timing info,
it won't write meaningful time info to the output.

Another alternative would be using the NCO tools to slice, dice, and
sum your NetCDF files.
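
As a minimal sketch of the NCO route (assuming "time" is the file's
record dimension; indices are 0-based):

ncra -y ttl -d time,0,2 in_file.nc pcp_sum_3h.nc

The "-y ttl" option makes ncra compute the total, rather than the
average, over the selected records.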

John



------------------------------------------------
Subject: RE: grid_stat running very slow
From: John Halley Gotway
Time: Wed Aug 08 15:59:59 2018

Jon,

You can use the "-name APCP_03" command line option for pcp_combine to
manually set the output variable name.
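
For example, appended to the earlier "-add" command (a sketch; the
input file names are placeholders):

pcp_combine -add \
   in_file.nc 'name="PCP"; level="(0,*,*)"; file_type=NETCDF_NCCF;'  \
   in_file.nc 'name="PCP"; level="(1,*,*)"; file_type=NETCDF_NCCF;'  \
   in_file.nc 'name="PCP"; level="(2,*,*)"; file_type=NETCDF_NCCF;'  \
   out_file.nc -name APCP_03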

Unfortunately, there is no corresponding way to manually set the
timing information through pcp_combine.  Instead, one option would be
running the "ncatted" NCO utility on the output of pcp_combine to
correct the timing info.

For example... let's say your pcp_combine output file is named
apcp_03.nc
and the variable is named "APCP_03".  Let's set the init time to
20180807_00, valid time to 20180807_12, and accumulation interval to 3
hours:

ncatted \
   -a init_time,APCP_03,o,c,"20180807_000000" \
   -a init_time_ut,APCP_03,o,c,"1533600000" \
   -a valid_time,APCP_03,o,c,"20180807_120000" \
   -a valid_time_ut,APCP_03,o,c,"1533643200" \
   -a accum_time,APCP_03,o,c,"030000" \
   -a accum_time_sec,APCP_03,o,i,10800 \
   -o apcp_03_met.nc apcp_03.nc

If you need more info on ncatted, here's the documentation on it:
   http://nco.sourceforge.net/nco.html#ncatted-netCDF-Attribute-Editor
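
As a side note, one way to compute the unixtime (*_ut) attribute
values is with GNU date:

   date -u -d "2018-08-07 12:00:00" +%s    # prints 1533643200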

Thanks,
John

On Wed, Aug 8, 2018 at 3:23 PM Case, Jonathan[ENSCO INC] via RT <
met_help at ucar.edu> wrote:

> Thanks John H-G.  That appears to have worked!
>
> Now, I'm wondering how I should set up the GridStatConfig file to
work
> with the resulting output.
>
> Here's what came out of the pcp_combine command (a sample 3-h
forecast):
> netcdf fcst_apcp_3h {
> dimensions:
>         lat = 330 ;
>         lon = 430 ;
> variables:
>         float lat(lat) ;
>                 lat:long_name = "latitude" ;
>                 lat:units = "degrees_north" ;
>                 lat:standard_name = "latitude" ;
>         float lon(lon) ;
>                 lon:long_name = "longitude" ;
>                 lon:units = "degrees_east" ;
>                 lon:standard_name = "longitude" ;
>         float PCP0(lat, lon) ;
>                 PCP0:name = "PCP0" ;
>                 PCP0:long_name = "Hourly Precipitation" ;
>                 PCP0:level = "0,*,*" ;
>                 PCP0:units = "mm" ;
>                 PCP0:_FillValue = -9999.f ;
>                 PCP0:init_time = "20180329_210000" ;
>                 PCP0:init_time_ut = "1522357200" ;
>                 PCP0:valid_time = "20180329_210000" ;
>                 PCP0:valid_time_ut = "1522357200" ;
>
> The actual model initialization time is 20180329_180000, but
pcp_combine
> merely set the init_time the same as the valid_time in the output
netcdf
> file.
> So I'm wondering if there is a way to set the output variable name
> (APCP_03?), init_time, etc. so that it will work cleanly with
grid_stat to
> get the correct output file naming convention, forecast times, etc.?
>
> Best,
> JonC
>
> -----Original Message-----
> From: John Halley Gotway via RT <met_help at ucar.edu>
> Sent: Wednesday, August 8, 2018 3:13 PM
> To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov>
> Cc: Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-
1 at nasa.gov>;
> jensen at ucar.edu; jpresto at ucar.edu
> Subject: Re: [rt.rap.ucar.edu #86525] RE: grid_stat running very
slow
>
> Jon,
>
> The pcp_combine "-sum" option definitely will not work.  It's logic
is set
> up to process GRIB1/2 files.  The pcp_combine "-add" option might
work.
> You do something like this:
>
> pcp_combine -add \
>    in_file.nc 'name="PCP"; level="(0,*,*)"; file_type=NETCDF_NCCF;'
\
>    in_file.nc 'name="PCP"; level="(1,*,*)"; file_type=NETCDF_NCCF;'
\
>    in_file.nc 'name="PCP"; level="(2,*,*)"; file_type=NETCDF_NCCF;'
\
>    out_file.nc
>
> I'm assuming that the PCP variable has 3 dimensions and the first
one is
> time.  This is telling pcp_combine to read data for the first,
second, and
> third time dimension, add them up, and write the output to
out_file.nc.
> Of course if MET can't understand the input timing info, it won't
writing
> meaning time info to the output.
>
> Another alternative would be using the NCO tools to slice, date, and
sum
> your NetCDF files.
>
> John
>
> On Wed, Aug 8, 2018 at 2:06 PM Case, Jonathan[ENSCO INC] via RT <
> met_help at ucar.edu> wrote:
>
> >
> > Wed Aug 08 14:06:01 2018: Request 86525 was acted upon.
> > Transaction: Ticket created by jonathan.case-1 at nasa.gov
> >        Queue: met_help
> >      Subject: RE: grid_stat running very slow
> >        Owner: Nobody
> >   Requestors: jonathan.case-1 at nasa.gov
> >       Status: new
> >  Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86525
> > >
> >
> >
> > Hi again MET team,
> >
> > I have a pcp_combine question for you that may or may not work,
given
> > the current format of model output data I’m trying to work with.
> >
> > Brief Background: We have a single netcdf file with individual
hourly
> > model precip for 48 hours/times of a forecast run.
> >
> > I need to extract and sum these hourly precip over specified
intervals
> > from the file and have pcp_combine output into a temporary file.
> > Is there a way to do this, given that all the netcdf file has in
it
> > are valid times for each hourly interval?  There is no
specification
> > of model initialization date and forecast hours in the file.
Also,
> > the variable name is “PCP”, not APCP.
> >
> > If you think we need to extract and/or re-work the netcdf file,
please
> > let me know.  So far, I’ve not been able to figure out which way
to do
> > this in pcp_combine.
> >
> > Thanks for the help,
> > JonC
> >
> >
> > On Mon, Jul 30, 2018 at 10:28 PM John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>> wrote:
> > Hi Jon,
> >
> > Unfortunately I don’t have any good advice for you.  Since I’m
> > typically in development mode, I usually compile with the -g
option.
> > We compile with GNU in development.
> >
> > I’ve cc’ed Julie Prestopnik, who’s been compiling MET on a variety
of
> > platforms (usually with Intel compilers) in case she has any
advice.
> >
> > I wonder if the slower run times are related to memory usage.  If
> > your process switches over to swap space, it could run much slower.
> > Some supercomputers enable you to request more memory and/or print
> > diagnostic info about memory usage in the job log.  As a test, you
> > could try running a job with more memory allocation to see if that
> > speeds things up.
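> >
> > For example, here's a hypothetical sketch assuming a SLURM scheduler
> > (the flags vary by batch system, and "run_grid_stat.sh" is just a
> > placeholder job script):
> >
> > # request 32 GB of memory for the job
> > sbatch --mem=32G run_grid_stat.sh
> > # afterward, check the job's peak memory usage (MaxRSS)
> > sacct -j $JOBID --format=JobID,MaxRSS,Elapsed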
> >
> > Thanks
> > John
> >
> > On Mon, Jul 30, 2018 at 2:26 PM Case, Jonathan (MSFC-ST11)[ENSCO INC]
> > <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> > Hi again John H-G.
> >
> > I’m presently working with the NASA IT staff to determine the
> > optimization levels for compiling MET.  They built MET with gnu/g++,
> > and noted the MET documentation reports that -O2 causes problems and
> > recommends either -O or -g.  Could you therefore please provide me
> > with the compiler you use for building MET, and any information
> > about optimization levels for both the supporting packages and, in
> > particular, MET?
> >
> > Thanks very much,
> > JonC
> >
> > From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> > Sent: Friday, July 27, 2018 4:55 PM
> > To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>>
> > Subject: Re: Verifying neighborhood precip very slow
> >
> > Jon,
> >
> > Thanks for sending this data.  I grabbed it and took a look.
> > Here's what I see.
> >
> > I ran the following 2 commands for a single output time that you
sent:
> >
> > time \
> > /usr/local/met-6.0/bin/pcp_combine -subtract \
> > MET_MRMSTEST/sportlis_1506030000_wrfout_arw_d01.grb1f120000 12 \
> > MET_MRMSTEST/sportlis_1506030000_wrfout_arw_d01.grb1f110000 11 \
> > met-6.0/sportlis_1506030000_wrfout_arw_d01.grb1f120000_APCP_01.nc
> >
> > time \
> > /usr/local/met-6.0/bin/grid_stat \
> > met-${CUR_MET_VERSION}/sportlis_1506030000_wrfout_arw_d01.grb1f120000_APCP_01.nc \
> > MET_MRMSTEST/mrms_met_2015060312.grb2 \
> > met-6.0/GridStatConfig \
> > -outdir met-6.0/met_out \
> > -v 3
> >
> > And I did this for 4 versions of MET:
> >
> > met-6.0 takes 0.46 sec for pcp_combine and 13.35 sec for grid_stat.
> > met-6.1 takes 0.50 sec for pcp_combine and 17.91 sec for grid_stat.
> > met-7.0 takes 0.48 sec for pcp_combine and 16.78 sec for grid_stat.
> > met-8.0 takes 0.38 sec for pcp_combine and 11.35 sec for grid_stat
> > (this is the version under development).
> >
> > When I tried rerunning with "to_grid = OBS" (i.e. regridding
> > forecast data to the MRMS domain) it slowed down a lot.  In
> > particular, the BUDGET interpolation method is very slow.  Using
> > NEAREST neighbor speeds it up a lot:
> > //
> > // Verification grid
> > //
> > regrid = {
> >    to_grid    = OBS;
> >    method     = NEAREST;
> >    width      = 2;
> >    vld_thresh = 0.5;
> > }
> >
> > However, I do think it's appropriate to verify on the relatively
> > coarse model domain instead of the very fine MRMS grid.  So setting
> > "to_grid = FCST" makes sense to me.  I'm out of the office next week
> > but will be back the following week.
> >
> > When configured with "to_grid = FCST", are you seeing runtimes
> > similar to what I've listed?  Or are they significantly different?
> >
> > Thanks,
> > John
> >
> > On Wed, Jul 25, 2018 at 2:27 PM Case, Jonathan (MSFC-ST11)[ENSCO INC]
> > <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> > Hi John H-G:
> >
> > I uploaded a tarball which hopefully contains most everything
you’ll
> > need to do some grid_stat tests with MRMS and our near-CONUS 9-km
WRF
> output.
> > https://geo.nsstc.nasa.gov/SPoRT/outgoing/jlc/MET/
> >
> > Thanks!
> > Jon
> >
> > From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> > Sent: Wednesday, July 25, 2018 3:01 PM
> > To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>>
> > Subject: Re: Verifying neighborhood precip very slow
> >
> > Jon,
> >
> > OK, good to know.  Please let me know when you've uploaded those
> > sample files.
> >
> > Thanks,
> > John
> >
> > On Wed, Jul 25, 2018 at 12:10 PM Case, Jonathan (MSFC-ST11)[ENSCO INC]
> > <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> > FYI, I just ran a test with the vx_mask netcdf files, and I see no
> > improvement in the performance of grid_stat.
> > -JonC
> >
> > From: Case, Jonathan (MSFC-ST11)[ENSCO INC]
> > Sent: Wednesday, July 25, 2018 12:05 PM
> > To: 'John Halley Gotway' <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> > Subject: RE: Verifying neighborhood precip very slow
> >
> > John H-G,
> >
> > Let me upload for you a sample GridStatConfig file for v6.0, one of
> > the WRF GRIB1 files I’m using, and the MRMS GRIB2 files I
> > “re-packaged” for use in MET.
> > I’ll get that to you this afternoon sometime.
> >
> > -JonC
> >
> > From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> > Sent: Wednesday, July 25, 2018 11:46 AM
> > To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>>
> > Subject: Re: Verifying neighborhood precip very slow
> >
> > Jon,
> >
> > I thought of trying to run a simple test here to quantify these
timing
> > issues (at least in a relative sense).  I have a sample MRMS file,
and
> > so I know the projection info.  But I don't have your WRF domain
ready
> at hand.
> > Can you send me (or point me to) a sample file?  Apologies if
you've
> > already sent this to me... I see some data from 2011 but I'm
guessing
> > the grid may have changed since then.
> >
> > Thanks,
> > John
> >
> > On Wed, Jul 25, 2018 at 9:10 AM Case, Jonathan (MSFC-ST11)[ENSCO INC]
> > <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> > Hi again John H-G,
> >
> > Since these are standard NCEP verification regions I’m sourcing
> > within MET, are these vx_mask regions by any chance already
> > available in netcdf format for the .poly files below?
> > Also, the USER.poly file I use below is auto-generated by our python
> > scripts based on the approximate outline of the WRF model domain
> > being verified.  So all I’m doing there is just outlining the entire
> > domain with only 5 points in the .poly file.  So that shouldn’t need
> > gen_vx_mask applied to it, right?
> >
> > Thanks for the recommendation.  I’ll let you know how much using
> > vx_mask speeds things up.
> > -JonC
> >
> > From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> > Sent: Tuesday, July 24, 2018 5:15 PM
> > To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>>
> > Cc: Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov<mailto:jayanthi.srikishen-1 at nasa.gov>>;
> > Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>; Tressa Fowler <tressa at ucar.edu<mailto:tressa at ucar.edu>>
> > Subject: Re: Verifying neighborhood precip very slow
> >
> > Jon,
> >
> > Actually, I do notice one setting in your config file that's
slowing
> > things down a lot!  You're specifying masking regions using
lat/lon
> > polylines:
> >
> > poly = [ "/discover/nobackup/jlcase/MET/configFiles/USER.poly",
> > "MET_BASE/poly/NWC.poly", "MET_BASE/poly/SWC.poly",
> > "MET_BASE/poly/GRB.poly",
> >               "MET_BASE/poly/NMT.poly", "MET_BASE/poly/SMT.poly",
> > "MET_BASE/poly/SWD.poly", "MET_BASE/poly/NPL.poly",
> > "MET_BASE/poly/SPL.poly",
> >               "MET_BASE/poly/MDW.poly", "MET_BASE/poly/LMV.poly",
> > "MET_BASE/poly/GMC.poly", "MET_BASE/poly/NEC.poly",
> > "MET_BASE/poly/APL.poly",
> >               "MET_BASE/poly/SEC.poly" ];
> >
> > That's very slow.  For each point in the verification domain,
> > Grid-Stat is checking to see if it's inside each of these lat/lon
> > polylines.  For a coarse grid or a polyline that doesn't contain
many
> > points, it's hardly noticeable.  But for the dense MRMS grid,
that's
> > millions and millions of computations that we could avoid by
running the
> gen_vx_mask tool instead.
> > It should speed it up considerably.
> >
> > For each of these polylines, run the gen_vx_mask tool to generate a
> > NetCDF output file.  Then replace the ".poly" file in the config
> > file with the path to the NetCDF output of gen_vx_mask, and try
> > rerunning.
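> >
> > For example, here's a sketch of one such run (the exact usage may
> > vary by MET version, and "wrf_sample.grb" is a placeholder for any
> > file on your verification grid):
> >
> > gen_vx_mask wrf_sample.grb MET_BASE/poly/NWC.poly NWC_mask.nc
> >
> > And then, in the config file:
> >
> > poly = [ "NWC_mask.nc", ... ];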
> >
> > I'd be really curious to hear if and by how much that improves the
> runtime.
> >
> > Thanks,
> > John
> >
> >
> > On Tue, Jul 24, 2018 at 4:05 PM John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>> wrote:
> > Jon,
> >
> > I just ran this same Grid-Stat test case using met-6.0
with/without
> > the "fix".  Unfortunately, there's no improvement.  Both versions
take
> > about
> > 43.6 seconds to run.  We must have introduced this issue when
> > restructuring logic in met-7.0.  So unfortunately, this fix has no
real
> impact on 6.0.
> >
> > So it's back to the drawing board.  We need a much faster
algorithm
> > for computing fractional coverage fields when computing
neighborhood
> stats.
> >
> > John
> >
> > On Tue, Jul 24, 2018 at 2:59 PM Case, Jonathan (MSFC-ST11)[ENSCO INC]
> > <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> > Hi John,
> >
> > If it’s a simple 1-line change to allocate memory as you say, then
> > could you identify where the code change needs to be made in the
6.0
> > version (if feasible)?
> > We can make the change and re-compile to see if it’s as dramatic
of a
> > change as you’ve documented.
> >
> > If the change can’t easily be made in v6.0, then I’ll need to
consider
> > upgrading to v7.0.  That will be a longer effort on my part, but
one
> > that we’ll likely need to make eventually….
> > Thx,
> > -JonC
> >
> > From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> > Sent: Tuesday, July 24, 2018 3:55 PM
> > To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>>
> > Cc: Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>; Tressa Fowler <tressa at ucar.edu<mailto:tressa at ucar.edu>>;
> > Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov<mailto:jayanthi.srikishen-1 at nasa.gov>>
> > Subject: Re: Verifying neighborhood precip very slow
> >
> > Jon,
> >
> > As luck would have it, working with folks at NCEP, we identified a
> > memory allocation issue in met-7.0 and will be posting a bugfix
for it
> later today:
> >
> >
https://dtcenter.org/met/users/support/known_issues/METv7.0/index.php
> >
> > This one-line change pre-allocates the required memory in one chunk
> > rather than building it up incrementally, which is the
> > excruciatingly slow part!
> > For comparison, the execution time for NCEP's Grid-Stat test case
> > improved from 18 minutes to 56 seconds.  Additionally, the beta
> > release for the next version of MET further improves that runtime to
> > 27 seconds.  The latter speed-up is largely due to storing masking
> > regions more intelligently using booleans instead of double
> > precision values... which consume more memory and are slower to
> > process.
> >
> > As EMC moves to using MET operationally, there's a great focus on
> > efficiency.
> >
> > However, you're using met-6.0.  I could check to see if that same
> > memory fix would apply to met-6.0... unless you tell me that you'd
be
> > able to switch to met-7.0 instead.
> >
> > Thanks,
> > John
> >
> >
> >
> > On Tue, Jul 24, 2018 at 1:33 PM Case, Jonathan (MSFC-ST11)[ENSCO INC]
> > <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> > Folks,
> >
> > Well, it turns out that there has been a very substantial speed-up,
> > confirming that I am in fact interpolating the MRMS OBS grid to the
> > forecast grid.
> > I changed the “to_grid” field to “OBS”, and grid_stat is still
> > running 45 minutes later on the first grid comparison!!  The number
> > of pairs over the model region is 14 million vs. 159 thousand when
> > interpolating to the model grid.
> >
> > So while I have realized a dramatic speed-up, grid_stat is still not
> > running quite as fast as I’d like.
> >
> > Thanks,
> > JonC
> >
> > From: Case, Jonathan (MSFC-ST11)[ENSCO INC]
> > Sent: Tuesday, July 24, 2018 11:45 AM
> > To: 'John Halley Gotway' <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> > Cc: Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>; Tressa Fowler <tressa at ucar.edu<mailto:tressa at ucar.edu>>;
> > Srikishen, Jayanthi (MSFC-ST11)[USRA] <jayanthi.srikishen-1 at nasa.gov<mailto:jayanthi.srikishen-1 at nasa.gov>>
> > Subject: RE: Verifying neighborhood precip very slow
> >
> > Hi again John H-G,
> >
> > I finally got to re-configuring my python scripts to remove the
> > regrid_data_plane step and set it up to re-grid MRMS to the FCST
> > domains in-line within GridStatConfig, as you defined in the
regrid
> block below.
> > Previously, we had been re-gridding all model fields to the obs
grid
> > using regrid_data_plane, prior to running grid_stat (so kind of an
> > extra, unnecessary step, since we first developed this work flow
prior
> > to MET version 6).
> >
> > I now have grid_stat running in our version 6.0 installation with
> > this new configuration.  Unfortunately, grid_stat is still running
> > excruciatingly, prohibitively slowly in what I feel is a fairly
> > basic setup.
> > It takes about 6 minutes just to go through a single run of
grid_stat
> > for one accumulation interval (just 1-h precip for now).  I need
to
> > produce batch results across multiple accumulation intervals,
model
> > grids, and experiments, so this will literally take weeks to
process
> > the numerous days of forecast runs I have.
> >
> > My GridStatConfig setup is as follows (the key GridStatConfig
> > entries are included below, in case I’m doing something inherently
> > wrong):
> >
> > •  1/5/10/25 mm accumulation thresholds
> >
> > •  1-h APCP in the current runs, taking ~6 min each (I really need
> >    these to run on the order of seconds, not minutes)
> >
> > •  Verification stats generated for several poly regions
> >    (essentially all the NCEP verification regions and the entire grid)
> >
> > Any suggestions for speeding up grid_stat using MRMS QPE are greatly
> > appreciated!
> >
> > Many thanks,
> > JonC
> >
> > GridStatConfig contents:
> >
> > model = "sportlis_d01";
> >
> > //
> > // Output description to be written
> > // May be set separately in each "obs.field" entry
> > //
> > desc = "NA";
> >
> > //
> > // Output observation type to be written
> > //
> > obtype = "ANALYS";
> >
> >
> > ////////////////////////////////////////////////////////////////////////////////
> >
> > //
> > // Verification grid
> > //
> > regrid = {
> >    to_grid    = FCST;
> >    method     = BUDGET;
> >    width      = 2;
> >    vld_thresh = 0.5;
> > }
> >
> >
> > ////////////////////////////////////////////////////////////////////////////////
> >
> > cat_thresh  = [ NA ];
> > cnt_thresh  = [ NA ];
> > cnt_logic   = UNION;
> > wind_thresh = [ NA ];
> > wind_logic  = UNION;
> >
> > fcst = {
> >
> >    field = [
> >       {
> >         name       = "APCP_01";
> >         level      = [ "(*,*)" ];
> >         cat_thresh = [ >=1, >=5, >=10, >=25 ];
> >       }
> >    ];
> >
> > }
> >
> > obs = {
> >
> >    field = [
> >       {
> >         name       = "APCP_01";
> >         level      = [ "(*,*)" ];
> >         cat_thresh = [ >=1, >=5, >=10, >=25 ];
> >       }
> >    ];
> >
> > }
> >
> > climo_mean = {
> >
> >    file_name = [];
> >    field     = [];
> >
> >    regrid = {
> >       method     = NEAREST;
> >       width      = 1;
> >       vld_thresh = 0.5;
> >    }
> >
> >    time_interp_method = DW_MEAN;
> >    match_day          = FALSE;
> >    time_step          = 21600;
> > }
> >
> > mask = {
> >    grid = [  ];
> >    poly = [ "/discover/nobackup/jlcase/MET/configFiles/USER.poly",
> >             "MET_BASE/poly/NWC.poly", "MET_BASE/poly/SWC.poly",
> >             "MET_BASE/poly/GRB.poly", "MET_BASE/poly/NMT.poly",
> >             "MET_BASE/poly/SMT.poly", "MET_BASE/poly/SWD.poly",
> >             "MET_BASE/poly/NPL.poly", "MET_BASE/poly/SPL.poly",
> >             "MET_BASE/poly/MDW.poly", "MET_BASE/poly/LMV.poly",
> >             "MET_BASE/poly/GMC.poly", "MET_BASE/poly/NEC.poly",
> >             "MET_BASE/poly/APL.poly", "MET_BASE/poly/SEC.poly" ];
> > }
> >
> > ci_alpha  = [ 0.05 ];
> >
> > boot = {
> >    interval = PCTILE;
> >    rep_prop = 1.0;
> >    n_rep    = 0;
> >    rng      = "mt19937";
> >    seed     = "";
> > }
> >
> > interp = {
> >    field      = BOTH;
> >    vld_thresh = 1.0;
> >    shape      = SQUARE;
> >
> >    type = [
> >       {
> >          method = NEAREST;
> >          width  = 1;
> >       }
> >    ];
> > }
> >
> > nbrhd = {
> >    width      = [ 7 ];
> >    cov_thresh = [ >0.0 ];
> >    vld_thresh = 1.0;
> > }
> >
> > output_flag = {
> >    fho    = BOTH;
> >    ctc    = BOTH;
> >    cts    = BOTH;
> >    mctc   = NONE;
> >    mcts   = NONE;
> >    cnt    = BOTH;
> >    sl1l2  = BOTH;
> >    sal1l2 = BOTH;
> >    vl1l2  = BOTH;
> >    val1l2 = BOTH;
> >    pct    = BOTH;
> >    pstd   = BOTH;
> >    pjc    = BOTH;
> >    prc    = BOTH;
> >    nbrctc = BOTH;
> >    nbrcts = BOTH;
> >    nbrcnt = BOTH;
> > }
> >
> > //
> > // NetCDF matched pairs output file
> > //
> > nc_pairs_flag   = {
> >    latlon     = FALSE;
> >    raw        = FALSE;
> >    diff       = FALSE;
> >    climo      = FALSE;
> >    weight     = FALSE;
> >    nbrhd      = FALSE;
> >    apply_mask = FALSE;
> > }
> >
> >
> > ////////////////////////////////////////////////////////////////////////////////
> >
> > grid_weight_flag = NONE;
> > rank_corr_flag   = FALSE;
> > tmp_dir          = "/discover/nobackup/jlcase/MET/gridStatOutput/grid_stat_tmp";
> > output_prefix    = "sportlis_d01_APCP_01";
> > version          = "V6.0";
> >
> >
> > From: John Halley Gotway <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> > Sent: Monday, June 11, 2018 1:29 PM
> > To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>>
> > Cc: Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>; Tressa
> > Fowler < tressa at ucar.edu<mailto:tressa at ucar.edu>>
> > Subject: Re: Verifying neighborhood precip
> >
> > Jon,
> >
> > Sorry for the delay in responding.  I'm in a training class all
day,
> > and they won't let us use our phones :(
> >
> > And sorry for the misunderstanding.  I remember you asking about
> > applying neighborhood methods in Grid-Stat regarding a 40-km
> neighborhood size.
> >
> > But if that's not the case, and you're simply comparing
> > precipitation accumulations, then it's much simpler.  I would
> > suggest re-gridding the hi-res MRMS "observation" data to the
> > relatively lower-res NU-WRF domain.
> > You'd do that in the Grid-Stat config file like this:
> >
> > regrid = {
> >    to_grid    = FCST;
> >    method     = BUDGET;
> >    width      = 2;
> >    vld_thresh = 0.5;
> > }
> >
> > The BUDGET interpolation method is generally recommended for
> > accumulated variables, like precip.
> >
> > As for when to upgrade versions, it's totally up to you.  You can
see
> > a list of the features added for each release here:
> >    https://dtcenter.org/met/users/support/release_notes/index.php
> >
> > Probably doesn't make sense to upgrade versions until there's some
new
> > functionality available that you need/want.
> >
> > Nice seeing you this week.  Hope you had a good trip back.
> >
> > Thanks,
> > John
> >
> >
> > On Mon, Jun 11, 2018 at 11:33 AM, Case, Jonathan (MSFC-ST11)[ENSCO
> > INC] <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> > Hi again,
> >
> > What would be the benefit in upgrading to MET v6.1 vs. v7.0 at
this
> > point?  Should I simply stick with our v6.0 installation where
I’ve
> > done a lot of my point verification already, or is it helpful to
> > upgrade to one of these newer versions?  Will either of these
versions
> > be backward compatible with my v6.0 results?
> >
> > Thanks,
> > JonC
> >
> > From: John Halley Gotway
> > [mailto:johnhg at ucar.edu<mailto:johnhg at ucar.edu>]
> > Sent: Monday, June 11, 2018 10:39 AM
> > To: Case, Jonathan (MSFC-ST11)[ENSCO INC] <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>>
> > Cc: Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>; Tressa
> > Fowler < tressa at ucar.edu<mailto:tressa at ucar.edu>>
> > Subject: Re: Verifying neighborhood precip
> >
> > Jon,
> >
> > Here's the same command but using met-6.0:
> >
> > met-6.0/bin/regrid_data_plane \
> >    MRMS_GaugeCorr_QPE_24H_00.00_20180608-120000.grib2 G212 \
> >    MRMS_GaugeCorr_QPE_24H_00.00_20180608-120000_G212.nc \
> >    -field 'name="GaugeCorrQPE24H"; level="Z0";' \
> >    -width 72 -method MAX -name GaugeCorrQPE24H_MAX_72
> >
> > Note that the censor_thresh/censor_val settings aren't included...
> > and neither is the "-shape CIRCLE" option... those were added in
> > met-6.1.  With met-6.0, you'll get a square interpolation area
> > instead of a circle.
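> >
> > For reference, here's a sketch of the met-6.1 form of the same
> > command with a circular neighborhood (the censor settings are still
> > omitted here):
> >
> > met-6.1/bin/regrid_data_plane \
> >    MRMS_GaugeCorr_QPE_24H_00.00_20180608-120000.grib2 G212 \
> >    MRMS_GaugeCorr_QPE_24H_00.00_20180608-120000_G212.nc \
> >    -field 'name="GaugeCorrQPE24H"; level="Z0";' \
> >    -width 72 -method MAX -shape CIRCLE \
> >    -name GaugeCorrQPE24H_MAX_72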
> >
> > As for whether or not this is appropriate... My understanding is
that
> > you have a forecast field of probabilities that are defined as the
> > probability of some event occurring within 40km of each grid
point.
> > The upscaling method I've suggested is a way to pre-process the
> > observation data to make it consistent with the way the
probabilistic
> > forecast was defined.  Replace the value at each observation grid
> > point with the maximum value within a neighborhood of radius 40km.
> > Once you've transformed the obs in this way, you can use it to
verify
> the probability forecast directly.
> >
> > I believe this is the same method that the HRRR-TLE group at
> > NOAA/GSD is using to verify their neighborhood probability
> > forecasts.
> >
> > I've cc'ed Tressa Fowler on this email.  She's our resident
> > statistician and may have an opinion on this.
> >
> > Using the CIRCLE shape available in met-6.1 or met-7.0 would be
> > preferable to using squares in met-6.0.  But perhaps that's close
enough.
> >
> > Thanks,
> > John
> >
> >
> > On Mon, Jun 11, 2018 at 9:18 AM, Case, Jonathan (MSFC-ST11)[ENSCO INC]
> > <jonathan.case-1 at nasa.gov<mailto:jonathan.case-1 at nasa.gov>> wrote:
> > Oh, I forgot to mention one thing.  Is “max” the appropriate
upscaling
> > method I should use, or is there a conservative
> > upscaling/interpolation approach?
> > -JonC
> >
> > From: Case, Jonathan (MSFC-ST11)[ENSCO INC]
> > Sent: Monday, June 11, 2018 10:16 AM
> > To: 'John Halley Gotway' <johnhg at ucar.edu<mailto:johnhg at ucar.edu>>
> > Cc: Tara Jensen <jensen at ucar.edu<mailto:jensen at ucar.edu>>
> > Subject: RE: Verifying neighborhood precip
> >
> >

------------------------------------------------


More information about the Met_help mailing list