[Met_help] [rt.rap.ucar.edu #86119] History for MET V5.2 Ensemble-Stat err (UNCLASSIFIED)

John Halley Gotway via RT met_help at ucar.edu
Mon Jul 16 09:26:42 MDT 2018


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

CLASSIFICATION: UNCLASSIFIED

Request assistance in diagnosing a problem I'm having. The run ends at the point when the ssvar data is computed and the log file says "out of memory exiting". For this run I specified regridding "to_grid = OBS". In a previous run in May, with the same input data, I specified "to_grid = FCST" and did not have this problem; the run was successful. I am running on an HPC and tried two runs during my testing yesterday. One run was at the command line in my home dir and the other was a batch job, but the error recurred for both runs. I have attached a compressed file containing my run script, config file, two MET log files, the HPC system log, and text files containing the grid information for the input fcst and obs files.

The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_lead06_log" is the log file from MAY (referred to above) which shows the run completed successfully.
The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_log" is the log file from yesterday's run which failed. 
The HPC system log from yesterday's batch run, which provides more detailed info, is "METE-S.o6217798".
The run script is "run_ensemble_stat_Dumais_m3o3_28mem_ens_hr06_EXC"
The config file is "EnsembleStatConfig_m3o3_Dumais_WRF_28mem_DC_ens_hr06_EXC"
The grid info is in "precip_fcst_grid_info" and "precip_obs_grid_info"

Please let me know if you need more info. 

Thanks.
R/
John

Mr. John W. Raby
U.S. Army Research Laboratory
White Sands Missile Range, NM 88002
(575) 678-2004 DSN 258-2004
FAX (575) 678-1230 DSN 258-1230
Email: john.w.raby2.civ at mail.mil


CLASSIFICATION: UNCLASSIFIED


----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: John Halley Gotway
Time: Tue Jul 10 09:40:41 2018

John,

Thanks for sending your log files and grid information.  I see that you're
running out of memory when running the Ensemble-Stat tool.

Your forecast domain has dimension 204x204 (1km Lambert Conformal grid) and
the observation domain has dimension 1121x881 (StageIV 4km grid).  The
observation grid contains about 24 times more points than the forecast
grid.  And you're defining a 28 member ensemble.  So I'm not that surprised
that memory issues do not show up for the fcst but do show up for the obs
grid... since the obs grid would require 24 times more memory to store the
data.

Even though your forecast domain likely only covers a very small portion of
the observation domain, MET is storing the full 1121x881 grid points in
memory for each ensemble member.  Most of them, however, will just contain
missing data values.
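To put rough numbers on the difference, here is a back-of-the-envelope sketch (assuming 8 bytes per double and one field per member; Ensemble-Stat also keeps additional working copies, so real usage is higher):

```python
# Rough memory needed to hold one field for a 28-member ensemble as
# 8-byte double-precision values. Illustrative only.
def field_bytes(nx, ny, n_members=28, bytes_per_value=8):
    """Bytes to store one field for all ensemble members."""
    return nx * ny * n_members * bytes_per_value

fcst = field_bytes(204, 204)    # 1-km forecast grid
obs = field_bytes(1121, 881)    # 4-km StageIV grid

print(f"fcst: {fcst / 1e6:.0f} MB, obs: {obs / 1e6:.0f} MB, "
      f"ratio: {obs / fcst:.1f}x")
# -> fcst: 9 MB, obs: 221 MB, ratio: 23.7x
```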

So you've tried setting "to_grid = FCST" and that works.  And you've tried
setting "to_grid = OBS" and that runs out of memory.

You could consider...
(1) Some HPC systems allow you to request more memory when you submit a
job.  You'd need to figure out the right batch options, but that may be
possible.

(2) Instead of setting "to_grid = OBS", you could define a 3rd domain at
approximately the 4-km grid spacing, similar to the StageIV domain.  Then
you'd regrid both the forecast and observation data to that 3rd domain.
Look in the file "met-5.2/data/config/README" and search for "to_grid" to
see a description of the grid specification.

Hope this helps.

Thanks,
John







On Tue, Jul 10, 2018 at 7:56 AM Raby, John W USA CIV via RT <
met_help at ucar.edu> wrote:

>
> Tue Jul 10 07:55:59 2018: Request 86119 was acted upon.
> Transaction: Ticket created by john.w.raby2.civ at mail.mil
>        Queue: met_help
>      Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
>        Owner: Nobody
>   Requestors: john.w.raby2.civ at mail.mil
>       Status: new
>  Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
>
>
------------------------------------------------
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: Raby, John W USA CIV
Time: Tue Jul 10 10:06:59 2018

CLASSIFICATION: UNCLASSIFIED

John -

Thanks for diagnosing the situation. I'm considering the two options
you suggested. Would the use of a masking region the size of the
smaller forecast domain help?

R/
John

CLASSIFICATION: UNCLASSIFIED


------------------------------------------------
Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: John Halley Gotway
Time: Tue Jul 10 11:49:56 2018

John,

I think changing the masking region would have very little impact on the
memory usage.  MET is still storing all the ensemble member grids as double
precision values in memory... even though the vast majority of them are
missing data values.

I think re-gridding to a 3rd domain would be a good solution.  You'd want
it to cover the geographic extent of the forecast grid but be at the
resolution of the observation grid.

Thanks,
John




------------------------------------------------
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: Raby, John W USA CIV
Time: Tue Jul 10 12:57:37 2018

CLASSIFICATION: UNCLASSIFIED

John -

Thanks for answering my question on the masking region. My two
attempts so far to invoke the use of large-memory nodes using a PBS
command have not resulted in success. The HPC User's Guide provided the
command to use, but it doesn't appear to be working, or if it is, the
additional memory is not enough. I'll try regridding to a
third domain next.
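For reference, resource-request syntax is site-specific, but a PBS job asking for more memory typically adds directives along these lines (flags and values here are illustrative, not taken from any particular HPC User's Guide):

```
#PBS -l select=1:ncpus=1:mem=64gb   # request 64 GB on one node; resource names vary by site
#PBS -q bigmem                      # some sites expose large-memory nodes via a dedicated queue
```

Check your system's qsub documentation for the resource names it actually supports.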

R/
John

-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Tuesday, July 10, 2018 11:50 AM
To: Raby, John W CIV USARMY RDECOM ARL (US)
<john.w.raby2.civ at mail.mil>
Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
Ensemble-Stat err (UNCLASSIFIED)

All active links contained in this email were disabled.  Please verify
the identity of the sender, and confirm the authenticity of all links
contained within the message prior to copying and pasting the address
to a Web browser.




----

John,

I think changing the masking region would have very little impact on
the memory usage.  MET is still storing all the ensemble member grids
as double precision values in memory... even though the vast majority
of them are missing data values.

I think re-gridding to a 3rd domain would be a good solution.  You'd
want it to cover the geographic extent of the forecast grid but be at
resolution of the observation grid.

Thanks,
John



On Tue, Jul 10, 2018 at 10:07 AM Raby, John W USA CIV via RT <
met_help at ucar.edu> wrote:

>
> <Caution-url:
> Caution-https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
>
> CLASSIFICATION: UNCLASSIFIED
>
> John -
>
> Thanks for diagnosing the situation. I'm considering the two options
> you suggested. Would the use of a masking region the size of the
> smaller forcast domain help?
>
> R/
> John
>
> -----Original Message-----
> From: John Halley Gotway via RT [Caution-mailto:met_help at ucar.edu]
> Sent: Tuesday, July 10, 2018 9:41 AM
> To: Raby, John W CIV USARMY RDECOM ARL (US)
> <john.w.raby2.civ at mail.mil>
> Subject: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> Ensemble-Stat err (UNCLASSIFIED)
>
> All active links contained in this email were disabled.  Please
verify
> the identity of the sender, and confirm the authenticity of all
links
> contained within the message prior to copying and pasting the
address
> to a Web browser.
>
>
>
>
> ----
>
> John,
>
> Thanks for sending your log files and grid information.  I see that
> you're running out of memory when running the Ensemble-Stat tool.
>
> Your forecast domain has dimension 204x204 (1km Lambert Conformal
> grid) and the observation domain has dimension 1121x881 (StageIV 4km
> grid).  The observation grid contains about 24 times more points
than
> the forecast grid.  And you're defining a 28 member ensemble.  So
I'm
> not that surprised that memory issues do not show up for the fcst
but
> do show up for the obs grid... since the obs grid would require 24
> times more memory to store the data.
>
> Even though your forecast domain likely only covers a very small
> portion of the observation domain, MET is storing the full 11121x881
> grid points in memory for each ensemble member.  Most of them
however
> will just contain missing data values.
>
> So you've tried setting "to_grid = FCST" and that works.  And you've
> tried setting "to_grid = OBS" and that runs out of memory.
>
> You could consider...
> (1) Some HPC systems allow you to request more memory when you
submit
> a job.  You'd need to figure out the right batch options, but that
may
> be possible.
>
> (2) Instead of setting "to_grid = OBS", you could define 3rd domain
at
> approximately the 4-km grid spacing similar to the StageIV domain.
> And then you'd regrid both the forecast and observation data to that
> 3rd domain.  Look in the file "met-5.2/data/config/README" and
search
> for "to_grid" to see a description of the grid specification.
>
> Hope this helps.
>
> Thanks,
> John
>
>
> So
>
>
>
>
>
> On Tue, Jul 10, 2018 at 7:56 AM Raby, John W USA CIV via RT <
> met_help at ucar.edu> wrote:
>
> >
> > Tue Jul 10 07:55:59 2018: Request 86119 was acted upon.
> > Transaction: Ticket created by john.w.raby2.civ at mail.mil
> >        Queue: met_help
> >      Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
> >        Owner: Nobody
> >   Requestors: john.w.raby2.civ at mail.mil
> >       Status: new
> >  Ticket <Caution-Caution-url:
> > Caution-Caution-
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86
> > 119 >
> >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> > Request assistance in diagnosing a problem I'm having. The run
ends
> > at the point when the ssvar data is computed and the log file says
> > "out of memory exiting". For this run I specified regridding
> > "to_grid = OBS". In a previous run in MAY, with the same input
data
> > I specified
> "to_grid = FCST"
> > and I did not have this problem and the run was successful. I am
> > running on an HPC and tried two runs during my testing yesterday.
> > One run was at the command line in my home dir and the other run
was
> > as a batch job, but the error recurred for both runs. I have
> > attached a compressed file containing my run script, config file,
> > two MET log files, the HPC system log, and text files containing
the
> > grid
> information for the input fcst and obs files.
> >
> > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_lead06_log" is
the
> > log file from MAY (referred to above) which shows the run
completed
> > successfully.
> > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_log" is the log
> > file from yesterday's run which failed.
> > The HPC system log from yesterday's batch run which provides more
> > detailed info is "METE-S.o6217798"
> > The run script is
"run_ensemble_stat_Dumais_m3o3_28mem_ens_hr06_EXC"
> > The config file is
> > "EnsembleStatConfig_m3o3_Dumais_WRF_28mem_DC_ens_hr06_EXC"
> > The grid info is in "precip_fcst_grid_info" and
"precip_obs_grid_info"
> >
> > Please let me know if you need more info.
> >
> > Thanks.
> > R/
> > John
> >
> > Mr. John W. Raby
> > U.S. Army Research Laboratory
> > White Sands Missile Range, NM 88002
> > (575) 678-2004 DSN 258-2004
> > FAX (575) 678-1230 DSN 258-1230
> > Email: john.w.raby2.civ at mail.mil
> >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> >
>
> CLASSIFICATION: UNCLASSIFIED
>
>
>

CLASSIFICATION: UNCLASSIFIED


------------------------------------------------
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: Raby, John W USA CIV
Time: Tue Jul 10 16:47:43 2018

CLASSIFICATION: UNCLASSIFIED

John -

I've been using ncdump on the WRF geo_em file, the met_em file and the
wrfout file (all NetCDF) and I can't locate the grid specification
specs required for the regridding. How do you find those specs?

I'm pretty sure that for the fcst file Nx = 204 and Ny = 204, and
for the 4km precip file Nx = 1121 and Ny = 881. My target 3rd domain
would be the 204 x 204. I have the lat/lon extents of the fcst domain
using the corner_lats and corner_longs info in the dump of the geo_em
file.

Searches in the dump files for the specs listed in the README file do
not produce hits. Not sure where to turn for this. Maybe the WRF specs
are not named the same?

R/
John


CLASSIFICATION: UNCLASSIFIED


------------------------------------------------
Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: Raby, John W USA CIV
Time: Wed Jul 11 09:08:18 2018

CLASSIFICATION: UNCLASSIFIED

John -

I did ncdump on the output of Pcp-Combine, which is the forecast grid
of accum precip. I noticed that the grid specifications which appear
in this dump more closely match those you referred to in the README
file. See the attached text file, which is the output of the ncdump. I
printed the projection info below:

:Projection = "Lambert Conformal" ;
		:scale_lat_1 = "39.032000" ;
		:scale_lat_2 = "39.032000" ;
		:lat_pin = "38.113000" ;
		:lon_pin = "-78.112000" ;
		:x_pin = "0.000000" ;
		:y_pin = "0.000000" ;
		:lon_orient = "-76.952000" ;
		:d_km = "1.000000" ;
		:r_km = "6371.200000" ;
		:nx = "204" ;
		:ny = "204 grid_points" ;


So, for the fcst domain I now have what look like Nx, Ny, lon_orient,
d_km, and r_km, based on their resemblance to the README spec names.

Can I assume that "standard_parallel_1" is the same as :scale_lat_1 =
"39.032000" above, and that "standard_parallel_2" is the same as
:scale_lat_2 = "39.032000" above?

Is lat_ll the same as :lat_pin = "38.113000", and lon_ll the same as
:lon_pin = "-78.112000"?

So, to perform the regrid per your suggestion, I would set "d_km" to 4
instead of 1, and "Nx" and "Ny" to 51 (204/4), to create a domain the
same size as the fcst domain with a grid resolution of 4 km. Does this
sound right? All the other required specs tie the geographic location to
that of my fcst domain, so I would use the same values as shown
above, correct?
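As a quick sanity check on that arithmetic (a sketch only: a grid of N points at spacing d spans (N - 1) * d km between its first and last points, so 51 points at 4 km come out slightly smaller than the original 204-point, 1-km domain):

```python
# Extent covered by a grid: (N - 1) * d km between first and last points.
def extent_km(n_points, d_km):
    return (n_points - 1) * d_km

print(extent_km(204, 1.0))  # 203.0 km: original 1-km forecast domain
print(extent_km(51, 4.0))   # 200.0 km: proposed 4-km domain, slightly smaller
print(extent_km(52, 4.0))   # 204.0 km: one extra point fully covers the original
```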

Thanks.

R/
John


-----Original Message-----
From: Raby, John W CIV USARMY RDECOM ARL (US)
Sent: Tuesday, July 10, 2018 4:48 PM
To: 'met_help at ucar.edu' <met_help at ucar.edu>
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
Ensemble-Stat err (UNCLASSIFIED)

CLASSIFICATION: UNCLASSIFIED

John -

I've been using ncdump on the WRF geo_em file, the met_em file and the
wrfout file (all NetCDF) and I can't locate the grid specification
specs required for the regridding. How do you find those specs?

I'm pretty sure that for the fcst file the Nx = 204 and Ny is 204 and
for the 4km prcip file Nx is 1121 and Ny is 881. My target 3rd domain
would be the 204 X 204. I have the lat/long extents of the fcst domain
using the corner_lats and corner_longs info in the dump of the geo_em
file.

Searches in the dump files for the specs listed in the README file do
not produce hits. Not sure where to turn for this. Maybe the WRF specs
are not named the same?

R/
John

-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Tuesday, July 10, 2018 11:50 AM
To: Raby, John W CIV USARMY RDECOM ARL (US)
<john.w.raby2.civ at mail.mil>
Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
Ensemble-Stat err (UNCLASSIFIED)

All active links contained in this email were disabled.  Please verify
the identity of the sender, and confirm the authenticity of all links
contained within the message prior to copying and pasting the address
to a Web browser.




----

John,

I think changing the masking region would have very little impact on
the memory usage.  MET is still storing all the ensemble member grids
as double precision values in memory... even though the vast majority
of them are missing data values.

I think re-gridding to a 3rd domain would be a good solution.  You'd
want it to cover the geographic extent of the forecast grid but be at
resolution of the observation grid.

Thanks,
John



On Tue, Jul 10, 2018 at 10:07 AM Raby, John W USA CIV via RT <
met_help at ucar.edu> wrote:

>
> <Caution-url:
> Caution-https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
>
> CLASSIFICATION: UNCLASSIFIED
>
> John -
>
> Thanks for diagnosing the situation. I'm considering the two options
> you suggested. Would the use of a masking region the size of the
> smaller forecast domain help?
>
> R/
> John
>
> -----Original Message-----
> From: John Halley Gotway via RT [Caution-mailto:met_help at ucar.edu]
> Sent: Tuesday, July 10, 2018 9:41 AM
> To: Raby, John W CIV USARMY RDECOM ARL (US)
> <john.w.raby2.civ at mail.mil>
> Subject: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> Ensemble-Stat err (UNCLASSIFIED)
>
>
> John,
>
> Thanks for sending your log files and grid information.  I see that
> you're running out of memory when running the Ensemble-Stat tool.
>
> Your forecast domain has dimension 204x204 (1km Lambert Conformal
> grid) and the observation domain has dimension 1121x881 (StageIV 4km
> grid).  The observation grid contains about 24 times more points
> than the forecast grid.  And you're defining a 28 member ensemble.
> So I'm not that surprised that memory issues do not show up for the
> fcst but do show up for the obs grid... since the obs grid would
> require 24 times more memory to store the data.
>
> Even though your forecast domain likely only covers a very small
> portion of the observation domain, MET is storing the full 1121x881
> grid points in memory for each ensemble member.  Most of them,
> however, will just contain missing data values.
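The memory comparison above can be put in rough numbers (an editorial sketch assuming 8 bytes per double-precision grid point; this counts per-field storage only, so actual usage is higher):

```shell
# Per-field storage for 28 ensemble members at 8 bytes per grid point,
# using the grid sizes quoted in this thread.
bytes_obs=$(( 1121 * 881 * 8 * 28 ))   # StageIV obs grid (1121x881)
bytes_fcst=$(( 204 * 204 * 8 * 28 ))   # 1-km forecast grid (204x204)
echo "obs grid:  $(( bytes_obs / 1024 / 1024 )) MiB"
echo "fcst grid: $(( bytes_fcst / 1024 / 1024 )) MiB"
echo "ratio: ~$(( bytes_obs / bytes_fcst ))x"
```

Hundreds of MiB per field on the obs grid versus under 10 MiB on the forecast grid, consistent with the roughly 24-fold difference in point counts.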
>
> So you've tried setting "to_grid = FCST" and that works.  And you've
> tried setting "to_grid = OBS" and that runs out of memory.
>
> You could consider...
> (1) Some HPC systems allow you to request more memory when you
> submit a job.  You'd need to figure out the right batch options, but
> that may be possible.
>
> (2) Instead of setting "to_grid = OBS", you could define a 3rd
> domain at approximately the 4-km grid spacing, similar to the
> StageIV domain.  And then you'd regrid both the forecast and
> observation data to that 3rd domain.  Look in the file
> "met-5.2/data/config/README" and search for "to_grid" to see a
> description of the grid specification.
>
> Hope this helps.
>
> Thanks,
> John
>
>
>
>
>
>
>
>
> CLASSIFICATION: UNCLASSIFIED
>
>
>

CLASSIFICATION: UNCLASSIFIED

------------------------------------------------
Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: John Halley Gotway
Time: Wed Jul 11 10:18:17 2018

John,

Yes, that all sounds good to me.  Just give it a try using
regrid_data_plane.  And then run the output through plot_data_plane to
see how it looks.

Actually, I'd suggest regridding a sample *forecast* file to that new
domain... and running that through plot_data_plane to see how it
looks.  You can play around with it however you'd like.  Perhaps
increasing from 51x51 to something slightly larger to make sure your
forecast data is fully contained inside your new verification domain.

Thanks,
John
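For reference, the grid specification string for such a tile might look like the following (an editorial sketch: the 52x52 size, file names, and field name are illustrative, and the argument order for a "lambert" spec should be verified against met-5.2/data/config/README):

```shell
# Hypothetical lambert grid spec assembled from the projection
# attributes quoted earlier in this thread; verify the field order
# against the met-5.2 README before use.
GRID_SPEC="lambert 52 52 38.113 -78.112 -76.952 4.0 6371.2 39.032 39.032 N"
echo "$GRID_SPEC"

# The test commands would then look something like (placeholder paths
# and field name):
#   regrid_data_plane fcst_precip.nc "$GRID_SPEC" fcst_4km.nc \
#     -field 'name="APCP_06"; level="(*,*)";' -method BUDGET -width 2
#   plot_data_plane fcst_4km.nc fcst_4km.ps 'name="APCP_06"; level="(*,*)";'
```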


------------------------------------------------
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: Raby, John W USA CIV
Time: Wed Jul 11 10:36:34 2018

CLASSIFICATION: UNCLASSIFIED

John -

Thanks for confirming the grid spec info. So, if I use that info in
regrid_data_plane to regrid the forecast to a 4km grid, can I then run
MET Ensemble-Stat without regridding and use the regridded fcst file
and the 4km precip observations files as inputs or do I have to use
regridding again?

R/
John


CLASSIFICATION: UNCLASSIFIED


------------------------------------------------
Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: John Halley Gotway
Time: Wed Jul 11 11:10:10 2018

John,

I would suggest these following steps.

(1) Start with a sample forecast file on its 204x204 domain.
(2) Use the regrid_data_plane and plot_data_plane tools to test out
your proposed 4-km tile.  You'll pass in a grid specification string
to regrid_data_plane ... do the regridding ... plot the result using
plot_data_plane.  If the forecast data isn't fully contained in the
tile, adjust the grid spec and try again.
(3) Once you have the grid spec the way you want, edit the
Ensemble-Stat config file by setting:
   regrid = {
      to_grid = "YOUR GRID SPEC GOES HERE";
      ...
   }
(4) Run ensemble-stat just like you were doing before, passing the
model and obs data on their native grids.  Let ensemble-stat do the
regridding for you rather than having to run regrid_data_plane
manually.

One last suggestion: since you're processing precip, it's generally
recommended that you use the budget interpolation option:
-method BUDGET -width 2

Thanks,
John
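Putting steps (3) and (4) together, the regrid dictionary in the Ensemble-Stat config file might look like the following sketch (the 52x52 spec string and the vld_thresh value are illustrative; note that in the config file the budget option is set via the method and width entries rather than command-line flags):

```
regrid = {
   to_grid    = "lambert 52 52 38.113 -78.112 -76.952 4.0 6371.2 39.032 39.032 N";
   vld_thresh = 0.5;
   method     = BUDGET;
   width      = 2;
}
```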

On Wed, Jul 11, 2018 at 10:37 AM Raby, John W USA CIV via RT <
met_help at ucar.edu> wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
>
> CLASSIFICATION: UNCLASSIFIED
>
> John -
>
> Thanks for confirming the grid spec info. So, if I use that info in
> regrid_data_plane to regrid the forecast to a 4km grid, can I then
run MET
> Ensemble-Stat without regridding and use the regridded fcst file and
the
> 4km precip observations files as inputs or do I have to use
regridding
> again?
>
> R/
> John
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Wednesday, July 11, 2018 10:18 AM
> To: Raby, John W CIV USARMY RDECOM ARL (US)
<john.w.raby2.civ at mail.mil>
> Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> Ensemble-Stat err (UNCLASSIFIED)
>
> All active links contained in this email were disabled.  Please
verify the
> identity of the sender, and confirm the authenticity of all links
contained
> within the message prior to copying and pasting the address to a Web
> browser.
>
>
>
>
> ----
>
> John,
>
> Yes, that all sounds good to me.  Just give it a try using
> regrid_data_plane.  And then run the output through plot_data_plane
to see
> how it looks.
>
> Actually, I'd suggest regridding a sample *forecast* file to that
new
> domain... and running that through plot_data_plane to see how it
looks.
> You can play around with it however you'd like.  Perhaps increasing
from
> 51x51 to something slightly larger to make sure your forecast data
is
> fully contained inside your new verification domain.
>
> Thanks,
> John
>
> On Wed, Jul 11, 2018 at 9:08 AM Raby, John W USA CIV via RT <
> met_help at ucar.edu> wrote:
>
> >
> > <Caution-url:
> > Caution-https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> > John -
> >
> > I did ncdump on the output of Pcp-Combine which is the forecast
grid
> > of accum precip. I noticed that the grid specifications which
appear
> > in this dump appear to be those more closely matching those you
> > referred to in the README file. See attached text file which is
the
> > output of the ncdump. I printed the projection info below:
> >
> > :Projection = "Lambert Conformal" ;
> >                 :scale_lat_1 = "39.032000" ;
> >                 :scale_lat_2 = "39.032000" ;
> >                 :lat_pin = "38.113000" ;
> >                 :lon_pin = "-78.112000" ;
> >                 :x_pin = "0.000000" ;
> >                 :y_pin = "0.000000" ;
> >                 :lon_orient = "-76.952000" ;
> >                 :d_km = "1.000000" ;
> >                 :r_km = "6371.200000" ;
> >                 :nx = "204" ;
> >                 :ny = "204 grid_points" ;
> >
> >
> > So, for the fcst domain I now have what looks like Nx, Ny,
lon_orient,
> > D_km, R_km from the resemblance with the README spec names.
> >
> > Can I assume that "standard_parallel_1" is the same as
:scale_lat_1 =
> > "39.032000" above and that "standard_parallel_2" is the same as
> > :scale_lat_2 = "39.032000" above?
> >
> > Is lat_ll the same as :lat_pin = "38.113000"  and lon_ll the same
as
> > :lon_pin = "-78.112000"?
> >
> > So, to perform the regrid per your suggestion, I would set "d_km"
to 4
> > vice 1  and "Nx" and "Ny" to 51 (204/4) to create a domain the
same
> > size as the fcst domain with a grid resolution of 4km. Does this
sound
> > right? All the other required specs tie the geographic location to
> > that of my fcst domain, so I would use the same values as are show
> above, correct?
> >
> > Thanks.
> >
> > R/
> > John
> >
> >
> > -----Original Message-----
> > From: Raby, John W CIV USARMY RDECOM ARL (US)
> > Sent: Tuesday, July 10, 2018 4:48 PM
> > To: 'met_help at ucar.edu' <met_help at ucar.edu>
> > Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > Ensemble-Stat err (UNCLASSIFIED)
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> > John -
> >
> > I've been using ncdump on the WRF geo_em file, the met_em file and
the
> > wrfout file (all NetCDF) and I can't locate the grid specification
> > specs required for the regridding. How do you find those specs?
> >
> > I'm pretty sure that for the fcst file the Nx = 204 and Ny is 204
and
> > for the 4km precip file Nx is 1121 and Ny is 881. My target 3rd
domain
> > would be the 204 X 204. I have the lat/long extents of the fcst
domain
> > using the corner_lats and corner_longs info in the dump of the
geo_em
> file.
> >
> > Doing searches in the dump files for the specs listed in the README
> > file does not produce hits. Not sure where to turn to for this.
> > Maybe WRF specs are not named the same?
> >
> > R/
> > John
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [Caution-mailto:met_help at ucar.edu]
> > Sent: Tuesday, July 10, 2018 11:50 AM
> > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > <john.w.raby2.civ at mail.mil>
> > Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > Ensemble-Stat err (UNCLASSIFIED)
> >
> > All active links contained in this email were disabled.  Please
verify
> > the identity of the sender, and confirm the authenticity of all
links
> > contained within the message prior to copying and pasting the
address
> > to a Web browser.
> >
> >
> >
> >
> > ----
> >
> > John,
> >
> > I think changing the masking region would have very little impact
on
> > the memory usage.  MET is still storing all the ensemble member
grids
> > as double precision values in memory... even though the vast
majority
> > of them are missing data values.
> >
> > I think re-gridding to a 3rd domain would be a good solution.
You'd
> > want it to cover the geographic extent of the forecast grid but be
> > at the resolution of the observation grid.
> >
> > Thanks,
> > John
> >
> >
> >
> > On Tue, Jul 10, 2018 at 10:07 AM Raby, John W USA CIV via RT <
> > met_help at ucar.edu> wrote:
> >
> > >
> > > <Caution-Caution-url:
> > > Caution-Caution-
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86
> > > 119 >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > > John -
> > >
> > > Thanks for diagnosing the situation. I'm considering the two
options
> > > you suggested. Would the use of a masking region the size of the
> > > smaller forecast domain help?
> > >
> > > R/
> > > John
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT
> > > [Caution-Caution-mailto:met_help at ucar.edu]
> > > Sent: Tuesday, July 10, 2018 9:41 AM
> > > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > > <john.w.raby2.civ at mail.mil>
> > > Subject: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> > > Ensemble-Stat err (UNCLASSIFIED)
> > >
> > > All active links contained in this email were disabled.  Please
> > > verify the identity of the sender, and confirm the authenticity
of
> > > all links contained within the message prior to copying and
pasting
> > > the address to a Web browser.
> > >
> > >
> > >
> > >
> > > ----
> > >
> > > John,
> > >
> > > Thanks for sending your log files and grid information.  I see
that
> > > you're running out of memory when running the Ensemble-Stat
tool.
> > >
> > > Your forecast domain has dimension 204x204 (1km Lambert
Conformal
> > > grid) and the observation domain has dimension 1121x881 (StageIV
4km
> > > grid).  The observation grid contains about 24 times more points
> > > than the forecast grid.  And you're defining a 28 member
ensemble.
> > > So I'm not that surprised that memory issues do not show up for
the
> > > fcst but do show up for the obs grid... since the obs grid would
> > > require 24 times more memory to store the data.
> > >
> > > Even though your forecast domain likely only covers a very small
> > > portion of the observation domain, MET is storing the full
1121x881
> > > grid points in memory for each ensemble member.  Most of them
> > > however will just contain missing data values.
> > >
> > > So you've tried setting "to_grid = FCST" and that works.  And
you've
> > > tried setting "to_grid = OBS" and that runs out of memory.
> > >
> > > You could consider...
> > > (1) Some HPC systems allow you to request more memory when you
> > > submit a job.  You'd need to figure out the right batch options,
but
> > > that may be possible.
> > >
> > > (2) Instead of setting "to_grid = OBS", you could define a 3rd
domain
> > > at approximately the 4-km grid spacing similar to the StageIV
domain.
> > > And then you'd regrid both the forecast and observation data to
that
> > > 3rd domain.  Look in the file "met-5.2/data/config/README" and
> > > search for "to_grid" to see a description of the grid
specification.
> > >
> > > Hope this helps.
> > >
> > > Thanks,
> > > John
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Tue, Jul 10, 2018 at 7:56 AM Raby, John W USA CIV via RT <
> > > met_help at ucar.edu> wrote:
> > >
> > > >
> > > > Tue Jul 10 07:55:59 2018: Request 86119 was acted upon.
> > > > Transaction: Ticket created by john.w.raby2.civ at mail.mil
> > > >        Queue: met_help
> > > >      Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
> > > >        Owner: Nobody
> > > >   Requestors: john.w.raby2.civ at mail.mil
> > > >       Status: new
> > > >  Ticket <Caution-Caution-Caution-url:
> > > > Caution-Caution-Caution-
https://rt.rap.ucar.edu/rt/Ticket/Display.
> > > > html?id=86
> > > > 119 >
> > > >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > > Request assistance in diagnosing a problem I'm having. The run
> > > > ends at the point when the ssvar data is computed and the log
file
> > > > says "out of memory exiting". For this run I specified
regridding
> > > > "to_grid = OBS". In a previous run in MAY, with the same input
> > > > data I specified
> > > "to_grid = FCST"
> > > > and I did not have this problem and the run was successful. I
am
> > > > running on an HPC and tried two runs during my testing
yesterday.
> > > > One run was at the command line in my home dir and the other
run
> > > > was as a batch job, but the error recurred for both runs. I
have
> > > > attached a compressed file containing my run script, config
file,
> > > > two MET log files, the HPC system log, and text files
containing
> > > > the grid
> > > information for the input fcst and obs files.
> > > >
> > > > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_lead06_log"
is
> > > > the log file from MAY (referred to above) which shows the run
> > > > completed successfully.
> > > > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_log" is the
log
> > > > file from yesterday's run which failed.
> > > > The HPC system log from yesterday's batch run which provides
more
> > > > detailed info is "METE-S.o6217798"
> > > > The run script is
"run_ensemble_stat_Dumais_m3o3_28mem_ens_hr06_EXC"
> > > > The config file is
> > > > "EnsembleStatConfig_m3o3_Dumais_WRF_28mem_DC_ens_hr06_EXC"
> > > > The grid info is in "precip_fcst_grid_info" and
> "precip_obs_grid_info"
> > > >
> > > > Please let me know if you need more info.
> > > >
> > > > Thanks.
> > > > R/
> > > > John
> > > >
> > > > Mr. John W. Raby
> > > > U.S. Army Research Laboratory
> > > > White Sands Missile Range, NM 88002
> > > > (575) 678-2004 DSN 258-2004
> > > > FAX (575) 678-1230 DSN 258-1230
> > > > Email: john.w.raby2.civ at mail.mil
> > > >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > >
> > >
> >
> > CLASSIFICATION: UNCLASSIFIED
> > CLASSIFICATION: UNCLASSIFIED
> >
> >
>
> CLASSIFICATION: UNCLASSIFIED
>
>
>

------------------------------------------------
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: Raby, John W USA CIV
Time: Wed Jul 11 12:05:54 2018

CLASSIFICATION: UNCLASSIFIED

John  -

Thanks for the guidance on doing the regridding and the tip on using
the budget interpolation option.

In step (4), since I'll be inputting the model and obs data on the
original native grids, why doesn't the regridding that Ensemble-Stat
performs cause the same memory issue?

R/
John

-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Wednesday, July 11, 2018 11:10 AM
To: Raby, John W CIV USARMY RDECOM ARL (US)
<john.w.raby2.civ at mail.mil>
Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
Ensemble-Stat err (UNCLASSIFIED)

All active links contained in this email were disabled.  Please verify
the identity of the sender, and confirm the authenticity of all links
contained within the message prior to copying and pasting the address
to a Web browser.




----

John,

I would suggest the following steps.

(1) Start with a sample forecast file on its 204x204 domain.
(2) Use the regrid_data_plane and plot_data_plane tools to test out
your proposed 4-km tile.  You'll pass in a grid specification string
to regrid_data_plane ... do the regridding ... plot the result using
plot_data_plane.  If the forecast data isn't fully contained in the
tile, adjust the grid spec and try again.
(3) Once you have the grid spec the way you want, edit the Ensemble-
Stat config file by setting:
   regrid = {
     to_grid = "YOUR GRID SPEC GOES HERE";
   ... }
(4) Run ensemble-stat just like you were doing before, passing the
model and obs data on their native grids.  Let ensemble-stat do the
regridding for you rather than having to run regrid_data_plane
manually.

One last suggestion, since you're processing precip, it's generally
recommended that you use the budget interpolation option: -method
BUDGET -width 2
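For step (3), the pieces discussed above could come together like this (an illustrative sketch only: the Lambert spec ordering "lambert Nx Ny lat_ll lon_ll lon_orient D_km R_km standard_parallel_1 standard_parallel_2 N" should be verified against met-5.2/data/config/README, and the 52x52 dimensions assume a tile slightly larger than 51x51 so the forecast data is fully contained):

```
regrid = {
   to_grid = "lambert 52 52 38.113 -78.112 -76.952 4.0 6371.2 39.032 39.032 N";
   method  = BUDGET;
   width   = 2;
}
```

Here the lat/lon values correspond to the lat_pin/lon_pin, lon_orient, and scale_lat_1/scale_lat_2 attributes from the ncdump of the forecast file; since x_pin and y_pin are both 0, the pin point is the lower-left corner of the grid.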

Thanks,
John

On Wed, Jul 11, 2018 at 10:37 AM Raby, John W USA CIV via RT <
met_help at ucar.edu> wrote:

>
> <Caution-url:
> Caution-https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
>
> CLASSIFICATION: UNCLASSIFIED
>
> John -
>
> Thanks for confirming the grid spec info. So, if I use that info in
> regrid_data_plane to regrid the forecast to a 4km grid, can I then
run
> MET Ensemble-Stat without regridding and use the regridded fcst file
> and the 4km precip observations files as inputs or do I have to use
> regridding again?
>
> R/
> John
>
> -----Original Message-----
> From: John Halley Gotway via RT [Caution-mailto:met_help at ucar.edu]
> Sent: Wednesday, July 11, 2018 10:18 AM
> To: Raby, John W CIV USARMY RDECOM ARL (US)
> <john.w.raby2.civ at mail.mil>
> Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> Ensemble-Stat err (UNCLASSIFIED)
>
> All active links contained in this email were disabled.  Please
verify
> the identity of the sender, and confirm the authenticity of all
links
> contained within the message prior to copying and pasting the
address
> to a Web browser.
>
>
>
>
> ----
>
> John,
>
> Yes, that all sounds good to me.  Just give it a try using
> regrid_data_plane.  And then run the output through plot_data_plane
to
> see how it looks.
>
> Actually, I'd suggest regridding a sample *forecast* file to that
new
> domain... and running that through plot_data_plane to see how it
looks.
> You can play around with it however you'd like.  Perhaps increasing
> from
> 51x51 to something slightly larger to make sure your forecast data
is
> fully contained inside your new verification domain.
>
> Thanks,
> John
>
> On Wed, Jul 11, 2018 at 9:08 AM Raby, John W USA CIV via RT <
> met_help at ucar.edu> wrote:
>
> >
> > <Caution-Caution-url:
> > Caution-Caution-
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86
> > 119 >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> > John -
> >
> > I did ncdump on the output of Pcp-Combine which is the forecast
grid
> > of accum precip. I noticed that the grid specifications which
appear
> > in this dump appear to be those more closely matching those you
> > referred to in the README file. See attached text file which is
the
> > output of the ncdump. I printed the projection info below:
> >
> > :Projection = "Lambert Conformal" ;
> >                 :scale_lat_1 = "39.032000" ;
> >                 :scale_lat_2 = "39.032000" ;
> >                 :lat_pin = "38.113000" ;
> >                 :lon_pin = "-78.112000" ;
> >                 :x_pin = "0.000000" ;
> >                 :y_pin = "0.000000" ;
> >                 :lon_orient = "-76.952000" ;
> >                 :d_km = "1.000000" ;
> >                 :r_km = "6371.200000" ;
> >                 :nx = "204" ;
> >                 :ny = "204 grid_points" ;
> >
> >
> > So, for the fcst domain I now have what looks like Nx, Ny,
> > lon_orient, D_km, R_km from the resemblance with the README spec
names.
> >
> > Can I assume that "standard_parallel_1" is the same as
:scale_lat_1
> > = "39.032000" above and that "standard_parallel_2" is the same as
> > :scale_lat_2 = "39.032000" above?
> >
> > Is lat_ll the same as :lat_pin = "38.113000"  and lon_ll the same
as
> > :lon_pin = "-78.112000"?
> >
> > So, to perform the regrid per your suggestion, I would set "d_km"
to
> > 4 vice 1  and "Nx" and "Ny" to 51 (204/4) to create a domain the
> > same size as the fcst domain with a grid resolution of 4km. Does
> > this sound right? All the other required specs tie the geographic
> > location to that of my fcst domain, so I would use the same values
> > as are shown
> > above, correct?
> >
> > Thanks.
> >
> > R/
> > John
> >
> >
> > -----Original Message-----
> > From: Raby, John W CIV USARMY RDECOM ARL (US)
> > Sent: Tuesday, July 10, 2018 4:48 PM
> > To: 'met_help at ucar.edu' <met_help at ucar.edu>
> > Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > Ensemble-Stat err (UNCLASSIFIED)
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> > John -
> >
> > I've been using ncdump on the WRF geo_em file, the met_em file and
> > the wrfout file (all NetCDF) and I can't locate the grid
> > specification specs required for the regridding. How do you find
those specs?
> >
> > I'm pretty sure that for the fcst file the Nx = 204 and Ny is 204
> > and for the 4km precip file Nx is 1121 and Ny is 881. My target 3rd
> > domain would be the 204 X 204. I have the lat/long extents of the
> > fcst domain using the corner_lats and corner_longs info in the
dump
> > of the geo_em
> file.
> >
> > Doing searches in the dump files for the specs listed in the README
> > file does not produce hits. Not sure where to turn to for this.
> > Maybe WRF specs are not named the same?
> >
> > R/
> > John
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT
> > [Caution-Caution-mailto:met_help at ucar.edu]
> > Sent: Tuesday, July 10, 2018 11:50 AM
> > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > <john.w.raby2.civ at mail.mil>
> > Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > Ensemble-Stat err (UNCLASSIFIED)
> >
> > All active links contained in this email were disabled.  Please
> > verify the identity of the sender, and confirm the authenticity of
> > all links contained within the message prior to copying and
pasting
> > the address to a Web browser.
> >
> >
> >
> >
> > ----
> >
> > John,
> >
> > I think changing the masking region would have very little impact
on
> > the memory usage.  MET is still storing all the ensemble member
> > grids as double precision values in memory... even though the vast
> > majority of them are missing data values.
> >
> > I think re-gridding to a 3rd domain would be a good solution.
You'd
> > want it to cover the geographic extent of the forecast grid but be
> > at the resolution of the observation grid.
> >
> > Thanks,
> > John
> >
> >
> >
> > On Tue, Jul 10, 2018 at 10:07 AM Raby, John W USA CIV via RT <
> > met_help at ucar.edu> wrote:
> >
> > >
> > > <Caution-Caution-Caution-url:
> > > Caution-Caution-Caution-
https://rt.rap.ucar.edu/rt/Ticket/Display.
> > > html?id=86
> > > 119 >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > > John -
> > >
> > > Thanks for diagnosing the situation. I'm considering the two
> > > options you suggested. Would the use of a masking region the
size
> > > of the smaller forecast domain help?
> > >
> > > R/
> > > John
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT
> > > [Caution-Caution-Caution-mailto:met_help at ucar.edu]
> > > Sent: Tuesday, July 10, 2018 9:41 AM
> > > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > > <john.w.raby2.civ at mail.mil>
> > > Subject: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> > > Ensemble-Stat err (UNCLASSIFIED)
> > >
> > > All active links contained in this email were disabled.  Please
> > > verify the identity of the sender, and confirm the authenticity
of
> > > all links contained within the message prior to copying and
> > > pasting the address to a Web browser.
> > >
> > >
> > >
> > >
> > > ----
> > >
> > > John,
> > >
> > > Thanks for sending your log files and grid information.  I see
> > > that you're running out of memory when running the Ensemble-Stat
tool.
> > >
> > > Your forecast domain has dimension 204x204 (1km Lambert
Conformal
> > > grid) and the observation domain has dimension 1121x881 (StageIV
> > > 4km grid).  The observation grid contains about 24 times more
> > > points than the forecast grid.  And you're defining a 28 member
ensemble.
> > > So I'm not that surprised that memory issues do not show up for
> > > the fcst but do show up for the obs grid... since the obs grid
> > > would require 24 times more memory to store the data.
> > >
> > > Even though your forecast domain likely only covers a very small
> > > portion of the observation domain, MET is storing the full
> > > 1121x881 grid points in memory for each ensemble member.  Most
of
> > > them however will just contain missing data values.
> > >
> > > So you've tried setting "to_grid = FCST" and that works.  And
> > > you've tried setting "to_grid = OBS" and that runs out of
memory.
> > >
> > > You could consider...
> > > (1) Some HPC systems allow you to request more memory when you
> > > submit a job.  You'd need to figure out the right batch options,
> > > but that may be possible.
> > >
> > > (2) Instead of setting "to_grid = OBS", you could define a 3rd
> > > domain at approximately the 4-km grid spacing similar to the
StageIV domain.
> > > And then you'd regrid both the forecast and observation data to
> > > that 3rd domain.  Look in the file "met-5.2/data/config/README"
> > > and search for "to_grid" to see a description of the grid
specification.
> > >
> > > Hope this helps.
> > >
> > > Thanks,
> > > John
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Tue, Jul 10, 2018 at 7:56 AM Raby, John W USA CIV via RT <
> > > met_help at ucar.edu> wrote:
> > >
> > > >
> > > > Tue Jul 10 07:55:59 2018: Request 86119 was acted upon.
> > > > Transaction: Ticket created by john.w.raby2.civ at mail.mil
> > > >        Queue: met_help
> > > >      Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
> > > >        Owner: Nobody
> > > >   Requestors: john.w.raby2.civ at mail.mil
> > > >       Status: new
> > > >  Ticket <Caution-Caution-Caution-Caution-url:
> > > > Caution-Caution-Caution-Caution-
https://rt.rap.ucar.edu/rt/Ticket/Display.
> > > > html?id=86
> > > > 119 >
> > > >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > > Request assistance in diagnosing a problem I'm having. The run
> > > > ends at the point when the ssvar data is computed and the log
> > > > file says "out of memory exiting". For this run I specified
> > > > regridding "to_grid = OBS". In a previous run in MAY, with the
> > > > same input data I specified
> > > "to_grid = FCST"
> > > > and I did not have this problem and the run was successful. I
am
> > > > running on an HPC and tried two runs during my testing
yesterday.
> > > > One run was at the command line in my home dir and the other
run
> > > > was as a batch job, but the error recurred for both runs. I
have
> > > > attached a compressed file containing my run script, config
> > > > file, two MET log files, the HPC system log, and text files
> > > > containing the grid
> > > information for the input fcst and obs files.
> > > >
> > > > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_lead06_log"
is
> > > > the log file from MAY (referred to above) which shows the run
> > > > completed successfully.
> > > > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_log" is the
log
> > > > file from yesterday's run which failed.
> > > > The HPC system log from yesterday's batch run which provides
> > > > more detailed info is "METE-S.o6217798"
> > > > The run script is
"run_ensemble_stat_Dumais_m3o3_28mem_ens_hr06_EXC"
> > > > The config file is
> > > > "EnsembleStatConfig_m3o3_Dumais_WRF_28mem_DC_ens_hr06_EXC"
> > > > The grid info is in "precip_fcst_grid_info" and
> "precip_obs_grid_info"
> > > >
> > > > Please let me know if you need more info.
> > > >
> > > > Thanks.
> > > > R/
> > > > John
> > > >
> > > > Mr. John W. Raby
> > > > U.S. Army Research Laboratory
> > > > White Sands Missile Range, NM 88002
> > > > (575) 678-2004 DSN 258-2004
> > > > FAX (575) 678-1230 DSN 258-1230
> > > > Email: john.w.raby2.civ at mail.mil
> > > >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > >
> > >
> >
> > CLASSIFICATION: UNCLASSIFIED
> > CLASSIFICATION: UNCLASSIFIED
> >
> >
>
> CLASSIFICATION: UNCLASSIFIED
>
>
>

CLASSIFICATION: UNCLASSIFIED


------------------------------------------------
Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: John Halley Gotway
Time: Wed Jul 11 12:17:51 2018

John,

For each field being evaluated, Ensemble-Stat reads all the ensemble
member
data into memory.  When it reads data from a file, it reads it on its
native grid, does any requested regridding, and stores the result in
memory.

A rough comparison of the memory required for the regridding options
is...

(1) to_grid = FCST: 204x204 grid x 28 members = 1,165,248 data values in
memory
(2) to_grid = OBS: 1121x881 grid x 28 members = 27,652,828 data values in
memory
(3) to_grid = "user-defined 4-km tile": 51x51 grid x 28 members = 72,828
data values in memory

As you can see, the third option that you're working on now consumes
the
least amount of memory by far.
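Those counts are easy to reproduce (a sketch; the 204x204 and 1121x881 dimensions are the grid sizes quoted earlier in this thread):

```python
# Data values Ensemble-Stat holds in memory for one field:
# grid points times ensemble members.
members = 28

def values_in_memory(nx, ny):
    return nx * ny * members

fcst = values_in_memory(204, 204)    # to_grid = FCST (1-km forecast grid)
obs  = values_in_memory(1121, 881)   # to_grid = OBS (StageIV 4-km grid)
tile = values_in_memory(51, 51)      # user-defined 4-km tile

print(fcst, obs, tile)   # the OBS option needs roughly 24x the FCST option
```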

Thanks,
John

On Wed, Jul 11, 2018 at 12:06 PM Raby, John W USA CIV via RT <
met_help at ucar.edu> wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
>
> CLASSIFICATION: UNCLASSIFIED
>
> John  -
>
> Thanks for the guidance on doing the regridding and the tip on using
the
> budget interpolation option.
>
> In step (4), since I'll be inputting the model and obs data on the
> original native grids, why doesn't the regridding that Ensemble-Stat
> performs cause the same memory issue?
>
> R/
> John
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Wednesday, July 11, 2018 11:10 AM
> To: Raby, John W CIV USARMY RDECOM ARL (US)
<john.w.raby2.civ at mail.mil>
> Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> Ensemble-Stat err (UNCLASSIFIED)
>
> All active links contained in this email were disabled.  Please
verify the
> identity of the sender, and confirm the authenticity of all links
contained
> within the message prior to copying and pasting the address to a Web
> browser.
>
>
>
>
> ----
>
> John,
>
> I would suggest the following steps.
>
> (1) Start with a sample forecast file on its 204x204 domain.
> (2) Use the regrid_data_plane and plot_data_plane tools to test out
> your proposed 4-km tile.  You'll pass in a grid specification string to
> regrid_data_plane ... do the regridding ... plot the result using
> plot_data_plane.  If the forecast data isn't fully contained in the
tile,
> adjust the grid spec and try again.
> (3) Once you have the grid spec the way you want, edit the Ensemble-
Stat
> config file by setting:
>    regrid = {
>      to_grid = "YOUR GRID SPEC GOES HERE";
>    ... }
> (4) Run ensemble-stat just like you were doing before, passing the
model
> and obs data on their native grids.  Let ensemble-stat do the
regridding
> for you rather than having to run regrid_data_plane manually.
>
> One last suggestion, since you're processing precip, it's generally
> recommended that you use the budget interpolation option: -method
BUDGET
> -width 2
>
> Thanks,
> John
>
> On Wed, Jul 11, 2018 at 10:37 AM Raby, John W USA CIV via RT <
> met_help at ucar.edu> wrote:
>
> >
> > <Caution-url:
> > Caution-https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> > John -
> >
> > Thanks for confirming the grid spec info. So, if I use that info
in
> > regrid_data_plane to regrid the forecast to a 4km grid, can I then
run
> > MET Ensemble-Stat without regridding and use the regridded fcst
file
> > and the 4km precip observations files as inputs or do I have to
use
> > regridding again?
> >
> > R/
> > John
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [Caution-mailto:met_help at ucar.edu]
> > Sent: Wednesday, July 11, 2018 10:18 AM
> > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > <john.w.raby2.civ at mail.mil>
> > Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > Ensemble-Stat err (UNCLASSIFIED)
> >
> > All active links contained in this email were disabled.  Please
verify
> > the identity of the sender, and confirm the authenticity of all
links
> > contained within the message prior to copying and pasting the
address
> > to a Web browser.
> >
> >
> >
> >
> > ----
> >
> > John,
> >
> > Yes, that all sounds good to me.  Just give it a try using
> > regrid_data_plane.  And then run the output through
plot_data_plane to
> > see how it looks.
> >
> > Actually, I'd suggest regridding a sample *forecast* file to that
new
> > domain... and running that through plot_data_plane to see how it
looks.
> > You can play around with it however you'd like.  Perhaps
increasing
> > from
> > 51x51 to something slightly larger to make sure your forecast data
is
> > fully contained inside your new verification domain.
> >
> > Thanks,
> > John
> >
> > On Wed, Jul 11, 2018 at 9:08 AM Raby, John W USA CIV via RT <
> > met_help at ucar.edu> wrote:
> >
> > >
> > > <Caution-Caution-url:
> > > Caution-Caution-
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86
> > > 119 >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > > John -
> > >
> > > I did ncdump on the output of Pcp-Combine which is the forecast
grid
> > > of accum precip. I noticed that the grid specifications which
appear
> > > in this dump appear to be those more closely matching those you
> > > referred to in the README file. See attached text file which is
the
> > > output of the ncdump. I printed the projection info below:
> > >
> > > :Projection = "Lambert Conformal" ;
> > >                 :scale_lat_1 = "39.032000" ;
> > >                 :scale_lat_2 = "39.032000" ;
> > >                 :lat_pin = "38.113000" ;
> > >                 :lon_pin = "-78.112000" ;
> > >                 :x_pin = "0.000000" ;
> > >                 :y_pin = "0.000000" ;
> > >                 :lon_orient = "-76.952000" ;
> > >                 :d_km = "1.000000" ;
> > >                 :r_km = "6371.200000" ;
> > >                 :nx = "204" ;
> > >                 :ny = "204 grid_points" ;
> > >
> > >
> > > So, for the fcst domain I now have what looks like Nx, Ny,
> > > lon_orient, D_km, R_km from the resemblance with the README spec
names.
> > >
> > > Can I assume that "standard_parallel_1" is the same as
:scale_lat_1
> > > = "39.032000" above and that "standard_parallel_2" is the same
as
> > > :scale_lat_2 = "39.032000" above?
> > >
> > > Is lat_ll the same as :lat_pin = "38.113000"  and lon_ll the
same as
> > > :lon_pin = "-78.112000"?
> > >
> > > So, to perform the regrid per your suggestion, I would set
"d_km" to
> > > 4 vice 1  and "Nx" and "Ny" to 51 (204/4) to create a domain the
> > > same size as the fcst domain with a grid resolution of 4km. Does
> > > this sound right? All the other required specs tie the
geographic
> > > location to that of my fcst domain, so I would use the same values
> > > as are shown above, correct?
> > >
> > > Thanks.
> > >
> > > R/
> > > John
> > >
> > >
> > > -----Original Message-----
> > > From: Raby, John W CIV USARMY RDECOM ARL (US)
> > > Sent: Tuesday, July 10, 2018 4:48 PM
> > > To: 'met_help at ucar.edu' <met_help at ucar.edu>
> > > Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > > Ensemble-Stat err (UNCLASSIFIED)
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > > John -
> > >
> > > I've been using ncdump on the WRF geo_em file, the met_em file
and
> > > the wrfout file (all NetCDF) and I can't locate the grid
> > > specification specs required for the regridding. How do you find
those
> specs?
> > >
> > > I'm pretty sure that for the fcst file the Nx = 204 and Ny is
204
> > > and for the 4km precip file Nx is 1121 and Ny is 881. My target
3rd
> > > domain would be the 204 X 204. I have the lat/long extents of
the
> > > fcst domain using the corner_lats and corner_longs info in the
dump
> > > of the geo_em
> > file.
> > >
> > > Doing searches in the dump files for the specs listed in the README
> > > file does not produce hits. Not sure where to turn to for this.
> > > Maybe WRF specs are not named the same?
> > >
> > > R/
> > > John
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT
> > > [Caution-Caution-mailto:met_help at ucar.edu]
> > > Sent: Tuesday, July 10, 2018 11:50 AM
> > > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > > <john.w.raby2.civ at mail.mil>
> > > Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > > Ensemble-Stat err (UNCLASSIFIED)
> > >
> > > All active links contained in this email were disabled.  Please
> > > verify the identity of the sender, and confirm the authenticity
of
> > > all links contained within the message prior to copying and
pasting
> > > the address to a Web browser.
> > >
> > >
> > >
> > >
> > > ----
> > >
> > > John,
> > >
> > > I think changing the masking region would have very little
impact on
> > > the memory usage.  MET is still storing all the ensemble member
> > > grids as double precision values in memory... even though the
vast
> > > majority of them are missing data values.
> > >
> > > I think re-gridding to a 3rd domain would be a good solution.
You'd
> > > want it to cover the geographic extent of the forecast grid but
> > > be at the resolution of the observation grid.
> > >
> > > Thanks,
> > > John
> > >
> > >
> > >
> > > On Tue, Jul 10, 2018 at 10:07 AM Raby, John W USA CIV via RT <
> > > met_help at ucar.edu> wrote:
> > >
> > > >
> > > > <Caution-Caution-Caution-url:
> > > > Caution-Caution-Caution-
https://rt.rap.ucar.edu/rt/Ticket/Display.
> > > > html?id=86
> > > > 119 >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > > John -
> > > >
> > > > Thanks for diagnosing the situation. I'm considering the two
> > > > options you suggested. Would the use of a masking region the
size
> > > > of the smaller forecast domain help?
> > > >
> > > > R/
> > > > John
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT
> > > > [Caution-Caution-Caution-mailto:met_help at ucar.edu]
> > > > Sent: Tuesday, July 10, 2018 9:41 AM
> > > > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > > > <john.w.raby2.civ at mail.mil>
> > > > Subject: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > > > Ensemble-Stat err (UNCLASSIFIED)
> > > >
> > > > All active links contained in this email were disabled.
Please
> > > > verify the identity of the sender, and confirm the
authenticity of
> > > > all links contained within the message prior to copying and
> > > > pasting the address to a Web browser.
> > > >
> > > >
> > > >
> > > >
> > > > ----
> > > >
> > > > John,
> > > >
> > > > Thanks for sending your log files and grid information.  I see
> > > > that you're running out of memory when running the Ensemble-
Stat
> tool.
> > > >
> > > > Your forecast domain has dimension 204x204 (1km Lambert
Conformal
> > > > grid) and the observation domain has dimension 1121x881
(StageIV
> > > > 4km grid).  The observation grid contains about 24 times more
> > > > points than the forecast grid.  And you're defining a 28
member
> ensemble.
> > > > So I'm not that surprised that memory issues do not show up
for
> > > > the fcst but do show up for the obs grid... since the obs grid
> > > > would require 24 times more memory to store the data.
> > > >
> > > > Even though your forecast domain likely only covers a very
small
> > > > portion of the observation domain, MET is storing the full
> > > > 1121x881 grid points in memory for each ensemble member.
Most of
> > > > them however will just contain missing data values.
> > > >
> > > > So you've tried setting "to_grid = FCST" and that works.  And
> > > > you've tried setting "to_grid = OBS" and that runs out of
memory.
> > > >
> > > > You could consider...
> > > > (1) Some HPC systems allow you to request more memory when you
> > > > submit a job.  You'd need to figure out the right batch
options,
> > > > but that may be possible.
> > > >
> > > > (2) Instead of setting "to_grid = OBS", you could define a 3rd
> > > > domain at approximately the 4-km grid spacing similar to the
StageIV
> domain.
> > > > And then you'd regrid both the forecast and observation data
to
> > > > that 3rd domain.  Look in the file
> > > > "met-5.2/data/config/README"
> > > > and search for "to_grid" to see a description of the grid
> specification.
> > > >
> > > > Hope this helps.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Tue, Jul 10, 2018 at 7:56 AM Raby, John W USA CIV via RT <
> > > > met_help at ucar.edu> wrote:
> > > >
> > > > >
> > > > > Tue Jul 10 07:55:59 2018: Request 86119 was acted upon.
> > > > > Transaction: Ticket created by john.w.raby2.civ at mail.mil
> > > > >        Queue: met_help
> > > > >      Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
> > > > >        Owner: Nobody
> > > > >   Requestors: john.w.raby2.civ at mail.mil
> > > > >       Status: new
> > > > >  Ticket <url: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> > > > >
> > > > >
> > > > > CLASSIFICATION: UNCLASSIFIED
> > > > >
> > > > > Request assistance in diagnosing a problem I'm having. The
run
> > > > > ends at the point when the ssvar data is computed and the
log
> > > > > file says "out of memory exiting". For this run I specified
> > > > > regridding "to_grid = OBS". In a previous run in MAY, with
the
> > > > > same input data I specified
> > > > "to_grid = FCST"
> > > > > and I did not have this problem and the run was successful.
I am
> > > > > running on an HPC and tried two runs during my testing
yesterday.
> > > > > One run was at the command line in my home dir and the other
run
> > > > > was as a batch job, but the error recurred for both runs. I
have
> > > > > attached a compressed file containing my run script, config
> > > > > file, two MET log files, the HPC system log, and text files
> > > > > containing the grid
> > > > information for the input fcst and obs files.
> > > > >
> > > > > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_lead06_log"
is
> > > > > the log file from MAY (referred to above) which shows the
run
> > > > > completed successfully.
> > > > > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_log" is the
log
> > > > > file from yesterday's run which failed.
> > > > > The HPC system log from yesterday's batch run which provides
> > > > > more detailed info is "METE-S.o6217798"
> > > > > The run script is
> "run_ensemble_stat_Dumais_m3o3_28mem_ens_hr06_EXC"
> > > > > The config file is
> > > > > "EnsembleStatConfig_m3o3_Dumais_WRF_28mem_DC_ens_hr06_EXC"
> > > > > The grid info is in "precip_fcst_grid_info" and
> > "precip_obs_grid_info"
> > > > >
> > > > > Please let me know if you need more info.
> > > > >
> > > > > Thanks.
> > > > > R/
> > > > > John
> > > > >
> > > > > Mr. John W. Raby
> > > > > U.S. Army Research Laboratory
> > > > > White Sands Missile Range, NM 88002
> > > > > (575) 678-2004 DSN 258-2004
> > > > > FAX (575) 678-1230 DSN 258-1230
> > > > > Email: john.w.raby2.civ at mail.mil
> > > > >
> > > > >
> > > > > CLASSIFICATION: UNCLASSIFIED
> > > > >
> > > > >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > >
> > > >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> >
> >
>
> CLASSIFICATION: UNCLASSIFIED
>
>
>

------------------------------------------------
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: Raby, John W USA CIV
Time: Wed Jul 11 12:21:42 2018

CLASSIFICATION: UNCLASSIFIED

John -

I see now from your example. The rub comes with the regridding step.

Thanks. Between today and into tomorrow, I hope to have some results I
can provide for feedback.

R/
John

-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Wednesday, July 11, 2018 12:18 PM
To: Raby, John W CIV USARMY RDECOM ARL (US)
<john.w.raby2.civ at mail.mil>
Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
Ensemble-Stat err (UNCLASSIFIED)


John,

For each field being evaluated, Ensemble-Stat reads all the ensemble
member data into memory.  When it reads data from a file, it reads it
on its native grid, does any requested regridding, and stores the
result in memory.

A rough comparison of the memory required for the regridding options
is...

(1) to_grid = FCST: 201x201 grid x 28 members = 1,131,228 data values
in memory
(2) to_grid = OBS: 1121x881 grid x 28 members = 27,652,828 data values
in memory
(3) to_grid = "user-defined 4-km tile": 51x51 grid x 28 members =
72,828 data values in memory

As you can see, the third option that you're working on now consumes
the least amount of memory by far.
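
To make the comparison concrete, here is a quick back-of-the-envelope sketch (plain Python, not part of MET) that reproduces these counts, plus a rough size assuming 8-byte double-precision storage; actual usage will be larger due to MET's own overhead:

```python
# Rough memory estimate for each regridding choice:
# grid points x ensemble members, and ~MB at 8 bytes per double.
options = {
    "to_grid = FCST (201x201)": (201, 201),
    "to_grid = OBS (1121x881)": (1121, 881),
    "4-km tile (51x51)": (51, 51),
}
n_members = 28
for name, (nx, ny) in options.items():
    n_values = nx * ny * n_members
    print(f"{name}: {n_values:,} values, ~{n_values * 8 / 1e6:.1f} MB")
```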

Thanks,
John

On Wed, Jul 11, 2018 at 12:06 PM Raby, John W USA CIV via RT <
met_help at ucar.edu> wrote:

>
> <url: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
>
> CLASSIFICATION: UNCLASSIFIED
>
> John  -
>
> Thanks for the guidance on doing the regridding and the tip on using
> the budget interpolation option.
>
> In step (4), since I'll be inputting the model and obs data on the
> original native grids, why does the regridding which ensemble-stat
> will do not cause the same memory issue?
>
> R/
> John
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Wednesday, July 11, 2018 11:10 AM
> To: Raby, John W CIV USARMY RDECOM ARL (US)
> <john.w.raby2.civ at mail.mil>
> Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> Ensemble-Stat err (UNCLASSIFIED)
>
>
> John,
>
> I would suggest the following steps.
>
> (1) Start with a sample forecast file on its 201x201 domain.
> (2) Use the regrid_data_plane and plot_data_plane tools to test out
> your proposed 4-km tile.  You'll pass in a grid specification string
> to regrid_data_plane ... do the regridding ... plot the result using
> plot_data_plane.  If the forecast data isn't fully contained in the
> tile, adjust the grid spec and try again.
> (3) Once you have the grid spec the way you want, edit the
> Ensemble-Stat config file by setting:
>    regrid = {
>      to_grid = "YOUR GRID SPEC GOES HERE";
>    ... }
> (4) Run ensemble-stat just like you were doing before, passing the
> model and obs data on their native grids.  Let ensemble-stat do the
> regridding for you rather than having to run regrid_data_plane
manually.
>
> One last suggestion, since you're processing precip, it's generally
> recommended that you use the budget interpolation option: -method
> BUDGET -width 2
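
As a concrete illustration of steps (2) and (3), the commands and config entry might look like the sketch below. The Lambert grid specification string is illustrative only, built from the projection attributes discussed elsewhere in this thread (lat_ll 38.113, lon_ll -78.112, lon_orient -76.952, standard parallels 39.032) with d_km bumped to 4.0 for a 51x51 tile, and the field name APCP_06 is a placeholder; verify the exact argument order against the "to_grid" entry in met-5.2/data/config/README:

```
# (2) Regrid a sample forecast to the candidate 4-km tile, then plot it:
regrid_data_plane fcst.nc \
    'lambert 51 51 38.113 -78.112 -76.952 4.0 6371.2 39.032 39.032 N' \
    fcst_regrid.nc -field 'name="APCP_06"; level="(*,*)";' \
    -method BUDGET -width 2
plot_data_plane fcst_regrid.nc fcst_regrid.ps 'name="APCP_06"; level="(*,*)";'

# (3) Once the tile fully contains the forecast data, set the same grid
#     spec in the Ensemble-Stat config file and let it do the regridding:
regrid = {
   to_grid = "lambert 51 51 38.113 -78.112 -76.952 4.0 6371.2 39.032 39.032 N";
   method  = BUDGET;
   width   = 2;
}
```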
>
> Thanks,
> John
>
> On Wed, Jul 11, 2018 at 10:37 AM Raby, John W USA CIV via RT <
> met_help at ucar.edu> wrote:
>
> >
> > <url: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> > John -
> >
> > Thanks for confirming the grid spec info. So, if I use that info
in
> > regrid_data_plane to regrid the forecast to a 4km grid, can I then
> > run MET Ensemble-Stat without regridding and use the regridded
fcst
> > file and the 4km precip observations files as inputs or do I have
to
> > use regridding again?
> >
> > R/
> > John
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT
> > [mailto:met_help at ucar.edu]
> > Sent: Wednesday, July 11, 2018 10:18 AM
> > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > <john.w.raby2.civ at mail.mil>
> > Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > Ensemble-Stat err (UNCLASSIFIED)
> >
> >
> > John,
> >
> > Yes, that all sounds good to me.  Just give it a try using
> > regrid_data_plane.  And then run the output through
plot_data_plane
> > to see how it looks.
> >
> > Actually, I'd suggest regridding a sample *forecast* file to that
> > new domain... and running that through plot_data_plane to see how
it looks.
> > You can play around with it however you'd like.  Perhaps
increasing
> > from
> > 51x51 to something slightly larger to make sure your forecast data
> > is fully contained inside your new verification domain.
> >
> > Thanks,
> > John
> >
> > On Wed, Jul 11, 2018 at 9:08 AM Raby, John W USA CIV via RT <
> > met_help at ucar.edu> wrote:
> >
> > >
> > > <url: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > > John -
> > >
> > > I did ncdump on the output of Pcp-Combine which is the forecast
> > > grid of accum precip. I noticed that the grid specifications
which
> > > appear in this dump appear to be those more closely matching
those
> > > you referred to in the README file. See attached text file which
> > > is the output of the ncdump. I printed the projection info
below:
> > >
> > > :Projection = "Lambert Conformal" ;
> > >                 :scale_lat_1 = "39.032000" ;
> > >                 :scale_lat_2 = "39.032000" ;
> > >                 :lat_pin = "38.113000" ;
> > >                 :lon_pin = "-78.112000" ;
> > >                 :x_pin = "0.000000" ;
> > >                 :y_pin = "0.000000" ;
> > >                 :lon_orient = "-76.952000" ;
> > >                 :d_km = "1.000000" ;
> > >                 :r_km = "6371.200000" ;
> > >                 :nx = "204" ;
> > >                 :ny = "204 grid_points" ;
> > >
> > >
> > > So, for the fcst domain I now have what looks like Nx, Ny,
> > > lon_orient, D_km, R_km from their resemblance to the README spec
names.
> > >
> > > Can I assume that "standard_parallel_1" is the same as
> > > :scale_lat_1 = "39.032000" above and that "standard_parallel_2"
is
> > > the same as
> > > :scale_lat_2 = "39.032000" above?
> > >
> > > Is lat_ll the same as :lat_pin = "38.113000"  and lon_ll the
same
> > > as :lon_pin = "-78.112000"?
> > >
> > > So, to perform the regrid per your suggestion, I would set
"d_km"
> > > to
> > > 4 vice 1  and "Nx" and "Ny" to 51 (204/4) to create a domain the
> > > same size as the fcst domain with a grid resolution of 4km. Does
> > > this sound right? All the other required specs tie the
geographic
> > > location to that of my fcst domain, so I would use the same
values
> > > as are shown above, correct?
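
One detail worth double-checking in that arithmetic: a grid spans (N - 1) x d_km, so a 204-point 1-km domain is 203 km across, and covering it fully at 4-km spacing takes 52 points rather than the 204/4 = 51 first guess. That is one reason the advice elsewhere in this thread to pad the tile slightly is worth taking. A tiny sketch of the check (plain Python, just illustrating the arithmetic):

```python
import math

def coarse_points(n_fine, d_fine_km, d_coarse_km):
    """Points needed at the coarse spacing to span the fine grid's extent."""
    extent_km = (n_fine - 1) * d_fine_km        # a grid spans (N - 1) * d
    return math.ceil(extent_km / d_coarse_km) + 1

# A 204-point 1-km domain spans 203 km; at 4-km spacing that
# needs 52 points, one more than 204 / 4 = 51.
print(coarse_points(204, 1.0, 4.0))
```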
> > >
> > > Thanks.
> > >
> > > R/
> > > John
> > >
> > >
> > > -----Original Message-----
> > > From: Raby, John W CIV USARMY RDECOM ARL (US)
> > > Sent: Tuesday, July 10, 2018 4:48 PM
> > > To: 'met_help at ucar.edu' <met_help at ucar.edu>
> > > Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
> > > V5.2 Ensemble-Stat err (UNCLASSIFIED)
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > > John -
> > >
> > > I've been using ncdump on the WRF geo_em file, the met_em file
and
> > > the wrfout file (all NetCDF) and I can't locate the grid
> > > specification specs required for the regridding. How do you find
> > > those
> specs?
> > >
> > > I'm pretty sure that for the fcst file the Nx = 204 and Ny is
204
> > > and for the 4km precip file Nx is 1121 and Ny is 881. My target
3rd
> > > domain would be the 204 X 204. I have the lat/long extents of
the
> > > fcst domain using the corner_lats and corner_longs info in the
> > > dump of the geo_em
> > file.
> > >
> > > Searching the dump files for the specs listed in the README file
> > > does not produce hits. Not sure where to turn to for this.
> > > Maybe WRF specs are not named the same?
> > >
> > > R/
> > > John
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT
> > > [mailto:met_help at ucar.edu]
> > > Sent: Tuesday, July 10, 2018 11:50 AM
> > > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > > <john.w.raby2.civ at mail.mil>
> > > Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
> > > V5.2 Ensemble-Stat err (UNCLASSIFIED)
> > >
> > >
> > > John,
> > >
> > > I think changing the masking region would have very little
impact
> > > on the memory usage.  MET is still storing all the ensemble
member
> > > grids as double precision values in memory... even though the
vast
> > > majority of them are missing data values.
> > >
> > > I think re-gridding to a 3rd domain would be a good solution.
> > > You'd want it to cover the geographic extent of the forecast
grid
> > > but be at the resolution of the observation grid.
> > >
> > > Thanks,
> > > John
> > >
> > >
> > >
> > > On Tue, Jul 10, 2018 at 10:07 AM Raby, John W USA CIV via RT <
> > > met_help at ucar.edu> wrote:
> > >
> > > >
> > > > <url: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > > John -
> > > >
> > > > Thanks for diagnosing the situation. I'm considering the two
> > > > options you suggested. Would the use of a masking region the
> > > > size of the smaller forecast domain help?
> > > >
> > > > R/
> > > > John
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT
> > > > [mailto:met_help at ucar.edu]
> > > > Sent: Tuesday, July 10, 2018 9:41 AM
> > > > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > > > <john.w.raby2.civ at mail.mil>
> > > > Subject: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > > > Ensemble-Stat err (UNCLASSIFIED)
> > > >
> > > >
> > > > John,
> > > >
> > > > Thanks for sending your log files and grid information.  I see
> > > > that you're running out of memory when running the Ensemble-
Stat
> tool.
> > > >
> > > > Your forecast domain has dimension 204x204 (1km Lambert
> > > > Conformal
> > > > grid) and the observation domain has dimension 1121x881
(StageIV
> > > > 4km grid).  The observation grid contains about 24 times more
> > > > points than the forecast grid.  And you're defining a 28
member
> ensemble.
> > > > So I'm not that surprised that memory issues do not show up
for
> > > > the fcst but do show up for the obs grid... since the obs grid
> > > > would require 24 times more memory to store the data.
> > > >
> > > > Even though your forecast domain likely only covers a very
small
> > > > portion of the observation domain, MET is storing the full
> > > > 1121x881 grid points in memory for each ensemble member.
Most
> > > > of them however will just contain missing data values.
> > > >
> > > > So you've tried setting "to_grid = FCST" and that works.  And
> > > > you've tried setting "to_grid = OBS" and that runs out of
memory.
> > > >
> > > > You could consider...
> > > > (1) Some HPC systems allow you to request more memory when you
> > > > submit a job.  You'd need to figure out the right batch
options,
> > > > but that may be possible.
> > > >
> > > > (2) Instead of setting "to_grid = OBS", you could define a 3rd
> > > > domain at approximately the 4-km grid spacing similar to the
> > > > StageIV
> domain.
> > > > And then you'd regrid both the forecast and observation data
to
> > > > that 3rd domain.  Look in the file
> > > > "met-5.2/data/config/README"
> > > > and search for "to_grid" to see a description of the grid
> specification.
> > > >
> > > > Hope this helps.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Tue, Jul 10, 2018 at 7:56 AM Raby, John W USA CIV via RT <
> > > > met_help at ucar.edu> wrote:
> > > >
> > > > >
> > > > > Tue Jul 10 07:55:59 2018: Request 86119 was acted upon.
> > > > > Transaction: Ticket created by john.w.raby2.civ at mail.mil
> > > > >        Queue: met_help
> > > > >      Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
> > > > >        Owner: Nobody
> > > > >   Requestors: john.w.raby2.civ at mail.mil
> > > > >       Status: new
> > > > >  Ticket <url: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> > > > >
> > > > >
> > > > > CLASSIFICATION: UNCLASSIFIED
> > > > >
> > > > > Request assistance in diagnosing a problem I'm having. The
run
> > > > > ends at the point when the ssvar data is computed and the
log
> > > > > file says "out of memory exiting". For this run I specified
> > > > > regridding "to_grid = OBS". In a previous run in MAY, with
the
> > > > > same input data I specified
> > > > "to_grid = FCST"
> > > > > and I did not have this problem and the run was successful.
I
> > > > > am running on an HPC and tried two runs during my testing
yesterday.
> > > > > One run was at the command line in my home dir and the other
> > > > > run was as a batch job, but the error recurred for both
runs.
> > > > > I have attached a compressed file containing my run script,
> > > > > config file, two MET log files, the HPC system log, and text
> > > > > files containing the grid
> > > > information for the input fcst and obs files.
> > > > >
> > > > > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_lead06_log"
> > > > > is the log file from MAY (referred to above) which shows the
> > > > > run completed successfully.
> > > > > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_log" is the
> > > > > log file from yesterday's run which failed.
> > > > > The HPC system log from yesterday's batch run which provides
> > > > > more detailed info is "METE-S.o6217798"
> > > > > The run script is
> "run_ensemble_stat_Dumais_m3o3_28mem_ens_hr06_EXC"
> > > > > The config file is
> > > > > "EnsembleStatConfig_m3o3_Dumais_WRF_28mem_DC_ens_hr06_EXC"
> > > > > The grid info is in "precip_fcst_grid_info" and
> > "precip_obs_grid_info"
> > > > >
> > > > > Please let me know if you need more info.
> > > > >
> > > > > Thanks.
> > > > > R/
> > > > > John
> > > > >
> > > > > Mr. John W. Raby
> > > > > U.S. Army Research Laboratory
> > > > > White Sands Missile Range, NM 88002
> > > > > (575) 678-2004 DSN 258-2004
> > > > > FAX (575) 678-1230 DSN 258-1230
> > > > > Email: john.w.raby2.civ at mail.mil
> > > > >
> > > > >
> > > > > CLASSIFICATION: UNCLASSIFIED
> > > > >
> > > > >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > >
> > > >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> >
> >
>
> CLASSIFICATION: UNCLASSIFIED
>
>
>

CLASSIFICATION: UNCLASSIFIED


------------------------------------------------
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: Raby, John W USA CIV
Time: Fri Jul 13 06:02:47 2018

CLASSIFICATION: UNCLASSIFIED

John -

I had to drop off this project yesterday and didn't have a chance to
try using the 4-km tile. Not sure when I'll get back to it, but I will
definitely need to follow through on it and I'll let you know the
results then.

Thanks for your help on this.

R/
John

-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Wednesday, July 11, 2018 12:18 PM
To: Raby, John W CIV USARMY RDECOM ARL (US)
<john.w.raby2.civ at mail.mil>
Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
Ensemble-Stat err (UNCLASSIFIED)


John,

For each field being evaluated, Ensemble-Stat reads all the ensemble
member data into memory.  When it reads data from a file, it reads it
on its native grid, does any requested regridding, and stores the
result in memory.

A rough comparison of the memory required for the regridding options
is...

(1) to_grid = FCST: 201x201 grid x 28 members = 1,131,228 data values
in memory
(2) to_grid = OBS: 1121x881 grid x 28 members = 27,652,828 data values
in memory
(3) to_grid = "user-defined 4-km tile": 51x51 grid x 28 members =
72,828 data values in memory

As you can see, the third option that you're working on now consumes
the least amount of memory by far.

Thanks,
John

On Wed, Jul 11, 2018 at 12:06 PM Raby, John W USA CIV via RT <
met_help at ucar.edu> wrote:

>
> <url: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
>
> CLASSIFICATION: UNCLASSIFIED
>
> John  -
>
> Thanks for the guidance on doing the regridding and the tip on using
> the budget interpolation option.
>
> In step (4), since I'll be inputting the model and obs data on the
> original native grids, why does the regridding which ensemble-stat
> will do not cause the same memory issue?
>
> R/
> John
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Wednesday, July 11, 2018 11:10 AM
> To: Raby, John W CIV USARMY RDECOM ARL (US)
> <john.w.raby2.civ at mail.mil>
> Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> Ensemble-Stat err (UNCLASSIFIED)
>
>
> John,
>
> I would suggest the following steps.
>
> (1) Start with a sample forecast file on its 201x201 domain.
> (2) Use the regrid_data_plane and plot_data_plane tools to test out
> your proposed 4-km tile.  You'll pass in a grid specification string
> to regrid_data_plane ... do the regridding ... plot the result using
> plot_data_plane.  If the forecast data isn't fully contained in the
> tile, adjust the grid spec and try again.
> (3) Once you have the grid spec the way you want, edit the
> Ensemble-Stat config file by setting:
>    regrid = {
>      to_grid = "YOUR GRID SPEC GOES HERE";
>    ... }
> (4) Run ensemble-stat just like you were doing before, passing the
> model and obs data on their native grids.  Let ensemble-stat do the
> regridding for you rather than having to run regrid_data_plane
manually.
>
> One last suggestion, since you're processing precip, it's generally
> recommended that you use the budget interpolation option: -method
> BUDGET -width 2
>
> Thanks,
> John
>
> On Wed, Jul 11, 2018 at 10:37 AM Raby, John W USA CIV via RT <
> met_help at ucar.edu> wrote:
>
> >
> > <url: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> > John -
> >
> > Thanks for confirming the grid spec info. So, if I use that info
in
> > regrid_data_plane to regrid the forecast to a 4km grid, can I then
> > run MET Ensemble-Stat without regridding and use the regridded
fcst
> > file and the 4km precip observations files as inputs or do I have
to
> > use regridding again?
> >
> > R/
> > John
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT
> > [mailto:met_help at ucar.edu]
> > Sent: Wednesday, July 11, 2018 10:18 AM
> > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > <john.w.raby2.civ at mail.mil>
> > Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
V5.2
> > Ensemble-Stat err (UNCLASSIFIED)
> >
> >
> > John,
> >
> > Yes, that all sounds good to me.  Just give it a try using
> > regrid_data_plane.  And then run the output through
plot_data_plane
> > to see how it looks.
> >
> > Actually, I'd suggest regridding a sample *forecast* file to that
> > new domain... and running that through plot_data_plane to see how
it looks.
> > You can play around with it however you'd like.  Perhaps
increasing
> > from
> > 51x51 to something slightly larger to make sure your forecast data
> > is fully contained inside your new verification domain.
> >
> > Thanks,
> > John
> >
> > On Wed, Jul 11, 2018 at 9:08 AM Raby, John W USA CIV via RT <
> > met_help at ucar.edu> wrote:
> >
> > >
> > > <url: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > > John -
> > >
> > > I did ncdump on the output of Pcp-Combine which is the forecast
> > > grid of accum precip. I noticed that the grid specifications
which
> > > appear in this dump appear to be those more closely matching
those
> > > you referred to in the README file. See attached text file which
> > > is the output of the ncdump. I printed the projection info
below:
> > >
> > > :Projection = "Lambert Conformal" ;
> > >                 :scale_lat_1 = "39.032000" ;
> > >                 :scale_lat_2 = "39.032000" ;
> > >                 :lat_pin = "38.113000" ;
> > >                 :lon_pin = "-78.112000" ;
> > >                 :x_pin = "0.000000" ;
> > >                 :y_pin = "0.000000" ;
> > >                 :lon_orient = "-76.952000" ;
> > >                 :d_km = "1.000000" ;
> > >                 :r_km = "6371.200000" ;
> > >                 :nx = "204" ;
> > >                 :ny = "204 grid_points" ;
> > >
> > >
> > > So, for the fcst domain I now have what looks like Nx, Ny,
> > > lon_orient, D_km, R_km from their resemblance to the README spec
names.
> > >
> > > Can I assume that "standard_parallel_1" is the same as
> > > :scale_lat_1 = "39.032000" above and that "standard_parallel_2"
is
> > > the same as
> > > :scale_lat_2 = "39.032000" above?
> > >
> > > Is lat_ll the same as :lat_pin = "38.113000"  and lon_ll the
same
> > > as :lon_pin = "-78.112000"?
> > >
> > > So, to perform the regrid per your suggestion, I would set
"d_km"
> > > to
> > > 4 vice 1  and "Nx" and "Ny" to 51 (204/4) to create a domain the
> > > same size as the fcst domain with a grid resolution of 4km. Does
> > > this sound right? All the other required specs tie the
geographic
> > > location to that of my fcst domain, so I would use the same
values
> > > as are shown above, correct?
> > >
> > > Thanks.
> > >
> > > R/
> > > John
> > >
> > >
> > > -----Original Message-----
> > > From: Raby, John W CIV USARMY RDECOM ARL (US)
> > > Sent: Tuesday, July 10, 2018 4:48 PM
> > > To: 'met_help at ucar.edu' <met_help at ucar.edu>
> > > Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
> > > V5.2 Ensemble-Stat err (UNCLASSIFIED)
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > > John -
> > >
> > > I've been using ncdump on the WRF geo_em file, the met_em file
and
> > > the wrfout file (all NetCDF) and I can't locate the grid
> > > specification specs required for the regridding. How do you find
> > > those
> specs?
> > >
> > > I'm pretty sure that for the fcst file the Nx = 204 and Ny is
204
> > > and for the 4km precip file Nx is 1121 and Ny is 881. My target
3rd
> > > domain would be the 204 X 204. I have the lat/long extents of
the
> > > fcst domain using the corner_lats and corner_longs info in the
> > > dump of the geo_em
> > file.
> > >
> > > Searching the dump files for the specs listed in the README file
> > > does not produce hits. Not sure where to turn to for this.
> > > Maybe WRF specs are not named the same?
> > >
> > > R/
> > > John
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT
> > > [mailto:met_help at ucar.edu]
> > > Sent: Tuesday, July 10, 2018 11:50 AM
> > > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > > <john.w.raby2.civ at mail.mil>
> > > Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
> > > V5.2 Ensemble-Stat err (UNCLASSIFIED)
> > >
> > >
> > > John,
> > >
> > > I think changing the masking region would have very little
impact
> > > on the memory usage.  MET is still storing all the ensemble
member
> > > grids as double precision values in memory... even though the
vast
> > > majority of them are missing data values.
> > >
> > > I think re-gridding to a 3rd domain would be a good solution.
> > > You'd want it to cover the geographic extent of the forecast
grid
> > > but be at the resolution of the observation grid.
> > >
> > > Thanks,
> > > John
> > >
> > >
> > >
> > > On Tue, Jul 10, 2018 at 10:07 AM Raby, John W USA CIV via RT <
> > > met_help at ucar.edu> wrote:
> > >
> > > >
> > > > <url: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > > John -
> > > >
> > > > Thanks for diagnosing the situation. I'm considering the two
> > > > options you suggested. Would the use of a masking region the
> > > > size of the smaller forecast domain help?
> > > >
> > > > R/
> > > > John
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT
> > > > [mailto:met_help at ucar.edu]
> > > > Sent: Tuesday, July 10, 2018 9:41 AM
> > > > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > > > <john.w.raby2.civ at mail.mil>
> > > > Subject: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> > > > Ensemble-Stat err (UNCLASSIFIED)
> > > >
> > > >
> > > > John,
> > > >
> > > > Thanks for sending your log files and grid information.  I see
> > > > that you're running out of memory when running the Ensemble-Stat
> > > > tool.
> > > >
> > > > Your forecast domain has dimension 204x204 (1km Lambert Conformal
> > > > grid) and the observation domain has dimension 1121x881 (StageIV
> > > > 4km grid).  The observation grid contains about 24 times more
> > > > points than the forecast grid.  And you're defining a 28 member
> > > > ensemble.  So I'm not that surprised that memory issues do not
> > > > show up for the fcst but do show up for the obs grid... since the
> > > > obs grid would require 24 times more memory to store the data.
> > > >
> > > > Even though your forecast domain likely only covers a very small
> > > > portion of the observation domain, MET is storing the full
> > > > 1121x881 grid points in memory for each ensemble member.  Most
> > > > of them however will just contain missing data values.
> > > >
> > > > So you've tried setting "to_grid = FCST" and that works.  And
> > > > you've tried setting "to_grid = OBS" and that runs out of memory.
> > > >
> > > > You could consider...
> > > > (1) Some HPC systems allow you to request more memory when you
> > > > submit a job.  You'd need to figure out the right batch options,
> > > > but that may be possible.
> > > >
> > > > (2) Instead of setting "to_grid = OBS", you could define a 3rd
> > > > domain at approximately the 4-km grid spacing similar to the
> > > > StageIV domain.  And then you'd regrid both the forecast and
> > > > observation data to that 3rd domain.  Look in the file
> > > > "met-5.2/data/config/README" and search for "to_grid" to see a
> > > > description of the grid specification.
> > > >
> > > > Hope this helps.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Tue, Jul 10, 2018 at 7:56 AM Raby, John W USA CIV via RT <
> > > > met_help at ucar.edu> wrote:
> > > >
> > > > >
> > > > > Tue Jul 10 07:55:59 2018: Request 86119 was acted upon.
> > > > > Transaction: Ticket created by john.w.raby2.civ at mail.mil
> > > > >        Queue: met_help
> > > > >      Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
> > > > >        Owner: Nobody
> > > > >   Requestors: john.w.raby2.civ at mail.mil
> > > > >       Status: new
> > > > >  Ticket <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> > > > >
> > > > >
> > > > > CLASSIFICATION: UNCLASSIFIED
> > > > >
> > > > > Request assistance in diagnosing a problem I'm having. The run
> > > > > ends at the point when the ssvar data is computed and the log
> > > > > file says "out of memory exiting". For this run I specified
> > > > > regridding "to_grid = OBS". In a previous run in MAY, with the
> > > > > same input data I specified "to_grid = FCST" and I did not have
> > > > > this problem and the run was successful. I am running on an HPC
> > > > > and tried two runs during my testing yesterday. One run was at
> > > > > the command line in my home dir and the other run was as a
> > > > > batch job, but the error recurred for both runs. I have
> > > > > attached a compressed file containing my run script, config
> > > > > file, two MET log files, the HPC system log, and text files
> > > > > containing the grid information for the input fcst and obs
> > > > > files.
> > > > >
> > > > > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_lead06_log"
> > > > > is the log file from MAY (referred to above) which shows the
> > > > > run completed successfully.
> > > > > The MET log file "m3o3_Dumais_28mem_ens_06hrfcst_log" is the
> > > > > log file from yesterday's run which failed.
> > > > > The HPC system log from yesterday's batch run which provides
> > > > > more detailed info is "METE-S.o6217798"
> > > > > The run script is
> > > > > "run_ensemble_stat_Dumais_m3o3_28mem_ens_hr06_EXC"
> > > > > The config file is
> > > > > "EnsembleStatConfig_m3o3_Dumais_WRF_28mem_DC_ens_hr06_EXC"
> > > > > The grid info is in "precip_fcst_grid_info" and
> > > > > "precip_obs_grid_info"
> > > > >
> > > > > Please let me know if you need more info.
> > > > >
> > > > > Thanks.
> > > > > R/
> > > > > John
> > > > >
> > > > > Mr. John W. Raby
> > > > > U.S. Army Research Laboratory
> > > > > White Sands Missile Range, NM 88002
> > > > > (575) 678-2004 DSN 258-2004
> > > > > FAX (575) 678-1230 DSN 258-1230
> > > > > Email: john.w.raby2.civ at mail.mil
> > > > >
> > > > >
> > > > > CLASSIFICATION: UNCLASSIFIED
> > > > >
> > > > >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > >
> > > >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> >
> >
>
> CLASSIFICATION: UNCLASSIFIED
>
>
>

CLASSIFICATION: UNCLASSIFIED


------------------------------------------------
Subject: MET V5.2 Ensemble-Stat err (UNCLASSIFIED)
From: John Halley Gotway
Time: Mon Jul 16 09:26:19 2018

John,

OK, I'll go ahead and resolve this ticket now.  When you're able to
get back to it, just let us know what other issues or questions arise.

Thanks,
John

On Fri, Jul 13, 2018 at 6:02 AM Raby, John W USA CIV via RT <
met_help at ucar.edu> wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
>
> CLASSIFICATION: UNCLASSIFIED
>
> John -
>
> I had to drop off this project yesterday and didn't have a chance to
> try using the 4-km tile. Not sure when I'll get back to it, but I
> will definitely need to follow through on it and I'll let you know
> the results then.
>
> Thanks for your help on this.
>
> R/
> John
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Wednesday, July 11, 2018 12:18 PM
> To: Raby, John W CIV USARMY RDECOM ARL (US) <john.w.raby2.civ at mail.mil>
> Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> Ensemble-Stat err (UNCLASSIFIED)
>
>
> John,
>
> For each field being evaluated, Ensemble-Stat reads all the ensemble
> member data into memory.  When it reads data from a file, it reads it
> on its native grid, does any requested regridding, and stores the
> result in memory.
>
> A rough comparison of the memory required for the regridding options
> is...
>
> (1) to_grid = FCST: 201x201 grid x 28 members = 1,131,228 data values
> in memory
> (2) to_grid = OBS: 1121x881 grid x 28 members = 27,652,828 data
> values in memory
> (3) to_grid = "user-defined 4-km tile": 51x51 grid x 28 members =
> 72,828 data values in memory
>
> As you can see, the third option that you're working on now consumes
> the least amount of memory by far.
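[The memory comparison above is easy to double-check with a few lines of Python. This is a standalone sketch: the grid dimensions and member count come from this thread, and the 8-bytes-per-value sizing is only an approximation of MET's double-precision in-memory storage.]

```python
# Quick check of the memory comparison above: each regridding option
# stores nx * ny grid points for each of the 28 ensemble members,
# at roughly 8 bytes per double precision value.
n_members = 28

options = {
    "to_grid = FCST (201x201)": (201, 201),
    "to_grid = OBS (1121x881)": (1121, 881),
    "4-km tile (51x51)": (51, 51),
}

for name, (nx, ny) in options.items():
    n_values = nx * ny * n_members
    approx_mb = n_values * 8 / 1024 ** 2
    print(f"{name}: {n_values:,} values (~{approx_mb:.0f} MB)")
```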
>
> Thanks,
> John
>
> On Wed, Jul 11, 2018 at 12:06 PM Raby, John W USA CIV via RT <
> met_help at ucar.edu> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> >
> > CLASSIFICATION: UNCLASSIFIED
> >
> > John  -
> >
> > Thanks for the guidance on doing the regridding and the tip on
> > using the budget interpolation option.
> >
> > In step (4), since I'll be inputting the model and obs data on the
> > original native grids, why doesn't the regridding which
> > Ensemble-Stat will do cause the same memory issue?
> >
> > R/
> > John
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Wednesday, July 11, 2018 11:10 AM
> > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > <john.w.raby2.civ at mail.mil>
> > Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> > Ensemble-Stat err (UNCLASSIFIED)
> >
> >
> > John,
> >
> > I would suggest the following steps.
> >
> > (1) Start with a sample forecast file on its 201x201 domain.
> > (2) Use the regrid_data_plane and plot_data_plane tools to test out
> > your proposed 4-km tile.  You'll pass in a grid specification
> > string to regrid_data_plane ... do the regridding ... plot the
> > result using plot_data_plane.  If the forecast data isn't fully
> > contained in the tile, adjust the grid spec and try again.
> > (3) Once you have the grid spec the way you want, edit the
> > Ensemble-Stat config file by setting:
> >    regrid = {
> >      to_grid = "YOUR GRID SPEC GOES HERE";
> >    ... }
> > (4) Run ensemble-stat just like you were doing before, passing the
> > model and obs data on their native grids.  Let Ensemble-Stat do the
> > regridding for you rather than having to run regrid_data_plane
> > manually.
> >
> > One last suggestion, since you're processing precip, it's generally
> > recommended that you use the budget interpolation option: -method
> > BUDGET -width 2
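[Steps (1) and (2) above might look something like this at the command line. It is a sketch only, written as a dry run that echoes the commands rather than executing them; the file names and the APCP_06 field are placeholders, and the Lambert grid spec values are illustrative ones assembled from the ncdump output quoted in this thread, so verify the field order against met-5.2/data/config/README before use.]

```shell
# Dry run (echo) of the suggested regrid-and-plot check.  Remove the
# leading "echo" to actually invoke the MET tools.  GRID_SPEC is a
# hypothetical 51x51 4-km Lambert Conformal tile; check its field
# order against the "to_grid" notes in met-5.2/data/config/README.
GRID_SPEC='lambert 51 51 38.113 -78.112 -76.952 4.0 6371.2 39.032 39.032 N'

echo regrid_data_plane sample_fcst.nc "$GRID_SPEC" fcst_4km.nc \
    -field 'name="APCP_06"; level="(*,*)";' -method BUDGET -width 2

echo plot_data_plane fcst_4km.nc fcst_4km.ps 'name="APCP_06"; level="(*,*)";'
```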
> >
> > Thanks,
> > John
> >
> > On Wed, Jul 11, 2018 at 10:37 AM Raby, John W USA CIV via RT <
> > met_help at ucar.edu> wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> > >
> > > CLASSIFICATION: UNCLASSIFIED
> > >
> > > John -
> > >
> > > Thanks for confirming the grid spec info. So, if I use that info
> > > in regrid_data_plane to regrid the forecast to a 4km grid, can I
> > > then run MET Ensemble-Stat without regridding and use the
> > > regridded fcst file and the 4km precip observations files as
> > > inputs or do I have to use regridding again?
> > >
> > > R/
> > > John
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT
> > > [mailto:met_help at ucar.edu]
> > > Sent: Wednesday, July 11, 2018 10:18 AM
> > > To: Raby, John W CIV USARMY RDECOM ARL (US)
> > > <john.w.raby2.civ at mail.mil>
> > > Subject: Re: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET V5.2
> > > Ensemble-Stat err (UNCLASSIFIED)
> > >
> > >
> > > John,
> > >
> > > Yes, that all sounds good to me.  Just give it a try using
> > > regrid_data_plane.  And then run the output through
> > > plot_data_plane to see how it looks.
> > >
> > > Actually, I'd suggest regridding a sample *forecast* file to that
> > > new domain... and running that through plot_data_plane to see how
> > > it looks.  You can play around with it however you'd like.
> > > Perhaps increasing from 51x51 to something slightly larger to
> > > make sure your forecast data is fully contained inside your new
> > > verification domain.
> > >
> > > Thanks,
> > > John
> > >
> > > On Wed, Jul 11, 2018 at 9:08 AM Raby, John W USA CIV via RT <
> > > met_help at ucar.edu> wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=86119 >
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > > John -
> > > >
> > > > I did ncdump on the output of Pcp-Combine which is the forecast
> > > > grid of accum precip. I noticed that the grid specifications
> > > > which appear in this dump appear to be those more closely
> > > > matching those you referred to in the README file. See attached
> > > > text file which is the output of the ncdump. I printed the
> > > > projection info below:
> > > >
> > > > :Projection = "Lambert Conformal" ;
> > > >                 :scale_lat_1 = "39.032000" ;
> > > >                 :scale_lat_2 = "39.032000" ;
> > > >                 :lat_pin = "38.113000" ;
> > > >                 :lon_pin = "-78.112000" ;
> > > >                 :x_pin = "0.000000" ;
> > > >                 :y_pin = "0.000000" ;
> > > >                 :lon_orient = "-76.952000" ;
> > > >                 :d_km = "1.000000" ;
> > > >                 :r_km = "6371.200000" ;
> > > >                 :nx = "204" ;
> > > >                 :ny = "204 grid_points" ;
> > > >
> > > >
> > > > So, for the fcst domain I now have what looks like Nx, Ny,
> > > > lon_orient, D_km, R_km from the resemblance with the README
> > > > spec names.
> > > >
> > > > Can I assume that "standard_parallel_1" is the same as
> > > > :scale_lat_1 = "39.032000" above and that "standard_parallel_2"
> > > > is the same as :scale_lat_2 = "39.032000" above?
> > > >
> > > > Is lat_ll the same as :lat_pin = "38.113000" and lon_ll the
> > > > same as :lon_pin = "-78.112000"?
> > > >
> > > > So, to perform the regrid per your suggestion, I would set
> > > > "d_km" to 4 vice 1 and "Nx" and "Ny" to 51 (204/4) to create a
> > > > domain the same size as the fcst domain with a grid resolution
> > > > of 4km. Does this sound right? All the other required specs tie
> > > > the geographic location to that of my fcst domain, so I would
> > > > use the same values as are shown above, correct?
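[Assembled from the ncdump attributes quoted above, the corresponding regrid entry in the Ensemble-Stat config file might look roughly like this. A sketch only: the exact field order of a "lambert" grid specification, and the trailing "N" hemisphere flag, are assumptions to be confirmed against the "to_grid" notes in met-5.2/data/config/README.]

```
regrid = {
   to_grid = "lambert 51 51 38.113 -78.112 -76.952 4.0 6371.2 39.032 39.032 N";
   method  = BUDGET;
   width   = 2;
}
```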
> > > >
> > > > Thanks.
> > > >
> > > > R/
> > > > John
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: Raby, John W CIV USARMY RDECOM ARL (US)
> > > > Sent: Tuesday, July 10, 2018 4:48 PM
> > > > To: 'met_help at ucar.edu' <met_help at ucar.edu>
> > > > Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #86119] MET
> > > > V5.2 Ensemble-Stat err (UNCLASSIFIED)
> > > >
> > > > CLASSIFICATION: UNCLASSIFIED
> > > >
> > > > John -
> > > >
> > > > I've been using ncdump on the WRF geo_em file, the met_em file
> > > > and the wrfout file (all NetCDF) and I can't locate the grid
> > > > specification values required for the regridding. How do you
> > > > find those specs?
> > > >
> > > > I'm pretty sure that for the fcst file the Nx = 204 and Ny is
> > > > 204 and for the 4km precip file Nx is 1121 and Ny is 881. My
> > > > target 3rd domain would be the 204 X 204. I have the lat/long
> > > > extents of the fcst domain using the corner_lats and
> > > > corner_longs info in the dump of the geo_em file.
> > > >
> > > > Doing searches in the dump files for the specs listed in the
> > > > README file does not produce hits. Not sure where to turn to
> > > > for this.  Maybe WRF specs are not named the same?
> > > >
> > > > R/
> > > > John
> > > >

------------------------------------------------


More information about the Met_help mailing list