[Met_help] [rt.rap.ucar.edu #89075] History for MET v8 updates needed?

John Halley Gotway via RT met_help at ucar.edu
Tue Jul 9 12:06:44 MDT 2019


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

Hi,

I've started using the METv8 system built on the WCOSS Dell (mars/venus)
lately, and have been getting some weird output.  While trying to
understand my problem, I looked at the known issues page
<https://dtcenter.org/met/users/support/known_issues/METv8.0/index.php>
and realized that our build might not have the latest patches.  I'm
guessing my symptoms won't be solved with an update, but could we get the
latest build installed just in case?

Thanks,

Matthew Pyle

-rwxr-xr-x 1 Julie.Prestopnik emcverf 17064080 Oct 22 18:56
/gpfs/dell2/emc/verification/noscrub/Julie.Prestopnik/met/8.0/bin/pcp_combine


----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: MET v8 updates needed?
From: Julie Prestopnik
Time: Mon Feb 25 14:34:24 2019

Hi Matthew.

My apologies for the delay in updating the met-8.0 package with the
latest patches on the WCOSS machines.  I have just updated met-8.0 to
include the latest set of patches on venus and hope to update the other
machines soon.

Please follow up with a new ticket if you encounter any other problems
or have any questions.  Thanks!

Julie




------------------------------------------------
Subject: MET v8 updates needed?
From: "matthew.pyle at noaa.gov"
Time: Tue Feb 26 07:41:25 2019

Hi Julie,

I should have done this test prior to your update, but I'm still seeing
some weird behavior definitely coming from pcp_combine on the Dell.

I have collected inputs into
/gpfs/dell2/emc/modeling/noscrub/Matthew.Pyle/met_hold/ on venus.

In that directory I did the following:

mems="01 02 03 04 05 06 07 08 09 10"
for mem in $mems
do
pcp_combine -add prcip.m${mem}.t00z.conus.f03 3
prcip.m${mem}.t00z.conus.f06 3 6h_mem_${mem}.nc
done

In the output netCDF files, mems 01-04 display the 6 h total as
white-noise-looking gibberish, while mems 05-10 look coherent and
physical.  I'm not sure if it is related, but mems 01-04 cover a smaller
portion of the grid with valid data than mems 05-10, though all members
have undefined portions of the grid.

I put some of the netCDF output files onto Theia under
/scratch4/NCEPDEV/fv3-cam/noscrub/Matthew.Pyle

From the Dell with MET v8:

6h_mem_01.nc
6h_mem_05.nc

From the IBM with MET v6:

ibm_6h_mem_01.nc
ibm_6h_mem_05.nc

The "mem_01" files look very different, while the "mem_05" files look
nearly identical when plotted.

Any help would be appreciated!

Thanks,

Matt


------------------------------------------------
Subject: MET v8 updates needed?
From: Julie Prestopnik
Time: Tue Feb 26 10:45:27 2019

Thank you for bringing this to our attention.  I will look into it and
follow up with you once I have more information.

Thanks,
Julie


------------------------------------------------
Subject: MET v8 updates needed?
From: John Halley Gotway
Time: Wed Feb 27 13:58:25 2019

Hi Matt,

Julie and I spent some time trying to debug this issue on venus today.
Unfortunately, we don't have an easy solution to report.

- This isn't a bug specifically in pcp_combine.  MET's library code is
reading bad data values from the members 1 - 5 files.  So we've been
using plot_data_plane to test... just read data from member 1 and plot
it.
- When we pull the GRIB2 files down to our NCAR machine and run
plot_data_plane, all the plots look good.
- When we convert from GRIB2 to GRIB1 (cnvgrib -g21), copy the GRIB1
files up to WCOSS, and rerun plot_data_plane, the plots look good.
- When we run wgrib2 directly on WCOSS to read the GRIB2 file in and
write it out to another GRIB2 file, the plots look good.
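
For reference, those checks amount to something like the following
sketch.  The record name "APCP" and accumulation level "A3" are
assumptions about the precip fields in these files, not values taken
from this ticket:

  # Can MET read the raw GRIB2 record cleanly?
  plot_data_plane prcip.m01.t00z.conus.f03 mem01_f03.ps 'name="APCP"; level="A3";'

  # Convert GRIB2 to GRIB1 with cnvgrib and plot the converted record:
  cnvgrib -g21 prcip.m01.t00z.conus.f03 prcip.m01.t00z.conus.f03.grb1
  plot_data_plane prcip.m01.t00z.conus.f03.grb1 mem01_f03_grb1.ps 'name="APCP"; level="A3";'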

So there's something very weird going on here.

If we can investigate the difference between the input to and output
from
wgrib2, perhaps that'll identify what's going on.

As a very temporary, kludgy workaround on WCOSS, you could add in calls
to wgrib2 like this:

*mems="01 02 03 04 05 06 07 08 09 10"for mem in $memsdo*
*$WGRIB2 -grib_out prcip.m${mem}.t00z.conus.f03.wgrib2
prcip.m${mem}.t00z.conus.f03*

*$WGRIB2 -grib_out prcip.m${mem}.t00z.conus.f06.wgrib2
prcip.m${mem}.t00z.conus.f06*


*pcp_combine -add prcip.m${mem}.t00z.conus.f03.wgrib2
3prcip.m${mem}.t00z.conus.f06.wgrib2 3 6h_mem_${mem}.ncdone*

I would expect that that'd result in good output.  Is there something
special about members 1 through 5?  Do you know of any differences that
might point us in the right direction?

Thanks,
John


------------------------------------------------
Subject: MET v8 updates needed?
From: "matthew.pyle at noaa.gov"
Time: Thu Feb 28 12:02:27 2019

Hi John,

Thanks for the feedback.  You were correct that recreating the files by
pushing them through wgrib2 did allow pcp_combine to produce
proper-looking output on the Dell.  Unfortunately, further
investigation into what is different between the problematic and
acceptable GRIB2 files has been hampered by the development Dell being
down for an upgrade today.  I will be back in touch once I know a bit
more.

Thanks,

Matt


------------------------------------------------
Subject: MET v8 updates needed?
From: "matthew.pyle at noaa.gov"
Time: Mon Mar 04 06:52:22 2019

Hi John,

A little more information for you.  I think the key aspect of the
preprocessing through wgrib2 is that it writes out the file with simple
packing.  Packing it with the same (jpeg) option as the original file
returns a bit-identical file.  Packing it using a complex packing
option produced a file that also was problematic for the MET code on
the Dell.  My suspicion is that the issue is somehow related to the G2
library.
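
For reference, these repacking tests can be reproduced with wgrib2's
-set_grib_type option.  This is a sketch; the file names and the exact
complex-packing variant are illustrative:

  # Simple packing (MET read the resulting file correctly):
  wgrib2 prcip.m01.t00z.conus.f03 -set_grib_type simple -grib_out f03.simple.grb2
  # Jpeg packing (same as the original; the output is bit-identical):
  wgrib2 prcip.m01.t00z.conus.f03 -set_grib_type jpeg -grib_out f03.jpeg.grb2
  # Complex packing (also problematic for MET on the Dell):
  wgrib2 prcip.m01.t00z.conus.f03 -set_grib_type complex2 -grib_out f03.complex.grb2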

I'm grasping at straws a bit here, but did notice some related
libraries that use different versions between the v8 build on the Dell
and the v6 build on the IBM (which doesn't have the same issues
processing the files).

v8/Dell:

libpng-1.6.34
zlib-1.2.11

v6/IBM:

libpng-1.2.34
zlib-1.2.6

The older versions used on the IBM look to agree with the NCEP GRIB2
page <https://www.nco.ncep.noaa.gov/pmb/codes/GRIB2/>.
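
(If the compression libraries are dynamically linked, a quick way to
confirm which versions a MET binary actually picks up is something like
the sketch below; if they are statically linked, nothing will show and
the versions come from the build script.)

  ldd /gpfs/dell2/emc/verification/noscrub/Julie.Prestopnik/met/8.0/bin/pcp_combine | grep -Ei 'png|jasper|libz'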

Would it make sense to produce an alternate v8 build using the older
libraries?  It would eliminate a possibility if nothing else.

-Matt




------------------------------------------------
Subject: MET v8 updates needed?
From: Julie Prestopnik
Time: Mon Mar 11 11:06:14 2019

Hi Matthew.

I can't seem to find what version of the jpeg library is on the Dell
and on the IBM.  Would you be able to help with that?

I installed jpeg-6b on the Dell and then reinstalled jasper, then
g2clib, then MET, but that still does not work.  Perhaps a different
version might, but it would be useful to know what is currently being
used on the Dell and on the IBM.
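
As a sketch, one way to list candidate modules on either machine is
something like the following (module names vary by system, as the
reply below shows):

  module -t avail 2>&1 | grep -iE 'jpeg|jasper|png|zlib'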

Thanks,
Julie


------------------------------------------------
Subject: MET v8 updates needed?
From: "matthew.pyle at noaa.gov"
Time: Mon Mar 11 12:18:46 2019

Hi Julie,

I was thinking that the MET code used the external libraries included
with its source code.  Does it actually point at libraries on the
system?

Here are module options I see on the WCOSS Dell system:

libpng/1.2.44
libpng/1.2.59
jasper/1.900.1
jasper/1.900.29
zlib/1.2.11

IBM:
png/v1.2.44(default)
jasper/v1.900.1(default)
z/v1.2.6(default)

I feel like I might not be answering your question; if so, please feel
free to ask again!

-Matt


------------------------------------------------
Subject: MET v8 updates needed?
From: John Halley Gotway
Time: Fri May 24 10:47:36 2019

Julie and Matt,

I just wanted to follow up on this ticket.  We had found that on the
WCOSS Dell machines (venus in particular), MET wasn't reading GRIB2
data well.  After some debugging and talking to NCO, Boi Vuong
confirmed that we should be compiling the zlib/libpng/jasper libraries
using GNU's gcc compiler even when we use Intel's icc compiler for
everything else.  Compiling those three libraries with icc just doesn't
work well.

So we'll need to update the compilation script Julie's been running to
do exactly that.  Matt, I strongly suspect that this is the source of
the problems you've encountered.
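
A minimal sketch of what that change amounts to; the versions, install
prefix, and configure arguments here are illustrative, not the exact
compilation script:

  # Build the compression libraries with gcc...
  PREFIX=/path/to/external_libs
  (cd zlib-1.2.11    && CC=gcc ./configure --prefix=$PREFIX && make install)
  (cd libpng-1.6.34  && ./configure CC=gcc --prefix=$PREFIX && make install)
  (cd jasper-1.900.1 && ./configure CC=gcc --prefix=$PREFIX && make install)

  # ...then build MET itself with the Intel compilers as before,
  # pointing the usual MET_* configure variables at those paths:
  cd met-8.0
  ./configure CC=icc CXX=icpc && make install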

Thanks,
John


------------------------------------------------
Subject: MET v8 updates needed?
From: "matthew.pyle at noaa.gov"
Time: Fri May 24 14:34:36 2019

Thanks for the update, John.  Good to hear that the GRIB2 issue might
be understood now!

-Matt


------------------------------------------------

