[Met_help] [rt.rap.ucar.edu #91544] History for point_stat seg faulting

John Halley Gotway via RT met_help at ucar.edu
Fri Oct 11 16:23:40 MDT 2019


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

Hey John,

I'm trying to extend the production of vertical raob verification plots
using point_stat and stat_analysis, like we did together for winds, but for
relative humidity this time.  But when I run point_stat, it seg faults
without much explanation:

DEBUG 2:
--------------------------------------------------------------------------------
DEBUG 2:
DEBUG 2: Reading data for relhum/pre_001013.
DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0 climatology mean
levels, and 0 climatology standard deviation levels.
DEBUG 2:
DEBUG 2:
--------------------------------------------------------------------------------
DEBUG 2:
DEBUG 2: Searching 4680328 observations from 617 messages.
DEBUG 7:     tbl dims: messge_type: 1  station id: 617  valid_time: 1

run_stats.sh: line 26: 40818 Segmentation fault      point_stat PYTHON_NUMPY
${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log ./out/point_stat.log
-obs_valid_beg 20010101 -obs_valid_end 20200101

From my log file:

607 DEBUG 2:
608 DEBUG 2: Searching 4680328 observations from 617 messages.
609 DEBUG 7:     tbl dims: messge_type: 1  station id: 617  valid_time: 1

Any help would be much appreciated.

Justin

Justin Tsu
Marine Meteorology Division
Data Assimilation/Mesoscale Modeling
Building 704 Room 212
Naval Research Laboratory, Code 7531
7 Grace Hopper Avenue
Monterey, CA 93943-5502

Ph. (831) 656-4111



----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: point_stat seg faulting
From: John Halley Gotway
Time: Thu Aug 15 17:07:30 2019

Justin,

Well, that doesn't seem to be very helpful of Point-Stat at all.  There
isn't much jumping out at me from the log messages you sent.  In fact, I
hunted around for the DEBUG(7) log message but couldn't find where in the
code it's being written.  Are you able to send me some sample data to
replicate this behavior?

I'd need to know...
- What version of MET you are running.
- A copy of your Point-Stat config file.
- The python script that you're running.
- The input file for that python script.
- The NetCDF point observation file you're passing to Point-Stat.

If I can replicate the behavior here, it should be easy to run it in the
debugger and figure it out.

You can post data to our anonymous ftp site as described in "How to send us
data":
https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk

Thanks,
John


------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Thu Aug 15 19:00:13 2019

Hey John,

I've put my data in tsu_data_20190815/ under met_help.

I am running met-8.0/met-8.0-with-grib2-support and have provided everything
on the list you asked for.  Let me know if you're able to replicate it.

Justin



------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Fri Aug 16 12:38:10 2019

Hey John,

I figured out that the seg fault had to do with an incorrect version of MET
I was using.  point_stat now runs without any seg faults.  It is failing
because I am missing some default entries in the message_type_group_map
dictionary that I am not necessarily using, such as "WATERSF".

Justin



------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Aug 16 13:15:42 2019

Justin,

Great, thanks for sending me the sample data.  Yes, I was able to replicate
the segfault.  The good news is that this is caused by a simple typo that's
easy to fix.  If you look in the "obs.field" entry of the relhumConfig
file, you'll see an empty string for the last field listed:

obs = {
   field = [
      ...
      {name = "dptd"; level = ["P988-1006"];},
      {name = "";     level = ["P1007-1013"];}
   ];
}

If you change that empty string to "dptd", the segfault will go away:

      {name = "dptd"; level = ["P1007-1013"];}

Rerunning met-8.0 with that change, Point-Stat ran to completion (in 2
minutes 48 seconds on my desktop machine), but it produced 0 matched pairs.
They were discarded because of the valid times (seen using the -v 3 command
line option to Point-Stat).  The obs file you sent is named
"raob_2015020412.nc", but the actual times in that file are for
"20190426_120000":

ncdump -v hdr_vld_table raob_2015020412.nc

 hdr_vld_table = "20190426_120000" ;

So please be aware of that discrepancy.  To just produce some matched
pairs, I told Point-Stat to use the valid times of the data:

met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc relhumConfig \
   -outdir out -v 3 -log run_ps.log \
   -obs_valid_beg 20190426_120000 -obs_valid_end 20190426_120000

But I still get 0 matched pairs.  This time, it's because of bad forecast
values:

   DEBUG 3: Rejected: bad fcst value = 55

Taking a step back... let's run one of these fields through
plot_data_plane, which results in an error:

met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps \
   'name="./read_NRL_binary.py ./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'

ERROR  : DataPlane::two_to_one() -> range check error: (Nx, Ny) = (97, 97), (x, y) = (97, 0)

While the numpy object is 97x97, the grid is specified as being 118x118 in
the python script ('nx': 118, 'ny': 118).

Just to get something working, I modified the nx and ny in the python
script:
       'nx': 97,
       'ny': 97,
Rerunning again, I still didn't get any matched pairs.

So I'd suggest...
- Fix the typo in the config file.
- Figure out the discrepancy between the obs file name timestamp and the
data in that file.
- Make sure the grid information in the python script is consistent with
the data it returns (see the sketch at the end of this message).

Obviously, though, we don't want the code to be segfaulting under any
condition.  So next, I tested using met-8.1 with that empty string.  This
time it does run with no segfault, but prints a warning about the empty
string.

Hope that helps.
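
To illustrate that last point, here is a minimal sketch of the Python
embedding convention (this is not your actual read_NRL_binary.py, and the
attribute values below are placeholders); the only point is that the
'nx'/'ny' entries in the grid attributes should match the shape of the
numpy array handed back to MET:

import numpy as np

# Stand-in for the 97x97 field that the real script reads from the flat file.
met_data = np.zeros((97, 97), dtype=np.float64)

# Derive the grid dimensions from the data itself rather than hard-coding 118.
ny, nx = met_data.shape

attrs = {
   'valid':     '20150121_120000',   # placeholder timestamps; the real script
   'init':      '20150121_000000',   # should derive these from the input file name
   'lead':      '180000',
   'accum':     '000000',
   'name':      'relhum',
   'long_name': 'relative humidity',
   'level':     'P1013',
   'units':     '%',
   'grid': {
      # keep the projection parameters from the original script here,
      # but make sure nx/ny agree with met_data.shape:
      'nx': nx,                      # 97, not 118
      'ny': ny,                      # 97, not 118
   },
}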

Thanks,
John


------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Thu Aug 29 17:06:28 2019

Thanks John.

Sorry it's taken me such a long time to get to this.  It's nearing the
end of FY19 so I have been finalizing several transition projects and
haven’t had much time to work on MET recently.  I just picked this
back up and have loaded a couple new modules.  Here is what I have to
work with now:

1) intel/xe_2013-sp1-u1
2) netcdf-local/netcdf-met
3) met-8.1/met-8.1a-with-grib2-support
4) ncview-2.1.5/ncview-2.1.5
5) udunits/udunits-2.1.24
6) gcc-6.3.0/gcc-6.3.0
7) ImageMagicK/ImageMagick-6.9.0-10
8) python/anaconda-7-15-15-save.6.6.2017


Running
> point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3
-obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out

I get many matched pairs.  Here is a sample of what the log file looks
like for one of the pressure ranges I am verifying on:

15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-376, for
observation type radiosonde, over region FULL, for interpolation
method NEAREST(1), using 98 pairs.
15258 DEBUG 3: Number of matched pairs  = 98
15259 DEBUG 3: Observations processed   = 4680328
15260 DEBUG 3: Rejected: SID exclusion  = 0
15261 DEBUG 3: Rejected: obs type       = 3890030
15262 DEBUG 3: Rejected: valid time     = 0
15263 DEBUG 3: Rejected: bad obs value  = 0
15264 DEBUG 3: Rejected: off the grid   = 786506
15265 DEBUG 3: Rejected: topography     = 0
15266 DEBUG 3: Rejected: level mismatch = 3694
15267 DEBUG 3: Rejected: quality marker = 0
15268 DEBUG 3: Rejected: message type   = 0
15269 DEBUG 3: Rejected: masking region = 0
15270 DEBUG 3: Rejected: bad fcst value = 0
15271 DEBUG 3: Rejected: duplicates     = 0
15272 DEBUG 2: Computing Continuous Statistics.
15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold
>=0, observation filtering threshold >=0, and field logic UNION.
15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold
>=5.0, observation filtering threshold >=5.0, and field logic UNION.
15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold
>=10.0, observation filtering threshold >=10.0, and field logic UNION.
15276 DEBUG 2: Computing Scalar Partial Sums.
15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold
>=0, observation filtering threshold >=0, and field logic UNION.
15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold
>=5.0, observation filtering threshold >=5.0, and field logic UNION.
15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold
>=10.0, observation filtering threshold >=10.0, and field logic UNION.
15280 DEBUG 2:
15281 DEBUG 2:
--------------------------------------------------------------------------------

I am going to work on processing these point_stat files to create the
vertical raob plots we discussed.  I remember us talking about the partial
sums file.  Why did we choose to go the route of producing partial sums and
then feeding those into series analysis to generate bias and MSE?  It looks
like bias and MSE both exist within the CNT line type (MBIAS and MSE).


Justin


------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Aug 30 09:46:52 2019

Justin,

We wrote the SL1L2 partial sums from Point-Stat because they can be
aggregated together by the stat-analysis tool over multiple days or cases.

If you're interested in continuous statistics from Point-Stat, I'd
recommend writing the CNT line type (which has the stats computed for that
single run) and the SL1L2 line type (so that you can aggregate them
together in stat-analysis or METviewer).

The other alternative is looking at the average of the daily statistics
scores.  For RMSE, the average of the daily RMSE is equal to the aggregated
score... as long as the number of matched pairs remains constant day to
day.  But if today you have 98 matched pairs and tomorrow you have 105,
then tomorrow's score will have slightly more weight.  The SL1L2 lines are
aggregated as weighted averages, where the TOTAL column is the weight.  And
then stats (like RMSE and MSE) are recomputed from those aggregated
scores.  Generally, the statisticians recommend this method over the mean
of the daily scores.  Neither is "wrong"; they just give you slightly
different information.
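
If it helps to see the difference, here is a small illustration (the
numbers are made up, not taken from your data) of averaging the daily RMSE
values versus recomputing RMSE from SL1L2 partial sums aggregated with the
TOTAL column as the weight:

import numpy as np

# Made-up SL1L2 partial sums for two days: TOTAL is the pair count, and
# FFBAR/OOBAR/FOBAR are per-pair means of f*f, o*o, and f*o.
days = [
    {'total': 98,  'ffbar': 4.1, 'oobar': 3.9, 'fobar': 3.7},
    {'total': 105, 'ffbar': 5.0, 'oobar': 4.6, 'fobar': 4.4},
]

def rmse_from_sl1l2(s):
    # MSE = mean((f-o)^2) = FFBAR - 2*FOBAR + OOBAR
    return np.sqrt(s['ffbar'] - 2.0 * s['fobar'] + s['oobar'])

# Option 1: mean of the daily RMSE values (each day weighted equally).
mean_of_daily = np.mean([rmse_from_sl1l2(d) for d in days])

# Option 2: aggregate the partial sums first (weighted by TOTAL), then
# recompute RMSE from the aggregate -- this is what stat-analysis does.
weights = [d['total'] for d in days]
agg = {k: np.average([d[k] for d in days], weights=weights)
       for k in ('ffbar', 'oobar', 'fobar')}
agg_rmse = rmse_from_sl1l2(agg)

print(mean_of_daily, agg_rmse)   # close, but not identical, when TOTAL varies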

Thanks,
John


------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Fri Aug 30 12:36:07 2019

So if I understand what you're saying correctly, if I wanted an average of
24-hour forecasts over a month-long run, then I would use the SL1L2 output
to aggregate and produce this average?  Whereas if I used CNT, this would
just provide me with ~30 individual (one per day over a month) 24-hour
forecast verifications?

On a side note, did we ever go over how to plot the SL1L2 MSE and biases?
I am forgetting if we used stat_analysis to produce a plot or if the plot
you showed me was just something you post-processed using python or
whatnot.

Justin



------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Aug 30 13:45:43 2019

Justin,

Sounds about right.  Each time you run Grid-Stat or Point-Stat you can
write the CNT output line type, which contains stats like MSE, ME, MAE, and
RMSE.  And I'd recommend that you write the SL1L2 line type as well.

Then you'd run a stat_analysis job like this:

stat_analysis -lookin /path/to/stat/data -job aggregate_stat \
   -line_type SL1L2 -out_line_type CNT \
   -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat cnt_out.stat

This job reads any .stat files it finds in "/path/to/stat/data", reads the
SL1L2 line type, and for each unique combination of the FCST_VAR, FCST_LEV,
and FCST_LEAD columns, it aggregates those SL1L2 partial sums together and
writes out the corresponding CNT line type to the output file named
cnt_out.stat.
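
As for plotting the aggregated values, a short script along these lines is
one option.  This is just a rough sketch, assuming pandas and matplotlib
are available and that the header row of the -out_stat file names the CNT
columns (FCST_LEAD, FCST_LEV, MBIAS, MSE, and so on):

import pandas as pd
import matplotlib.pyplot as plt

# Read the aggregated CNT output written by the stat_analysis job above.
df = pd.read_csv('cnt_out.stat', delim_whitespace=True)

# One bias profile per lead time: MBIAS on the x-axis, pressure layer on
# the y-axis.  Note that FCST_LEV is a string like "P425-376", so you may
# want to sort the rows by pressure before plotting.
for lead, grp in df.groupby('FCST_LEAD'):
    plt.plot(grp['MBIAS'], grp['FCST_LEV'], marker='o', label='lead ' + str(lead))

plt.xlabel('MBIAS')
plt.ylabel('Pressure layer (FCST_LEV)')
plt.legend()
plt.savefig('mbias_profile.png')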

John

On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> So if I understand what you're saying correctly, then if I wanted to
an
> average of 24 hour forecasts over a month long run, then I would use
the
> SL1L2 output to aggregate and produce this average?  Whereas if I
used CNT,
> this would just provide me ~30 individual (per day over a month) 24
hour
> forecast verifications?
>
> On a side note, did we ever go over how to plot the SL1L2 MSE and
biases?
> I am forgetting if we used stat_analysis to produce a plot or if the
plot
> you showed me was just something you guys post processed using
python or
> whatnot.
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, August 30, 2019 8:47 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> We wrote the SL1L2 partial sums from Point-Stat because they can be
> aggregated together by the stat-analysis tool over multiple days or
cases.
>
> If you're interested in continuous statistics from Point-Stat, I'd
> recommend writing the CNT line type (which has the stats computed
for that
> single run) and the SL1L2 line type (so that you can aggregate them
> together in stat-analysis or METviewer).
>
> The other alternative is looking at the average of the daily
statistics
> scores.  For RMSE, the average of the daily RMSE is equal to the
aggregated
> score... as long as the number of matched pairs remains constant day
to
> day.  But if one today you have 98 matched pairs and tomorrow you
have 105,
> then tomorrow's score will have slightly more weight.  The SL1L2
lines are
> aggregated as weighted averages, where the TOTAL column is the
weight.  And
> then stats (like RMSE and MSE) are recomputed from those aggregated
> scores.  Generally, the statisticians recommend this method over the
mean
> of the daily scores.  Neither is "wrong", they just give you
slightly
> different information.
>
> Thanks,
> John
>
> On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John.
> >
> > Sorry it's taken me such a long time to get to this.  It's nearing
the
> end
> > of FY19 so I have been finalizing several transition projects and
haven’t
> > had much time to work on MET recently.  I just picked this back up
and
> have
> > loaded a couple new modules.  Here is what I have to work with
now:
> >
> > 1) intel/xe_2013-sp1-u1
> > 2) netcdf-local/netcdf-met
> > 3) met-8.1/met-8.1a-with-grib2-support
> > 4) ncview-2.1.5/ncview-2.1.5
> > 5) udunits/udunits-2.1.24
> > 6) gcc-6.3.0/gcc-6.3.0
> > 7) ImageMagicK/ImageMagick-6.9.0-10
> > 8) python/anaconda-7-15-15-save.6.6.2017
> >
> >
> > Running
> > > point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3
> > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> >
> > I get many matched pairs.  Here is a sample of what the log file
looks
> > like for one of the pressure ranges I am verifying on:
> >
> > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-376,
for
> > observation type radiosonde, over region FULL, for interpolation
method
> > NEAREST(1), using 98 pairs.
> > 15258 DEBUG 3: Number of matched pairs  = 98
> > 15259 DEBUG 3: Observations processed   = 4680328
> > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > 15262 DEBUG 3: Rejected: valid time     = 0
> > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > 15265 DEBUG 3: Rejected: topography     = 0
> > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > 15267 DEBUG 3: Rejected: quality marker = 0
> > 15268 DEBUG 3: Rejected: message type   = 0
> > 15269 DEBUG 3: Rejected: masking region = 0
> > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > 15271 DEBUG 3: Rejected: duplicates     = 0
> > 15272 DEBUG 2: Computing Continuous Statistics.
> > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold >=0,
> > observation filtering threshold >=0, and field logic UNION.
> > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold >=0,
> > observation filtering threshold >=0, and field logic UNION.
> > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > 15280 DEBUG 2:
> > 15281 DEBUG 2:
> >
>
--------------------------------------------------------------------------------
> >
> > I am going to work on processing these point stat files to create
those
> > vertical raob plots we had a discussion about.  I remember us
talking
> about
> > the partial sums file.  Why did we choose to go the route of
producing
> > partial sums then feeding that into series analysis to generate
bias and
> > MSE?  It looks like bias and MSE both exist within the CNT line
type
> (MBIAS
> > and MSE)?
> >
> >
> > Justin
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 16, 2019 12:16 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Great, thanks for sending me the sample data.  Yes, I was able to
> replicate
> > the segfault.  The good news is that this is caused by a simple
typo
> that's
> > easy to fix.  If you look in the "obs.field" entry of the
relhumConfig
> > file, you'll see an empty string for the last field listed:
> >
> > *obs = {    field = [*
> >
> >
> >
> > *         ...        {name = "dptd";level = ["P988-1006"];},
> {name =
> > "";level = ["P1007-1013"];}    ];*
> > If you change that empty string to "dptd", the segfault will go
away:*
> > {name = "dptd";level = ["P1007-1013"];}*
> > Rerunning met-8.0 with that change, Point-Stat ran to completion
(in 2
> > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > pairs.  They were discarded because of the valid times (seen using
-v 3
> > command line option to Point-Stat).  The ob file you sent is named
"
> > raob_2015020412.nc" but the actual times in that file are for
> > "20190426_120000":
> >
> > *ncdump -v hdr_vld_table raob_2015020412.nc
<http://raob_2015020412.nc>*
> >
> > * hdr_vld_table =  "20190426_120000" ;*
> >
> > So please be aware of that discrepancy.  To just produce some
matched
> > pairs, I told Point-Stat to use the valid times of the data:
> > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > <http://raob_2015020412.nc> relhumConfig \*
> > * -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000
> > -obs_valid_end 20190426_120000*
> >
> > But I still get 0 matched pairs.  This time, it's because of bad
forecast
> > values:
> >    *DEBUG 3: Rejected: bad fcst value = 55*
> >
> > Taking a step back... let's run one of these fields through
> > plot_data_plane, which results in an error:
> > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps <http://plot.ps>
> > 'name="./read_NRL_binary.py
> >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > ERROR  : DataPlane::two_to_one() -> range check error: (Nx, Ny) =
(97,
> 97),
> > (x, y) = (97, 0)
> >
> > While the numpy object is 97x97, the grid is specified as being
118x118
> in
> > the python script ('nx': 118, 'ny': 118).
> >
> > Just to get something working, I modified the nx and ny in the
python
> > script:
> >        'nx':97,
> >        'ny':97,
> > Rerunning again, I still didn't get any matched pairs.
> >
> > So I'd suggest...
> > - Fix the typo in the config file.
> > - Figure out the discrepancy between the obs file name timestamp
and the
> > data in that file.
> > - Make sure the grid information is consistent with the data in
the
> python
> > script.
> >
> > Obviously though, we don't want the code to be segfaulting in any
> > condition.  So next, I tested using met-8.1 with that empty
string.  This
> > time it does run with no segfault, but prints a warning about the
empty
> > string.
> >
> > Hope that helps.
> >
> > Thanks,
> > John
> >
> > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Hey John,
> > >
> > > I've put my data in tsu_data_20190815/ under met_help.
> > >
> > > I am running  met-8.0/met-8.0-with-grib2-support and have
provided
> > > everything
> > > on that list you've provided me.  Let me know if you're able to
> replicate
> > > it
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Thursday, August 15, 2019 4:08 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Well that doesn't seem to be very helpful of Point-Stat at all.
There
> > > isn't much jumping out at me from the log messages you sent.  In
fact,
> I
> > > hunted around for the DEBUG(7) log message but couldn't find
where in
> the
> > > code it's being written.  Are you able to send me some sample
data to
> > > replicate this behavior?
> > >
> > > I'd need to know...
> > > - What version of MET are you running.
> > > - A copy of your Point-Stat config file.
> > > - The python script that you're running.
> > > - The input file for that python script.
> > > - The NetCDF point observation file you're passing to Point-
Stat.
> > >
> > > If I can replicate the behavior here, it should be easy to run
it in
> the
> > > debugger and figure it out.
> > >
> > > You can post data to our anonymous ftp site as described in "How
to
> send
> > us
> > > data":
> > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > >        Queue: met_help
> > > >      Subject: point_stat seg faulting
> > > >        Owner: Nobody
> > > >   Requestors: justin.tsu at nrlmry.navy.mil
> > > >       Status: new
> > > >  Ticket <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > >
> > > >
> > > > Hey John,
> > > >
> > > >
> > > >
> > > > I'm trying to extrapolate the production of vertical raob
> verification
> > > > plots
> > > > using point_stat and stat_analysis like we did together for
winds but
> > for
> > > > relative humidity now.  But when I run point_stat, it seg
faults
> > without
> > > > much explanation
> > > >
> > > >
> > > >
> > > > DEBUG 2:
> > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > ----
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > >
> > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
climatology
> > > mean
> > > > levels, and 0 climatology standard deviation levels.
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2:
> > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > ----
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > >
> > > > DEBUG 7:     tbl dims: messge_type: 1  station id: 617
valid_time: 1
> > > >
> > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > PYTHON_NUMPY
> > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > ./out/point_stat.log
> > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > From my log file:
> > > >
> > > > 607 DEBUG 2:
> > > >
> > > > 608 DEBUG 2: Searching 4680328 observations from 617 messages.
> > > >
> > > > 609 DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > valid_time: 1
> > > >
> > > >
> > > >
> > > > Any help would be much appreciated
> > > >
> > > >
> > > >
> > > > Justin
> > > >
> > > >
> > > >
> > > > Justin Tsu
> > > >
> > > > Marine Meteorology Division
> > > >
> > > > Data Assimilation/Mesoscale Modeling
> > > >
> > > > Building 704 Room 212
> > > >
> > > > Naval Research Laboratory, Code 7531
> > > >
> > > > 7 Grace Hopper Avenue
> > > >
> > > > Monterey, CA 93943-5502
> > > >
> > > >
> > > >
> > > > Ph. (831) 656-4111
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>

------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Fri Aug 30 17:10:37 2019

Thanks John,

This all helps me greatly.  One more question: is there any
information in either the CNT or SL1L2 output that could give me
confidence intervals for each data point?  I'm looking to replicate the
attached plot.  Notice that the individual points could have either a
99, 95, or 90% confidence interval.

Justin

-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Friday, August 30, 2019 12:46 PM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting

Justin,

Sounds about right.  Each time you run Grid-Stat or Point-Stat you can
write the CNT output line type which contains stats like MSE, ME, MAE,
and
RMSE.  And I'd recommend that you also write the SL1L2 line type as
well.

Then you'd run a stat_analysis job like this:

stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
cnt_out.stat

This job reads any .stat files it finds in "/path/to/stat/data", reads
the
SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV, and
FCST_LEAD columns, it'll aggregate those SL1L2 partial sums together
and
write out the corresponding CNT line type to the output file named
cnt_out.stat.

John

On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> So if I understand what you're saying correctly, then if I wanted
an
> average of 24 hour forecasts over a month long run, then I would use
the
> SL1L2 output to aggregate and produce this average?  Whereas if I
used CNT,
> this would just provide me ~30 individual (per day over a month) 24
hour
> forecast verifications?
>
> On a side note, did we ever go over how to plot the SL1L2 MSE and
biases?
> I am forgetting if we used stat_analysis to produce a plot or if the
plot
> you showed me was just something you guys post processed using
python or
> whatnot.
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, August 30, 2019 8:47 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> We wrote the SL1L2 partial sums from Point-Stat because they can be
> aggregated together by the stat-analysis tool over multiple days or
cases.
>
> If you're interested in continuous statistics from Point-Stat, I'd
> recommend writing the CNT line type (which has the stats computed
for that
> single run) and the SL1L2 line type (so that you can aggregate them
> together in stat-analysis or METviewer).
>
> The other alternative is looking at the average of the daily
statistics
> scores.  For RMSE, the average of the daily RMSE is equal to the
aggregated
> score... as long as the number of matched pairs remains constant day
to
> day.  But if today you have 98 matched pairs and tomorrow you
have 105,
> then tomorrow's score will have slightly more weight.  The SL1L2
lines are
> aggregated as weighted averages, where the TOTAL column is the
weight.  And
> then stats (like RMSE and MSE) are recomputed from those aggregated
> scores.  Generally, the statisticians recommend this method over the
mean
> of the daily scores.  Neither is "wrong", they just give you
slightly
> different information.
>
> Thanks,
> John
>
> On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John.
> >
> > Sorry it's taken me such a long time to get to this.  It's nearing
the
> end
> > of FY19 so I have been finalizing several transition projects and
haven’t
> > had much time to work on MET recently.  I just picked this back up
and
> have
> > loaded a couple new modules.  Here is what I have to work with
now:
> >
> > 1) intel/xe_2013-sp1-u1
> > 2) netcdf-local/netcdf-met
> > 3) met-8.1/met-8.1a-with-grib2-support
> > 4) ncview-2.1.5/ncview-2.1.5
> > 5) udunits/udunits-2.1.24
> > 6) gcc-6.3.0/gcc-6.3.0
> > 7) ImageMagicK/ImageMagick-6.9.0-10
> > 8) python/anaconda-7-15-15-save.6.6.2017
> >
> >
> > Running
> > > point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3
> > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> >
> > I get many matched pairs.  Here is a sample of what the log file
looks
> > like for one of the pressure ranges I am verifying on:
> >
> > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-376,
for
> > observation type radiosonde, over region FULL, for interpolation
method
> > NEAREST(1), using 98 pairs.
> > 15258 DEBUG 3: Number of matched pairs  = 98
> > 15259 DEBUG 3: Observations processed   = 4680328
> > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > 15262 DEBUG 3: Rejected: valid time     = 0
> > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > 15265 DEBUG 3: Rejected: topography     = 0
> > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > 15267 DEBUG 3: Rejected: quality marker = 0
> > 15268 DEBUG 3: Rejected: message type   = 0
> > 15269 DEBUG 3: Rejected: masking region = 0
> > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > 15271 DEBUG 3: Rejected: duplicates     = 0
> > 15272 DEBUG 2: Computing Continuous Statistics.
> > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold >=0,
> > observation filtering threshold >=0, and field logic UNION.
> > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold >=0,
> > observation filtering threshold >=0, and field logic UNION.
> > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > 15280 DEBUG 2:
> > 15281 DEBUG 2:
> >
>
--------------------------------------------------------------------------------
> >
> > I am going to work on processing these point stat files to create
those
> > vertical raob plots we had a discussion about.  I remember us
talking
> about
> > the partial sums file.  Why did we choose to go the route of
producing
> > partial sums then feeding that into series analysis to generate
bias and
> > MSE?  It looks like bias and MSE both exist within the CNT line
type
> (MBIAS
> > and MSE)?
> >
> >
> > Justin
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 16, 2019 12:16 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Great, thanks for sending me the sample data.  Yes, I was able to
> replicate
> > the segfault.  The good news is that this is caused by a simple
typo
> that's
> > easy to fix.  If you look in the "obs.field" entry of the
relhumConfig
> > file, you'll see an empty string for the last field listed:
> >
> > *obs = {    field = [*
> >
> >
> >
> > *         ...        {name = "dptd";level = ["P988-1006"];},
> {name =
> > "";level = ["P1007-1013"];}    ];*
> > If you change that empty string to "dptd", the segfault will go
away:*
> > {name = "dptd";level = ["P1007-1013"];}*
> > Rerunning met-8.0 with that change, Point-Stat ran to completion
(in 2
> > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > pairs.  They were discarded because of the valid times (seen using
-v 3
> > command line option to Point-Stat).  The ob file you sent is named
"
> > raob_2015020412.nc" but the actual times in that file are for
> > "20190426_120000":
> >
> > *ncdump -v hdr_vld_table raob_2015020412.nc
<http://raob_2015020412.nc>*
> >
> > * hdr_vld_table =  "20190426_120000" ;*
> >
> > So please be aware of that discrepancy.  To just produce some
matched
> > pairs, I told Point-Stat to use the valid times of the data:
> > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > <http://raob_2015020412.nc> relhumConfig \*
> > * -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000
> > -obs_valid_end 20190426_120000*
> >
> > But I still get 0 matched pairs.  This time, it's because of bad
forecast
> > values:
> >    *DEBUG 3: Rejected: bad fcst value = 55*
> >
> > Taking a step back... let's run one of these fields through
> > plot_data_plane, which results in an error:
> > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps <http://plot.ps>
> > 'name="./read_NRL_binary.py
> >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > ERROR  : DataPlane::two_to_one() -> range check error: (Nx, Ny) =
(97,
> 97),
> > (x, y) = (97, 0)
> >
> > While the numpy object is 97x97, the grid is specified as being
118x118
> in
> > the python script ('nx': 118, 'ny': 118).
> >
> > Just to get something working, I modified the nx and ny in the
python
> > script:
> >        'nx':97,
> >        'ny':97,
> > Rerunning again, I still didn't get any matched pairs.
> >
> > So I'd suggest...
> > - Fix the typo in the config file.
> > - Figure out the discrepancy between the obs file name timestamp
and the
> > data in that file.
> > - Make sure the grid information is consistent with the data in
the
> python
> > script.
> >
> > Obviously though, we don't want the code to be segfaulting in any
> > condition.  So next, I tested using met-8.1 with that empty
string.  This
> > time it does run with no segfault, but prints a warning about the
empty
> > string.
> >
> > Hope that helps.
> >
> > Thanks,
> > John
> >
> > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Hey John,
> > >
> > > I've put my data in tsu_data_20190815/ under met_help.
> > >
> > > I am running  met-8.0/met-8.0-with-grib2-support and have
provided
> > > everything
> > > on that list you've provided me.  Let me know if you're able to
> replicate
> > > it
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Thursday, August 15, 2019 4:08 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Well that doesn't seem to be very helpful of Point-Stat at all.
There
> > > isn't much jumping out at me from the log messages you sent.  In
fact,
> I
> > > hunted around for the DEBUG(7) log message but couldn't find
where in
> the
> > > code it's being written.  Are you able to send me some sample
data to
> > > replicate this behavior?
> > >
> > > I'd need to know...
> > > - What version of MET are you running.
> > > - A copy of your Point-Stat config file.
> > > - The python script that you're running.
> > > - The input file for that python script.
> > > - The NetCDF point observation file you're passing to Point-
Stat.
> > >
> > > If I can replicate the behavior here, it should be easy to run
it in
> the
> > > debugger and figure it out.
> > >
> > > You can post data to our anonymous ftp site as described in "How
to
> send
> > us
> > > data":
> > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > >        Queue: met_help
> > > >      Subject: point_stat seg faulting
> > > >        Owner: Nobody
> > > >   Requestors: justin.tsu at nrlmry.navy.mil
> > > >       Status: new
> > > >  Ticket <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > >
> > > >
> > > > Hey John,
> > > >
> > > >
> > > >
> > > > I'm trying to extrapolate the production of vertical raob
> verification
> > > > plots
> > > > using point_stat and stat_analysis like we did together for
winds but
> > for
> > > > relative humidity now.  But when I run point_stat, it seg
faults
> > without
> > > > much explanation
> > > >
> > > >
> > > >
> > > > DEBUG 2:
> > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > ----
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > >
> > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
climatology
> > > mean
> > > > levels, and 0 climatology standard deviation levels.
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2:
> > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > ----
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > >
> > > > DEBUG 7:     tbl dims: messge_type: 1  station id: 617
valid_time: 1
> > > >
> > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > PYTHON_NUMPY
> > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > ./out/point_stat.log
> > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > From my log file:
> > > >
> > > > 607 DEBUG 2:
> > > >
> > > > 608 DEBUG 2: Searching 4680328 observations from 617 messages.
> > > >
> > > > 609 DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > valid_time: 1
> > > >
> > > >
> > > >
> > > > Any help would be much appreciated
> > > >
> > > >
> > > >
> > > > Justin
> > > >
> > > >
> > > >
> > > > Justin Tsu
> > > >
> > > > Marine Meteorology Division
> > > >
> > > > Data Assimilation/Mesoscale Modeling
> > > >
> > > > Building 704 Room 212
> > > >
> > > > Naval Research Laboratory, Code 7531
> > > >
> > > > 7 Grace Hopper Avenue
> > > >
> > > > Monterey, CA 93943-5502
> > > >
> > > >
> > > >
> > > > Ph. (831) 656-4111
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>


------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Tue Sep 03 09:35:40 2019

Justin,

I see that you're plotting RMSE and bias (called ME for Mean Error in
MET)
in the plots you sent.

Table 7.6 of the MET User's Guide (
https://dtcenter.org/sites/default/files/community-code/met/docs/user-
guide/MET_Users_Guide_v8.1.1.pdf)
describes the contents of the CNT line type. Both the columns for
RMSE
and ME are followed by _NCL and _NCU columns which give the parametric
approximation of the confidence interval for those scores.  So yes,
you can
run Stat-Analysis to aggregate SL1L2 lines together and write the
corresponding CNT output line type.

The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric
confidence intervals for the RMSE statistic and ME_NCL and ME_NCU
columns
for the ME statistic.

You can change the alpha value for those confidence intervals by
setting:
-out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
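
For example, that option just gets added to the aggregation job I
suggested earlier (a sketch; point -lookin at wherever your .stat
files actually live):

stat_analysis -lookin /path/to/stat/data -job aggregate_stat \
  -line_type SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD \
  -out_alpha 0.05 -out_stat cnt_out.stat

With -out_alpha 0.05, the RMSE_NCL/RMSE_NCU and ME_NCL/ME_NCU columns
in cnt_out.stat are 95% confidence intervals.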

Thanks,
John


On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Thanks John,
>
> This all helps me greatly.  One more question: is there any
information
> in either the CNT or SL1L2 that could give me  confidence intervals
for
> each data point?  I'm looking to replicate the attached plot.
Notice that
> the individual points could have either a 99, 95 or 90 % confidence.
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, August 30, 2019 12:46 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Sounds about right.  Each time you run Grid-Stat or Point-Stat you
can
> write the CNT output line type which contains stats like MSE, ME,
MAE, and
> RMSE.  And I'd recommend that you also write the SL1L2 line type
as well.
>
> Then you'd run a stat_analysis job like this:
>
> stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
> cnt_out.stat
>
> This job reads any .stat files it finds in "/path/to/stat/data",
reads the
> SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV, and
> FCST_LEAD columns, it'll aggregate those SL1L2 partial sums together
and
> write out the corresponding CNT line type to the output file named
> cnt_out.stat.
>
> John
>
> On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > So if I understand what you're saying correctly, then if I wanted
an
> > average of 24 hour forecasts over a month long run, then I would
use the
> > SL1L2 output to aggregate and produce this average?  Whereas if I
used
> CNT,
> > this would just provide me ~30 individual (per day over a month)
24 hour
> > forecast verifications?
> >
> > On a side note, did we ever go over how to plot the SL1L2 MSE and
biases?
> > I am forgetting if we used stat_analysis to produce a plot or if
the plot
> > you showed me was just something you guys post processed using
python or
> > whatnot.
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 30, 2019 8:47 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > We wrote the SL1L2 partial sums from Point-Stat because they can
be
> > aggregated together by the stat-analysis tool over multiple days
or
> cases.
> >
> > If you're interested in continuous statistics from Point-Stat, I'd
> > recommend writing the CNT line type (which has the stats computed
for
> that
> > single run) and the SL1L2 line type (so that you can aggregate
them
> > together in stat-analysis or METviewer).
> >
> > The other alternative is looking at the average of the daily
statistics
> > scores.  For RMSE, the average of the daily RMSE is equal to the
> aggregated
> > score... as long as the number of matched pairs remains constant
day to
> > day.  But if today you have 98 matched pairs and tomorrow you
have
> 105,
> > then tomorrow's score will have slightly more weight.  The SL1L2
lines
> are
> > aggregated as weighted averages, where the TOTAL column is the
weight.
> And
> > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > scores.  Generally, the statisticians recommend this method over
the mean
> > of the daily scores.  Neither is "wrong", they just give you
slightly
> > different information.
> >
> > Thanks,
> > John
> >
> > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John.
> > >
> > > Sorry it's taken me such a long time to get to this.  It's
nearing the
> > end
> > > of FY19 so I have been finalizing several transition projects
and
> haven’t
> > > had much time to work on MET recently.  I just picked this back
up and
> > have
> > > loaded a couple new modules.  Here is what I have to work with
now:
> > >
> > > 1) intel/xe_2013-sp1-u1
> > > 2) netcdf-local/netcdf-met
> > > 3) met-8.1/met-8.1a-with-grib2-support
> > > 4) ncview-2.1.5/ncview-2.1.5
> > > 5) udunits/udunits-2.1.24
> > > 6) gcc-6.3.0/gcc-6.3.0
> > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > 8) python/anaconda-7-15-15-save.6.6.2017
> > >
> > >
> > > Running
> > > > point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3
> > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > >
> > > I get many matched pairs.  Here is a sample of what the log file
looks
> > > like for one of the pressure ranges I am verifying on:
> > >
> > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-
376, for
> > > observation type radiosonde, over region FULL, for interpolation
method
> > > NEAREST(1), using 98 pairs.
> > > 15258 DEBUG 3: Number of matched pairs  = 98
> > > 15259 DEBUG 3: Observations processed   = 4680328
> > > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > > 15262 DEBUG 3: Rejected: valid time     = 0
> > > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > > 15265 DEBUG 3: Rejected: topography     = 0
> > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > 15268 DEBUG 3: Rejected: message type   = 0
> > > 15269 DEBUG 3: Rejected: masking region = 0
> > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > 15271 DEBUG 3: Rejected: duplicates     = 0
> > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> >=0,
> > > observation filtering threshold >=0, and field logic UNION.
> > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> >=0,
> > > observation filtering threshold >=0, and field logic UNION.
> > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > > 15280 DEBUG 2:
> > > 15281 DEBUG 2:
> > >
> >
>
--------------------------------------------------------------------------------
> > >
> > > I am going to work on processing these point stat files to
create those
> > > vertical raob plots we had a discussion about.  I remember us
talking
> > about
> > > the partial sums file.  Why did we choose to go the route of
producing
> > > partial sums then feeding that into series analysis to generate
bias
> and
> > > MSE?  It looks like bias and MSE both exist within the CNT line
type
> > (MBIAS
> > > and MSE)?
> > >
> > >
> > > Justin
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 16, 2019 12:16 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Great, thanks for sending me the sample data.  Yes, I was able
to
> > replicate
> > > the segfault.  The good news is that this is caused by a simple
typo
> > that's
> > > easy to fix.  If you look in the "obs.field" entry of the
relhumConfig
> > > file, you'll see an empty string for the last field listed:
> > >
> > > *obs = {    field = [*
> > >
> > >
> > >
> > > *         ...        {name = "dptd";level = ["P988-1006"];},
> > {name =
> > > "";level = ["P1007-1013"];}    ];*
> > > If you change that empty string to "dptd", the segfault will go
away:*
> > > {name = "dptd";level = ["P1007-1013"];}*
> > > Rerunning met-8.0 with that change, Point-Stat ran to completion
(in 2
> > > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > > pairs.  They were discarded because of the valid times (seen
using -v 3
> > > command line option to Point-Stat).  The ob file you sent is
named "
> > > raob_2015020412.nc" but the actual times in that file are for
> > > "20190426_120000":
> > >
> > > *ncdump -v hdr_vld_table raob_2015020412.nc
<http://raob_2015020412.nc
> >*
> > >
> > > * hdr_vld_table =  "20190426_120000" ;*
> > >
> > > So please be aware of that discrepancy.  To just produce some
matched
> > > pairs, I told Point-Stat to use the valid times of the data:
> > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > <http://raob_2015020412.nc> relhumConfig \*
> > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > -obs_valid_end 20190426_120000*
> > >
> > > But I still get 0 matched pairs.  This time, it's because of bad
> forecast
> > > values:
> > >    *DEBUG 3: Rejected: bad fcst value = 55*
> > >
> > > Taking a step back... let's run one of these fields through
> > > plot_data_plane, which results in an error:
> > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
<http://plot.ps>
> > > 'name="./read_NRL_binary.py
> > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > ERROR  : DataPlane::two_to_one() -> range check error: (Nx, Ny)
= (97,
> > 97),
> > > (x, y) = (97, 0)
> > >
> > > While the numpy object is 97x97, the grid is specified as being
118x118
> > in
> > > the python script ('nx': 118, 'ny': 118).
> > >
> > > Just to get something working, I modified the nx and ny in the
python
> > > script:
> > >        'nx':97,
> > >        'ny':97,
> > > Rerunning again, I still didn't get any matched pairs.
> > >
> > > So I'd suggest...
> > > - Fix the typo in the config file.
> > > - Figure out the discrepancy between the obs file name timestamp
and
> the
> > > data in that file.
> > > - Make sure the grid information is consistent with the data in
the
> > python
> > > script.
> > >
> > > Obviously though, we don't want the code to be segfaulting in any
> > > condition.  So next, I tested using met-8.1 with that empty
string.
> This
> > > time it does run with no segfault, but prints a warning about
the empty
> > > string.
> > >
> > > Hope that helps.
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Hey John,
> > > >
> > > > I've put my data in tsu_data_20190815/ under met_help.
> > > >
> > > > I am running  met-8.0/met-8.0-with-grib2-support and have
provided
> > > > everything
> > > > on that list you've provided me.  Let me know if you're able
to
> > replicate
> > > > it
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Well that doesn't seem to be very helpful of Point-Stat at
all.
> There
> > > > isn't much jumping out at me from the log messages you sent.
In
> fact,
> > I
> > > > hunted around for the DEBUG(7) log message but couldn't find
where in
> > the
> > > > code it's being written.  Are you able to send me some sample
data to
> > > > replicate this behavior?
> > > >
> > > > I'd need to know...
> > > > - What version of MET are you running.
> > > > - A copy of your Point-Stat config file.
> > > > - The python script that you're running.
> > > > - The input file for that python script.
> > > > - The NetCDF point observation file you're passing to Point-
Stat.
> > > >
> > > > If I can replicate the behavior here, it should be easy to run
it in
> > the
> > > > debugger and figure it out.
> > > >
> > > > You can post data to our anonymous ftp site as described in
"How to
> > send
> > > us
> > > > data":
> > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > > >        Queue: met_help
> > > > >      Subject: point_stat seg faulting
> > > > >        Owner: Nobody
> > > > >   Requestors: justin.tsu at nrlmry.navy.mil
> > > > >       Status: new
> > > > >  Ticket <URL:
> > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > >
> > > > >
> > > > >
> > > > > Hey John,
> > > > >
> > > > >
> > > > >
> > > > > I'm trying to extrapolate the production of vertical raob
> > verification
> > > > > plots
> > > > > using point_stat and stat_analysis like we did together for
winds
> but
> > > for
> > > > > relative humidity now.  But when I run point_stat, it seg
faults
> > > without
> > > > > much explanation
> > > > >
> > > > >
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > ----
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > >
> > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
> climatology
> > > > mean
> > > > > levels, and 0 climatology standard deviation levels.
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > ----
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > > >
> > > > > DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> valid_time: 1
> > > > >
> > > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > > PYTHON_NUMPY
> > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > > ./out/point_stat.log
> > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > From my log file:
> > > > >
> > > > > 607 DEBUG 2:
> > > > >
> > > > > 608 DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > >
> > > > > 609 DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > > valid_time: 1
> > > > >
> > > > >
> > > > >
> > > > > Any help would be much appreciated
> > > > >
> > > > >
> > > > >
> > > > > Justin
> > > > >
> > > > >
> > > > >
> > > > > Justin Tsu
> > > > >
> > > > > Marine Meteorology Division
> > > > >
> > > > > Data Assimilation/Mesoscale Modeling
> > > > >
> > > > > Building 704 Room 212
> > > > >
> > > > > Naval Research Laboratory, Code 7531
> > > > >
> > > > > 7 Grace Hopper Avenue
> > > > >
> > > > > Monterey, CA 93943-5502
> > > > >
> > > > >
> > > > >
> > > > > Ph. (831) 656-4111
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>

------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Fri Sep 06 13:02:46 2019

Thanks John,

I managed to scrape together some code to get RAOB stats from CNT
plotted with a 95% CI.  Working on surface stats now.
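
In rough outline, a minimal sketch of that kind of plotting step (not
my exact code) would look like the following.  It assumes the
aggregated stat_analysis output is in cnt_out.stat with a header row
naming the CNT columns (RMSE, RMSE_NCL, RMSE_NCU, ...) and uses
pandas/matplotlib; the file name and column layout are assumptions, not
anything MET guarantees:

import pandas as pd
import matplotlib.pyplot as plt

# Whitespace-delimited stat_analysis output, one CNT line per
# pressure layer (assumes a single header row of column names).
df = pd.read_csv("cnt_out.stat", sep=r"\s+")

levels = df["FCST_LEV"]      # or OBS_LEV, whichever holds the P-range
y = range(len(levels))

# RMSE per layer with the parametric CI drawn as horizontal error bars.
plt.errorbar(df["RMSE"], y,
             xerr=[df["RMSE"] - df["RMSE_NCL"],
                   df["RMSE_NCU"] - df["RMSE"]],
             fmt="o", capsize=3)
plt.yticks(y, levels)
plt.xlabel("RMSE")
plt.ylabel("Pressure layer")
plt.gca().invert_yaxis()     # flip so the profile reads like a sounding
plt.savefig("raob_rmse_profile.png")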

So my configuration file looks like this right now:

fcst = {
     field = [
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
     ];
}

obs = {
    field = [
        {name = "dptd";level = ["P0.86-1.5"];},
        {name = "dptd";level = ["P1.6-2.5"];},
        {name = "dptd";level = ["P2.6-3.5"];},
        {name = "dptd";level = ["P3.6-4.5"];},
        {name = "dptd";level = ["P4.6-6"];},
        {name = "dptd";level = ["P6.1-8"];},
        {name = "dptd";level = ["P9-15"];},
        {name = "dptd";level = ["P16-25"];},
        {name = "dptd";level = ["P26-40"];},
        {name = "dptd";level = ["P41-65"];},
        {name = "dptd";level = ["P66-85"];},
        {name = "dptd";level = ["P86-125"];},
        {name = "dptd";level = ["P126-175"];},
        {name = "dptd";level = ["P176-225"];},
        {name = "dptd";level = ["P226-275"];},
        {name = "dptd";level = ["P276-325"];},
        {name = "dptd";level = ["P326-375"];},
        {name = "dptd";level = ["P376-425"];},
        {name = "dptd";level = ["P426-475"];},
        {name = "dptd";level = ["P476-525"];},
        {name = "dptd";level = ["P526-575"];},
        {name = "dptd";level = ["P576-625"];},
        {name = "dptd";level = ["P626-675"];},
        {name = "dptd";level = ["P676-725"];},
        {name = "dptd";level = ["P726-775"];},
        {name = "dptd";level = ["P776-825"];},
        {name = "dptd";level = ["P826-875"];},
        {name = "dptd";level = ["P876-912"];},
        {name = "dptd";level = ["P913-936"];},
        {name = "dptd";level = ["P937-962"];},
        {name = "dptd";level = ["P963-987"];},
        {name = "dptd";level = ["P988-1006"];},
        {name = "dptd";level = ["P1007-1013"];}
    ];
}

And I have the data:

dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld

for a particular DTG and vertical level.  If I want to run multiple
lead times, it seems like I'll have to repeat that long list of fields
in the fcst dictionary once per lead time and then duplicate the obs
dictionary so that each forecast entry has a corresponding obs entry
with a matching level range.  Is this correct, or is there a
shorter/better way to do this?
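
In case it helps to see what I mean, here is a throwaway sketch of how
I could generate those fcst entries per lead time from the file-name
pattern above (the level and lead-time lists below are just examples,
trimmed for brevity):

# generate_fcst_entries.py -- print one fcst.field entry per
# level/lead-time combination, following the file-name pattern above.
levels = ["000001", "000002", "000003"]        # ...more levels
leads  = ["00120000", "00180000", "00240000"]  # 12, 18, 24 h

script = "/users/tsu/MET/work/read_NRL_binary.py"
for lead in leads:
    for lev in levels:
        fname = ("./dwptdp_data/dwptdp_pre_%s_000000_3a0118x0118_"
                 "2015080106_%s_fcstfld" % (lev, lead))
        print('        {name = "%s %s";},' % (script, fname))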

Justin

-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Tuesday, September 3, 2019 8:36 AM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting

Justin,

I see that you're plotting RMSE and bias (called ME for Mean Error in
MET)
in the plots you sent.

Table 7.6 of the MET User's Guide (
https://dtcenter.org/sites/default/files/community-code/met/docs/user-
guide/MET_Users_Guide_v8.1.1.pdf)
describes the contents of the CNT line type. Both the columns for
RMSE
and ME are followed by _NCL and _NCU columns which give the parametric
approximation of the confidence interval for those scores.  So yes,
you can
run Stat-Analysis to aggregate SL1L2 lines together and write the
corresponding CNT output line type.

The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric
confidence intervals for the RMSE statistic and ME_NCL and ME_NCU
columns
for the ME statistic.

You can change the alpha value for those confidence intervals by
setting:
-out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).

Thanks,
John


On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Thanks John,
>
> This all helps me greatly.  One more question: is there any
information
> in either the CNT or SL1L2 that could give me  confidence intervals
for
> each data point?  I'm looking to replicate the attached plot.
Notice that
> the individual points could have either a 99, 95 or 90 % confidence.
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, August 30, 2019 12:46 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Sounds about right.  Each time you run Grid-Stat or Point-Stat you
can
> write the CNT output line type which contains stats like MSE, ME,
MAE, and
> RMSE.  And I'd recommend that you also write the SL1L2 line type
as well.
>
> Then you'd run a stat_analysis job like this:
>
> stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
> cnt_out.stat
>
> This job reads any .stat files it finds in "/path/to/stat/data",
reads the
> SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV, and
> FCST_LEAD columns, it'll aggregate those SL1L2 partial sums together
and
> write out the corresponding CNT line type to the output file named
> cnt_out.stat.
>
> John
>
> On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > So if I understand what you're saying correctly, then if I wanted
an
> > average of 24 hour forecasts over a month long run, then I would
use the
> > SL1L2 output to aggregate and produce this average?  Whereas if I
used
> CNT,
> > this would just provide me ~30 individual (per day over a month)
24 hour
> > forecast verifications?
> >
> > On a side note, did we ever go over how to plot the SL1L2 MSE and
biases?
> > I am forgetting if we used stat_analysis to produce a plot or if
the plot
> > you showed me was just something you guys post processed using
python or
> > whatnot.
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 30, 2019 8:47 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > We wrote the SL1L2 partial sums from Point-Stat because they can
be
> > aggregated together by the stat-analysis tool over multiple days
or
> cases.
> >
> > If you're interested in continuous statistics from Point-Stat, I'd
> > recommend writing the CNT line type (which has the stats computed
for
> that
> > single run) and the SL1L2 line type (so that you can aggregate
them
> > together in stat-analysis or METviewer).
> >
> > The other alternative is looking at the average of the daily
statistics
> > scores.  For RMSE, the average of the daily RMSE is equal to the
> aggregated
> > score... as long as the number of matched pairs remains constant
day to
> > day.  But if today you have 98 matched pairs and tomorrow you
have
> 105,
> > then tomorrow's score will have slightly more weight.  The SL1L2
lines
> are
> > aggregated as weighted averages, where the TOTAL column is the
weight.
> And
> > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > scores.  Generally, the statisticians recommend this method over
the mean
> > of the daily scores.  Neither is "wrong", they just give you
slightly
> > different information.
> >
> > Thanks,
> > John
> >
> > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John.
> > >
> > > Sorry it's taken me such a long time to get to this.  It's
nearing the
> > end
> > > of FY19 so I have been finalizing several transition projects
and
> haven’t
> > > had much time to work on MET recently.  I just picked this back
up and
> > have
> > > loaded a couple new modules.  Here is what I have to work with
now:
> > >
> > > 1) intel/xe_2013-sp1-u1
> > > 2) netcdf-local/netcdf-met
> > > 3) met-8.1/met-8.1a-with-grib2-support
> > > 4) ncview-2.1.5/ncview-2.1.5
> > > 5) udunits/udunits-2.1.24
> > > 6) gcc-6.3.0/gcc-6.3.0
> > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > 8) python/anaconda-7-15-15-save.6.6.2017
> > >
> > >
> > > Running
> > > > point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3
> > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > >
> > > I get many matched pairs.  Here is a sample of what the log file
looks
> > > like for one of the pressure ranges I am verifying on:
> > >
> > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-
376, for
> > > observation type radiosonde, over region FULL, for interpolation
method
> > > NEAREST(1), using 98 pairs.
> > > 15258 DEBUG 3: Number of matched pairs  = 98
> > > 15259 DEBUG 3: Observations processed   = 4680328
> > > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > > 15262 DEBUG 3: Rejected: valid time     = 0
> > > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > > 15265 DEBUG 3: Rejected: topography     = 0
> > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > 15268 DEBUG 3: Rejected: message type   = 0
> > > 15269 DEBUG 3: Rejected: masking region = 0
> > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > 15271 DEBUG 3: Rejected: duplicates     = 0
> > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> >=0,
> > > observation filtering threshold >=0, and field logic UNION.
> > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> >=0,
> > > observation filtering threshold >=0, and field logic UNION.
> > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > > 15280 DEBUG 2:
> > > 15281 DEBUG 2:
> > >
> >
>
--------------------------------------------------------------------------------
> > >
> > > I am going to work on processing these point stat files to
create those
> > > vertical raob plots we had a discussion about.  I remember us
talking
> > about
> > > the partial sums file.  Why did we choose to go the route of
producing
> > > partial sums then feeding that into series analysis to generate
bias
> and
> > > MSE?  It looks like bias and MSE both exist within the CNT line
type
> > (MBIAS
> > > and MSE)?
> > >
> > >
> > > Justin
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 16, 2019 12:16 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Great, thanks for sending me the sample data.  Yes, I was able
to
> > replicate
> > > the segfault.  The good news is that this is caused by a simple
typo
> > that's
> > > easy to fix.  If you look in the "obs.field" entry of the
relhumConfig
> > > file, you'll see an empty string for the last field listed:
> > >
> > > *obs = {    field = [*
> > >
> > >
> > >
> > > *         ...        {name = "dptd";level = ["P988-1006"];},
> > {name =
> > > "";level = ["P1007-1013"];}    ];*
> > > If you change that empty string to "dptd", the segfault will go
away:*
> > > {name = "dptd";level = ["P1007-1013"];}*
> > > Rerunning met-8.0 with that change, Point-Stat ran to completion
(in 2
> > > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > > pairs.  They were discarded because of the valid times (seen
using -v 3
> > > command line option to Point-Stat).  The ob file you sent is
named "
> > > raob_2015020412.nc" but the actual times in that file are for
> > > "20190426_120000":
> > >
> > > *ncdump -v hdr_vld_table raob_2015020412.nc
<http://raob_2015020412.nc
> >*
> > >
> > > * hdr_vld_table =  "20190426_120000" ;*
> > >
> > > So please be aware of that discrepancy.  To just produce some
matched
> > > pairs, I told Point-Stat to use the valid times of the data:
> > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > <http://raob_2015020412.nc> relhumConfig \*
> > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > -obs_valid_end 20190426_120000*
> > >
> > > But I still get 0 matched pairs.  This time, it's because of bad
> forecast
> > > values:
> > >    *DEBUG 3: Rejected: bad fcst value = 55*
> > >
> > > Taking a step back... let's run one of these fields through
> > > plot_data_plane, which results in an error:
> > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
<http://plot.ps>
> > > 'name="./read_NRL_binary.py
> > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > ERROR  : DataPlane::two_to_one() -> range check error: (Nx, Ny)
= (97,
> > 97),
> > > (x, y) = (97, 0)
> > >
> > > While the numpy object is 97x97, the grid is specified as being
118x118
> > in
> > > the python script ('nx': 118, 'ny': 118).
> > >
> > > Just to get something working, I modified the nx and ny in the
python
> > > script:
> > >        'nx':97,
> > >        'ny':97,
> > > Rerunning again, I still didn't get any matched pairs.
> > >
> > > So I'd suggest...
> > > - Fix the typo in the config file.
> > > - Figure out the discrepancy between the obs file name timestamp
and
> the
> > > data in that file.
> > > - Make sure the grid information is consistent with the data in
the
> > python
> > > script.
> > >
> > > Obviously though, we don't want the code to be segfaulting in any
> > > condition.  So next, I tested using met-8.1 with that empty
string.
> This
> > > time it does run with no segfault, but prints a warning about
the empty
> > > string.
> > >
> > > Hope that helps.
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Hey John,
> > > >
> > > > I've put my data in tsu_data_20190815/ under met_help.
> > > >
> > > > I am running  met-8.0/met-8.0-with-grib2-support and have
provided
> > > > everything
> > > > on that list you've provided me.  Let me know if you're able
to
> > replicate
> > > > it
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Well that doesn't seem to be very helpful of Point-Stat at
all.
> There
> > > > isn't much jumping out at me from the log messages you sent.
In
> fact,
> > I
> > > > hunted around for the DEBUG(7) log message but couldn't find
where in
> > the
> > > > code it's being written.  Are you able to send me some sample
data to
> > > > replicate this behavior?
> > > >
> > > > I'd need to know...
> > > > - What version of MET are you running.
> > > > - A copy of your Point-Stat config file.
> > > > - The python script that you're running.
> > > > - The input file for that python script.
> > > > - The NetCDF point observation file you're passing to Point-
Stat.
> > > >
> > > > If I can replicate the behavior here, it should be easy to run
it in
> > the
> > > > debugger and figure it out.
> > > >
> > > > You can post data to our anonymous ftp site as described in
"How to
> > send
> > > us
> > > > data":
> > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > > >        Queue: met_help
> > > > >      Subject: point_stat seg faulting
> > > > >        Owner: Nobody
> > > > >   Requestors: justin.tsu at nrlmry.navy.mil
> > > > >       Status: new
> > > > >  Ticket <URL:
> > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > >
> > > > >
> > > > >
> > > > > Hey John,
> > > > >
> > > > >
> > > > >
> > > > > I'm trying to extrapolate the production of vertical raob
> > verification
> > > > > plots
> > > > > using point_stat and stat_analysis like we did together for
winds
> but
> > > for
> > > > > relative humidity now.  But when I run point_stat, it seg
faults
> > > without
> > > > > much explanation
> > > > >
> > > > >
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > ----
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > >
> > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
> climatology
> > > > mean
> > > > > levels, and 0 climatology standard deviation levels.
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > ----
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > > >
> > > > > DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> valid_time: 1
> > > > >
> > > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > > PYTHON_NUMPY
> > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > > ./out/point_stat.log
> > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > From my log file:
> > > > >
> > > > > 607 DEBUG 2:
> > > > >
> > > > > 608 DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > >
> > > > > 609 DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > > valid_time: 1
> > > > >
> > > > >
> > > > >
> > > > > Any help would be much appreciated
> > > > >
> > > > >
> > > > >
> > > > > Justin
> > > > >
> > > > >
> > > > >
> > > > > Justin Tsu
> > > > >
> > > > > Marine Meteorology Division
> > > > >
> > > > > Data Assimilation/Mesoscale Modeling
> > > > >
> > > > > Building 704 Room 212
> > > > >
> > > > > Naval Research Laboratory, Code 7531
> > > > >
> > > > > 7 Grace Hopper Avenue
> > > > >
> > > > > Monterey, CA 93943-5502
> > > > >
> > > > >
> > > > >
> > > > > Ph. (831) 656-4111
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>


------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Sep 06 14:10:30 2019

Justin,

Yes, that is a long list of fields, but I don't see an obvious way of
shortening that.  But to do multiple lead times, I'd just call Point-Stat
multiple times, once for each lead time, and update the config file to
use environment variables for the current time:

fcst = {
     field = [
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
...

Where the calling script sets the ${INIT_TIME} and ${FCST_HR} environment variables.
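
For example, a minimal driver script could export those variables and
loop over the lead times.  This is only a sketch; the lead-time list,
obs file name, and output directory are placeholders to adapt to your
runs:

#!/bin/bash
export INIT_TIME=2015080106
for FCST_HR in 00060000 00120000 00180000 00240000; do
    export FCST_HR
    # Point-Stat expands ${INIT_TIME} and ${FCST_HR} when it parses the
    # config file, so each call verifies a single lead time.
    point_stat PYTHON_NUMPY raob_obs.nc dwptdpConfig \
        -outdir ./out/lead_${FCST_HR} -v 3
done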

John

On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Thanks John,
>
> I managed to scrape together some code to get RAOB stats from CNT
plotted
> with 95% CI.  Working on Surface stats now.
>
> So my configuration file looks like this right now:
>
> fcst = {
>      field = [
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
>      ];
> }
>
> obs = {
>     field = [
>         {name = "dptd";level = ["P0.86-1.5"];},
>         {name = "dptd";level = ["P1.6-2.5"];},
>         {name = "dptd";level = ["P2.6-3.5"];},
>         {name = "dptd";level = ["P3.6-4.5"];},
>         {name = "dptd";level = ["P4.6-6"];},
>         {name = "dptd";level = ["P6.1-8"];},
>         {name = "dptd";level = ["P9-15"];},
>         {name = "dptd";level = ["P16-25"];},
>         {name = "dptd";level = ["P26-40"];},
>         {name = "dptd";level = ["P41-65"];},
>         {name = "dptd";level = ["P66-85"];},
>         {name = "dptd";level = ["P86-125"];},
>         {name = "dptd";level = ["P126-175"];},
>         {name = "dptd";level = ["P176-225"];},
>         {name = "dptd";level = ["P226-275"];},
>         {name = "dptd";level = ["P276-325"];},
>         {name = "dptd";level = ["P326-375"];},
>         {name = "dptd";level = ["P376-425"];},
>         {name = "dptd";level = ["P426-475"];},
>         {name = "dptd";level = ["P476-525"];},
>         {name = "dptd";level = ["P526-575"];},
>         {name = "dptd";level = ["P576-625"];},
>         {name = "dptd";level = ["P626-675"];},
>         {name = "dptd";level = ["P676-725"];},
>         {name = "dptd";level = ["P726-775"];},
>         {name = "dptd";level = ["P776-825"];},
>         {name = "dptd";level = ["P826-875"];},
>         {name = "dptd";level = ["P876-912"];},
>         {name = "dptd";level = ["P913-936"];},
>         {name = "dptd";level = ["P937-962"];},
>         {name = "dptd";level = ["P963-987"];},
>         {name = "dptd";level = ["P988-1006"];},
>         {name = "dptd";level = ["P1007-1013"];}
>
> And I have the data:
>
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
>
> for a particular DTG and vertical level.  If I want to run multiple
lead
> times, it seems like I'll have to copy that long list of fields for
each
> lead time in the fcst dict and then duplicate the obs dictionary so
that
> each forecast entry has a corresponding obs level matching range.
Is this
> correct or is there a shorter/better way to do this?
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Tuesday, September 3, 2019 8:36 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> I see that you're plotting RMSE and bias (called ME for Mean Error
in MET)
> in the plots you sent.
>
> Table 7.6 of the MET User's Guide (
>
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> )
> describes the contents of the CNT line type. Both the columns
for RMSE
> and ME are followed by _NCL and _NCU columns which give the
parametric
> approximation of the confidence interval for those scores.  So yes,
you can
> run Stat-Analysis to aggregate SL1L2 lines together and write the
> corresponding CNT output line type.
>
> The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric
> confidence intervals for the RMSE statistic and ME_NCL and ME_NCU
columns
> for the ME statistic.
>
> You can change the alpha value for those confidence intervals by
setting:
> -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
>
> Thanks,
> John
>
>
> On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John,
> >
> > This all helps me greatly.  One more question: is there any
information
> > in either the CNT or SL1L2 that could give me  confidence
intervals for
> > each data point?  I'm looking to replicate the attached plot.
Notice
> that
> > the individual points could have either a 99, 95 or 90 %
confidence.
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 30, 2019 12:46 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Sounds about right.  Each time you run Grid-Stat or Point-Stat you
can
> > write the CNT output line type which contains stats like MSE, ME,
MAE,
> and
> > RMSE.  And I'd recommend that you also write the SL1L2 line type
as
> well.
> >
> > Then you'd run a stat_analysis job like this:
> >
> > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
> > cnt_out.stat
> >
> > This job reads any .stat files it finds in "/path/to/stat/data",
reads
> the
> > SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV,
> and
> > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together and
> > write out the corresponding CNT line type to the output file named
> > cnt_out.stat.
> >
> > John
> >
> > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > So if I understand what you're saying correctly, then if I
wanted an
> > > average of 24 hour forecasts over a month long run, then I would
use
> the
> > > SL1L2 output to aggregate and produce this average?  Whereas if
I used
> > CNT,
> > > this would just provide me ~30 individual (per day over a month)
24
> hour
> > > forecast verifications?
> > >
> > > On a side note, did we ever go over how to plot the SL1L2 MSE
and
> biases?
> > > I am forgetting if we used stat_analysis to produce a plot or if
the
> plot
> > > you showed me was just something you guys post processed using
python
> or
> > > whatnot.
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 30, 2019 8:47 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > We wrote the SL1L2 partial sums from Point-Stat because they can
be
> > > aggregated together by the stat-analysis tool over multiple days
or
> > cases.
> > >
> > > If you're interested in continuous statistics from Point-Stat,
I'd
> > > recommend writing the CNT line type (which has the stats
computed for
> > that
> > > single run) and the SL1L2 line type (so that you can aggregate
them
> > > together in stat-analysis or METviewer).
> > >
> > > The other alternative is looking at the average of the daily
statistics
> > > scores.  For RMSE, the average of the daily RMSE is equal to the
> > aggregated
> > > score... as long as the number of matched pairs remains constant
day to
> > > day.  But if today you have 98 matched pairs and tomorrow
you have
> > 105,
> > > then tomorrow's score will have slightly more weight.  The SL1L2
lines
> > are
> > > aggregated as weighted averages, where the TOTAL column is the
weight.
> > And
> > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > scores.  Generally, the statisticians recommend this method over
the
> mean
> > > of the daily scores.  Neither is "wrong", they just give you
slightly
> > > different information.
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John.
> > > >
> > > > Sorry it's taken me such a long time to get to this.  It's
nearing
> the
> > > end
> > > > of FY19 so I have been finalizing several transition projects
and
> > haven’t
> > > > had much time to work on MET recently.  I just picked this
back up
> and
> > > have
> > > > loaded a couple new modules.  Here is what I have to work with
now:
> > > >
> > > > 1) intel/xe_2013-sp1-u1
> > > > 2) netcdf-local/netcdf-met
> > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > 5) udunits/udunits-2.1.24
> > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > >
> > > >
> > > > Running
> > > > > point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v
3
> > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > >
> > > > I get many matched pairs.  Here is a sample of what the log
file
> looks
> > > > like for one of the pressure ranges I am verifying on:
> > > >
> > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-
376, for
> > > > observation type radiosonde, over region FULL, for
interpolation
> method
> > > > NEAREST(1), using 98 pairs.
> > > > 15258 DEBUG 3: Number of matched pairs  = 98
> > > > 15259 DEBUG 3: Observations processed   = 4680328
> > > > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > > > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > > > 15262 DEBUG 3: Rejected: valid time     = 0
> > > > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > > > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > > > 15265 DEBUG 3: Rejected: topography     = 0
> > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > 15268 DEBUG 3: Rejected: message type   = 0
> > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > 15271 DEBUG 3: Rejected: duplicates     = 0
> > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=0,
> > > > observation filtering threshold >=0, and field logic UNION.
> > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> UNION.
> > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=0,
> > > > observation filtering threshold >=0, and field logic UNION.
> > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> UNION.
> > > > 15280 DEBUG 2:
> > > > 15281 DEBUG 2:
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > >
> > > > I am going to work on processing these point stat files to
create
> those
> > > > vertical raob plots we had a discussion about.  I remember us
talking
> > > about
> > > > the partial sums file.  Why did we choose to go the route of
> producing
> > > > partial sums then feeding that into series analysis to
generate bias
> > and
> > > > MSE?  It looks like bias and MSE both exist within the CNT
line type
> > > (MBIAS
> > > > and MSE)?
> > > >
> > > >
> > > > Justin
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Great, thanks for sending me the sample data.  Yes, I was able
to
> > > replicate
> > > > the segfault.  The good news is that this is caused by a
simple typo
> > > that's
> > > > easy to fix.  If you look in the "obs.field" entry of the
> relhumConfig
> > > > file, you'll see an empty string for the last field listed:
> > > >
> > > > *obs = {    field = [*
> > > >
> > > >
> > > >
> > > > *         ...        {name = "dptd";level = ["P988-1006"];},
> > > {name =
> > > > "";level = ["P1007-1013"];}    ];*
> > > > If you change that empty string to "dptd", the segfault will
go
> away:*
> > > > {name = "dptd";level = ["P1007-1013"];}*
> > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion (in
> 2
> > > > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > > > pairs.  They were discarded because of the valid times (seen
using
> -v 3
> > > > command line option to Point-Stat).  The ob file you sent is
named "
> > > > raob_2015020412.nc" but the actual times in that file are for
> > > > "20190426_120000":
> > > >
> > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> http://raob_2015020412.nc
> > >*
> > > >
> > > > * hdr_vld_table =  "20190426_120000" ;*
> > > >
> > > > So please be aware of that discrepancy.  To just produce some
matched
> > > > pairs, I told Point-Stat to use the valid times of the data:
> > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > > -obs_valid_end 20190426_120000*
> > > >
> > > > But I still get 0 matched pairs.  This time, it's because of
bad
> > forecast
> > > > values:
> > > >    *DEBUG 3: Rejected: bad fcst value = 55*
> > > >
> > > > Taking a step back... let's run one of these fields through
> > > > plot_data_plane, which results in an error:
> > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
<http://plot.ps>
> > > > 'name="./read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > ERROR  : DataPlane::two_to_one() -> range check error: (Nx,
Ny) =
> (97,
> > > 97),
> > > > (x, y) = (97, 0)
> > > >
> > > > While the numpy object is 97x97, the grid is specified as
being
> 118x118
> > > in
> > > > the python script ('nx': 118, 'ny': 118).
> > > >
> > > > Just to get something working, I modified the nx and ny in the
python
> > > > script:
> > > >        'nx':97,
> > > >        'ny':97,
> > > > Rerunning again, I still didn't get any matched pairs.
> > > >
> > > > So I'd suggest...
> > > > - Fix the typo in the config file.
> > > > - Figure out the discrepancy between the obs file name
timestamp and
> > the
> > > > data in that file.
> > > > - Make sure the grid information is consistent with the data
in the
> > > python
> > > > script.
> > > >
> > > > Obviously though, we don't want the code to be segfaulting in
any
> > > > condition.  So next, I tested using met-8.1 with that empty
string.
> > This
> > > > time it does run with no segfault, but prints a warning about
the
> empty
> > > > string.
> > > >
> > > > Hope that helps.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Hey John,
> > > > >
> > > > > I've put my data in tsu_data_20190815/ under met_help.
> > > > >
> > > > > I am running  met-8.0/met-8.0-with-grib2-support and have
provided
> > > > > everything
> > > > > on that list you've provided me.  Let me know if you're able
to
> > > replicate
> > > > > it
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Well that doesn't seem to be very helpful of Point-Stat at
all.
> > There
> > > > > isn't much jumping out at me from the log messages you sent.
In
> > fact,
> > > I
> > > > > hunted around for the DEBUG(7) log message but couldn't find
where
> in
> > > the
> > > > > code it's being written.  Are you able to send me some
sample data
> to
> > > > > replicate this behavior?
> > > > >
> > > > > I'd need to know...
> > > > > - What version of MET are you running.
> > > > > - A copy of your Point-Stat config file.
> > > > > - The python script that you're running.
> > > > > - The input file for that python script.
> > > > > - The NetCDF point observation file you're passing to Point-
Stat.
> > > > >
> > > > > If I can replicate the behavior here, it should be easy to
run it
> in
> > > the
> > > > > debugger and figure it out.
> > > > >
> > > > > You can post data to our anonymous ftp site as described in
"How to
> > > send
> > > > us
> > > > > data":
> > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > > > >        Queue: met_help
> > > > > >      Subject: point_stat seg faulting
> > > > > >        Owner: Nobody
> > > > > >   Requestors: justin.tsu at nrlmry.navy.mil
> > > > > >       Status: new
> > > > > >  Ticket <URL:
> > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > >
> > > > > >
> > > > > >
> > > > > > Hey John,
> > > > > >
> > > > > >
> > > > > >
> > > > > > I'm trying to extrapolate the production of vertical raob
> > > verification
> > > > > > plots
> > > > > > using point_stat and stat_analysis like we did together
for winds
> > but
> > > > for
> > > > > > relative humidity now.  But when I run point_stat, it seg
faults
> > > > without
> > > > > > much explanation
> > > > > >
> > > > > >
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > ----
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > >
> > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
> > climatology
> > > > > mean
> > > > > > levels, and 0 climatology standard deviation levels.
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > ----
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > > > >
> > > > > > DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > valid_time: 1
> > > > > >
> > > > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > > > PYTHON_NUMPY
> > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > > > ./out/point_stat.log
> > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > From my log file:
> > > > > >
> > > > > > 607 DEBUG 2:
> > > > > >
> > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > >
> > > > > > 609 DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > > > valid_time: 1
> > > > > >
> > > > > >
> > > > > >
> > > > > > Any help would be much appreciated
> > > > > >
> > > > > >
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > >
> > > > > >
> > > > > > Justin Tsu
> > > > > >
> > > > > > Marine Meteorology Division
> > > > > >
> > > > > > Data Assimilation/Mesoscale Modeling
> > > > > >
> > > > > > Building 704 Room 212
> > > > > >
> > > > > > Naval Research Laboratory, Code 7531
> > > > > >
> > > > > > 7 Grace Hopper Avenue
> > > > > >
> > > > > > Monterey, CA 93943-5502
> > > > > >
> > > > > >
> > > > > >
> > > > > > Ph. (831) 656-4111
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>

------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Fri Sep 06 14:15:47 2019

Invoking point_stat multiple times will create and replace the old
_cnt and _sl1l2 files, right?  At that point, I'll have a bunch of CNT
and SL1L2 files and can then use stat_analysis to aggregate them?
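
For reference, once all of those runs finish, I'm assuming the
aggregation job you described earlier would look something like this
(just a sketch, with the .stat output from every run collected under
./out):

stat_analysis -lookin ./out \
    -job aggregate_stat -line_type SL1L2 -out_line_type CNT \
    -by FCST_VAR,FCST_LEV,FCST_LEAD \
    -out_stat cnt_out.stat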

Justin


-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Friday, September 6, 2019 1:11 PM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting

Justin,

Yes, that is a long list of fields, but I don't see an obvious way of
shortening that.  But to do multiple lead times, I'd just call Point-Stat
multiple times, once for each lead time, and update the config file to
use environment variables for the current time:

fcst = {
     field = [
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
...

Where the calling script sets the ${INIT_TIME} and ${FCST_HR} environment variables.

John

On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Thanks John,
>
> I managed to scrape together some code to get RAOB stats from CNT
plotted
> with 95% CI.  Working on Surface stats now.
>
> So my configuration file looks like this right now:
>
> fcst = {
>      field = [
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
>      ];
> }
>
> obs = {
>     field = [
>         {name = "dptd";level = ["P0.86-1.5"];},
>         {name = "dptd";level = ["P1.6-2.5"];},
>         {name = "dptd";level = ["P2.6-3.5"];},
>         {name = "dptd";level = ["P3.6-4.5"];},
>         {name = "dptd";level = ["P4.6-6"];},
>         {name = "dptd";level = ["P6.1-8"];},
>         {name = "dptd";level = ["P9-15"];},
>         {name = "dptd";level = ["P16-25"];},
>         {name = "dptd";level = ["P26-40"];},
>         {name = "dptd";level = ["P41-65"];},
>         {name = "dptd";level = ["P66-85"];},
>         {name = "dptd";level = ["P86-125"];},
>         {name = "dptd";level = ["P126-175"];},
>         {name = "dptd";level = ["P176-225"];},
>         {name = "dptd";level = ["P226-275"];},
>         {name = "dptd";level = ["P276-325"];},
>         {name = "dptd";level = ["P326-375"];},
>         {name = "dptd";level = ["P376-425"];},
>         {name = "dptd";level = ["P426-475"];},
>         {name = "dptd";level = ["P476-525"];},
>         {name = "dptd";level = ["P526-575"];},
>         {name = "dptd";level = ["P576-625"];},
>         {name = "dptd";level = ["P626-675"];},
>         {name = "dptd";level = ["P676-725"];},
>         {name = "dptd";level = ["P726-775"];},
>         {name = "dptd";level = ["P776-825"];},
>         {name = "dptd";level = ["P826-875"];},
>         {name = "dptd";level = ["P876-912"];},
>         {name = "dptd";level = ["P913-936"];},
>         {name = "dptd";level = ["P937-962"];},
>         {name = "dptd";level = ["P963-987"];},
>         {name = "dptd";level = ["P988-1006"];},
>         {name = "dptd";level = ["P1007-1013"];}
>
> And I have the data:
>
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
>
> for a particular DTG and vertical level.  If I want to run multiple
lead
> times, it seems like I'll have to copy that long list of fields for
each
> lead time in the fcst dict and then duplicate the obs dictionary so
that
> each forecast entry has a corresponding obs level matching range.
Is this
> correct or is there a shorter/better way to do this?
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Tuesday, September 3, 2019 8:36 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> I see that you're plotting RMSE and bias (called ME for Mean Error
in MET)
> in the plots you sent.
>
> Table 7.6 of the MET User's Guide (
>
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> )
> describes the contents of the CNT line type. Both the columns
for RMSE
> and ME are followed by _NCL and _NCU columns which give the
parametric
> approximation of the confidence interval for those scores.  So yes,
you can
> run Stat-Analysis to aggregate SL1L2 lines together and write the
> corresponding CNT output line type.
>
> The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric
> confidence intervals for the RMSE statistic and ME_NCL and ME_NCU
columns
> for the ME statistic.
>
> You can change the alpha value for those confidence intervals by
setting:
> -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
>
> Thanks,
> John
>
>
> On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John,
> >
> > This all helps me greatly.  One more question: is there any
information
> > in either the CNT or SL1L2 that could give me  confidence
intervals for
> > each data point?  I'm looking to replicate the attached plot.
Notice
> that
> > the individual points could have either a 99, 95 or 90 %
confidence.
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 30, 2019 12:46 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Sounds about right.  Each time you run Grid-Stat or Point-Stat you
can
> > write the CNT output line type which contains stats like MSE, ME,
MAE,
> and
> > RMSE.  And I'd recommend that you also write the SL1L2 line type
as
> well.
> >
> > Then you'd run a stat_analysis job like this:
> >
> > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
> > cnt_out.stat
> >
> > This job reads any .stat files it finds in "/path/to/stat/data",
reads
> the
> > SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV,
> and
> > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together and
> > write out the corresponding CNT line type to the output file named
> > cnt_out.stat.
> >
> > John
> >
> > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > So if I understand what you're saying correctly, then if I
wanted an
> > > average of 24 hour forecasts over a month long run, then I would
use
> the
> > > SL1L2 output to aggregate and produce this average?  Whereas if
I used
> > CNT,
> > > this would just provide me ~30 individual (per day over a month)
24
> hour
> > > forecast verifications?
> > >
> > > On a side note, did we ever go over how to plot the SL1L2 MSE
and
> biases?
> > > I am forgetting if we used stat_analysis to produce a plot or if
the
> plot
> > > you showed me was just something you guys post processed using
python
> or
> > > whatnot.
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 30, 2019 8:47 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > We wrote the SL1L2 partial sums from Point-Stat because they can
be
> > > aggregated together by the stat-analysis tool over multiple days
or
> > cases.
> > >
> > > If you're interested in continuous statistics from Point-Stat,
I'd
> > > recommend writing the CNT line type (which has the stats
computed for
> > that
> > > single run) and the SL1L2 line type (so that you can aggregate
them
> > > together in stat-analysis or METviewer).
> > >
> > > The other alternative is looking at the average of the daily
statistics
> > > scores.  For RMSE, the average of the daily RMSE is equal to the
> > aggregated
> > > score... as long as the number of matched pairs remains constant
day to
> > > day.  But if today you have 98 matched pairs and tomorrow
you have
> > 105,
> > > then tomorrow's score will have slightly more weight.  The SL1L2
lines
> > are
> > > aggregated as weighted averages, where the TOTAL column is the
weight.
> > And
> > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > scores.  Generally, the statisticians recommend this method over
the
> mean
> > > of the daily scores.  Neither is "wrong", they just give you
slightly
> > > different information.
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John.
> > > >
> > > > Sorry it's taken me such a long time to get to this.  It's
nearing
> the
> > > end
> > > > of FY19 so I have been finalizing several transition projects
and
> > haven’t
> > > > had much time to work on MET recently.  I just picked this
back up
> and
> > > have
> > > > loaded a couple new modules.  Here is what I have to work with
now:
> > > >
> > > > 1) intel/xe_2013-sp1-u1
> > > > 2) netcdf-local/netcdf-met
> > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > 5) udunits/udunits-2.1.24
> > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > >
> > > >
> > > > Running
> > > > > point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v
3
> > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > >
> > > > I get many matched pairs.  Here is a sample of what the log
file
> looks
> > > > like for one of the pressure ranges I am verifying on:
> > > >
> > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-
376, for
> > > > observation type radiosonde, over region FULL, for
interpolation
> method
> > > > NEAREST(1), using 98 pairs.
> > > > 15258 DEBUG 3: Number of matched pairs  = 98
> > > > 15259 DEBUG 3: Observations processed   = 4680328
> > > > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > > > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > > > 15262 DEBUG 3: Rejected: valid time     = 0
> > > > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > > > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > > > 15265 DEBUG 3: Rejected: topography     = 0
> > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > 15268 DEBUG 3: Rejected: message type   = 0
> > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > 15271 DEBUG 3: Rejected: duplicates     = 0
> > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=0,
> > > > observation filtering threshold >=0, and field logic UNION.
> > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> UNION.
> > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=0,
> > > > observation filtering threshold >=0, and field logic UNION.
> > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> UNION.
> > > > 15280 DEBUG 2:
> > > > 15281 DEBUG 2:
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > >
> > > > I am going to work on processing these point stat files to
create
> those
> > > > vertical raob plots we had a discussion about.  I remember us
talking
> > > about
> > > > the partial sums file.  Why did we choose to go the route of
> producing
> > > > partial sums then feeding that into series analysis to
generate bias
> > and
> > > > MSE?  It looks like bias and MSE both exist within the CNT
line type
> > > (MBIAS
> > > > and MSE)?
> > > >
> > > >
> > > > Justin
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Great, thanks for sending me the sample data.  Yes, I was able
to
> > > replicate
> > > > the segfault.  The good news is that this is caused by a
simple typo
> > > that's
> > > > easy to fix.  If you look in the "obs.field" entry of the
> relhumConfig
> > > > file, you'll see an empty string for the last field listed:
> > > >
> > > > *obs = {    field = [*
> > > >
> > > >
> > > >
> > > > *         ...        {name = "dptd";level = ["P988-1006"];},
> > > {name =
> > > > "";level = ["P1007-1013"];}    ];*
> > > > If you change that empty string to "dptd", the segfault will
go
> away:*
> > > > {name = "dptd";level = ["P1007-1013"];}*
> > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion (in
> 2
> > > > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > > > pairs.  They were discarded because of the valid times (seen
using
> -v 3
> > > > command line option to Point-Stat).  The ob file you sent is
named "
> > > > raob_2015020412.nc" but the actual times in that file are for
> > > > "20190426_120000":
> > > >
> > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> http://raob_2015020412.nc
> > >*
> > > >
> > > > * hdr_vld_table =  "20190426_120000" ;*
> > > >
> > > > So please be aware of that discrepancy.  To just produce some
matched
> > > > pairs, I told Point-Stat to use the valid times of the data:
> > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > > -obs_valid_end 20190426_120000*
> > > >
> > > > But I still get 0 matched pairs.  This time, it's because of
bad
> > forecast
> > > > values:
> > > >    *DEBUG 3: Rejected: bad fcst value = 55*
> > > >
> > > > Taking a step back... let's run one of these fields through
> > > > plot_data_plane, which results in an error:
> > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
<http://plot.ps>
> > > > 'name="./read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > ERROR  : DataPlane::two_to_one() -> range check error: (Nx,
Ny) =
> (97,
> > > 97),
> > > > (x, y) = (97, 0)
> > > >
> > > > While the numpy object is 97x97, the grid is specified as
being
> 118x118
> > > in
> > > > the python script ('nx': 118, 'ny': 118).
> > > >
> > > > Just to get something working, I modified the nx and ny in the
python
> > > > script:
> > > >        'nx':97,
> > > >        'ny':97,
> > > > Rerunning again, I still didn't get any matched pairs.
> > > >
> > > > So I'd suggest...
> > > > - Fix the typo in the config file.
> > > > - Figure out the discrepancy between the obs file name
timestamp and
> > the
> > > > data in that file.
> > > > - Make sure the grid information is consistent with the data
in the
> > > python
> > > > script.
> > > >
> > > > Obviously though, we don't want the code to be segfaulting in
any
> > > > condition.  So next, I tested using met-8.1 with that empty
string.
> > This
> > > > time it does run with no segfault, but prints a warning about
the
> empty
> > > > string.
> > > >
> > > > Hope that helps.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Hey John,
> > > > >
> > > > > I've put my data in tsu_data_20190815/ under met_help.
> > > > >
> > > > > I am running  met-8.0/met-8.0-with-grib2-support and have
provided
> > > > > everything
> > > > > on that list you've provided me.  Let me know if you're able
to
> > > replicate
> > > > > it
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Well that doesn't seem to be very helpful of Point-Stat at
all.
> > There
> > > > > isn't much jumping out at me from the log messages you sent.
In
> > fact,
> > > I
> > > > > hunted around for the DEBUG(7) log message but couldn't find
where
> in
> > > the
> > > > > code it's being written.  Are you able to send me some
sample data
> to
> > > > > replicate this behavior?
> > > > >
> > > > > I'd need to know...
> > > > > - What version of MET are you running.
> > > > > - A copy of your Point-Stat config file.
> > > > > - The python script that you're running.
> > > > > - The input file for that python script.
> > > > > - The NetCDF point observation file you're passing to Point-
Stat.
> > > > >
> > > > > If I can replicate the behavior here, it should be easy to
run it
> in
> > > the
> > > > > debugger and figure it out.
> > > > >
> > > > > You can post data to our anonymous ftp site as described in
"How to
> > > send
> > > > us
> > > > > data":
> > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > > > >        Queue: met_help
> > > > > >      Subject: point_stat seg faulting
> > > > > >        Owner: Nobody
> > > > > >   Requestors: justin.tsu at nrlmry.navy.mil
> > > > > >       Status: new
> > > > > >  Ticket <URL:
> > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > >
> > > > > >
> > > > > >
> > > > > > Hey John,
> > > > > >
> > > > > >
> > > > > >
> > > > > > I'm trying to extrapolate the production of vertical raob
> > > verification
> > > > > > plots
> > > > > > using point_stat and stat_analysis like we did together
for winds
> > but
> > > > for
> > > > > > relative humidity now.  But when I run point_stat, it seg
faults
> > > > without
> > > > > > much explanation
> > > > > >
> > > > > >
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > ----
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > >
> > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
> > climatology
> > > > > mean
> > > > > > levels, and 0 climatology standard deviation levels.
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > ----
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > > > >
> > > > > > DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > valid_time: 1
> > > > > >
> > > > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > > > PYTHON_NUMPY
> > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > > > ./out/point_stat.log
> > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > From my log file:
> > > > > >
> > > > > > 607 DEBUG 2:
> > > > > >
> > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > >
> > > > > > 609 DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > > > valid_time: 1
> > > > > >
> > > > > >
> > > > > >
> > > > > > Any help would be much appreciated
> > > > > >
> > > > > >
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > >
> > > > > >
> > > > > > Justin Tsu
> > > > > >
> > > > > > Marine Meteorology Division
> > > > > >
> > > > > > Data Assimilation/Mesoscale Modeling
> > > > > >
> > > > > > Building 704 Room 212
> > > > > >
> > > > > > Naval Research Laboratory, Code 7531
> > > > > >
> > > > > > 7 Grace Hopper Avenue
> > > > > >
> > > > > > Monterey, CA 93943-5502
> > > > > >
> > > > > >
> > > > > >
> > > > > > Ph. (831) 656-4111
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>


------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Sep 06 14:40:04 2019

Justin,

Here's a sample Point-Stat output file name:
 point_stat_360000L_20070331_120000V.stat

The "360000L" indicates that this is output for a 36-hour forecast.
And
the "20070331_120000V" timestamp is the valid time.

If you run Point-Stat once for each forecast lead time, the timestamps
should be different and they should not clobber eachother.

But let's say you don't want to run Point-Stat or Grid-Stat multiple
times
with the same timing info.  The "output_prefix" config file entry is
used
to customize the output file names to prevent them from clobbering
eachother.  For example, setting:
  output_prefix="RUN1";
Would result in files named "
point_stat_RUN1_360000L_20070331_120000V.stat".
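
Also, since the config file entries will expand environment variables
(like the ${INIT_TIME} and ${FCST_HR} we've been using in fcst.field),
one option is to set the prefix from your wrapper script so each run is
tagged automatically.  Just a sketch, with a ${RUN_TAG} variable of your
own choosing:

  output_prefix = "${RUN_TAG}";

  # in the calling script
  export RUN_TAG="dwptdp_${INIT_TIME}"
  point_stat PYTHON_NUMPY ${OBFILE} ${CONFIG} -outdir ./out/point_stat -v 3

That way each invocation writes its own distinctly named .stat files and
nothing gets clobbered.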

Make sense?

Thanks,
John

On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Invoking point_stat multiple times will create and replace the old
_cnt
> and _sl1l2 files right?  At that point, I'll have a bunch of CNT and
SL1L2
>      files and then use stat_analysis to aggregate them?
>
> Justin
>
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 6, 2019 1:11 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Yes, that is a long list of fields, but I don't see an obvious way of
> shortening that.  But to do multiple lead times, I'd just call
Point-Stat
> multiple times, once for each lead time, and update the config file
to use
> environment variables for the current time:
>
> fcst = {
>      field = [
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
> ...
>
> Where the calling script sets the ${INIT_TIME} and ${FCST_HR}
environment
> variables.
>
> John
>
> On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John,
> >
> > I managed to scrap together some code to get RAOB stats from CNT
plotted
> > with 95% CI.  Working on Surface stats now.
> >
> > So my configuration file looks like this right now:
> >
> > fcst = {
> >      field = [
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> >      ];
> > }
> >
> > obs = {
> >     field = [
> >         {name = "dptd";level = ["P0.86-1.5"];},
> >         {name = "dptd";level = ["P1.6-2.5"];},
> >         {name = "dptd";level = ["P2.6-3.5"];},
> >         {name = "dptd";level = ["P3.6-4.5"];},
> >         {name = "dptd";level = ["P4.6-6"];},
> >         {name = "dptd";level = ["P6.1-8"];},
> >         {name = "dptd";level = ["P9-15"];},
> >         {name = "dptd";level = ["P16-25"];},
> >         {name = "dptd";level = ["P26-40"];},
> >         {name = "dptd";level = ["P41-65"];},
> >         {name = "dptd";level = ["P66-85"];},
> >         {name = "dptd";level = ["P86-125"];},
> >         {name = "dptd";level = ["P126-175"];},
> >         {name = "dptd";level = ["P176-225"];},
> >         {name = "dptd";level = ["P226-275"];},
> >         {name = "dptd";level = ["P276-325"];},
> >         {name = "dptd";level = ["P326-375"];},
> >         {name = "dptd";level = ["P376-425"];},
> >         {name = "dptd";level = ["P426-475"];},
> >         {name = "dptd";level = ["P476-525"];},
> >         {name = "dptd";level = ["P526-575"];},
> >         {name = "dptd";level = ["P576-625"];},
> >         {name = "dptd";level = ["P626-675"];},
> >         {name = "dptd";level = ["P676-725"];},
> >         {name = "dptd";level = ["P726-775"];},
> >         {name = "dptd";level = ["P776-825"];},
> >         {name = "dptd";level = ["P826-875"];},
> >         {name = "dptd";level = ["P876-912"];},
> >         {name = "dptd";level = ["P913-936"];},
> >         {name = "dptd";level = ["P937-962"];},
> >         {name = "dptd";level = ["P963-987"];},
> >         {name = "dptd";level = ["P988-1006"];},
> >         {name = "dptd";level = ["P1007-1013"];}
> >
> > And I have the data:
> >
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> >
> > for a particular DTG and vertical level.  If I want to run
multiple lead
> > times, it seems like I'll have to copy that long list of fields
for each
> > lead time in the fcst dict and then duplicate the obs dictionary
so that
> > each forecast entry has a corresponding obs level matching range.
Is
> this
> > correct or is there a shorter/better way to do this?
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Tuesday, September 3, 2019 8:36 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > I see that you're plotting RMSE and bias (called ME for Mean Error
in
> MET)
> > in the plots you sent.
> >
> > Table 7.6 of the MET User's Guide (
> >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > )
> > describes the contents of the CNT line type. Both the columns
for
> RMSE
> > and ME are followed by _NCL and _NCU columns which give the
parametric
> > approximation of the confidence interval for those scores.  So
yes, you
> can
> > run Stat-Analysis to aggregate SL1L2 lines together and write the
> > corresponding CNT output line type.
> >
> > The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric
> > confidence intervals for the RMSE statistic and ME_NCL and ME_NCU
columns
> > for the ME statistic.
> >
> > You can change the alpha value for those confidence intervals by
setting:
> > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> >
> > Thanks,
> > John
> >
> >
> > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John,
> > >
> > > This all helps me greatly.  One more question: is there any
> information
> > > in either the CNT or SL1L2 that could give me  confidence
intervals for
> > > each data point?  I'm looking to replicate the attached plot.
Notice
> > that
> > > the individual points could have either a 99, 95 or 90 %
confidence.
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 30, 2019 12:46 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Sounds about right.  Each time you run Grid-Stat or Point-Stat
you can
> > > write the CNT output line type which contains stats like MSE,
ME, MAE,
> > and
> > > RMSE.  And I'd recommend that you also write the SL1L2 line
type as
> > well.
> > >
> > > Then you'd run a stat_analysis job like this:
> > >
> > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > cnt_out.stat
> > >
> > > This job reads any .stat files it finds in "/path/to/stat/data",
reads
> > the
> > > SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV,
> > and
> > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together
> and
> > > write out the corresponding CNT line type to the output file
named
> > > cnt_out.stat.
> > >
> > > John
> > >
> > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu
> > > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > So if I understand what you're saying correctly, then if I
wanted to
> an
> > > > average of 24 hour forecasts over a month long run, then I
would use
> > the
> > > > SL1L2 output to aggregate and produce this average?  Whereas
if I
> used
> > > CNT,
> > > > this would just provide me ~30 individual (per day over a
month) 24
> > hour
> > > > forecast verifications?
> > > >
> > > > On a side note, did we ever go over how to plot the SL1L2 MSE
and
> > biases?
> > > > I am forgetting if we used stat_analysis to produce a plot or
if the
> > plot
> > > > you showed me was just something you guys post processed using
python
> > or
> > > > whatnot.
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > We wrote the SL1L2 partial sums from Point-Stat because they
can be
> > > > aggregated together by the stat-analysis tool over multiple
days or
> > > cases.
> > > >
> > > > If you're interested in continuous statistics from Point-Stat,
I'd
> > > > recommend writing the CNT line type (which has the stats
computed for
> > > that
> > > > single run) and the SL1L2 line type (so that you can aggregate
them
> > > > together in stat-analysis or METviewer).
> > > >
> > > > The other alternative is looking at the average of the daily
> statistics
> > > > scores.  For RMSE, the average of the daily RMSE is equal to
the
> > > aggregated
> > > > score... as long as the number of matched pairs remains
constant day
> to
> > > > day.  But if today you have 98 matched pairs and tomorrow
you
> have
> > > 105,
> > > > then tomorrow's score will have slightly more weight.  The
SL1L2
> lines
> > > are
> > > > aggregated as weighted averages, where the TOTAL column is the
> weight.
> > > And
> > > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > > scores.  Generally, the statisticians recommend this method
over the
> > mean
> > > > of the daily scores.  Neither is "wrong", they just give you
slightly
> > > > different information.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Thanks John.
> > > > >
> > > > > Sorry it's taken me such a long time to get to this.  It's
nearing
> > the
> > > > end
> > > > > of FY19 so I have been finalizing several transition
projects and
> > > haven’t
> > > > > had much time to work on MET recently.  I just picked this
back up
> > and
> > > > have
> > > > > loaded a couple new modules.  Here is what I have to work
with now:
> > > > >
> > > > > 1) intel/xe_2013-sp1-u1
> > > > > 2) netcdf-local/netcdf-met
> > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > 5) udunits/udunits-2.1.24
> > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > >
> > > > >
> > > > > Running
> > > > > > point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig
-v 3
> > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > > >
> > > > > I get many matched pairs.  Here is a sample of what the log
file
> > looks
> > > > > like for one of the pressure ranges I am verifying on:
> > > > >
> > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
dptd/P425-376,
> for
> > > > > observation type radiosonde, over region FULL, for
interpolation
> > method
> > > > > NEAREST(1), using 98 pairs.
> > > > > 15258 DEBUG 3: Number of matched pairs  = 98
> > > > > 15259 DEBUG 3: Observations processed   = 4680328
> > > > > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > > > > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > > > > 15262 DEBUG 3: Rejected: valid time     = 0
> > > > > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > > > > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > > > > 15265 DEBUG 3: Rejected: topography     = 0
> > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > 15268 DEBUG 3: Rejected: message type   = 0
> > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > 15271 DEBUG 3: Rejected: duplicates     = 0
> > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > >=0,
> > > > > observation filtering threshold >=0, and field logic UNION.
> > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> UNION.
> > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > UNION.
> > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > >=0,
> > > > > observation filtering threshold >=0, and field logic UNION.
> > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> UNION.
> > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > UNION.
> > > > > 15280 DEBUG 2:
> > > > > 15281 DEBUG 2:
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > >
> > > > > I am going to work on processing these point stat files to
create
> > those
> > > > > vertical raob plots we had a discussion about.  I remember
us
> talking
> > > > about
> > > > > the partial sums file.  Why did we choose to go the route of
> > producing
> > > > > partial sums then feeding that into series analysis to
generate
> bias
> > > and
> > > > > MSE?  It looks like bias and MSE both exist within the CNT
line
> type
> > > > (MBIAS
> > > > > and MSE)?
> > > > >
> > > > >
> > > > > Justin
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Great, thanks for sending me the sample data.  Yes, I was
able to
> > > > replicate
> > > > > the segfault.  The good news is that this is caused by a
simple
> typo
> > > > that's
> > > > > easy to fix.  If you look in the "obs.field" entry of the
> > relhumConfig
> > > > > file, you'll see an empty string for the last field listed:
> > > > >
> > > > > *obs = {    field = [*
> > > > >
> > > > >
> > > > >
> > > > > *         ...        {name = "dptd";level = ["P988-1006"];},
> > > > {name =
> > > > > "";level = ["P1007-1013"];}    ];*
> > > > > If you change that empty string to "dptd", the segfault will
go
> > away:*
> > > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion
> (in
> > 2
> > > > > minutes 48 seconds on my desktop machine), but it produced 0
> matched
> > > > > pairs.  They were discarded because of the valid times (seen
using
> > -v 3
> > > > > command line option to Point-Stat).  The ob file you sent is
named
> "
> > > > > raob_2015020412.nc" but the actual times in that file are
for
> > > > > "20190426_120000":
> > > > >
> > > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> > http://raob_2015020412.nc
> > > >*
> > > > >
> > > > > * hdr_vld_table =  "20190426_120000" ;*
> > > > >
> > > > > So please be aware of that discrepancy.  To just produce
some
> matched
> > > > > pairs, I told Point-Stat to use the valid times of the data:
> > > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > > > -obs_valid_end 20190426_120000*
> > > > >
> > > > > But I still get 0 matched pairs.  This time, it's because of
bad
> > > forecast
> > > > > values:
> > > > >    *DEBUG 3: Rejected: bad fcst value = 55*
> > > > >
> > > > > Taking a step back... let's run one of these fields through
> > > > > plot_data_plane, which results in an error:
> > > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
<http://plot.ps>
> > > > > 'name="./read_NRL_binary.py
> > > > >
> > > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > > ERROR  : DataPlane::two_to_one() -> range check error: (Nx,
Ny) =
> > (97,
> > > > 97),
> > > > > (x, y) = (97, 0)
> > > > >
> > > > > While the numpy object is 97x97, the grid is specified as
being
> > 118x118
> > > > in
> > > > > the python script ('nx': 118, 'ny': 118).
> > > > >
> > > > > Just to get something working, I modified the nx and ny in
the
> python
> > > > > script:
> > > > >        'nx':97,
> > > > >        'ny':97,
> > > > > Rerunning again, I still didn't get any matched pairs.
> > > > >
> > > > > So I'd suggest...
> > > > > - Fix the typo in the config file.
> > > > > - Figure out the discrepancy between the obs file name
timestamp
> and
> > > the
> > > > > data in that file.
> > > > > - Make sure the grid information is consistent with the data
in the
> > > > python
> > > > > script.
> > > > >
> > > > > Obviously though, we don't want the code to be segfaulting in
any
> > > > > condition.  So next, I tested using met-8.1 with that empty
string.
> > > This
> > > > > time it does run with no segfault, but prints a warning
about the
> > empty
> > > > > string.
> > > > >
> > > > > Hope that helps.
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Hey John,
> > > > > >
> > > > > > I've put my data in tsu_data_20190815/ under met_help.
> > > > > >
> > > > > > I am running  met-8.0/met-8.0-with-grib2-support and have
> provided
> > > > > > everything
> > > > > > on that list you've provided me.  Let me know if you're
able to
> > > > replicate
> > > > > > it
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Well that doesn't seem to be very helpful of Point-Stat at
all.
> > > There
> > > > > > isn't much jumping out at me from the log messages you
sent.  In
> > > fact,
> > > > I
> > > > > > hunted around for the DEBUG(7) log message but couldn't
find
> where
> > in
> > > > the
> > > > > > code it's being written.  Are you able to send me some
sample
> data
> > to
> > > > > > replicate this behavior?
> > > > > >
> > > > > > I'd need to know...
> > > > > > - What version of MET are you running.
> > > > > > - A copy of your Point-Stat config file.
> > > > > > - The python script that you're running.
> > > > > > - The input file for that python script.
> > > > > > - The NetCDF point observation file you're passing to
Point-Stat.
> > > > > >
> > > > > > If I can replicate the behavior here, it should be easy to
run it
> > in
> > > > the
> > > > > > debugger and figure it out.
> > > > > >
> > > > > > You can post data to our anonymous ftp site as described
in "How
> to
> > > > send
> > > > > us
> > > > > > data":
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > > > > Transaction: Ticket created by
justin.tsu at nrlmry.navy.mil
> > > > > > >        Queue: met_help
> > > > > > >      Subject: point_stat seg faulting
> > > > > > >        Owner: Nobody
> > > > > > >   Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > >       Status: new
> > > > > > >  Ticket <URL:
> > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Hey John,
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > I'm trying to extrapolate the production of vertical
raob
> > > > verification
> > > > > > > plots
> > > > > > > using point_stat and stat_analysis like we did together
for
> winds
> > > but
> > > > > for
> > > > > > > relative humidity now.  But when I run point_stat, it
seg
> faults
> > > > > without
> > > > > > > much explanation
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > DEBUG 2:
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > ----
> > > > > > >
> > > > > > > DEBUG 2:
> > > > > > >
> > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > >
> > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels,
0
> > > climatology
> > > > > > mean
> > > > > > > levels, and 0 climatology standard deviation levels.
> > > > > > >
> > > > > > > DEBUG 2:
> > > > > > >
> > > > > > > DEBUG 2:
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > ----
> > > > > > >
> > > > > > > DEBUG 2:
> > > > > > >
> > > > > > > DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > > >
> > > > > > > DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > > valid_time: 1
> > > > > > >
> > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > > > > PYTHON_NUMPY
> > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > > > > ./out/point_stat.log
> > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > From my log file:
> > > > > > >
> > > > > > > 607 DEBUG 2:
> > > > > > >
> > > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > > >
> > > > > > > 609 DEBUG 7:     tbl dims: messge_type: 1  station id:
617
> > > > > valid_time: 1
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Any help would be much appreciated
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Justin Tsu
> > > > > > >
> > > > > > > Marine Meteorology Division
> > > > > > >
> > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > >
> > > > > > > Building 704 Room 212
> > > > > > >
> > > > > > > Naval Research Laboratory, Code 7531
> > > > > > >
> > > > > > > 7 Grace Hopper Avenue
> > > > > > >
> > > > > > > Monterey, CA 93943-5502
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Ph. (831) 656-4111
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>

------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Mon Sep 09 16:56:17 2019

Hey John,

That makes sense.  The way that I've set up my config file is as
follows:
fcst = {
     field = [
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
     ];
}
obs = {
    field = [
        {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
    ];
}
message_type   = [ "${MSG_TYPE}" ];

The environment variables I'm setting in the wrapper script are LEV,
INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE.  In this way, it seems
like I will only be able to run point_stat for a single elevation and
a single lead time.  Do you recommend this?  Or should I put all the
elevations for a single lead time in one pass of point_stat?
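
For reference, the wrapper I have boils down to something like this
(paths trimmed and the level values are just placeholders; this is only
a sketch of the setup I described above):

  #!/bin/bash
  # OBFILE points at the NetCDF point obs file, CONFIG at this config file
  export INIT_TIME=2015080106
  export MSG_TYPE=radiosonde
  # one elevation (LEV plus its obs matching range LEV1-LEV2) per pass
  export LEV=000400 LEV1=376 LEV2=425
  for FCST_HR in 00120000 00240000 00360000; do
    export FCST_HR
    point_stat PYTHON_NUMPY ${OBFILE} ${CONFIG} -outdir ./out/point_stat -v 3
  done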

So my config file will look like something like this...
fcst = {
     field = [
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
        {name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
        ... etc.
     ];
}
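
And I assume the obs dictionary would get expanded the same way, one
"dptd" entry per forecast level (in the same order) so the pairs still
line up, something like:

obs = {
    field = [
        {name = "dptd";level = ["P0.86-1.5"];},
        {name = "dptd";level = ["P1.6-2.5"];},
        ... etc., one entry for each forecast level above ...
    ];
}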

Also, I am not sure what happened, but when I run point_stat now I am
getting this error again:
ERROR  : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
abbreviation 'dptd' for table version 2
This makes me think that the obs_var name is wrong, but
ncdump -v obs_var raob_*.nc gives me  obs_var =
  "ws",
  "wdir",
  "t",
  "dptd",
  "pres",
  "ght" ;
So clearly dptd exists.

Justin



-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Friday, September 6, 2019 1:40 PM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting

Justin,

Here's a sample Point-Stat output file name:
 point_stat_360000L_20070331_120000V.stat

The "360000L" indicates that this is output for a 36-hour forecast.
And
the "20070331_120000V" timestamp is the valid time.

If you run Point-Stat once for each forecast lead time, the timestamps
should be different and they should not clobber eachother.

But let's say you don't want to run Point-Stat or Grid-Stat multiple
times
with the same timing info.  The "output_prefix" config file entry is
used
to customize the output file names to prevent them from clobbering
eachother.  For example, setting:
  output_prefix="RUN1";
Would result in files named "
point_stat_RUN1_360000L_20070331_120000V.stat".

Make sense?

Thanks,
John

On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Invoking point_stat multiple times will create and replace the old
_cnt
> and _sl1l2 files right?  At that point, I'll have a bunch of CNT and
SL1L2
>      files and then use stat_analysis to aggregate them?
>
> Justin
>
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 6, 2019 1:11 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Yes, that is a long list of fields, but I don't see an obvious way of
> shortening that.  But to do multiple lead times, I'd just call
Point-Stat
> multiple times, once for each lead time, and update the config file
to use
> environment variables for the current time:
>
> fcst = {
>      field = [
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
> ...
>
> Where the calling script sets the ${INIT_TIME} and ${FCST_HR}
environment
> variables.
>
> John
>
> On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John,
> >
> > I managed to scrap together some code to get RAOB stats from CNT
plotted
> > with 95% CI.  Working on Surface stats now.
> >
> > So my configuration file looks like this right now:
> >
> > fcst = {
> >      field = [
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> >      ];
> > }
> >
> > obs = {
> >     field = [
> >         {name = "dptd";level = ["P0.86-1.5"];},
> >         {name = "dptd";level = ["P1.6-2.5"];},
> >         {name = "dptd";level = ["P2.6-3.5"];},
> >         {name = "dptd";level = ["P3.6-4.5"];},
> >         {name = "dptd";level = ["P4.6-6"];},
> >         {name = "dptd";level = ["P6.1-8"];},
> >         {name = "dptd";level = ["P9-15"];},
> >         {name = "dptd";level = ["P16-25"];},
> >         {name = "dptd";level = ["P26-40"];},
> >         {name = "dptd";level = ["P41-65"];},
> >         {name = "dptd";level = ["P66-85"];},
> >         {name = "dptd";level = ["P86-125"];},
> >         {name = "dptd";level = ["P126-175"];},
> >         {name = "dptd";level = ["P176-225"];},
> >         {name = "dptd";level = ["P226-275"];},
> >         {name = "dptd";level = ["P276-325"];},
> >         {name = "dptd";level = ["P326-375"];},
> >         {name = "dptd";level = ["P376-425"];},
> >         {name = "dptd";level = ["P426-475"];},
> >         {name = "dptd";level = ["P476-525"];},
> >         {name = "dptd";level = ["P526-575"];},
> >         {name = "dptd";level = ["P576-625"];},
> >         {name = "dptd";level = ["P626-675"];},
> >         {name = "dptd";level = ["P676-725"];},
> >         {name = "dptd";level = ["P726-775"];},
> >         {name = "dptd";level = ["P776-825"];},
> >         {name = "dptd";level = ["P826-875"];},
> >         {name = "dptd";level = ["P876-912"];},
> >         {name = "dptd";level = ["P913-936"];},
> >         {name = "dptd";level = ["P937-962"];},
> >         {name = "dptd";level = ["P963-987"];},
> >         {name = "dptd";level = ["P988-1006"];},
> >         {name = "dptd";level = ["P1007-1013"];}
> >
> > And I have the data:
> >
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> >
> > for a particular DTG and vertical level.  If I want to run
multiple lead
> > times, it seems like I'll have to copy that long list of fields
for each
> > lead time in the fcst dict and then duplicate the obs dictionary
so that
> > each forecast entry has a corresponding obs level matching range.
Is
> this
> > correct or is there a shorter/better way to do this?
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Tuesday, September 3, 2019 8:36 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > I see that you're plotting RMSE and bias (called ME for Mean Error
in
> MET)
> > in the plots you sent.
> >
> > Table 7.6 of the MET User's Guide (
> >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > )
> > describes the contents of the CNT line type. Both the columns
for
> RMSE
> > and ME are followed by _NCL and _NCU columns which give the
parametric
> > approximation of the confidence interval for those scores.  So
yes, you
> can
> > run Stat-Analysis to aggregate SL1L2 lines together and write the
> > corresponding CNT output line type.
> >
> > The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric
> > confidence intervals for the RMSE statistic and ME_NCL and ME_NCU
columns
> > for the ME statistic.
> >
> > You can change the alpha value for those confidence intervals by
setting:
> > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> >
> > Thanks,
> > John
> >
> >
> > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John,
> > >
> > > This all helps me greatly.  One more question: is there any
> information
> > > in either the CNT or SL1L2 that could give me  confidence
intervals for
> > > each data point?  I'm looking to replicate the attached plot.
Notice
> > that
> > > the individual points could have either a 99, 95 or 90 %
confidence.
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 30, 2019 12:46 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Sounds about right.  Each time you run Grid-Stat or Point-Stat
you can
> > > write the CNT output line type which contains stats like MSE,
ME, MAE,
> > and
> > > RMSE.  And I'd recommend that you also write the SL1L2 line
type as
> > well.
> > >
> > > Then you'd run a stat_analysis job like this:
> > >
> > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > cnt_out.stat
> > >
> > > This job reads any .stat files it finds in "/path/to/stat/data",
reads
> > the
> > > SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV,
> > and
> > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together
> and
> > > write out the corresponding CNT line type to the output file
named
> > > cnt_out.stat.
> > >
> > > John
> > >
> > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu
> > > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > So if I understand what you're saying correctly, then if I
wanted to
> an
> > > > average of 24 hour forecasts over a month long run, then I
would use
> > the
> > > > SL1L2 output to aggregate and produce this average?  Whereas
if I
> used
> > > CNT,
> > > > this would just provide me ~30 individual (per day over a
month) 24
> > hour
> > > > forecast verifications?
> > > >
> > > > On a side note, did we ever go over how to plot the SL1L2 MSE
and
> > biases?
> > > > I am forgetting if we used stat_analysis to produce a plot or
if the
> > plot
> > > > you showed me was just something you guys post processed using
python
> > or
> > > > whatnot.
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > We wrote the SL1L2 partial sums from Point-Stat because they
can be
> > > > aggregated together by the stat-analysis tool over multiple
days or
> > > cases.
> > > >
> > > > If you're interested in continuous statistics from Point-Stat,
I'd
> > > > recommend writing the CNT line type (which has the stats
computed for
> > > that
> > > > single run) and the SL1L2 line type (so that you can aggregate
them
> > > > together in stat-analysis or METviewer).
> > > >
> > > > The other alternative is looking at the average of the daily
> statistics
> > > > scores.  For RMSE, the average of the daily RMSE is equal to
the
> > > aggregated
> > > > score... as long as the number of matched pairs remains
constant day
> to
> > > > day.  But if today you have 98 matched pairs and tomorrow
you
> have
> > > 105,
> > > > then tomorrow's score will have slightly more weight.  The
SL1L2
> lines
> > > are
> > > > aggregated as weighted averages, where the TOTAL column is the
> weight.
> > > And
> > > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > > scores.  Generally, the statisticians recommend this method
over the
> > mean
> > > > of the daily scores.  Neither is "wrong", they just give you
slightly
> > > > different information.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Thanks John.
> > > > >
> > > > > Sorry it's taken me such a long time to get to this.  It's
nearing
> > the
> > > > end
> > > > > of FY19 so I have been finalizing several transition
projects and
> > > haven’t
> > > > > had much time to work on MET recently.  I just picked this
back up
> > and
> > > > have
> > > > > loaded a couple new modules.  Here is what I have to work
with now:
> > > > >
> > > > > 1) intel/xe_2013-sp1-u1
> > > > > 2) netcdf-local/netcdf-met
> > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > 5) udunits/udunits-2.1.24
> > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > >
> > > > >
> > > > > Running
> > > > > > point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig
-v 3
> > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > > >
> > > > > I get many matched pairs.  Here is a sample of what the log
file
> > looks
> > > > > like for one of the pressure ranges I am verifying on:
> > > > >
> > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
dptd/P425-376,
> for
> > > > > observation type radiosonde, over region FULL, for
interpolation
> > method
> > > > > NEAREST(1), using 98 pairs.
> > > > > 15258 DEBUG 3: Number of matched pairs  = 98
> > > > > 15259 DEBUG 3: Observations processed   = 4680328
> > > > > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > > > > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > > > > 15262 DEBUG 3: Rejected: valid time     = 0
> > > > > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > > > > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > > > > 15265 DEBUG 3: Rejected: topography     = 0
> > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > 15268 DEBUG 3: Rejected: message type   = 0
> > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > 15271 DEBUG 3: Rejected: duplicates     = 0
> > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > >=0,
> > > > > observation filtering threshold >=0, and field logic UNION.
> > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> UNION.
> > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > UNION.
> > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > >=0,
> > > > > observation filtering threshold >=0, and field logic UNION.
> > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> UNION.
> > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > UNION.
> > > > > 15280 DEBUG 2:
> > > > > 15281 DEBUG 2:
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > >
> > > > > I am going to work on processing these point stat files to
create
> > those
> > > > > vertical raob plots we had a discussion about.  I remember
us
> talking
> > > > about
> > > > > the partial sums file.  Why did we choose to go the route of
> > producing
> > > > > partial sums then feeding that into series analysis to
generate
> bias
> > > and
> > > > > MSE?  It looks like bias and MSE both exist within the CNT
line
> type
> > > > (MBIAS
> > > > > and MSE)?
> > > > >
> > > > >
> > > > > Justin
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Great, thanks for sending me the sample data.  Yes, I was
able to
> > > > replicate
> > > > > the segfault.  The good news is that this is caused by a
simple
> typo
> > > > that's
> > > > > easy to fix.  If you look in the "obs.field" entry of the
> > relhumConfig
> > > > > file, you'll see an empty string for the last field listed:
> > > > >
> > > > > obs = {
> > > > >     field = [
> > > > >          ...
> > > > >         {name = "dptd";level = ["P988-1006"];},
> > > > >         {name = "";level = ["P1007-1013"];}
> > > > >     ];
> > > > > If you change that empty string to "dptd", the segfault will go away:
> > > > >         {name = "dptd";level = ["P1007-1013"];}
> > > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion
> (in
> > 2
> > > > > minutes 48 seconds on my desktop machine), but it produced 0
> matched
> > > > > pairs.  They were discarded because of the valid times (seen
using
> > -v 3
> > > > > command line option to Point-Stat).  The ob file you sent is
named
> "
> > > > > raob_2015020412.nc" but the actual times in that file are
for
> > > > > "20190426_120000":
> > > > >
> > > > > ncdump -v hdr_vld_table raob_2015020412.nc
> > > > >
> > > > >  hdr_vld_table = "20190426_120000" ;
> > > > >
> > > > > So please be aware of that discrepancy.  To just produce
some
> matched
> > > > > pairs, I told Point-Stat to use the valid times of the data:
> > > > > met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc relhumConfig \
> > > > >   -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000 \
> > > > >   -obs_valid_end 20190426_120000
> > > > >
> > > > > But I still get 0 matched pairs.  This time, it's because of
bad
> > > forecast
> > > > > values:
> > > > >    *DEBUG 3: Rejected: bad fcst value = 55*
> > > > >
> > > > > Taking a step back... let's run one of these fields through
> > > > > plot_data_plane, which results in an error:
> > > > > met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
> > > > > 'name="./read_NRL_binary.py
> > > > > ./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'
> > > > > ERROR  : DataPlane::two_to_one() -> range check error: (Nx,
Ny) =
> > (97,
> > > > 97),
> > > > > (x, y) = (97, 0)
> > > > >
> > > > > While the numpy object is 97x97, the grid is specified as
being
> > 118x118
> > > > in
> > > > > the python script ('nx': 118, 'ny': 118).
> > > > >
> > > > > Just to get something working, I modified the nx and ny in
the
> python
> > > > > script:
> > > > >        'nx':97,
> > > > >        'ny':97,
> > > > > Rerunning again, I still didn't get any matched pairs.
> > > > >
> > > > > So I'd suggest...
> > > > > - Fix the typo in the config file.
> > > > > - Figure out the discrepancy between the obs file name
timestamp
> and
> > > the
> > > > > data in that file.
> > > > > - Make sure the grid information is consistent with the data
in the
> > > > python
> > > > > script.
> > > > >
> > > > > Obviously though, we don't want to code to be segfaulting in
any
> > > > > condition.  So next, I tested using met-8.1 with that empty
string.
> > > This
> > > > > time it does run with no segfault, but prints a warning
about the
> > empty
> > > > > string.
> > > > >
> > > > > Hope that helps.
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Hey John,
> > > > > >
> > > > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > > > >
> > > > > > I am running  met-8.0/met-8.0-with-grib2-support and have
> provided
> > > > > > everything
> > > > > > on that list you've provided me.  Let me know if you're
able to
> > > > replicate
> > > > > > it
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Well that doesn't seem to be very helpful of Point-Stat at
all.
> > > There
> > > > > > isn't much jumping out at me from the log messages you
sent.  In
> > > fact,
> > > > I
> > > > > > hunted around for the DEBUG(7) log message but couldn't
find
> where
> > in
> > > > the
> > > > > > code it's being written.  Are you able to send me some
sample
> data
> > to
> > > > > > replicate this behavior?
> > > > > >
> > > > > > I'd need to know...
> > > > > > - What version of MET are you running.
> > > > > > - A copy of your Point-Stat config file.
> > > > > > - The python script that you're running.
> > > > > > - The input file for that python script.
> > > > > > - The NetCDF point observation file you're passing to
Point-Stat.
> > > > > >
> > > > > > If I can replicate the behavior here, it should be easy to
run it
> > in
> > > > the
> > > > > > debugger and figure it out.
> > > > > >
> > > > > > You can post data to our anonymous ftp site as described
in "How
> to
> > > > send
> > > > > us
> > > > > > data":
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > > > > Transaction: Ticket created by
justin.tsu at nrlmry.navy.mil
> > > > > > >        Queue: met_help
> > > > > > >      Subject: point_stat seg faulting
> > > > > > >        Owner: Nobody
> > > > > > >   Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > >       Status: new
> > > > > > >  Ticket <URL:
> > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Hey John,
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > I'm trying to extrapolate the production of vertical
raob
> > > > verification
> > > > > > > plots
> > > > > > > using point_stat and stat_analysis like we did together
for
> winds
> > > but
> > > > > for
> > > > > > > relative humidity now.  But when I run point_stat, it
seg
> faults
> > > > > without
> > > > > > > much explanation
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > DEBUG 2:
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > ----
> > > > > > >
> > > > > > > DEBUG 2:
> > > > > > >
> > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > >
> > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels,
0
> > > climatology
> > > > > > mean
> > > > > > > levels, and 0 climatology standard deviation levels.
> > > > > > >
> > > > > > > DEBUG 2:
> > > > > > >
> > > > > > > DEBUG 2:
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > ----
> > > > > > >
> > > > > > > DEBUG 2:
> > > > > > >
> > > > > > > DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > > >
> > > > > > > DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > > valid_time: 1
> > > > > > >
> > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > > > > PYTHON_NUMPY
> > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > > > > ./out/point_stat.log
> > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > From my log file:
> > > > > > >
> > > > > > > 607 DEBUG 2:
> > > > > > >
> > > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > > >
> > > > > > > 609 DEBUG 7:     tbl dims: messge_type: 1  station id:
617
> > > > > valid_time: 1
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Any help would be much appreciated
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Justin Tsu
> > > > > > >
> > > > > > > Marine Meteorology Division
> > > > > > >
> > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > >
> > > > > > > Building 704 Room 212
> > > > > > >
> > > > > > > Naval Research Laboratory, Code 7531
> > > > > > >
> > > > > > > 7 Grace Hopper Avenue
> > > > > > >
> > > > > > > Monterey, CA 93943-5502
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Ph. (831) 656-4111
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>


------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Sep 13 16:46:25 2019

Justin,

Sorry for the delay.  I was in DC on travel this week until today.

It's really up to you how you'd like to configure it.  Unless it's too
unwieldy, I do think I'd try verifying all levels at once in a single
call
to Point-Stat.  All those observations are contained in the same point
observation file.  If you verify each level in a separate call to
Point-Stat, you'll be looping through and processing those obs many,
many
times, which will be relatively slow.  From a processing perspective,
it'd
be more efficient to process them all at once, in a single call to
Point-Stat.

But you have to balance runtime efficiency against ease of scripting and
configuration.  And that's why it's up to you to decide which you
prefer.
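
For reference, a minimal wrapper sketch of that approach: one Point-Stat
call per lead time, with all of the vertical levels handled inside a
single config file.  The DTG, lead times, and file names below are
placeholders, and it assumes the INIT_TIME/FCST_HR environment variables
used elsewhere in this thread.

#!/bin/sh
# One Point-Stat run per lead time; the config file (dwptdpConfig) lists
# every fcst/obs level pair and references ${INIT_TIME} and ${FCST_HR}.
export INIT_TIME=2015080106                 # placeholder DTG
OBFILE=raob_2015080106.nc                   # placeholder point obs file
CONFIG=dwptdpConfig
for FCST_HR in 00000000 00120000 00240000 00360000 00480000; do
    export FCST_HR
    point_stat PYTHON_NUMPY ${OBFILE} ${CONFIG} -v 3 \
        -outdir ./out/point_stat -log ./out/point_stat.log
done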

Hope that helps.

Thanks,
John

On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hey John,
>
> That makes sense.  The way that I've set up my config file is as
follows:
> fcst = {
>      field = [
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
>      ];
> }
> obs = {
>     field = [
>         {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
>     ];
> }
> message_type   = [ "${MSG_TYPE}" ];
>
> The environmental variables I'm setting in the wrapper script are
LEV,
> INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE.  In this way, it seems
like I
> will only be able to run point_Stat for a single elevation and a
single
> lead time.  Do you recommend this? Or Should I put all the
elevations for a
> single lead time in one pass of point_stat?
>
> So my config file will look like something like this...
> fcst = {
>      field = [
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> ... etc.
>      ];
> }
>
> Also, I am not sure what happened, but when I run point_stat now I am
> getting that error
> ERROR  : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> abbreviation 'dptd' for table version 2
> Again.  This makes me think that the obs_var name is wrong, but
ncdump -v
> obs_var raob_*.nc gives me  obs_var =
>   "ws",
>   "wdir",
>   "t",
>   "dptd",
>   "pres",
>   "ght" ;
> So clearly dptd exists.
>
> Justin
>
>
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 6, 2019 1:40 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Here's a sample Point-Stat output file name:
>  point_stat_360000L_20070331_120000V.stat
>
> The "360000L" indicates that this is output for a 36-hour forecast.
And
> the "20070331_120000V" timestamp is the valid time.
>
> If you run Point-Stat once for each forecast lead time, the
timestamps
> should be different and they should not clobber eachother.
>
> But let's say you don't want to run Point-Stat or Grid-Stat multiple
times
> with the same timing info.  The "output_prefix" config file entry is
used
> to customize the output file names to prevent them from clobbering
> eachother.  For example, setting:
>   output_prefix="RUN1";
> Would result in files named "
> point_stat_RUN1_360000L_20070331_120000V.stat".
>
> Make sense?
>
> Thanks,
> John
>
> On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Invoking point_stat multiple times will create and replace the old
_cnt
> > and _sl1l2 files right?  At that point, I'll have a bunch of CNT
and
> SL1L2
> >      files and then use stat_analysis to aggregate them?
> >
> > Justin
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, September 6, 2019 1:11 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Yes, that is a long list of fields, but I don't see a way obvious
way of
> > shortening that.  But to do multiple lead times, I'd just call
Point-Stat
> > multiple times, once for each lead time, and update the config
file to
> use
> > environment variables for the current time:
> >
> > fcst = {
> >      field = [
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> > ...
> >
> > Where the calling scripts sets the ${INIT_TIME} and ${FCST_HR}
> environment
> > variables.
> >
> > John
> >
> > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John,
> > >
> > > I managed to scrap together some code to get RAOB stats from CNT
> plotted
> > > with 95% CI.  Working on Surface stats now.
> > >
> > > So my configuration file looks like this right now:
> > >
> > > fcst = {
> > >      field = [
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > >      ];
> > > }
> > >
> > > obs = {
> > >     field = [
> > >         {name = "dptd";level = ["P0.86-1.5"];},
> > >         {name = "dptd";level = ["P1.6-2.5"];},
> > >         {name = "dptd";level = ["P2.6-3.5"];},
> > >         {name = "dptd";level = ["P3.6-4.5"];},
> > >         {name = "dptd";level = ["P4.6-6"];},
> > >         {name = "dptd";level = ["P6.1-8"];},
> > >         {name = "dptd";level = ["P9-15"];},
> > >         {name = "dptd";level = ["P16-25"];},
> > >         {name = "dptd";level = ["P26-40"];},
> > >         {name = "dptd";level = ["P41-65"];},
> > >         {name = "dptd";level = ["P66-85"];},
> > >         {name = "dptd";level = ["P86-125"];},
> > >         {name = "dptd";level = ["P126-175"];},
> > >         {name = "dptd";level = ["P176-225"];},
> > >         {name = "dptd";level = ["P226-275"];},
> > >         {name = "dptd";level = ["P276-325"];},
> > >         {name = "dptd";level = ["P326-375"];},
> > >         {name = "dptd";level = ["P376-425"];},
> > >         {name = "dptd";level = ["P426-475"];},
> > >         {name = "dptd";level = ["P476-525"];},
> > >         {name = "dptd";level = ["P526-575"];},
> > >         {name = "dptd";level = ["P576-625"];},
> > >         {name = "dptd";level = ["P626-675"];},
> > >         {name = "dptd";level = ["P676-725"];},
> > >         {name = "dptd";level = ["P726-775"];},
> > >         {name = "dptd";level = ["P776-825"];},
> > >         {name = "dptd";level = ["P826-875"];},
> > >         {name = "dptd";level = ["P876-912"];},
> > >         {name = "dptd";level = ["P913-936"];},
> > >         {name = "dptd";level = ["P937-962"];},
> > >         {name = "dptd";level = ["P963-987"];},
> > >         {name = "dptd";level = ["P988-1006"];},
> > >         {name = "dptd";level = ["P1007-1013"];}
> > >
> > > And I have the data:
> > >
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > >
> > > for a particular DTG and vertical level.  If I want to run
multiple
> lead
> > > times, it seems like I'll have to copy that long list of fields
for
> each
> > > lead time in the fcst dict and then duplicate the obs dictionary
so
> that
> > > each forecast entry has a corresponding obs level matching
range.  Is
> > this
> > > correct or is there a shorter/better way to do this?
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > I see that you're plotting RMSE and bias (called ME for Mean
Error in
> > MET)
> > > in the plots you sent.
> > >
> > > Table 7.6 of the MET User's Guide (
> > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > )
> > > describes the contents of the CNT line type type. Bot the
columns for
> > RMSE
> > > and ME are followed by _NCL and _NCU columns which give the
parametric
> > > approximation of the confidence interval for those scores.  So
yes, you
> > can
> > > run Stat-Analysis to aggregate SL1L2 lines together and write
the
> > > corresponding CNT output line type.
> > >
> > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper
> parametric
> > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> columns
> > > for the ME statistic.
> > >
> > > You can change the alpha value for those confidence intervals by
> setting:
> > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> > >
> > > Thanks,
> > > John
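
For example, a sketch of that Stat-Analysis aggregation job with the
confidence-interval alpha set explicitly (the -lookin path below is a
placeholder):

stat_analysis -lookin /path/to/stat/data \
   -job aggregate_stat -line_type SL1L2 -out_line_type CNT \
   -by FCST_VAR,FCST_LEV,FCST_LEAD \
   -out_alpha 0.05 -out_stat cnt_out.stat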
> > >
> > >
> > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John,
> > > >
> > > > This all helps me greatly.  One more questions: is there any
> > information
> > > > in either the CNT or SL1L2 that could give me  confidence
intervals
> for
> > > > each data point?  I'm looking to replicate the attached plot.
Notice
> > > that
> > > > the individual points could have either a 99, 95 or 90 %
confidence.
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Sounds about right.  Each time you run Grid-Stat or Point-Stat
you
> can
> > > > write the CNT output line type which contains stats like MSE,
ME,
> MAE,
> > > and
> > > > RMSE.  And I'm recommended that you also write the SL1L2 line
type as
> > > well.
> > > >
> > > > Then you'd run a stat_analysis job like this:
> > > >
> > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
> -line_type
> > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > > cnt_out.stat
> > > >
> > > > This job reads any .stat files it finds in
"/path/to/stat/data",
> reads
> > > the
> > > > SL1L2 line type, and for each unique combination of FCST_VAR,
> FCST_LEV,
> > > and
> > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together
> > and
> > > > write out the corresponding CNT line type to the output file
named
> > > > cnt_out.stat.
> > > >
> > > > John
> > > >
> > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu
> > > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > So if I understand what you're saying correctly, then if I
wanted
> to
> > an
> > > > > average of 24 hour forecasts over a month long run, then I
would
> use
> > > the
> > > > > SL1L2 output to aggregate and produce this average?  Whereas
if I
> > used
> > > > CNT,
> > > > > this would just provide me ~30 individual (per day over a
month) 24
> > > hour
> > > > > forecast verifications?
> > > > >
> > > > > On a side note, did we ever go over how to plot the SL1L2
MSE and
> > > biases?
> > > > > I am forgetting if we used stat_analysis to produce a plot
or if
> the
> > > plot
> > > > > you showed me was just something you guys post processed
using
> python
> > > or
> > > > > whatnot.
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > We wrote the SL1L2 partial sums from Point-Stat because they
can be
> > > > > aggregated together by the stat-analysis tool over multiple
days or
> > > > cases.
> > > > >
> > > > > If you're interested in continuous statistics from Point-
Stat, I'd
> > > > > recommend writing the CNT line type (which has the stats
computed
> for
> > > > that
> > > > > single run) and the SL1L2 line type (so that you can
aggregate them
> > > > > together in stat-analysis or METviewer).
> > > > >
> > > > > The other alternative is looking at the average of the daily
> > statistics
> > > > > scores.  For RMSE, the average of the daily RMSE is equal to
the
> > > > aggregated
> > > > > score... as long as the number of matched pairs remains
constant
> day
> > to
> > > > > day.  But if one today you have 98 matched pairs and
tomorrow you
> > have
> > > > 105,
> > > > > then tomorrow's score will have slightly more weight.  The
SL1L2
> > lines
> > > > are
> > > > > aggregated as weighted averages, where the TOTAL column is
the
> > weight.
> > > > And
> > > > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > > > scores.  Generally, the statisticians recommend this method
over
> the
> > > mean
> > > > > of the daily scores.  Neither is "wrong", they just give you
> slightly
> > > > > different information.
> > > > >
> > > > > Thanks,
> > > > > John
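
As a concrete illustration of the "average of the daily statistics
scores" alternative, a stat_analysis summary job along these lines could
be used (a sketch; the -lookin path is a placeholder), while the weighted
SL1L2 aggregation is the aggregate_stat job shown elsewhere in this
thread:

stat_analysis -lookin /path/to/stat/data \
   -job summary -line_type CNT -column RMSE

This summarizes the daily RMSE values themselves (mean, standard
deviation, percentiles) rather than recomputing RMSE from the aggregated
partial sums.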
> > > > >
> > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Thanks John.
> > > > > >
> > > > > > Sorry it's taken me such a long time to get to this.  It's
> nearing
> > > the
> > > > > end
> > > > > > of FY19 so I have been finalizing several transition
projects and
> > > > haven’t
> > > > > > had much time to work on MET recently.  I just picked this
back
> up
> > > and
> > > > > have
> > > > > > loaded a couple new modules.  Here is what I have to work
with
> now:
> > > > > >
> > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > 2) netcdf-local/netcdf-met
> > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > 5) udunits/udunits-2.1.24
> > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > >
> > > > > >
> > > > > > Running
> > > > > > > point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig
-v 3
> > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > > > >
> > > > > > I get many matched pairs.  Here is a sample of what the
log file
> > > looks
> > > > > > like for one of the pressure ranges I am verifying on:
> > > > > >
> > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
dptd/P425-376,
> > for
> > > > > > observation type radiosonde, over region FULL, for
interpolation
> > > method
> > > > > > NEAREST(1), using 98 pairs.
> > > > > > 15258 DEBUG 3: Number of matched pairs  = 98
> > > > > > 15259 DEBUG 3: Observations processed   = 4680328
> > > > > > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > > > > > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > > > > > 15262 DEBUG 3: Rejected: valid time     = 0
> > > > > > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > > > > > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > > > > > 15265 DEBUG 3: Rejected: topography     = 0
> > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > 15268 DEBUG 3: Rejected: message type   = 0
> > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > 15271 DEBUG 3: Rejected: duplicates     = 0
> > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > >=0,
> > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > UNION.
> > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > > UNION.
> > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > >=0,
> > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > UNION.
> > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > > UNION.
> > > > > > 15280 DEBUG 2:
> > > > > > 15281 DEBUG 2:
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > >
> > > > > > I am going to work on processing these point stat files to
create
> > > those
> > > > > > vertical raob plots we had a discussion about.  I remember
us
> > talking
> > > > > about
> > > > > > the partial sums file.  Why did we choose to go the route
of
> > > producing
> > > > > > partial sums then feeding that into series analysis to
generate
> > bias
> > > > and
> > > > > > MSE?  It looks like bias and MSE both exist within the CNT
line
> > type
> > > > > (MBIAS
> > > > > > and MSE)?
> > > > > >
> > > > > >
> > > > > > Justin
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Great, thanks for sending me the sample data.  Yes, I was
able to
> > > > > replicate
> > > > > > the segfault.  The good news is that this is caused by a
simple
> > typo
> > > > > that's
> > > > > > easy to fix.  If you look in the "obs.field" entry of the
> > > relhumConfig
> > > > > > file, you'll see an empty string for the last field
listed:
> > > > > >
> > > > > > obs = {
> > > > > >     field = [
> > > > > >          ...
> > > > > >         {name = "dptd";level = ["P988-1006"];},
> > > > > >         {name = "";level = ["P1007-1013"];}
> > > > > >     ];
> > > > > > If you change that empty string to "dptd", the segfault will go away:
> > > > > >         {name = "dptd";level = ["P1007-1013"];}
> > > > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion
> > (in
> > > 2
> > > > > > minutes 48 seconds on my desktop machine), but it produced
0
> > matched
> > > > > > pairs.  They were discarded because of the valid times
(seen
> using
> > > -v 3
> > > > > > command line option to Point-Stat).  The ob file you sent
is
> named
> > "
> > > > > > raob_2015020412.nc" but the actual times in that file are
for
> > > > > > "20190426_120000":
> > > > > >
> > > > > > ncdump -v hdr_vld_table raob_2015020412.nc
> > > > > >
> > > > > >  hdr_vld_table = "20190426_120000" ;
> > > > > >
> > > > > > So please be aware of that discrepancy.  To just produce
some
> > matched
> > > > > > pairs, I told Point-Stat to use the valid times of the
data:
> > > > > > met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc relhumConfig \
> > > > > >   -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000 \
> > > > > >   -obs_valid_end 20190426_120000
> > > > > >
> > > > > > But I still get 0 matched pairs.  This time, it's because
of bad
> > > > forecast
> > > > > > values:
> > > > > >    *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > >
> > > > > > Taking a step back... let's run one of these fields
through
> > > > > > plot_data_plane, which results in an error:
> > > > > > met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
> > > > > > 'name="./read_NRL_binary.py
> > > > > > ./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'
> > > > > > ERROR  : DataPlane::two_to_one() -> range check error:
(Nx, Ny) =
> > > (97,
> > > > > 97),
> > > > > > (x, y) = (97, 0)
> > > > > >
> > > > > > While the numpy object is 97x97, the grid is specified as
being
> > > 118x118
> > > > > in
> > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > >
> > > > > > Just to get something working, I modified the nx and ny in
the
> > python
> > > > > > script:
> > > > > >        'nx':97,
> > > > > >        'ny':97,
> > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > >
> > > > > > So I'd suggest...
> > > > > > - Fix the typo in the config file.
> > > > > > - Figure out the discrepancy between the obs file name
timestamp
> > and
> > > > the
> > > > > > data in that file.
> > > > > > - Make sure the grid information is consistent with the
data in
> the
> > > > > python
> > > > > > script.
> > > > > >
> > > > > > Obviously though, we don't want to code to be segfaulting
in any
> > > > > > condition.  So next, I tested using met-8.1 with that
empty
> string.
> > > > This
> > > > > > time it does run with no segfault, but prints a warning
about the
> > > empty
> > > > > > string.
> > > > > >
> > > > > > Hope that helps.
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Hey John,
> > > > > > >
> > > > > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > > > > >
> > > > > > > I am running  met-8.0/met-8.0-with-grib2-support and
have
> > provided
> > > > > > > everything
> > > > > > > on that list you've provided me.  Let me know if you're
able to
> > > > > replicate
> > > > > > > it
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > Well that doesn't seem to be very helpful of Point-Stat
at all.
> > > > There
> > > > > > > isn't much jumping out at me from the log messages you
sent.
> In
> > > > fact,
> > > > > I
> > > > > > > hunted around for the DEBUG(7) log message but couldn't
find
> > where
> > > in
> > > > > the
> > > > > > > code it's being written.  Are you able to send me some
sample
> > data
> > > to
> > > > > > > replicate this behavior?
> > > > > > >
> > > > > > > I'd need to know...
> > > > > > > - What version of MET are you running.
> > > > > > > - A copy of your Point-Stat config file.
> > > > > > > - The python script that you're running.
> > > > > > > - The input file for that python script.
> > > > > > > - The NetCDF point observation file you're passing to
> Point-Stat.
> > > > > > >
> > > > > > > If I can replicate the behavior here, it should be easy
to run
> it
> > > in
> > > > > the
> > > > > > > debugger and figure it out.
> > > > > > >
> > > > > > > You can post data to our anonymous ftp site as described
in
> "How
> > to
> > > > > send
> > > > > > us
> > > > > > > data":
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted
upon.
> > > > > > > > Transaction: Ticket created by
justin.tsu at nrlmry.navy.mil
> > > > > > > >        Queue: met_help
> > > > > > > >      Subject: point_stat seg faulting
> > > > > > > >        Owner: Nobody
> > > > > > > >   Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > >       Status: new
> > > > > > > >  Ticket <URL:
> > > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Hey John,
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > I'm trying to extrapolate the production of vertical
raob
> > > > > verification
> > > > > > > > plots
> > > > > > > > using point_stat and stat_analysis like we did
together for
> > winds
> > > > but
> > > > > > for
> > > > > > > > relative humidity now.  But when I run point_stat, it
seg
> > faults
> > > > > > without
> > > > > > > > much explanation
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > ----
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > >
> > > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast
levels, 0
> > > > climatology
> > > > > > > mean
> > > > > > > > levels, and 0 climatology standard deviation levels.
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > ----
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > > DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > > > >
> > > > > > > > DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > > > valid_time: 1
> > > > > > > >
> > > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
> point_stat
> > > > > > > > PYTHON_NUMPY
> > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat
-log
> > > > > > > > ./out/point_stat.log
> > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > From my log file:
> > > > > > > >
> > > > > > > > 607 DEBUG 2:
> > > > > > > >
> > > > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
> messages.
> > > > > > > >
> > > > > > > > 609 DEBUG 7:     tbl dims: messge_type: 1  station id:
617
> > > > > > valid_time: 1
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Any help would be much appreciated
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Justin Tsu
> > > > > > > >
> > > > > > > > Marine Meteorology Division
> > > > > > > >
> > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > >
> > > > > > > > Building 704 Room 212
> > > > > > > >
> > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > >
> > > > > > > > 7 Grace Hopper Avenue
> > > > > > > >
> > > > > > > > Monterey, CA 93943-5502
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Ph. (831) 656-4111
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>

------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Tue Oct 01 14:34:01 2019

Hi John,

Apologies for taking such a long time getting back to you.
End-of-fiscal-year work has consumed much of my time, and I have not
been able to work on any of this.

Before planning how to call point_stat to handle the vertical levels, I
need to fix what is going on with my GRIB1 variables.  When I run
point_stat, I keep getting this error:

DEBUG 1: Default Config File: /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
DEBUG 1: User Config File: dwptdpConfig
ERROR  :
ERROR  : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
abbreviation 'dptd' for table version 2
ERROR  :

I remember getting this before, but I don't remember how we fixed it.
I am using met-8.1/met-8.1a-with-grib2-support.

Justin

-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Friday, September 13, 2019 3:46 PM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting

Justin,

Sorry for the delay.  I was in DC on travel this week until today.

It's really up to you how you'd like to configure it.  Unless it's too
unwieldy, I do think I'd try verifying all levels at once in a single
call
to Point-Stat.  All those observations are contained in the same point
observation file.  If you verify each level in a separate call to
Point-Stat, you'll be looping through and processing those obs many,
many
times, which will be relatively slow.  From a processing perspective,
it'd
be more efficient to process them all at once, in a single call to
Point-Stat.

But you have to balance runtime efficiency against ease of scripting and
configuration.  And that's why it's up to you to decide which you
prefer.

Hope that helps.

Thanks,
John

On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hey John,
>
> That makes sense.  The way that I've set up my config file is as
follows:
> fcst = {
>      field = [
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
>      ];
> }
> obs = {
>     field = [
>         {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
>     ];
> }
> message_type   = [ "${MSG_TYPE}" ];
>
> The environmental variables I'm setting in the wrapper script are
LEV,
> INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE.  In this way, it seems
like I
> will only be able to run point_Stat for a single elevation and a
single
> lead time.  Do you recommend this? Or Should I put all the
elevations for a
> single lead time in one pass of point_stat?
>
> So my config file will look like something like this...
> fcst = {
>      field = [
>         {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> ... etc.
>      ];
> }
>
> Also, I am not sure what happened, but when I run point_stat now I am
> getting that error
> ERROR  : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> abbreviation 'dptd' for table version 2
> Again.  This makes me think that the obs_var name is wrong, but
ncdump -v
> obs_var raob_*.nc gives me  obs_var =
>   "ws",
>   "wdir",
>   "t",
>   "dptd",
>   "pres",
>   "ght" ;
> So clearly dptd exists.
>
> Justin
>
>
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 6, 2019 1:40 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Here's a sample Point-Stat output file name:
>  point_stat_360000L_20070331_120000V.stat
>
> The "360000L" indicates that this is output for a 36-hour forecast.
And
> the "20070331_120000V" timestamp is the valid time.
>
> If you run Point-Stat once for each forecast lead time, the
timestamps
> should be different and they should not clobber eachother.
>
> But let's say you don't want to run Point-Stat or Grid-Stat multiple
times
> with the same timing info.  The "output_prefix" config file entry is
used
> to customize the output file names to prevent them from clobbering
> eachother.  For example, setting:
>   output_prefix="RUN1";
> Would result in files named "
> point_stat_RUN1_360000L_20070331_120000V.stat".
>
> Make sense?
>
> Thanks,
> John
>
> On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Invoking point_stat multiple times will create and replace the old
_cnt
> > and _sl1l2 files right?  At that point, I'll have a bunch of CNT
and
> SL1L2
> >      files and then use stat_analysis to aggregate them?
> >
> > Justin
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, September 6, 2019 1:11 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Yes, that is a long list of fields, but I don't see a way obvious
way of
> > shortening that.  But to do multiple lead times, I'd just call
Point-Stat
> > multiple times, once for each lead time, and update the config
file to
> use
> > environment variables for the current time:
> >
> > fcst = {
> >      field = [
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> > ...
> >
> > Where the calling scripts sets the ${INIT_TIME} and ${FCST_HR}
> environment
> > variables.
> >
> > John
> >
> > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John,
> > >
> > > I managed to scrap together some code to get RAOB stats from CNT
> plotted
> > > with 95% CI.  Working on Surface stats now.
> > >
> > > So my configuration file looks like this right now:
> > >
> > > fcst = {
> > >      field = [
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > >      ];
> > > }
> > >
> > > obs = {
> > >     field = [
> > >         {name = "dptd";level = ["P0.86-1.5"];},
> > >         {name = "dptd";level = ["P1.6-2.5"];},
> > >         {name = "dptd";level = ["P2.6-3.5"];},
> > >         {name = "dptd";level = ["P3.6-4.5"];},
> > >         {name = "dptd";level = ["P4.6-6"];},
> > >         {name = "dptd";level = ["P6.1-8"];},
> > >         {name = "dptd";level = ["P9-15"];},
> > >         {name = "dptd";level = ["P16-25"];},
> > >         {name = "dptd";level = ["P26-40"];},
> > >         {name = "dptd";level = ["P41-65"];},
> > >         {name = "dptd";level = ["P66-85"];},
> > >         {name = "dptd";level = ["P86-125"];},
> > >         {name = "dptd";level = ["P126-175"];},
> > >         {name = "dptd";level = ["P176-225"];},
> > >         {name = "dptd";level = ["P226-275"];},
> > >         {name = "dptd";level = ["P276-325"];},
> > >         {name = "dptd";level = ["P326-375"];},
> > >         {name = "dptd";level = ["P376-425"];},
> > >         {name = "dptd";level = ["P426-475"];},
> > >         {name = "dptd";level = ["P476-525"];},
> > >         {name = "dptd";level = ["P526-575"];},
> > >         {name = "dptd";level = ["P576-625"];},
> > >         {name = "dptd";level = ["P626-675"];},
> > >         {name = "dptd";level = ["P676-725"];},
> > >         {name = "dptd";level = ["P726-775"];},
> > >         {name = "dptd";level = ["P776-825"];},
> > >         {name = "dptd";level = ["P826-875"];},
> > >         {name = "dptd";level = ["P876-912"];},
> > >         {name = "dptd";level = ["P913-936"];},
> > >         {name = "dptd";level = ["P937-962"];},
> > >         {name = "dptd";level = ["P963-987"];},
> > >         {name = "dptd";level = ["P988-1006"];},
> > >         {name = "dptd";level = ["P1007-1013"];}
> > >
> > > And I have the data:
> > >
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > >
> > > for a particular DTG and vertical level.  If I want to run multiple lead
> > > times, it seems like I'll have to copy that long list of fields for each
> > > lead time in the fcst dict and then duplicate the obs dictionary so that
> > > each forecast entry has a corresponding obs level matching range.  Is this
> > > correct or is there a shorter/better way to do this?
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > I see that you're plotting RMSE and bias (called ME for Mean
Error in
> > MET)
> > > in the plots you sent.
> > >
> > > Table 7.6 of the MET User's Guide (
> > > https://dtcenter.org/sites/default/files/community-code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > ) describes the contents of the CNT line type.  Both the columns for RMSE
> > > and ME are followed by _NCL and _NCU columns which give the parametric
> > > approximation of the confidence interval for those scores.  So yes, you can
> > > run Stat-Analysis to aggregate SL1L2 lines together and write the
> > > corresponding CNT output line type.
> > >
> > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper parametric
> > > confidence intervals for the RMSE statistic, and ME_NCL and ME_NCU columns
> > > for the ME statistic.
> > >
> > > You can change the alpha value for those confidence intervals by setting:
> > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> > >
> > > Thanks,
> > > John
> > >
> > >
> > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John,
> > > >
> > > > This all helps me greatly.  One more questions: is there any
> > information
> > > > in either the CNT or SL1L2 that could give me  confidence
intervals
> for
> > > > each data point?  I'm looking to replicate the attached plot.
Notice
> > > that
> > > > the individual points could have either a 99, 95 or 90 %
confidence.
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Sounds about right.  Each time you run Grid-Stat or Point-Stat you can
> > > > write the CNT output line type which contains stats like MSE, ME, MAE,
> > > > and RMSE.  And I'd recommend that you also write the SL1L2 line type as
> > > > well.
> > > >
> > > > Then you'd run a stat_analysis job like this:
> > > >
> > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
> -line_type
> > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > > cnt_out.stat
> > > >
> > > > This job reads any .stat files it finds in
"/path/to/stat/data",
> reads
> > > the
> > > > SL1L2 line type, and for each unique combination of FCST_VAR,
> FCST_LEV,
> > > and
> > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together
> > and
> > > > write out the corresponding CNT line type to the output file
named
> > > > cnt_out.stat.
> > > >
> > > > John
> > > >
> > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu
> > > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > So if I understand what you're saying correctly, then if I wanted an
> > > > > average of 24 hour forecasts over a month long run, I would use the
> > > > > SL1L2 output to aggregate and produce this average?  Whereas if I used
> > > > > CNT, this would just provide me ~30 individual (per day over a month)
> > > > > 24 hour forecast verifications?
> > > > >
> > > > > On a side note, did we ever go over how to plot the SL1L2
MSE and
> > > biases?
> > > > > I am forgetting if we used stat_analysis to produce a plot
or if
> the
> > > plot
> > > > > you showed me was just something you guys post processed
using
> python
> > > or
> > > > > whatnot.
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > We wrote the SL1L2 partial sums from Point-Stat because they
can be
> > > > > aggregated together by the stat-analysis tool over multiple
days or
> > > > cases.
> > > > >
> > > > > If you're interested in continuous statistics from Point-
Stat, I'd
> > > > > recommend writing the CNT line type (which has the stats
computed
> for
> > > > that
> > > > > single run) and the SL1L2 line type (so that you can
aggregate them
> > > > > together in stat-analysis or METviewer).
> > > > >
> > > > > The other alternative is looking at the average of the daily
> > statistics
> > > > > scores.  For RMSE, the average of the daily RMSE is equal to
the
> > > > aggregated
> > > > > score... as long as the number of matched pairs remains
constant
> day
> > to
> > > > > day.  But if today you have 98 matched pairs and tomorrow you have
> > > > > 105,
> > > > > then tomorrow's score will have slightly more weight.  The
SL1L2
> > lines
> > > > are
> > > > > aggregated as weighted averages, where the TOTAL column is
the
> > weight.
> > > > And
> > > > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > > > scores.  Generally, the statisticians recommend this method
over
> the
> > > mean
> > > > > of the daily scores.  Neither is "wrong", they just give you
> slightly
> > > > > different information.
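> > > > >
> > > > > (To make that concrete with a small illustration: with 98 pairs today
> > > > > and 105 tomorrow, the aggregated MSE is
> > > > > (98*MSE_today + 105*MSE_tomorrow) / 203, the aggregated RMSE is its
> > > > > square root, while the mean of the daily scores would weight both days
> > > > > equally.)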
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Thanks John.
> > > > > >
> > > > > > Sorry it's taken me such a long time to get to this.  It's
> nearing
> > > the
> > > > > end
> > > > > > of FY19 so I have been finalizing several transition
projects and
> > > > haven’t
> > > > > > had much time to work on MET recently.  I just picked this
back
> up
> > > and
> > > > > have
> > > > > > loaded a couple new modules.  Here is what I have to work
with
> now:
> > > > > >
> > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > 2) netcdf-local/netcdf-met
> > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > 5) udunits/udunits-2.1.24
> > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > >
> > > > > >
> > > > > > Running
> > > > > > > point_stat  PYTHON_NUMPY raob_2015020412.nc dwptdpConfig
-v 3
> > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > > > >
> > > > > > I get many matched pairs.  Here is a sample of what the
log file
> > > looks
> > > > > > like for one of the pressure ranges I am verifying on:
> > > > > >
> > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
dptd/P425-376,
> > for
> > > > > > observation type radiosonde, over region FULL, for
interpolation
> > > method
> > > > > > NEAREST(1), using 98 pairs.
> > > > > > 15258 DEBUG 3: Number of matched pairs  = 98
> > > > > > 15259 DEBUG 3: Observations processed   = 4680328
> > > > > > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > > > > > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > > > > > 15262 DEBUG 3: Rejected: valid time     = 0
> > > > > > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > > > > > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > > > > > 15265 DEBUG 3: Rejected: topography     = 0
> > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > 15268 DEBUG 3: Rejected: message type   = 0
> > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > 15271 DEBUG 3: Rejected: duplicates     = 0
> > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > >=0,
> > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > UNION.
> > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > > UNION.
> > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > >=0,
> > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > UNION.
> > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > > UNION.
> > > > > > 15280 DEBUG 2:
> > > > > > 15281 DEBUG 2:
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > >
> > > > > > I am going to work on processing these point stat files to
create
> > > those
> > > > > > vertical raob plots we had a discussion about.  I remember
us
> > talking
> > > > > about
> > > > > > the partial sums file.  Why did we choose to go the route
of
> > > producing
> > > > > > partial sums then feeding that into series analysis to
generate
> > bias
> > > > and
> > > > > > MSE?  It looks like bias and MSE both exist within the CNT
line
> > type
> > > > > (MBIAS
> > > > > > and MSE)?
> > > > > >
> > > > > >
> > > > > > Justin
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Great, thanks for sending me the sample data.  Yes, I was
able to
> > > > > replicate
> > > > > > the segfault.  The good news is that this is caused by a
simple
> > typo
> > > > > that's
> > > > > > easy to fix.  If you look in the "obs.field" entry of the
> > > relhumConfig
> > > > > > file, you'll see an empty string for the last field
listed:
> > > > > >
> > > > > > *obs = {    field = [*
> > > > > >
> > > > > >
> > > > > >
> > > > > > *         ...        {name = "dptd";level = ["P988-
1006"];},
> > > > > {name =
> > > > > > "";level = ["P1007-1013"];}    ];*
> > > > > > If you change that empty string to "dptd", the segfault
will go
> > > away:*
> > > > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion
> > (in
> > > 2
> > > > > > minutes 48 seconds on my desktop machine), but it produced
0
> > matched
> > > > > > pairs.  They were discarded because of the valid times
(seen
> using
> > > -v 3
> > > > > > command line option to Point-Stat).  The ob file you sent
is
> named
> > "
> > > > > > raob_2015020412.nc" but the actual times in that file are
for
> > > > > > "20190426_120000":
> > > > > >
> > > > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> > > http://raob_2015020412.nc
> > > > >*
> > > > > >
> > > > > > * hdr_vld_table =  "20190426_120000" ;*
> > > > > >
> > > > > > So please be aware of that discrepancy.  To just produce
some
> > matched
> > > > > > pairs, I told Point-Stat to use the valid times of the
data:
> > > > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > > > > -obs_valid_end 20190426_120000*
> > > > > >
> > > > > > But I still get 0 matched pairs.  This time, it's because
of bad
> > > > forecast
> > > > > > values:
> > > > > >    *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > >
> > > > > > Taking a step back... let's run one of these fields
through
> > > > > > plot_data_plane, which results in an error:
> > > > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps <
> http://plot.ps>
> > > > > > 'name="./read_NRL_binary.py
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > > > ERROR  : DataPlane::two_to_one() -> range check error:
(Nx, Ny) =
> > > (97,
> > > > > 97),
> > > > > > (x, y) = (97, 0)
> > > > > >
> > > > > > While the numpy object is 97x97, the grid is specified as
being
> > > 118x118
> > > > > in
> > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > >
> > > > > > Just to get something working, I modified the nx and ny in
the
> > python
> > > > > > script:
> > > > > >        'nx':97,
> > > > > >        'ny':97,
> > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > >
> > > > > > So I'd suggest...
> > > > > > - Fix the typo in the config file.
> > > > > > - Figure out the discrepancy between the obs file name
timestamp
> > and
> > > > the
> > > > > > data in that file.
> > > > > > - Make sure the grid information is consistent with the
data in
> the
> > > > > python
> > > > > > script.
> > > > > >
> > > > > > Obviously though, we don't want the code to be segfaulting
in any
> > > > > > condition.  So next, I tested using met-8.1 with that
empty
> string.
> > > > This
> > > > > > time it does run with no segfault, but prints a warning
about the
> > > empty
> > > > > > string.
> > > > > >
> > > > > > Hope that helps.
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Hey John,
> > > > > > >
> > > > > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > > > > >
> > > > > > > I am running  met-8.0/met-8.0-with-grib2-support and
have
> > provided
> > > > > > > everything
> > > > > > > on that list you've provided me.  Let me know if you're
able to
> > > > > replicate
> > > > > > > it
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > Well that doesn't seem to be very helpful of Point-Stat
at all.
> > > > There
> > > > > > > isn't much jumping out at me from the log messages you
sent.
> In
> > > > fact,
> > > > > I
> > > > > > > hunted around for the DEBUG(7) log message but couldn't
find
> > where
> > > in
> > > > > the
> > > > > > > code it's being written.  Are you able to send me some
sample
> > data
> > > to
> > > > > > > replicate this behavior?
> > > > > > >
> > > > > > > I'd need to know...
> > > > > > > - What version of MET are you running.
> > > > > > > - A copy of your Point-Stat config file.
> > > > > > > - The python script that you're running.
> > > > > > > - The input file for that python script.
> > > > > > > - The NetCDF point observation file you're passing to
> Point-Stat.
> > > > > > >
> > > > > > > If I can replicate the behavior here, it should be easy
to run
> it
> > > in
> > > > > the
> > > > > > > debugger and figure it out.
> > > > > > >
> > > > > > > You can post data to our anonymous ftp site as described
in
> "How
> > to
> > > > > send
> > > > > > us
> > > > > > > data":
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted
upon.
> > > > > > > > Transaction: Ticket created by
justin.tsu at nrlmry.navy.mil
> > > > > > > >        Queue: met_help
> > > > > > > >      Subject: point_stat seg faulting
> > > > > > > >        Owner: Nobody
> > > > > > > >   Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > >       Status: new
> > > > > > > >  Ticket <URL:
> > > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Hey John,
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > I'm trying to extrapolate the production of vertical
raob
> > > > > verification
> > > > > > > > plots
> > > > > > > > using point_stat and stat_analysis like we did
together for
> > winds
> > > > but
> > > > > > for
> > > > > > > > relative humidity now.  But when I run point_stat, it
seg
> > faults
> > > > > > without
> > > > > > > > much explanation
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > ----
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > >
> > > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast
levels, 0
> > > > climatology
> > > > > > > mean
> > > > > > > > levels, and 0 climatology standard deviation levels.
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > ----
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > > DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > > > >
> > > > > > > > DEBUG 7:     tbl dims: messge_type: 1  station id: 617
> > > > valid_time: 1
> > > > > > > >
> > > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
> point_stat
> > > > > > > > PYTHON_NUMPY
> > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat
-log
> > > > > > > > ./out/point_stat.log
> > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > From my log file:
> > > > > > > >
> > > > > > > > 607 DEBUG 2:
> > > > > > > >
> > > > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
> messages.
> > > > > > > >
> > > > > > > > 609 DEBUG 7:     tbl dims: messge_type: 1  station id:
617
> > > > > > valid_time: 1
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Any help would be much appreciated
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Justin Tsu
> > > > > > > >
> > > > > > > > Marine Meteorology Division
> > > > > > > >
> > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > >
> > > > > > > > Building 704 Room 212
> > > > > > > >
> > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > >
> > > > > > > > 7 Grace Hopper Avenue
> > > > > > > >
> > > > > > > > Monterey, CA 93943-5502
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Ph. (831) 656-4111
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>


------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Wed Oct 02 12:13:38 2019

Justin,

This means that you're requesting a variable named "dptd" in the Point-Stat
config file.  MET looks for a definition of that string in its default
GRIB1 tables:
   grep dptd met-8.1/share/met/table_files/*

But that returns 0 matches.  So this error message is telling you that
MET
doesn't know how to interpret that variable name.

Here's what I'd suggest:
(1) Run the input GRIB1 file through the "wgrib" utility.  If "wgrib"
knows
about this variable, it will report the name... and most likely,
that's the
same name that MET will know.  If so, switch from using "dptd" to using
whatever name wgrib reports.

(2) If "wgrib" does NOT know about this variable, it'll just list out
the
corresponding GRIB1 codes instead.  That means we'll need to go create
a
small GRIB table to define these strings.  Take a look in:
   met-8.1/share/met/table_files

We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt" where
CENTER is the number encoded in your GRIB file to define NRL and PTV
is the
parameter table version number used in your GRIB file.  In that,
you'll
define the mapping of GRIB1 codes to strings (like "dptd").  And for
now,
we'll need to set the "MET_GRIB_TABLES" environment variable to the
location of that file.  But in the long run, you can send me that
file, and
we'll add it to "table_files" directory to be included in the next
release
of MET.

If you have trouble creating a new GRIB table file, just let me know
and
send me a sample GRIB file.
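
For example (the GRIB file name and table path below are just placeholders),
those two checks might look something like:

   # (1) See whether wgrib already knows this variable:
   wgrib model_output.grb | head

   # (2) If not, create the table file described above and point MET at it
   #     before re-running Point-Stat:
   export MET_GRIB_TABLES=/path/to/grib1_nrl_PTV_CENTER.txt
   point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3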

Thanks,
John


On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hi John,
>
> Apologies for taking such a long time getting back to you.  End of
fiscal
> year things have consumed much of my time and I have not had much
time to
> work on any of this.
>
> Before proceeding to the planning process of determining how to call
> point_stat to deal with the vertical levels, I need to fix what is
going on
> with my GRIB1 variables.  When I run point_stat, I keep getting this
error:
>
> DEBUG 1: Default Config File:
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> DEBUG 1: User Config File: dwptdpConfig
> ERROR  :
> ERROR  : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> abbreviation 'dptd' for table version 2
> ERROR  :
>
> I remember getting this before but don't remember how we fixed it.
> I am using met-8.1/met-8.1a-with-grib2-support
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 13, 2019 3:46 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Sorry for the delay.  I was in DC on travel this week until today.
>
> It's really up to you how you'd like to configure it.  Unless it's
too
> unwieldy, I do think I'd try verifying all levels at once in a
single call
> to Point-Stat.  All those observations are contained in the same
point
> observation file.  If you verify each level in a separate call to
> Point-Stat, you'll be looping through and processing those obs many,
many
> times, which will be relatively slow.  From a processing
perspective, it'd
> be more efficient to process them all at once, in a single call to
> Point-Stat.
>
> But you balance runtime efficiency versus ease of scripting and
> configuration.  And that's why it's up to you to decide which you
prefer.
>
> Hope that helps.
>
> Thanks,
> John
>
> On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Hey John,
> >
> > That makes sense.  The way that I've set up my config file is as
follows:
> > fcst = {
> >      field = [
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> >      ];
> > }
> > obs = {
> >     field = [
> >         {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> >     ];
> > }
> > message_type   = [ "${MSG_TYPE}" ];
> >
> > The environmental variables I'm setting in the wrapper script are
LEV,
> > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE.  In this way, it
seems
> like I
> > will only be able to run point_Stat for a single elevation and a
single
> > lead time.  Do you recommend this? Or Should I put all the
elevations
> for a
> > single lead time in one pass of point_stat?
> >
> > So my config file will look like something like this...
> > fcst = {
> >      field = [
> >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > ... etc.
> >      ];
> > }
> >
> > Also, I am not sure what happened, but when I run point_stat now I am
> > getting that error
> > ERROR  : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> > abbreviation 'dptd' for table version 2
> > Again.  This makes me think that the obs_var name is wrong, but
ncdump
> -v
> > obs_var raob_*.nc gives me  obs_var =
> >   "ws",
> >   "wdir",
> >   "t",
> >   "dptd",
> >   "pres",
> >   "ght" ;
> > So clearly dptd exists.
> >
> > Justin
> >
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, September 6, 2019 1:40 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Here's a sample Point-Stat output file name:
> >  point_stat_360000L_20070331_120000V.stat
> >
> > The "360000L" indicates that this is output for a 36-hour
forecast.  And
> > the "20070331_120000V" timestamp is the valid time.
> >
> > If you run Point-Stat once for each forecast lead time, the
timestamps
> > should be different and they should not clobber each other.
> >
> > But let's say you don't want to run Point-Stat or Grid-Stat
multiple
> times
> > with the same timing info.  The "output_prefix" config file entry
is used
> > to customize the output file names to prevent them from clobbering
> > each other.  For example, setting:
> >   output_prefix="RUN1";
> > Would result in files named "
> > point_stat_RUN1_360000L_20070331_120000V.stat".
> >
> > Make sense?
> >
> > Thanks,
> > John
> >
> > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Invoking point_stat multiple times will create and replace the
old _cnt
> > > and _sl1l2 files right?  At that point, I'll have a bunch of CNT
and
> > SL1L2
> > >      files and then use stat_analysis to aggregate them?
> > >
> > > Justin
> > >
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, September 6, 2019 1:11 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Yes, that is a long list of fields, but I don't see an obvious way of
> > > shortening that.  But to do multiple lead times, I'd just call Point-Stat
> > > multiple times, once for each lead time, and update the config file to use
> > > environment variables for the current time:
> > >
> > > fcst = {
> > >      field = [
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > ...
> > >
> > > Where the calling script sets the ${INIT_TIME} and ${FCST_HR} environment
> > > variables.
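> > >
> > > (A minimal wrapper sketch, assuming bash and the file/config names used
> > > earlier in this thread; the lead times are just placeholders:)
> > >
> > >    #!/bin/bash
> > >    export INIT_TIME=2015080106
> > >    for FCST_HR in 00060000 00120000 00180000 00240000; do
> > >       export FCST_HR
> > >       point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig \
> > >          -outdir ./out/point_stat -v 3
> > >    done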
> > >
> > > John
> > >
> > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John,
> > > >
> > > > I managed to scrap together some code to get RAOB stats from
CNT
> > plotted
> > > > with 95% CI.  Working on Surface stats now.
> > > >
> > > > So my configuration file looks like this right now:
> > > >
> > > > fcst = {
> > > >      field = [
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >         {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > >      ];
> > > > }
> > > >
> > > > obs = {
> > > >     field = [
> > > >         {name = "dptd";level = ["P0.86-1.5"];},
> > > >         {name = "dptd";level = ["P1.6-2.5"];},
> > > >         {name = "dptd";level = ["P2.6-3.5"];},
> > > >         {name = "dptd";level = ["P3.6-4.5"];},
> > > >         {name = "dptd";level = ["P4.6-6"];},
> > > >         {name = "dptd";level = ["P6.1-8"];},
> > > >         {name = "dptd";level = ["P9-15"];},
> > > >         {name = "dptd";level = ["P16-25"];},
> > > >         {name = "dptd";level = ["P26-40"];},
> > > >         {name = "dptd";level = ["P41-65"];},
> > > >         {name = "dptd";level = ["P66-85"];},
> > > >         {name = "dptd";level = ["P86-125"];},
> > > >         {name = "dptd";level = ["P126-175"];},
> > > >         {name = "dptd";level = ["P176-225"];},
> > > >         {name = "dptd";level = ["P226-275"];},
> > > >         {name = "dptd";level = ["P276-325"];},
> > > >         {name = "dptd";level = ["P326-375"];},
> > > >         {name = "dptd";level = ["P376-425"];},
> > > >         {name = "dptd";level = ["P426-475"];},
> > > >         {name = "dptd";level = ["P476-525"];},
> > > >         {name = "dptd";level = ["P526-575"];},
> > > >         {name = "dptd";level = ["P576-625"];},
> > > >         {name = "dptd";level = ["P626-675"];},
> > > >         {name = "dptd";level = ["P676-725"];},
> > > >         {name = "dptd";level = ["P726-775"];},
> > > >         {name = "dptd";level = ["P776-825"];},
> > > >         {name = "dptd";level = ["P826-875"];},
> > > >         {name = "dptd";level = ["P876-912"];},
> > > >         {name = "dptd";level = ["P913-936"];},
> > > >         {name = "dptd";level = ["P937-962"];},
> > > >         {name = "dptd";level = ["P963-987"];},
> > > >         {name = "dptd";level = ["P988-1006"];},
> > > >         {name = "dptd";level = ["P1007-1013"];}
> > > >
> > > > And I have the data:
> > > >
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > >
> > > > for a particular DTG and vertical level.  If I want to run
multiple
> > lead
> > > > times, it seems like I'll have to copy that long list of
fields for
> > each
> > > > lead time in the fcst dict and then duplicate the obs
dictionary so
> > that
> > > > each forecast entry has a corresponding obs level matching
range.  Is
> > > this
> > > > correct or is there a shorter/better way to do this?
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > I see that you're plotting RMSE and bias (called ME for Mean
Error in
> > > MET)
> > > > in the plots you sent.
> > > >
> > > > Table 7.6 of the MET User's Guide (
> > > >
> > > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > )
> > > > describes the contents of the CNT line type.  Both the columns for
> > > > RMSE
> > > > and ME are followed by _NCL and _NCU columns which give the
> parametric
> > > > approximation of the confidence interval for those scores.  So
yes,
> you
> > > can
> > > > run Stat-Analysis to aggregate SL1L2 lines together and write
the
> > > > corresponding CNT output line type.
> > > >
> > > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper
> > parametric
> > > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> > columns
> > > > for the ME statistic.
> > > >
> > > > You can change the alpha value for those confidence intervals
by
> > setting:
> > > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > >
> > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Thanks John,
> > > > >
> > > > > This all helps me greatly.  One more questions: is there any
> > > information
> > > > > in either the CNT or SL1L2 that could give me  confidence
intervals
> > for
> > > > > each data point?  I'm looking to replicate the attached
plot.
> Notice
> > > > that
> > > > > the individual points could have either a 99, 95 or 90 %
> confidence.
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Sounds about right.  Each time you run Grid-Stat or Point-
Stat you
> > can
> > > > > write the CNT output line type which contains stats like
MSE, ME,
> > MAE,
> > > > and
> > > > > RMSE.  And I'd recommend that you also write the SL1L2 line type as
> > > > > well.
> > > > >
> > > > > Then you'd run a stat_analysis job like this:
> > > > >
> > > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
> > -line_type
> > > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > > > cnt_out.stat
> > > > >
> > > > > This job reads any .stat files it finds in
"/path/to/stat/data",
> > reads
> > > > the
> > > > > SL1L2 line type, and for each unique combination of
FCST_VAR,
> > FCST_LEV,
> > > > and
> > > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
> together
> > > and
> > > > > write out the corresponding CNT line type to the output file
named
> > > > > cnt_out.stat.
> > > > >
> > > > > John
> > > > >
> > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu
> > > > > >
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > So if I understand what you're saying correctly, then if I wanted
> > > > > > an average of 24 hour forecasts over a month long run, I would use
> > > > > > the SL1L2 output to aggregate and produce this average?  Whereas if
> > > > > > I used CNT, this would just provide me ~30 individual (per day over
> > > > > > a month) 24 hour forecast verifications?
> > > > > >
> > > > > > On a side note, did we ever go over how to plot the SL1L2
MSE and
> > > > biases?
> > > > > > I am forgetting if we used stat_analysis to produce a plot
or if
> > the
> > > > plot
> > > > > > you showed me was just something you guys post processed
using
> > python
> > > > or
> > > > > > whatnot.
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > We wrote the SL1L2 partial sums from Point-Stat because
they can
> be
> > > > > > aggregated together by the stat-analysis tool over
multiple days
> or
> > > > > cases.
> > > > > >
> > > > > > If you're interested in continuous statistics from Point-
Stat,
> I'd
> > > > > > recommend writing the CNT line type (which has the stats
computed
> > for
> > > > > that
> > > > > > single run) and the SL1L2 line type (so that you can
aggregate
> them
> > > > > > together in stat-analysis or METviewer).
> > > > > >
> > > > > > The other alternative is looking at the average of the
daily
> > > statistics
> > > > > > scores.  For RMSE, the average of the daily RMSE is equal
to the
> > > > > aggregated
> > > > > > score... as long as the number of matched pairs remains
constant
> > day
> > > to
> > > > > > day.  But if today you have 98 matched pairs and tomorrow you have
> > > > > > 105,
> > > > > > then tomorrow's score will have slightly more weight.  The
SL1L2
> > > lines
> > > > > are
> > > > > > aggregated as weighted averages, where the TOTAL column is
the
> > > weight.
> > > > > And
> > > > > > then stats (like RMSE and MSE) are recomputed from those
> aggregated
> > > > > > scores.  Generally, the statisticians recommend this
method over
> > the
> > > > mean
> > > > > > of the daily scores.  Neither is "wrong", they just give
you
> > slightly
> > > > > > different information.
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Thanks John.
> > > > > > >
> > > > > > > Sorry it's taken me such a long time to get to this.
It's
> > nearing
> > > > the
> > > > > > end
> > > > > > > of FY19 so I have been finalizing several transition
projects
> and
> > > > > haven’t
> > > > > > > had much time to work on MET recently.  I just picked
this back
> > up
> > > > and
> > > > > > have
> > > > > > > loaded a couple new modules.  Here is what I have to
work with
> > now:
> > > > > > >
> > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > >
> > > > > > >
> > > > > > > Running
> > > > > > > > point_stat  PYTHON_NUMPY raob_2015020412.nc
dwptdpConfig -v
> 3
> > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >>
log.out
> > > > > > >
> > > > > > > I get many matched pairs.  Here is a sample of what the
log
> file
> > > > looks
> > > > > > > like for one of the pressure ranges I am verifying on:
> > > > > > >
> > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
> dptd/P425-376,
> > > for
> > > > > > > observation type radiosonde, over region FULL, for
> interpolation
> > > > method
> > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > 15258 DEBUG 3: Number of matched pairs  = 98
> > > > > > > 15259 DEBUG 3: Observations processed   = 4680328
> > > > > > > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > > > > > > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > > > > > > 15262 DEBUG 3: Rejected: valid time     = 0
> > > > > > > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > > > > > > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > > > > > > 15265 DEBUG 3: Rejected: topography     = 0
> > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > 15268 DEBUG 3: Rejected: message type   = 0
> > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > 15271 DEBUG 3: Rejected: duplicates     = 0
> > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > >=0,
> > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > > UNION.
> > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=10.0, observation filtering threshold >=10.0, and
field logic
> > > > UNION.
> > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > >=0,
> > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > > UNION.
> > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=10.0, observation filtering threshold >=10.0, and
field logic
> > > > UNION.
> > > > > > > 15280 DEBUG 2:
> > > > > > > 15281 DEBUG 2:
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > > >
> > > > > > > I am going to work on processing these point stat files
to
> create
> > > > those
> > > > > > > vertical raob plots we had a discussion about.  I
remember us
> > > talking
> > > > > > about
> > > > > > > the partial sums file.  Why did we choose to go the
route of
> > > > producing
> > > > > > > partial sums then feeding that into series analysis to
generate
> > > bias
> > > > > and
> > > > > > > MSE?  It looks like bias and MSE both exist within the
CNT line
> > > type
> > > > > > (MBIAS
> > > > > > > and MSE)?
> > > > > > >
> > > > > > >
> > > > > > > Justin
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > Great, thanks for sending me the sample data.  Yes, I
was able
> to
> > > > > > replicate
> > > > > > > the segfault.  The good news is that this is caused by a
simple
> > > typo
> > > > > > that's
> > > > > > > easy to fix.  If you look in the "obs.field" entry of
the
> > > > relhumConfig
> > > > > > > file, you'll see an empty string for the last field
listed:
> > > > > > >
> > > > > > > *obs = {    field = [*
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > *         ...        {name = "dptd";level = ["P988-
1006"];},
> > > > > > {name =
> > > > > > > "";level = ["P1007-1013"];}    ];*
> > > > > > > If you change that empty string to "dptd", the segfault
will go
> > > > away:*
> > > > > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > > > > Rerunning met-8.0 with that change, Point-Stat ran to
> completion
> > > (in
> > > > 2
> > > > > > > minutes 48 seconds on my desktop machine), but it
produced 0
> > > matched
> > > > > > > pairs.  They were discarded because of the valid times
(seen
> > using
> > > > -v 3
> > > > > > > command line option to Point-Stat).  The ob file you
sent is
> > named
> > > "
> > > > > > > raob_2015020412.nc" but the actual times in that file
are for
> > > > > > > "20190426_120000":
> > > > > > >
> > > > > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> > > > http://raob_2015020412.nc
> > > > > >*
> > > > > > >
> > > > > > > * hdr_vld_table =  "20190426_120000" ;*
> > > > > > >
> > > > > > > So please be aware of that discrepancy.  To just produce
some
> > > matched
> > > > > > > pairs, I told Point-Stat to use the valid times of the
data:
> > > > > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
> 20190426_120000
> > > > > > > -obs_valid_end 20190426_120000*
> > > > > > >
> > > > > > > But I still get 0 matched pairs.  This time, it's
because of
> bad
> > > > > forecast
> > > > > > > values:
> > > > > > >    *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > > >
> > > > > > > Taking a step back... let's run one of these fields
through
> > > > > > > plot_data_plane, which results in an error:
> > > > > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps <
> > http://plot.ps>
> > > > > > > 'name="./read_NRL_binary.py
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > > > > ERROR  : DataPlane::two_to_one() -> range check error:
(Nx,
> Ny) =
> > > > (97,
> > > > > > 97),
> > > > > > > (x, y) = (97, 0)
> > > > > > >
> > > > > > > While the numpy object is 97x97, the grid is specified
as being
> > > > 118x118
> > > > > > in
> > > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > > >
> > > > > > > Just to get something working, I modified the nx and ny
in the
> > > python
> > > > > > > script:
> > > > > > >        'nx':97,
> > > > > > >        'ny':97,
> > > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > > >
> > > > > > > So I'd suggest...
> > > > > > > - Fix the typo in the config file.
> > > > > > > - Figure out the discrepancy between the obs file name
> timestamp
> > > and
> > > > > the
> > > > > > > data in that file.
> > > > > > > - Make sure the grid information is consistent with the
data in
> > the
> > > > > > python
> > > > > > > script.
> > > > > > >
> > > > > > > Obviously though, we don't want the code to be segfaulting in
> > > > > > > any
> > > > > > > condition.  So next, I tested using met-8.1 with that
empty
> > string.
> > > > > This
> > > > > > > time it does run with no segfault, but prints a warning
about
> the
> > > > empty
> > > > > > > string.
> > > > > > >
> > > > > > > Hope that helps.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > Hey John,
> > > > > > > >
> > > > > > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > > > > > >
> > > > > > > > I am running  met-8.0/met-8.0-with-grib2-support and
have
> > > provided
> > > > > > > > everything
> > > > > > > > on that list you've provided me.  Let me know if
you're able
> to
> > > > > > replicate
> > > > > > > > it
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >

------------------------------------------------


More information about the Met_help mailing list