[Met_help] [rt.rap.ucar.edu #91544] History for point_stat seg faulting
John Halley Gotway via RT
met_help at ucar.edu
Thu Nov 7 09:52:19 MST 2019
----------------------------------------------------------------
Initial Request
----------------------------------------------------------------
Hey John,
I'm trying to extend the production of vertical raob verification plots
using point_stat and stat_analysis, like we did together for winds, but now
for relative humidity. But when I run point_stat, it seg faults without
much explanation:
DEBUG 2:
--------------------------------------------------------------------------------
DEBUG 2:
DEBUG 2: Reading data for relhum/pre_001013.
DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0 climatology mean levels, and 0 climatology standard deviation levels.
DEBUG 2:
DEBUG 2:
--------------------------------------------------------------------------------
DEBUG 2:
DEBUG 2: Searching 4680328 observations from 617 messages.
DEBUG 7: tbl dims: messge_type: 1 station id: 617 valid_time: 1
run_stats.sh: line 26: 40818 Segmentation fault    point_stat PYTHON_NUMPY ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log ./out/point_stat.log -obs_valid_beg 20010101 -obs_valid_end 20200101
From my log file:
607 DEBUG 2:
608 DEBUG 2: Searching 4680328 observations from 617 messages.
609 DEBUG 7: tbl dims: messge_type: 1 station id: 617 valid_time: 1
Any help would be much appreciated
Justin
Justin Tsu
Marine Meteorology Division
Data Assimilation/Mesoscale Modeling
Building 704 Room 212
Naval Research Laboratory, Code 7531
7 Grace Hopper Avenue
Monterey, CA 93943-5502
Ph. (831) 656-4111
----------------------------------------------------------------
Complete Ticket History
----------------------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Thu Aug 15 17:07:30 2019
Justin,
Well that doesn't seem to be very helpful of Point-Stat at all. There
isn't much jumping out at me from the log messages you sent. In fact, I
hunted around for the DEBUG(7) log message but couldn't find where in the
code it's being written. Are you able to send me some sample data to
replicate this behavior?

I'd need to know...
- What version of MET are you running.
- A copy of your Point-Stat config file.
- The python script that you're running.
- The input file for that python script.
- The NetCDF point observation file you're passing to Point-Stat.

If I can replicate the behavior here, it should be easy to run it in the
debugger and figure it out.

You can post data to our anonymous ftp site as described in "How to send us
data":
https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk
Thanks,
John
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Thu Aug 15 19:00:13 2019
Hey John,
I've put my data in tsu_data_20190815/ under met_help.

I am running met-8.0/met-8.0-with-grib2-support and have provided
everything on the list you gave me. Let me know if you're able to
replicate it.
Justin
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Fri Aug 16 12:38:10 2019
Hey John,
Figured out that the seg fault had to do with an incorrect version of MET I
was using. Running point_stat now without any seg faults. It is now failing
because I am missing some default entries in the message_type_group_map
dictionary, such as "WATERSF", even though I am not necessarily using them.
Justin
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Aug 16 13:15:42 2019
Justin,
Great, thanks for sending me the sample data. Yes, I was able to replicate
the segfault. The good news is that this is caused by a simple typo that's
easy to fix. If you look in the "obs.field" entry of the relhumConfig file,
you'll see an empty string for the last field listed:

obs = { field = [
          ... {name = "dptd"; level = ["P988-1006"];},
              {name = "";     level = ["P1007-1013"];} ];

If you change that empty string to "dptd", the segfault will go away:

   {name = "dptd"; level = ["P1007-1013"];}
Rerunning met-8.0 with that change, Point-Stat ran to completion (in 2
minutes 48 seconds on my desktop machine), but it produced 0 matched
pairs. They were discarded because of the valid times (seen using the -v 3
command line option to Point-Stat). The ob file you sent is named
"raob_2015020412.nc", but the actual times in that file are for
"20190426_120000":

   ncdump -v hdr_vld_table raob_2015020412.nc
    hdr_vld_table = "20190426_120000" ;

So please be aware of that discrepancy. To just produce some matched
pairs, I told Point-Stat to use the valid times of the data:

   met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc relhumConfig \
     -outdir out -v 3 -log run_ps.log \
     -obs_valid_beg 20190426_120000 -obs_valid_end 20190426_120000
But I still get 0 matched pairs. This time, it's because of bad forecast
values:

   DEBUG 3: Rejected: bad fcst value = 55

Taking a step back... let's run one of these fields through
plot_data_plane, which results in an error:

   met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps \
     'name="./read_NRL_binary.py ./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'
   ERROR : DataPlane::two_to_one() -> range check error: (Nx, Ny) = (97, 97), (x, y) = (97, 0)

While the numpy object is 97x97, the grid is specified as being 118x118 in
the python script ('nx': 118, 'ny': 118).

Just to get something working, I modified the nx and ny in the python
script:
   'nx': 97,
   'ny': 97,
Rerunning again, I still didn't get any matched pairs.
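
As an aside, here is a minimal, hypothetical sketch of the consistency that
matters for the python embedding piece: the grid dimensions in the script's
attributes have to match the shape of the numpy array handed to MET. This is
not your read_NRL_binary.py (I'm not reproducing the NRL binary read here),
and the LatLon grid values are placeholders; your script's projection uses
'nx'/'ny' keys instead. The only point is that the dimensions come from the
data itself rather than being hard-coded to 118.

import numpy as np

# Stand-in for whatever the real script reads from the NRL binary file.
met_data = np.zeros((97, 97), dtype=float)

ny, nx = met_data.shape  # numpy arrays are indexed (row, col) = (y, x)

attrs = {
    'valid':     '20150204_120000',   # placeholder timestamps
    'init':      '20150121_000000',
    'lead':      '180000',
    'accum':     '000000',
    'name':      'relhum',
    'long_name': 'relative humidity',
    'level':     'P1013',
    'units':     '%',
    # Hypothetical lat/lon grid definition (a LatLon grid uses Nlat/Nlon
    # where your projection uses ny/nx); the values are illustrative only.
    'grid': {
        'type': 'LatLon',
        'name': 'NRL domain',
        'lat_ll': 30.0, 'lon_ll': -130.0,
        'delta_lat': 0.25, 'delta_lon': 0.25,
        'Nlat': ny, 'Nlon': nx,
    },
}

# A mismatch here is what triggers the DataPlane::two_to_one() range
# check error shown above.
assert met_data.shape == (attrs['grid']['Nlat'], attrs['grid']['Nlon'])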
So I'd suggest...
- Fix the typo in the config file.
- Figure out the discrepancy between the obs file name timestamp and the
  data in that file.
- Make sure the grid information is consistent with the data in the python
  script.

Obviously though, we don't want the code to be segfaulting under any
condition. So next, I tested met-8.1 with that empty string. This time it
does run with no segfault, but prints a warning about the empty string.
Hope that helps.
Thanks,
John
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Thu Aug 29 17:06:28 2019
Thanks John.
Sorry it's taken me such a long time to get to this. It's nearing the
end of FY19 so I have been finalizing several transition projects and
haven’t had much time to work on MET recently. I just picked this
back up and have loaded a couple new modules. Here is what I have to
work with now:
1) intel/xe_2013-sp1-u1
2) netcdf-local/netcdf-met
3) met-8.1/met-8.1a-with-grib2-support
4) ncview-2.1.5/ncview-2.1.5
5) udunits/udunits-2.1.24
6) gcc-6.3.0/gcc-6.3.0
7) ImageMagicK/ImageMagick-6.9.0-10
8) python/anaconda-7-15-15-save.6.6.2017
Running

> point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3 -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
I get many matched pairs. Here is a sample of what the log file looks
like for one of the pressure ranges I am verifying on:

15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-376, for observation type radiosonde, over region FULL, for interpolation method NEAREST(1), using 98 pairs.
15258 DEBUG 3: Number of matched pairs = 98
15259 DEBUG 3: Observations processed = 4680328
15260 DEBUG 3: Rejected: SID exclusion = 0
15261 DEBUG 3: Rejected: obs type = 3890030
15262 DEBUG 3: Rejected: valid time = 0
15263 DEBUG 3: Rejected: bad obs value = 0
15264 DEBUG 3: Rejected: off the grid = 786506
15265 DEBUG 3: Rejected: topography = 0
15266 DEBUG 3: Rejected: level mismatch = 3694
15267 DEBUG 3: Rejected: quality marker = 0
15268 DEBUG 3: Rejected: message type = 0
15269 DEBUG 3: Rejected: masking region = 0
15270 DEBUG 3: Rejected: bad fcst value = 0
15271 DEBUG 3: Rejected: duplicates = 0
15272 DEBUG 2: Computing Continuous Statistics.
15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=0, observation filtering threshold >=0, and field logic UNION.
15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=5.0, observation filtering threshold >=5.0, and field logic UNION.
15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=10.0, observation filtering threshold >=10.0, and field logic UNION.
15276 DEBUG 2: Computing Scalar Partial Sums.
15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=0, observation filtering threshold >=0, and field logic UNION.
15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=5.0, observation filtering threshold >=5.0, and field logic UNION.
15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=10.0, observation filtering threshold >=10.0, and field logic UNION.
15280 DEBUG 2:
15281 DEBUG 2:
--------------------------------------------------------------------------------
I am going to work on processing these point stat files to create
those vertical raob plots we had a discussion about. I remember us
talking about the partial sums file. Why did we choose to go the
route of producing partial sums then feeding that into series analysis
to generate bias and MSE? It looks like bias and MSE both exist
within the CNT line type (MBIAS and MSE)?
Justin
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Aug 30 09:46:52 2019
Justin,
We wrote the SL1L2 partial sums from Point-Stat because they can be
aggregated together by the stat-analysis tool over multiple days or cases.

If you're interested in continuous statistics from Point-Stat, I'd
recommend writing the CNT line type (which has the stats computed for that
single run) and the SL1L2 line type (so that you can aggregate them
together in stat-analysis or METviewer).

The other alternative is looking at the average of the daily statistics
scores. For RMSE, the average of the daily RMSE is equal to the aggregated
score... as long as the number of matched pairs remains constant day to
day. But if today you have 98 matched pairs and tomorrow you have 105,
then tomorrow's score will have slightly more weight. The SL1L2 lines are
aggregated as weighted averages, where the TOTAL column is the weight. And
then stats (like RMSE and MSE) are recomputed from those aggregated partial
sums. Generally, the statisticians recommend this method over the mean of
the daily scores. Neither is "wrong"; they just give you slightly different
information.
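
To make that weighting concrete, here is a small illustrative sketch with
made-up numbers (98 pairs one day, 105 the next, not your data) that
recomputes RMSE from aggregated partial sums and compares it to the mean of
the daily RMSE values; the dictionary keys follow the SL1L2 columns (TOTAL,
FBAR, OBAR, FOBAR, FFBAR, OOBAR).

import numpy as np

rng = np.random.default_rng(0)

def sl1l2(f, o):
    # Scalar partial sums as written in an SL1L2 line.
    n = len(f)
    return {'total': n, 'fbar': f.mean(), 'obar': o.mean(),
            'fobar': (f * o).mean(), 'ffbar': (f * f).mean(),
            'oobar': (o * o).mean()}

def rmse_from_sl1l2(s):
    # MSE = ffbar - 2*fobar + oobar, so RMSE is its square root.
    return np.sqrt(s['ffbar'] - 2.0 * s['fobar'] + s['oobar'])

def aggregate(lines):
    # Weighted average of each partial sum, with TOTAL as the weight.
    w = np.array([ln['total'] for ln in lines], dtype=float)
    agg = {'total': w.sum()}
    for k in ('fbar', 'obar', 'fobar', 'ffbar', 'oobar'):
        agg[k] = np.average([ln[k] for ln in lines], weights=w)
    return agg

# Two hypothetical days of matched pairs.
days = []
for n in (98, 105):
    o = rng.normal(50.0, 10.0, n)       # pretend observations
    f = o + rng.normal(1.0, 3.0, n)     # pretend forecasts with some error
    days.append(sl1l2(f, o))

print("mean of daily RMSE:", np.mean([rmse_from_sl1l2(d) for d in days]))
print("aggregated RMSE   :", rmse_from_sl1l2(aggregate(days)))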
Thanks,
John
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Fri Aug 30 12:36:07 2019
So if I understand what you're saying correctly, if I wanted an average of
24-hour forecasts over a month-long run, then I would use the SL1L2 output
to aggregate and produce this average? Whereas if I used CNT, this would
just provide me ~30 individual (one per day over a month) 24-hour forecast
verifications?
On a side note, did we ever go over how to plot the SL1L2 MSE and
biases? I am forgetting if we used stat_analysis to produce a plot or
if the plot you showed me was just something you guys post processed
using python or whatnot.
Justin
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Aug 30 13:45:43 2019
Justin,
Sounds about right. Each time you run Grid-Stat or Point-Stat you can
write the CNT output line type, which contains stats like MSE, ME, MAE, and
RMSE. And I'd recommend that you also write the SL1L2 line type.

Then you'd run a stat_analysis job like this:

   stat_analysis -lookin /path/to/stat/data \
     -job aggregate_stat -line_type SL1L2 -out_line_type CNT \
     -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat cnt_out.stat
This job reads any .stat files it finds in "/path/to/stat/data", reads the
SL1L2 line type, and for each unique combination of the FCST_VAR, FCST_LEV,
and FCST_LEAD columns, it'll aggregate those SL1L2 partial sums together
and write out the corresponding CNT line type to the output file named
cnt_out.stat.
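
As for plotting, there is nothing MET-specific about that step; here is a
rough sketch (ordinary pandas/matplotlib, with hypothetical column handling,
not a MET tool) of one way to turn cnt_out.stat into a vertical profile of
MSE and MBIAS. It assumes the -out_stat file has a single header row naming
both the shared columns (FCST_VAR, FCST_LEV, FCST_LEAD, ...) and the CNT
columns (MSE, MBIAS, ...); adjust the parsing if your version writes the
header differently.

import re
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("cnt_out.stat", sep=r"\s+")

def mid_pressure(lev):
    # Turn a level string like 'P425-376' into a midpoint pressure (hPa).
    vals = [float(v) for v in re.findall(r"\d+(?:\.\d+)?", str(lev))]
    return sum(vals) / len(vals) if vals else float("nan")

# Pick one lead time and sort the levels by pressure.
sub = df[df["FCST_LEAD"] == df["FCST_LEAD"].iloc[0]].copy()
sub["PMID"] = sub["FCST_LEV"].map(mid_pressure)
sub = sub.sort_values("PMID")

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8, 6))
ax1.plot(sub["MSE"], sub["PMID"], marker="o")
ax2.plot(sub["MBIAS"], sub["PMID"], marker="o")
ax1.set_xlabel("MSE")
ax2.set_xlabel("MBIAS")
ax1.set_ylabel("Pressure (hPa)")
ax1.invert_yaxis()  # put the surface at the bottom of the plot
fig.savefig("raob_profile.png")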
John
On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> So if I understand what you're saying correctly, then if I wanted to
an
> average of 24 hour forecasts over a month long run, then I would use
the
> SL1L2 output to aggregate and produce this average? Whereas if I
used CNT,
> this would just provide me ~30 individual (per day over a month) 24
hour
> forecast verifications?
>
> On a side note, did we ever go over how to plot the SL1L2 MSE and
biases?
> I am forgetting if we used stat_analysis to produce a plot or if the
plot
> you showed me was just something you guys post processed using
python or
> whatnot.
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, August 30, 2019 8:47 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> We wrote the SL1L2 partial sums from Point-Stat because they can be
> aggregated together by the stat-analysis tool over multiple days or
cases.
>
> If you're interested in continuous statistics from Point-Stat, I'd
> recommend writing the CNT line type (which has the stats computed
for that
> single run) and the SL1L2 line type (so that you can aggregate them
> together in stat-analysis or METviewer).
>
> The other alternative is looking at the average of the daily
statistics
> scores. For RMSE, the average of the daily RMSE is equal to the
aggregated
> score... as long as the number of matched pairs remains constant day
to
> day. But if one today you have 98 matched pairs and tomorrow you
have 105,
> then tomorrow's score will have slightly more weight. The SL1L2
lines are
> aggregated as weighted averages, where the TOTAL column is the
weight. And
> then stats (like RMSE and MSE) are recomputed from those aggregated
> scores. Generally, the statisticians recommend this method over the
mean
> of the daily scores. Neither is "wrong", they just give you
slightly
> different information.
>
> Thanks,
> John
>
> On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John.
> >
> > Sorry it's taken me such a long time to get to this. It's nearing
the
> end
> > of FY19 so I have been finalizing several transition projects and
haven’t
> > had much time to work on MET recently. I just picked this back up
and
> have
> > loaded a couple new modules. Here is what I have to work with
now:
> >
> > 1) intel/xe_2013-sp1-u1
> > 2) netcdf-local/netcdf-met
> > 3) met-8.1/met-8.1a-with-grib2-support
> > 4) ncview-2.1.5/ncview-2.1.5
> > 5) udunits/udunits-2.1.24
> > 6) gcc-6.3.0/gcc-6.3.0
> > 7) ImageMagicK/ImageMagick-6.9.0-10
> > 8) python/anaconda-7-15-15-save.6.6.2017
> >
> >
> > Running
> > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3
> > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> >
> > I get many matched pairs. Here is a sample of what the log file
looks
> > like for one of the pressure ranges I am verifying on:
> >
> > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-376,
for
> > observation type radiosonde, over region FULL, for interpolation
method
> > NEAREST(1), using 98 pairs.
> > 15258 DEBUG 3: Number of matched pairs = 98
> > 15259 DEBUG 3: Observations processed = 4680328
> > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > 15261 DEBUG 3: Rejected: obs type = 3890030
> > 15262 DEBUG 3: Rejected: valid time = 0
> > 15263 DEBUG 3: Rejected: bad obs value = 0
> > 15264 DEBUG 3: Rejected: off the grid = 786506
> > 15265 DEBUG 3: Rejected: topography = 0
> > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > 15267 DEBUG 3: Rejected: quality marker = 0
> > 15268 DEBUG 3: Rejected: message type = 0
> > 15269 DEBUG 3: Rejected: masking region = 0
> > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > 15271 DEBUG 3: Rejected: duplicates = 0
> > 15272 DEBUG 2: Computing Continuous Statistics.
> > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold >=0,
> > observation filtering threshold >=0, and field logic UNION.
> > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold >=0,
> > observation filtering threshold >=0, and field logic UNION.
> > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > 15280 DEBUG 2:
> > 15281 DEBUG 2:
> >
>
--------------------------------------------------------------------------------
> >
> > I am going to work on processing these point stat files to create
those
> > vertical raob plots we had a discussion about. I remember us
talking
> about
> > the partial sums file. Why did we choose to go the route of
producing
> > partial sums then feeding that into series analysis to generate
bias and
> > MSE? It looks like bias and MSE both exist within the CNT line
type
> (MBIAS
> > and MSE)?
> >
> >
> > Justin
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 16, 2019 12:16 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Great, thanks for sending me the sample data. Yes, I was able to
> replicate
> > the segfault. The good news is that this is caused by a simple
typo
> that's
> > easy to fix. If you look in the "obs.field" entry of the
relhumConfig
> > file, you'll see an empty string for the last field listed:
> >
> > *obs = { field = [*
> >
> >
> >
> > * ... {name = "dptd";level = ["P988-1006"];},
> {name =
> > "";level = ["P1007-1013"];} ];*
> > If you change that empty string to "dptd", the segfault will go
away:*
> > {name = "dptd";level = ["P1007-1013"];}*
> > Rerunning met-8.0 with that change, Point-Stat ran to completion
(in 2
> > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > pairs. They were discarded because of the valid times (seen using
-v 3
> > command line option to Point-Stat). The ob file you sent is named
"
> > raob_2015020412.nc" but the actual times in that file are for
> > "20190426_120000":
> >
> > *ncdump -v hdr_vld_table raob_2015020412.nc
<http://raob_2015020412.nc>*
> >
> > * hdr_vld_table = "20190426_120000" ;*
> >
> > So please be aware of that discrepancy. To just produce some
matched
> > pairs, I told Point-Stat to use the valid times of the data:
> > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > <http://raob_2015020412.nc> relhumConfig \*
> > * -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000
> > -obs_valid_end 20190426_120000*
> >
> > But I still get 0 matched pairs. This time, it's because of bad
forecast
> > values:
> > *DEBUG 3: Rejected: bad fcst value = 55*
> >
> > Taking a step back... let's run one of these fields through
> > plot_data_plane, which results in an error:
> > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps <http://plot.ps>
> > 'name="./read_NRL_binary.py
> >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > ERROR : DataPlane::two_to_one() -> range check error: (Nx, Ny) =
(97,
> 97),
> > (x, y) = (97, 0)
> >
> > While the numpy object is 97x97, the grid is specified as being
118x118
> in
> > the python script ('nx': 118, 'ny': 118).
> >
> > Just to get something working, I modified the nx and ny in the
python
> > script:
> > 'nx':97,
> > 'ny':97,
> > Rerunning again, I still didn't get any matched pairs.
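One way to guard against that kind of nx/ny mismatch is to derive the
grid size from the record itself rather than hard-coding it. A minimal
sketch (not the actual read_NRL_binary.py, whose contents aren't shown
in this thread; the big-endian float32 dtype is an assumption):

import numpy as np

def read_square_field(path, dtype=">f4"):
    # Hypothetical helper: read one flat binary record and infer the
    # square grid size from its length, so 'nx'/'ny' can never disagree
    # with the data (97x97 vs the hard-coded 118x118 above).
    raw = np.fromfile(path, dtype=dtype)
    n = int(round(raw.size ** 0.5))
    if n * n != raw.size:
        raise ValueError("%s: %d values is not a square field" % (path, raw.size))
    return raw.reshape(n, n)

# The reader script could then fill its grid attributes from the array
# shape instead of fixed numbers, e.g.:
#   met_data = read_square_field(sys.argv[1])
#   attrs["grid"]["ny"], attrs["grid"]["nx"] = met_data.shape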
> >
> > So I'd suggest...
> > - Fix the typo in the config file.
> > - Figure out the discrepancy between the obs file name timestamp
and the
> > data in that file.
> > - Make sure the grid information is consistent with the data in
the
> python
> > script.
> >
> > Obviously though, we don't want the code to be segfaulting in any
> > condition. So next, I tested using met-8.1 with that empty
string. This
> > time it does run with no segfault, but prints a warning about the
empty
> > string.
> >
> > Hope that helps.
> >
> > Thanks,
> > John
> >
> > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Hey John,
> > >
> > > I've put my data in tsu_data_20190815/ under met_help.
> > >
> > > I am running met-8.0/met-8.0-with-grib2-support and have
provided
> > > everything
> > > on that list you've provided me. Let me know if you're able to
> replicate
> > > it
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Thursday, August 15, 2019 4:08 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Well that doesn't seem to be very helpful of Point-Stat at all.
There
> > > isn't much jumping out at me from the log messages you sent. In
fact,
> I
> > > hunted around for the DEBUG(7) log message but couldn't find
where in
> the
> > > code it's being written. Are you able to send me some sample
data to
> > > replicate this behavior?
> > >
> > > I'd need to know...
> > > - What version of MET are you running.
> > > - A copy of your Point-Stat config file.
> > > - The python script that you're running.
> > > - The input file for that python script.
> > > - The NetCDF point observation file you're passing to Point-
Stat.
> > >
> > > If I can replicate the behavior here, it should be easy to run
it in
> the
> > > debugger and figure it out.
> > >
> > > You can post data to our anonymous ftp site as described in "How
to
> send
> > us
> > > data":
> > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > > Queue: met_help
> > > > Subject: point_stat seg faulting
> > > > Owner: Nobody
> > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > Status: new
> > > > Ticket <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > >
> > > >
> > > > Hey John,
> > > >
> > > >
> > > >
> > > > I'm trying to extrapolate the production of vertical raob
> verification
> > > > plots
> > > > using point_stat and stat_analysis like we did together for
winds but
> > for
> > > > relative humidity now. But when I run point_stat, it seg
faults
> > without
> > > > much explanation
> > > >
> > > >
> > > >
> > > > DEBUG 2:
> > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > ----
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > >
> > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
climatology
> > > mean
> > > > levels, and 0 climatology standard deviation levels.
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2:
> > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > ----
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > >
> > > > DEBUG 7: tbl dims: messge_type: 1 station id: 617
valid_time: 1
> > > >
> > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > PYTHON_NUMPY
> > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > ./out/point_stat.log
> > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > From my log file:
> > > >
> > > > 607 DEBUG 2:
> > > >
> > > > 608 DEBUG 2: Searching 4680328 observations from 617 messages.
> > > >
> > > > 609 DEBUG 7: tbl dims: messge_type: 1 station id: 617
> > valid_time: 1
> > > >
> > > >
> > > >
> > > > Any help would be much appreciated
> > > >
> > > >
> > > >
> > > > Justin
> > > >
> > > >
> > > >
> > > > Justin Tsu
> > > >
> > > > Marine Meteorology Division
> > > >
> > > > Data Assimilation/Mesoscale Modeling
> > > >
> > > > Building 704 Room 212
> > > >
> > > > Naval Research Laboratory, Code 7531
> > > >
> > > > 7 Grace Hopper Avenue
> > > >
> > > > Monterey, CA 93943-5502
> > > >
> > > >
> > > >
> > > > Ph. (831) 656-4111
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Fri Aug 30 17:10:37 2019
Thanks John,
This all helps me greatly. One more question: is there any information
in either the CNT or SL1L2 line types that could give me confidence
intervals for each data point? I'm looking to replicate the attached
plot. Notice that the individual points could have either a 99, 95, or
90% confidence interval.
Justin
-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Friday, August 30, 2019 12:46 PM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
Justin,
Sounds about right. Each time you run Grid-Stat or Point-Stat you can
write the CNT output line type which contains stats like MSE, ME, MAE,
and
RMSE. And I'd recommend that you also write the SL1L2 line type as
well.
Then you'd run a stat_analysis job like this:
stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
cnt_out.stat
This job reads any .stat files it finds in "/path/to/stat/data", reads
the
SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV, and
FCST_LEAD columns, it'll aggregate those SL1L2 partial sums together
and
write out the corresponding CNT line type to the output file named
cnt_out.stat.
John
On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> So if I understand what you're saying correctly, then if I wanted an
> average of 24 hour forecasts over a month long run, then I would use
the
> SL1L2 output to aggregate and produce this average? Whereas if I
used CNT,
> this would just provide me ~30 individual (per day over a month) 24
hour
> forecast verifications?
>
> On a side note, did we ever go over how to plot the SL1L2 MSE and
biases?
> I am forgetting if we used stat_analysis to produce a plot or if the
plot
> you showed me was just something you guys post processed using
python or
> whatnot.
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, August 30, 2019 8:47 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> We wrote the SL1L2 partial sums from Point-Stat because they can be
> aggregated together by the stat-analysis tool over multiple days or
cases.
>
> If you're interested in continuous statistics from Point-Stat, I'd
> recommend writing the CNT line type (which has the stats computed
for that
> single run) and the SL1L2 line type (so that you can aggregate them
> together in stat-analysis or METviewer).
>
> The other alternative is looking at the average of the daily
statistics
> scores. For RMSE, the average of the daily RMSE is equal to the
aggregated
> score... as long as the number of matched pairs remains constant day
to
> day. But if today you have 98 matched pairs and tomorrow you
have 105,
> then tomorrow's score will have slightly more weight. The SL1L2
lines are
> aggregated as weighted averages, where the TOTAL column is the
weight. And
> then stats (like RMSE and MSE) are recomputed from those aggregated
> scores. Generally, the statisticians recommend this method over the
mean
> of the daily scores. Neither is "wrong", they just give you
slightly
> different information.
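To make that weighting concrete, here is a small worked example (the 98
and 105 pair counts come from this thread; the daily MSE values are
invented purely for illustration):

import math

days = [(98, 4.0), (105, 9.0)]   # (TOTAL matched pairs, daily MSE)

# Mean of the daily scores: each day counts equally.
mean_daily_rmse = sum(math.sqrt(mse) for _, mse in days) / len(days)

# SL1L2-style aggregation: weight by TOTAL and recompute the statistic
# from the aggregated sums (total squared error over total pairs).
total_pairs = sum(n for n, _ in days)
aggregated_mse = sum(n * mse for n, mse in days) / total_pairs
aggregated_rmse = math.sqrt(aggregated_mse)

print("mean of daily RMSE:", round(mean_daily_rmse, 3))   # 2.5
print("aggregated RMSE:   ", round(aggregated_rmse, 3))   # ~2.566

The day with 105 pairs pulls the aggregated value toward its own score,
which is exactly the extra weight described above.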
>
> Thanks,
> John
>
> On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John.
> >
> > Sorry it's taken me such a long time to get to this. It's nearing
the
> end
> > of FY19 so I have been finalizing several transition projects and
haven’t
> > had much time to work on MET recently. I just picked this back up
and
> have
> > loaded a couple new modules. Here is what I have to work with
now:
> >
> > 1) intel/xe_2013-sp1-u1
> > 2) netcdf-local/netcdf-met
> > 3) met-8.1/met-8.1a-with-grib2-support
> > 4) ncview-2.1.5/ncview-2.1.5
> > 5) udunits/udunits-2.1.24
> > 6) gcc-6.3.0/gcc-6.3.0
> > 7) ImageMagicK/ImageMagick-6.9.0-10
> > 8) python/anaconda-7-15-15-save.6.6.2017
> >
> >
> > Running
> > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3
> > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> >
> > I get many matched pairs. Here is a sample of what the log file
looks
> > like for one of the pressure ranges I am verifying on:
> >
> > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-376,
for
> > observation type radiosonde, over region FULL, for interpolation
method
> > NEAREST(1), using 98 pairs.
> > 15258 DEBUG 3: Number of matched pairs = 98
> > 15259 DEBUG 3: Observations processed = 4680328
> > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > 15261 DEBUG 3: Rejected: obs type = 3890030
> > 15262 DEBUG 3: Rejected: valid time = 0
> > 15263 DEBUG 3: Rejected: bad obs value = 0
> > 15264 DEBUG 3: Rejected: off the grid = 786506
> > 15265 DEBUG 3: Rejected: topography = 0
> > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > 15267 DEBUG 3: Rejected: quality marker = 0
> > 15268 DEBUG 3: Rejected: message type = 0
> > 15269 DEBUG 3: Rejected: masking region = 0
> > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > 15271 DEBUG 3: Rejected: duplicates = 0
> > 15272 DEBUG 2: Computing Continuous Statistics.
> > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold >=0,
> > observation filtering threshold >=0, and field logic UNION.
> > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold >=0,
> > observation filtering threshold >=0, and field logic UNION.
> > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > 15280 DEBUG 2:
> > 15281 DEBUG 2:
> >
>
--------------------------------------------------------------------------------
> >
> > I am going to work on processing these point stat files to create
those
> > vertical raob plots we had a discussion about. I remember us
talking
> about
> > the partial sums file. Why did we choose to go the route of
producing
> > partial sums then feeding that into series analysis to generate
bias and
> > MSE? It looks like bias and MSE both exist within the CNT line
type
> (MBIAS
> > and MSE)?
> >
> >
> > Justin
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 16, 2019 12:16 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Great, thanks for sending me the sample data. Yes, I was able to
> replicate
> > the segfault. The good news is that this is caused by a simple
typo
> that's
> > easy to fix. If you look in the "obs.field" entry of the
relhumConfig
> > file, you'll see an empty string for the last field listed:
> >
> > *obs = { field = [*
> >
> >
> >
> > * ... {name = "dptd";level = ["P988-1006"];},
> {name =
> > "";level = ["P1007-1013"];} ];*
> > If you change that empty string to "dptd", the segfault will go
away:*
> > {name = "dptd";level = ["P1007-1013"];}*
> > Rerunning met-8.0 with that change, Point-Stat ran to completion
(in 2
> > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > pairs. They were discarded because of the valid times (seen using
-v 3
> > command line option to Point-Stat). The ob file you sent is named
"
> > raob_2015020412.nc" but the actual times in that file are for
> > "20190426_120000":
> >
> > *ncdump -v hdr_vld_table raob_2015020412.nc
<http://raob_2015020412.nc>*
> >
> > * hdr_vld_table = "20190426_120000" ;*
> >
> > So please be aware of that discrepancy. To just produce some
matched
> > pairs, I told Point-Stat to use the valid times of the data:
> > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > <http://raob_2015020412.nc> relhumConfig \*
> > * -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000
> > -obs_valid_end 20190426_120000*
> >
> > But I still get 0 matched pairs. This time, it's because of bad
forecast
> > values:
> > *DEBUG 3: Rejected: bad fcst value = 55*
> >
> > Taking a step back... let's run one of these fields through
> > plot_data_plane, which results in an error:
> > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps <http://plot.ps>
> > 'name="./read_NRL_binary.py
> >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > ERROR : DataPlane::two_to_one() -> range check error: (Nx, Ny) =
(97,
> 97),
> > (x, y) = (97, 0)
> >
> > While the numpy object is 97x97, the grid is specified as being
118x118
> in
> > the python script ('nx': 118, 'ny': 118).
> >
> > Just to get something working, I modified the nx and ny in the
python
> > script:
> > 'nx':97,
> > 'ny':97,
> > Rerunning again, I still didn't get any matched pairs.
> >
> > So I'd suggest...
> > - Fix the typo in the config file.
> > - Figure out the discrepancy between the obs file name timestamp
and the
> > data in that file.
> > - Make sure the grid information is consistent with the data in
the
> python
> > script.
> >
> > Obviously though, we don't want the code to be segfaulting in any
> > condition. So next, I tested using met-8.1 with that empty
string. This
> > time it does run with no segfault, but prints a warning about the
empty
> > string.
> >
> > Hope that helps.
> >
> > Thanks,
> > John
> >
> > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Hey John,
> > >
> > > I've put my data in tsu_data_20190815/ under met_help.
> > >
> > > I am running met-8.0/met-8.0-with-grib2-support and have
provided
> > > everything
> > > on that list you've provided me. Let me know if you're able to
> replicate
> > > it
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Thursday, August 15, 2019 4:08 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Well that doesn't seem to be very helpful of Point-Stat at all.
There
> > > isn't much jumping out at me from the log messages you sent. In
fact,
> I
> > > hunted around for the DEBUG(7) log message but couldn't find
where in
> the
> > > code it's being written. Are you able to send me some sample
data to
> > > replicate this behavior?
> > >
> > > I'd need to know...
> > > - What version of MET are you running.
> > > - A copy of your Point-Stat config file.
> > > - The python script that you're running.
> > > - The input file for that python script.
> > > - The NetCDF point observation file you're passing to Point-
Stat.
> > >
> > > If I can replicate the behavior here, it should be easy to run
it in
> the
> > > debugger and figure it out.
> > >
> > > You can post data to our anonymous ftp site as described in "How
to
> send
> > us
> > > data":
> > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > > Queue: met_help
> > > > Subject: point_stat seg faulting
> > > > Owner: Nobody
> > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > Status: new
> > > > Ticket <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > >
> > > >
> > > > Hey John,
> > > >
> > > >
> > > >
> > > > I'm trying to extrapolate the production of vertical raob
> verification
> > > > plots
> > > > using point_stat and stat_analysis like we did together for
winds but
> > for
> > > > relative humidity now. But when I run point_stat, it seg
faults
> > without
> > > > much explanation
> > > >
> > > >
> > > >
> > > > DEBUG 2:
> > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > ----
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > >
> > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
climatology
> > > mean
> > > > levels, and 0 climatology standard deviation levels.
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2:
> > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > ----
> > > >
> > > > DEBUG 2:
> > > >
> > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > >
> > > > DEBUG 7: tbl dims: messge_type: 1 station id: 617
valid_time: 1
> > > >
> > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > PYTHON_NUMPY
> > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > ./out/point_stat.log
> > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > From my log file:
> > > >
> > > > 607 DEBUG 2:
> > > >
> > > > 608 DEBUG 2: Searching 4680328 observations from 617 messages.
> > > >
> > > > 609 DEBUG 7: tbl dims: messge_type: 1 station id: 617
> > valid_time: 1
> > > >
> > > >
> > > >
> > > > Any help would be much appreciated
> > > >
> > > >
> > > >
> > > > Justin
> > > >
> > > >
> > > >
> > > > Justin Tsu
> > > >
> > > > Marine Meteorology Division
> > > >
> > > > Data Assimilation/Mesoscale Modeling
> > > >
> > > > Building 704 Room 212
> > > >
> > > > Naval Research Laboratory, Code 7531
> > > >
> > > > 7 Grace Hopper Avenue
> > > >
> > > > Monterey, CA 93943-5502
> > > >
> > > >
> > > >
> > > > Ph. (831) 656-4111
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Tue Sep 03 09:35:40 2019
Justin,
I see that you're plotting RMSE and bias (called ME for Mean Error in
MET)
in the plots you sent.
Table 7.6 of the MET User's Guide
(https://dtcenter.org/sites/default/files/community-code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf)
describes the contents of the CNT line type. Both the columns for RMSE
and ME are followed by _NCL and _NCU columns, which give the parametric
approximation of the confidence interval for those scores. So yes, you
can run Stat-Analysis to aggregate SL1L2 lines together and write the
corresponding CNT output line type.
The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric confidence limits for the RMSE statistic, and the ME_NCL and
ME_NCU columns contain the same for the ME statistic.
You can change the alpha value for those confidence intervals by
setting:
-out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
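As a rough sketch of how those columns could feed a vertical profile
plot (assumptions: the aggregated cnt_out.stat from the Stat-Analysis
job discussed earlier has its CNT column names in the header row, and
the level_midpoint helper below is specific to your P###-### style
level strings):

import re
import pandas as pd
import matplotlib.pyplot as plt

cnt = pd.read_csv("cnt_out.stat", sep=r"\s+")   # whitespace-delimited .stat output
cnt = cnt[cnt["LINE_TYPE"] == "CNT"]

def level_midpoint(lev):
    # Turn a level string like "P376-425" into a midpoint pressure (hPa).
    lo, hi = (float(v) for v in re.match(r"P([\d.]+)-([\d.]+)", lev).groups())
    return 0.5 * (lo + hi)

cnt["P_MID"] = cnt["FCST_LEV"].map(level_midpoint)
cnt = cnt.sort_values("P_MID")

# RMSE profile with its parametric confidence interval as horizontal bars.
xerr = [cnt["RMSE"] - cnt["RMSE_NCL"], cnt["RMSE_NCU"] - cnt["RMSE"]]
plt.errorbar(cnt["RMSE"], cnt["P_MID"], xerr=xerr, fmt="o-")
plt.gca().invert_yaxis()   # pressure decreases with height
plt.xlabel("RMSE")
plt.ylabel("Pressure (hPa)")
plt.savefig("rmse_profile.png")

The ME profile works the same way using the ME, ME_NCL, and ME_NCU
columns.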
Thanks,
John
On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Thanks John,
>
> This all helps me greatly. One more question: is there any
information
> in either the CNT or SL1L2 that could give me confidence intervals
for
> each data point? I'm looking to replicate the attached plot.
Notice that
> the individual points could have either a 99, 95 or 90 % confidence.
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, August 30, 2019 12:46 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Sounds about right. Each time you run Grid-Stat or Point-Stat you
can
> write the CNT output line type which contains stats like MSE, ME,
MAE, and
> RMSE. And I'd recommend that you also write the SL1L2 line type
as well.
>
> Then you'd run a stat_analysis job like this:
>
> stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
> cnt_out.stat
>
> This job reads any .stat files it finds in "/path/to/stat/data",
reads the
> SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV, and
> FCST_LEAD columns, it'll aggregate those SL1L2 partial sums together
and
> write out the corresponding CNT line type to the output file named
> cnt_out.stat.
>
> John
>
> On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > So if I understand what you're saying correctly, then if I wanted an
> > average of 24 hour forecasts over a month long run, then I would
use the
> > SL1L2 output to aggregate and produce this average? Whereas if I
used
> CNT,
> > this would just provide me ~30 individual (per day over a month)
24 hour
> > forecast verifications?
> >
> > On a side note, did we ever go over how to plot the SL1L2 MSE and
biases?
> > I am forgetting if we used stat_analysis to produce a plot or if
the plot
> > you showed me was just something you guys post processed using
python or
> > whatnot.
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 30, 2019 8:47 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > We wrote the SL1L2 partial sums from Point-Stat because they can
be
> > aggregated together by the stat-analysis tool over multiple days
or
> cases.
> >
> > If you're interested in continuous statistics from Point-Stat, I'd
> > recommend writing the CNT line type (which has the stats computed
for
> that
> > single run) and the SL1L2 line type (so that you can aggregate
them
> > together in stat-analysis or METviewer).
> >
> > The other alternative is looking at the average of the daily
statistics
> > scores. For RMSE, the average of the daily RMSE is equal to the
> aggregated
> > score... as long as the number of matched pairs remains constant
day to
> > day. But if today you have 98 matched pairs and tomorrow you
have
> 105,
> > then tomorrow's score will have slightly more weight. The SL1L2
lines
> are
> > aggregated as weighted averages, where the TOTAL column is the
weight.
> And
> > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > scores. Generally, the statisticians recommend this method over
the mean
> > of the daily scores. Neither is "wrong", they just give you
slightly
> > different information.
> >
> > Thanks,
> > John
> >
> > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John.
> > >
> > > Sorry it's taken me such a long time to get to this. It's
nearing the
> > end
> > > of FY19 so I have been finalizing several transition projects
and
> haven’t
> > > had much time to work on MET recently. I just picked this back
up and
> > have
> > > loaded a couple new modules. Here is what I have to work with
now:
> > >
> > > 1) intel/xe_2013-sp1-u1
> > > 2) netcdf-local/netcdf-met
> > > 3) met-8.1/met-8.1a-with-grib2-support
> > > 4) ncview-2.1.5/ncview-2.1.5
> > > 5) udunits/udunits-2.1.24
> > > 6) gcc-6.3.0/gcc-6.3.0
> > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > 8) python/anaconda-7-15-15-save.6.6.2017
> > >
> > >
> > > Running
> > > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3
> > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > >
> > > I get many matched pairs. Here is a sample of what the log file
looks
> > > like for one of the pressure ranges I am verifying on:
> > >
> > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-
376, for
> > > observation type radiosonde, over region FULL, for interpolation
method
> > > NEAREST(1), using 98 pairs.
> > > 15258 DEBUG 3: Number of matched pairs = 98
> > > 15259 DEBUG 3: Observations processed = 4680328
> > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > 15262 DEBUG 3: Rejected: valid time = 0
> > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > 15265 DEBUG 3: Rejected: topography = 0
> > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > 15268 DEBUG 3: Rejected: message type = 0
> > > 15269 DEBUG 3: Rejected: masking region = 0
> > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> >=0,
> > > observation filtering threshold >=0, and field logic UNION.
> > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> >=0,
> > > observation filtering threshold >=0, and field logic UNION.
> > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > > 15280 DEBUG 2:
> > > 15281 DEBUG 2:
> > >
> >
>
--------------------------------------------------------------------------------
> > >
> > > I am going to work on processing these point stat files to
create those
> > > vertical raob plots we had a discussion about. I remember us
talking
> > about
> > > the partial sums file. Why did we choose to go the route of
producing
> > > partial sums then feeding that into series analysis to generate
bias
> and
> > > MSE? It looks like bias and MSE both exist within the CNT line
type
> > (MBIAS
> > > and MSE)?
> > >
> > >
> > > Justin
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 16, 2019 12:16 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Great, thanks for sending me the sample data. Yes, I was able
to
> > replicate
> > > the segfault. The good news is that this is caused by a simple
typo
> > that's
> > > easy to fix. If you look in the "obs.field" entry of the
relhumConfig
> > > file, you'll see an empty string for the last field listed:
> > >
> > > *obs = { field = [*
> > >
> > >
> > >
> > > * ... {name = "dptd";level = ["P988-1006"];},
> > {name =
> > > "";level = ["P1007-1013"];} ];*
> > > If you change that empty string to "dptd", the segfault will go
away:*
> > > {name = "dptd";level = ["P1007-1013"];}*
> > > Rerunning met-8.0 with that change, Point-Stat ran to completion
(in 2
> > > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > > pairs. They were discarded because of the valid times (seen
using -v 3
> > > command line option to Point-Stat). The ob file you sent is
named "
> > > raob_2015020412.nc" but the actual times in that file are for
> > > "20190426_120000":
> > >
> > > *ncdump -v hdr_vld_table raob_2015020412.nc
<http://raob_2015020412.nc
> >*
> > >
> > > * hdr_vld_table = "20190426_120000" ;*
> > >
> > > So please be aware of that discrepancy. To just produce some
matched
> > > pairs, I told Point-Stat to use the valid times of the data:
> > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > <http://raob_2015020412.nc> relhumConfig \*
> > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > -obs_valid_end 20190426_120000*
> > >
> > > But I still get 0 matched pairs. This time, it's because of bad
> forecast
> > > values:
> > > *DEBUG 3: Rejected: bad fcst value = 55*
> > >
> > > Taking a step back... let's run one of these fields through
> > > plot_data_plane, which results in an error:
> > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
<http://plot.ps>
> > > 'name="./read_NRL_binary.py
> > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > ERROR : DataPlane::two_to_one() -> range check error: (Nx, Ny)
= (97,
> > 97),
> > > (x, y) = (97, 0)
> > >
> > > While the numpy object is 97x97, the grid is specified as being
118x118
> > in
> > > the python script ('nx': 118, 'ny': 118).
> > >
> > > Just to get something working, I modified the nx and ny in the
python
> > > script:
> > > 'nx':97,
> > > 'ny':97,
> > > Rerunning again, I still didn't get any matched pairs.
> > >
> > > So I'd suggest...
> > > - Fix the typo in the config file.
> > > - Figure out the discrepancy between the obs file name timestamp
and
> the
> > > data in that file.
> > > - Make sure the grid information is consistent with the data in
the
> > python
> > > script.
> > >
> > > Obviously though, we don't want the code to be segfaulting in any
> > > condition. So next, I tested using met-8.1 with that empty
string.
> This
> > > time it does run with no segfault, but prints a warning about
the empty
> > > string.
> > >
> > > Hope that helps.
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Hey John,
> > > >
> > > > I've put my data in tsu_data_20190815/ under met_help.
> > > >
> > > > I am running met-8.0/met-8.0-with-grib2-support and have
provided
> > > > everything
> > > > on that list you've provided me. Let me know if you're able
to
> > replicate
> > > > it
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Well that doesn't seem to be very helpful of Point-Stat at
all.
> There
> > > > isn't much jumping out at me from the log messages you sent.
In
> fact,
> > I
> > > > hunted around for the DEBUG(7) log message but couldn't find
where in
> > the
> > > > code it's being written. Are you able to send me some sample
data to
> > > > replicate this behavior?
> > > >
> > > > I'd need to know...
> > > > - What version of MET are you running.
> > > > - A copy of your Point-Stat config file.
> > > > - The python script that you're running.
> > > > - The input file for that python script.
> > > > - The NetCDF point observation file you're passing to Point-
Stat.
> > > >
> > > > If I can replicate the behavior here, it should be easy to run
it in
> > the
> > > > debugger and figure it out.
> > > >
> > > > You can post data to our anonymous ftp site as described in
"How to
> > send
> > > us
> > > > data":
> > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > > > Queue: met_help
> > > > > Subject: point_stat seg faulting
> > > > > Owner: Nobody
> > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > Status: new
> > > > > Ticket <URL:
> > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > >
> > > > >
> > > > >
> > > > > Hey John,
> > > > >
> > > > >
> > > > >
> > > > > I'm trying to extrapolate the production of vertical raob
> > verification
> > > > > plots
> > > > > using point_stat and stat_analysis like we did together for
winds
> but
> > > for
> > > > > relative humidity now. But when I run point_stat, it seg
faults
> > > without
> > > > > much explanation
> > > > >
> > > > >
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > ----
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > >
> > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
> climatology
> > > > mean
> > > > > levels, and 0 climatology standard deviation levels.
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > ----
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > > >
> > > > > DEBUG 7: tbl dims: messge_type: 1 station id: 617
> valid_time: 1
> > > > >
> > > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > > PYTHON_NUMPY
> > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > > ./out/point_stat.log
> > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > From my log file:
> > > > >
> > > > > 607 DEBUG 2:
> > > > >
> > > > > 608 DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > >
> > > > > 609 DEBUG 7: tbl dims: messge_type: 1 station id: 617
> > > valid_time: 1
> > > > >
> > > > >
> > > > >
> > > > > Any help would be much appreciated
> > > > >
> > > > >
> > > > >
> > > > > Justin
> > > > >
> > > > >
> > > > >
> > > > > Justin Tsu
> > > > >
> > > > > Marine Meteorology Division
> > > > >
> > > > > Data Assimilation/Mesoscale Modeling
> > > > >
> > > > > Building 704 Room 212
> > > > >
> > > > > Naval Research Laboratory, Code 7531
> > > > >
> > > > > 7 Grace Hopper Avenue
> > > > >
> > > > > Monterey, CA 93943-5502
> > > > >
> > > > >
> > > > >
> > > > > Ph. (831) 656-4111
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Fri Sep 06 13:02:46 2019
Thanks John,
I managed to scrape together some code to get the RAOB stats from the
CNT output plotted with 95% CIs. Working on surface stats now.
So my configuration file looks like this right now:
fcst = {
field = [
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
];
}
obs = {
field = [
{name = "dptd";level = ["P0.86-1.5"];},
{name = "dptd";level = ["P1.6-2.5"];},
{name = "dptd";level = ["P2.6-3.5"];},
{name = "dptd";level = ["P3.6-4.5"];},
{name = "dptd";level = ["P4.6-6"];},
{name = "dptd";level = ["P6.1-8"];},
{name = "dptd";level = ["P9-15"];},
{name = "dptd";level = ["P16-25"];},
{name = "dptd";level = ["P26-40"];},
{name = "dptd";level = ["P41-65"];},
{name = "dptd";level = ["P66-85"];},
{name = "dptd";level = ["P86-125"];},
{name = "dptd";level = ["P126-175"];},
{name = "dptd";level = ["P176-225"];},
{name = "dptd";level = ["P226-275"];},
{name = "dptd";level = ["P276-325"];},
{name = "dptd";level = ["P326-375"];},
{name = "dptd";level = ["P376-425"];},
{name = "dptd";level = ["P426-475"];},
{name = "dptd";level = ["P476-525"];},
{name = "dptd";level = ["P526-575"];},
{name = "dptd";level = ["P576-625"];},
{name = "dptd";level = ["P626-675"];},
{name = "dptd";level = ["P676-725"];},
{name = "dptd";level = ["P726-775"];},
{name = "dptd";level = ["P776-825"];},
{name = "dptd";level = ["P826-875"];},
{name = "dptd";level = ["P876-912"];},
{name = "dptd";level = ["P913-936"];},
{name = "dptd";level = ["P937-962"];},
{name = "dptd";level = ["P963-987"];},
{name = "dptd";level = ["P988-1006"];},
{name = "dptd";level = ["P1007-1013"];}
];
}
And I have the data:
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
for a particular DTG and vertical level. If I want to run multiple
lead times, it seems like I'll have to copy that long list of fields
into the fcst dict for each lead time and then duplicate the obs
dictionary so that each forecast entry has a corresponding obs level
range. Is this correct, or is there a shorter/better way to do this?
Justin
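For what it's worth, a short Python sketch can generate those paired
fcst/obs entries instead of copying them by hand (the script path,
directory, and file-name pattern below are taken from the listing
above; the level list is abbreviated, and the generated text still
needs the rest of the Point-Stat config around it):

script = "/users/tsu/MET/work/read_NRL_binary.py"
dtg = "2015080106"
leads = ["00120000", "00180000", "00240000"]   # one config per lead time
levels = [                                     # (fcst level tag, obs level range)
    ("000001", "P0.86-1.5"),
    ("000400", "P376-425"),
    ("001013", "P1007-1013"),
    # ... fill in the remaining levels ...
]

for lead in leads:
    fcst, obs = [], []
    for tag, obs_lev in levels:
        path = ("./dwptdp_data/dwptdp_pre_%s_000000_3a0118x0118_%s_%s_fcstfld"
                % (tag, dtg, lead))
        fcst.append('{name = "%s %s";}' % (script, path))
        obs.append('{name = "dptd";level = ["%s"];}' % obs_lev)
    body = ("fcst = {\n   field = [\n%s\n   ];\n}\nobs = {\n   field = [\n%s\n   ];\n}\n"
            % (",\n".join(fcst), ",\n".join(obs)))
    with open("dwptdpConfig_%s" % lead, "w") as f:
        f.write(body)

Each lead time would still get its own config file and Point-Stat run
(the valid times differ), but the long field lists stay in one place.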
-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Tuesday, September 3, 2019 8:36 AM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
Justin,
I see that you're plotting RMSE and bias (called ME for Mean Error in
MET)
in the plots you sent.
Table 7.6 of the MET User's Guide (
https://dtcenter.org/sites/default/files/community-code/met/docs/user-
guide/MET_Users_Guide_v8.1.1.pdf)
describes the contents of the CNT line type. Both the columns for
RMSE
and ME are followed by _NCL and _NCU columns which give the parametric
approximation of the confidence interval for those scores. So yes,
you can
run Stat-Analysis to aggregate SL1L2 lines together and write the
corresponding CNT output line type.
The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric
confidence intervals for the RMSE statistic and ME_NCL and ME_NCU
columns
for the ME statistic.
You can change the alpha value for those confidence intervals by
setting:
-out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
Thanks,
John
On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Thanks John,
>
> This all helps me greatly. One more question: is there any
information
> in either the CNT or SL1L2 that could give me confidence intervals
for
> each data point? I'm looking to replicate the attached plot.
Notice that
> the individual points could have either a 99, 95 or 90 % confidence.
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, August 30, 2019 12:46 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Sounds about right. Each time you run Grid-Stat or Point-Stat you
can
> write the CNT output line type which contains stats like MSE, ME,
MAE, and
> RMSE. And I'd recommend that you also write the SL1L2 line type
as well.
>
> Then you'd run a stat_analysis job like this:
>
> stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
> cnt_out.stat
>
> This job reads any .stat files it finds in "/path/to/stat/data",
reads the
> SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV, and
> FCST_LEAD columns, it'll aggregate those SL1L2 partial sums together
and
> write out the corresponding CNT line type to the output file named
> cnt_out.stat.
>
> John
>
> On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > So if I understand what you're saying correctly, then if I wanted an
> > average of 24 hour forecasts over a month long run, then I would
use the
> > SL1L2 output to aggregate and produce this average? Whereas if I
used
> CNT,
> > this would just provide me ~30 individual (per day over a month)
24 hour
> > forecast verifications?
> >
> > On a side note, did we ever go over how to plot the SL1L2 MSE and
biases?
> > I am forgetting if we used stat_analysis to produce a plot or if
the plot
> > you showed me was just something you guys post processed using
python or
> > whatnot.
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 30, 2019 8:47 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > We wrote the SL1L2 partial sums from Point-Stat because they can
be
> > aggregated together by the stat-analysis tool over multiple days
or
> cases.
> >
> > If you're interested in continuous statistics from Point-Stat, I'd
> > recommend writing the CNT line type (which has the stats computed
for
> that
> > single run) and the SL1L2 line type (so that you can aggregate
them
> > together in stat-analysis or METviewer).
> >
> > The other alternative is looking at the average of the daily
statistics
> > scores. For RMSE, the average of the daily RMSE is equal to the
> aggregated
> > score... as long as the number of matched pairs remains constant
day to
> > day. But if today you have 98 matched pairs and tomorrow you
have
> 105,
> > then tomorrow's score will have slightly more weight. The SL1L2
lines
> are
> > aggregated as weighted averages, where the TOTAL column is the
weight.
> And
> > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > scores. Generally, the statisticians recommend this method over
the mean
> > of the daily scores. Neither is "wrong", they just give you
slightly
> > different information.
> >
> > Thanks,
> > John
> >
> > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John.
> > >
> > > Sorry it's taken me such a long time to get to this. It's
nearing the
> > end
> > > of FY19 so I have been finalizing several transition projects
and
> haven’t
> > > had much time to work on MET recently. I just picked this back
up and
> > have
> > > loaded a couple new modules. Here is what I have to work with
now:
> > >
> > > 1) intel/xe_2013-sp1-u1
> > > 2) netcdf-local/netcdf-met
> > > 3) met-8.1/met-8.1a-with-grib2-support
> > > 4) ncview-2.1.5/ncview-2.1.5
> > > 5) udunits/udunits-2.1.24
> > > 6) gcc-6.3.0/gcc-6.3.0
> > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > 8) python/anaconda-7-15-15-save.6.6.2017
> > >
> > >
> > > Running
> > > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3
> > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > >
> > > I get many matched pairs. Here is a sample of what the log file
looks
> > > like for one of the pressure ranges I am verifying on:
> > >
> > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-
376, for
> > > observation type radiosonde, over region FULL, for interpolation
method
> > > NEAREST(1), using 98 pairs.
> > > 15258 DEBUG 3: Number of matched pairs = 98
> > > 15259 DEBUG 3: Observations processed = 4680328
> > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > 15262 DEBUG 3: Rejected: valid time = 0
> > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > 15265 DEBUG 3: Rejected: topography = 0
> > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > 15268 DEBUG 3: Rejected: message type = 0
> > > 15269 DEBUG 3: Rejected: masking region = 0
> > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> >=0,
> > > observation filtering threshold >=0, and field logic UNION.
> > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> >=0,
> > > observation filtering threshold >=0, and field logic UNION.
> > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > >=10.0, observation filtering threshold >=10.0, and field logic
UNION.
> > > 15280 DEBUG 2:
> > > 15281 DEBUG 2:
> > >
> >
>
--------------------------------------------------------------------------------
> > >
> > > I am going to work on processing these point stat files to
create those
> > > vertical raob plots we had a discussion about. I remember us
talking
> > about
> > > the partial sums file. Why did we choose to go the route of
producing
> > > partial sums then feeding that into series analysis to generate
bias
> and
> > > MSE? It looks like bias and MSE both exist within the CNT line
type
> > (MBIAS
> > > and MSE)?
> > >
> > >
> > > Justin
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 16, 2019 12:16 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Great, thanks for sending me the sample data. Yes, I was able
to
> > replicate
> > > the segfault. The good news is that this is caused by a simple
typo
> > that's
> > > easy to fix. If you look in the "obs.field" entry of the
relhumConfig
> > > file, you'll see an empty string for the last field listed:
> > >
> > > *obs = { field = [*
> > >
> > >
> > >
> > > * ... {name = "dptd";level = ["P988-1006"];},
> > {name =
> > > "";level = ["P1007-1013"];} ];*
> > > If you change that empty string to "dptd", the segfault will go
away:*
> > > {name = "dptd";level = ["P1007-1013"];}*
> > > Rerunning met-8.0 with that change, Point-Stat ran to completion
(in 2
> > > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > > pairs. They were discarded because of the valid times (seen
using -v 3
> > > command line option to Point-Stat). The ob file you sent is
named "
> > > raob_2015020412.nc" but the actual times in that file are for
> > > "20190426_120000":
> > >
> > > *ncdump -v hdr_vld_table raob_2015020412.nc
<http://raob_2015020412.nc
> >*
> > >
> > > * hdr_vld_table = "20190426_120000" ;*
> > >
> > > So please be aware of that discrepancy. To just produce some
matched
> > > pairs, I told Point-Stat to use the valid times of the data:
> > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > <http://raob_2015020412.nc> relhumConfig \*
> > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > -obs_valid_end 20190426_120000*
> > >
> > > But I still get 0 matched pairs. This time, it's because of bad
> forecast
> > > values:
> > > *DEBUG 3: Rejected: bad fcst value = 55*
> > >
> > > Taking a step back... let's run one of these fields through
> > > plot_data_plane, which results in an error:
> > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
<http://plot.ps>
> > > 'name="./read_NRL_binary.py
> > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > ERROR : DataPlane::two_to_one() -> range check error: (Nx, Ny)
= (97,
> > 97),
> > > (x, y) = (97, 0)
> > >
> > > While the numpy object is 97x97, the grid is specified as being
118x118
> > in
> > > the python script ('nx': 118, 'ny': 118).
> > >
> > > Just to get something working, I modified the nx and ny in the
python
> > > script:
> > > 'nx':97,
> > > 'ny':97,
> > > Rerunning again, I still didn't get any matched pairs.
> > >
> > > So I'd suggest...
> > > - Fix the typo in the config file.
> > > - Figure out the discrepancy between the obs file name timestamp
and
> the
> > > data in that file.
> > > - Make sure the grid information is consistent with the data in
the
> > python
> > > script.
> > >
> > > Obviously though, we don't want the code to be segfaulting in any
> > > condition. So next, I tested using met-8.1 with that empty
string.
> This
> > > time it does run with no segfault, but prints a warning about
the empty
> > > string.
> > >
> > > Hope that helps.
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Hey John,
> > > >
> > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > >
> > > > I am running met-8.0/met-8.0-with-grib2-support and have
provided
> > > > everything
> > > > on that list you've provided me. Let me know if you're able
to
> > replicate
> > > > it
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Well that doesn't seem to be very helpful of Point-Stat at
all.
> There
> > > > isn't much jumping out at me from the log messages you sent.
In
> fact,
> > I
> > > > hunted around for the DEBUG(7) log message but couldn't find
where in
> > the
> > > > code it's being written. Are you able to send me some sample
data to
> > > > replicate this behavior?
> > > >
> > > > I'd need to know...
> > > > - What version of MET are you running.
> > > > - A copy of your Point-Stat config file.
> > > > - The python script that you're running.
> > > > - The input file for that python script.
> > > > - The NetCDF point observation file you're passing to Point-
Stat.
> > > >
> > > > If I can replicate the behavior here, it should be easy to run
it in
> > the
> > > > debugger and figure it out.
> > > >
> > > > You can post data to our anonymous ftp site as described in
"How to
> > send
> > > us
> > > > data":
> > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > > > Queue: met_help
> > > > > Subject: point_stat seg faulting
> > > > > Owner: Nobody
> > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > Status: new
> > > > > Ticket <URL:
> > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > >
> > > > >
> > > > >
> > > > > Hey John,
> > > > >
> > > > >
> > > > >
> > > > > I'm trying to extrapolate the production of vertical raob
> > verification
> > > > > plots
> > > > > using point_stat and stat_analysis like we did together for
winds
> but
> > > for
> > > > > relative humidity now. But when I run point_stat, it seg
faults
> > > without
> > > > > much explanation
> > > > >
> > > > >
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > ----
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > >
> > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
> climatology
> > > > mean
> > > > > levels, and 0 climatology standard deviation levels.
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > ----
> > > > >
> > > > > DEBUG 2:
> > > > >
> > > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > > >
> > > > > DEBUG 7: tbl dims: messge_type: 1 station id: 617
> valid_time: 1
> > > > >
> > > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > > PYTHON_NUMPY
> > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > > ./out/point_stat.log
> > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > From my log file:
> > > > >
> > > > > 607 DEBUG 2:
> > > > >
> > > > > 608 DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > >
> > > > > 609 DEBUG 7: tbl dims: messge_type: 1 station id: 617
> > > valid_time: 1
> > > > >
> > > > >
> > > > >
> > > > > Any help would be much appreciated
> > > > >
> > > > >
> > > > >
> > > > > Justin
> > > > >
> > > > >
> > > > >
> > > > > Justin Tsu
> > > > >
> > > > > Marine Meteorology Division
> > > > >
> > > > > Data Assimilation/Mesoscale Modeling
> > > > >
> > > > > Building 704 Room 212
> > > > >
> > > > > Naval Research Laboratory, Code 7531
> > > > >
> > > > > 7 Grace Hopper Avenue
> > > > >
> > > > > Monterey, CA 93943-5502
> > > > >
> > > > >
> > > > >
> > > > > Ph. (831) 656-4111
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Sep 06 14:10:30 2019
Justin,
Yes, that is a long list of fields, but I don't see an obvious way of
shortening that. But to do multiple lead times, I'd just call Point-Stat
multiple times, once for each lead time, and update the config file to use
environment variables for the current time:
fcst = {
field = [
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
...
Where the calling script sets the ${INIT_TIME} and ${FCST_HR} environment
variables.
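For example, a minimal driver-script sketch (the lead-time list, the obs
file name, and the log file names below are hypothetical placeholders)
could look like:

#!/bin/bash
# Sketch: run Point-Stat once per forecast lead time.
# INIT_TIME and FCST_HR are exported so the config file can expand them.
# The lead-time list and the obs file name are illustrative assumptions.
export INIT_TIME=2015080106

for FCST_HR in 00060000 00120000 00180000 00240000 00300000; do
  export FCST_HR
  OBFILE=raob_${INIT_TIME}.nc        # hypothetical obs file for this case
  point_stat PYTHON_NUMPY ${OBFILE} dwptdpConfig \
    -outdir ./out/point_stat -v 3 \
    -log ./out/point_stat_${FCST_HR}.log
done

Each run then writes its own .stat (and _cnt/_sl1l2) files, which can be
aggregated later with Stat-Analysis.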
John
On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Thanks John,
>
> I managed to scrape together some code to get RAOB stats from CNT
plotted
> with 95% CI. Working on Surface stats now.
>
> So my configuration file looks like this right now:
>
> fcst = {
> field = [
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> ];
> }
>
> obs = {
> field = [
> {name = "dptd";level = ["P0.86-1.5"];},
> {name = "dptd";level = ["P1.6-2.5"];},
> {name = "dptd";level = ["P2.6-3.5"];},
> {name = "dptd";level = ["P3.6-4.5"];},
> {name = "dptd";level = ["P4.6-6"];},
> {name = "dptd";level = ["P6.1-8"];},
> {name = "dptd";level = ["P9-15"];},
> {name = "dptd";level = ["P16-25"];},
> {name = "dptd";level = ["P26-40"];},
> {name = "dptd";level = ["P41-65"];},
> {name = "dptd";level = ["P66-85"];},
> {name = "dptd";level = ["P86-125"];},
> {name = "dptd";level = ["P126-175"];},
> {name = "dptd";level = ["P176-225"];},
> {name = "dptd";level = ["P226-275"];},
> {name = "dptd";level = ["P276-325"];},
> {name = "dptd";level = ["P326-375"];},
> {name = "dptd";level = ["P376-425"];},
> {name = "dptd";level = ["P426-475"];},
> {name = "dptd";level = ["P476-525"];},
> {name = "dptd";level = ["P526-575"];},
> {name = "dptd";level = ["P576-625"];},
> {name = "dptd";level = ["P626-675"];},
> {name = "dptd";level = ["P676-725"];},
> {name = "dptd";level = ["P726-775"];},
> {name = "dptd";level = ["P776-825"];},
> {name = "dptd";level = ["P826-875"];},
> {name = "dptd";level = ["P876-912"];},
> {name = "dptd";level = ["P913-936"];},
> {name = "dptd";level = ["P937-962"];},
> {name = "dptd";level = ["P963-987"];},
> {name = "dptd";level = ["P988-1006"];},
> {name = "dptd";level = ["P1007-1013"];}
>
> And I have the data:
>
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
>
> for a particular DTG and vertical level. If I want to run multiple
lead
> times, it seems like I'll have to copy that long list of fields for
each
> lead time in the fcst dict and then duplicate the obs dictionary so
that
> each forecast entry has a corresponding obs level matching range.
Is this
> correct or is there a shorter/better way to do this?
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Tuesday, September 3, 2019 8:36 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> I see that you're plotting RMSE and bias (called ME for Mean Error
in MET)
> in the plots you sent.
>
> Table 7.6 of the MET User's Guide (
>
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> )
> describes the contents of the CNT line type. Both the columns
for RMSE
> and ME are followed by _NCL and _NCU columns which give the
parametric
> approximation of the confidence interval for those scores. So yes,
you can
> run Stat-Analysis to aggregate SL1L2 lines together and write the
> corresponding CNT output line type.
>
> The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric
> confidence intervals for the RMSE statistic and ME_NCL and ME_NCU
columns
> for the ME statistic.
>
> You can change the alpha value for those confidence intervals by
setting:
> -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
>
> Thanks,
> John
>
>
> On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John,
> >
> > This all helps me greatly. One more question: is there any
information
> > in either the CNT or SL1L2 that could give me confidence
intervals for
> > each data point? I'm looking to replicate the attached plot.
Notice
> that
> > the individual points could have either a 99, 95 or 90 %
confidence.
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 30, 2019 12:46 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Sounds about right. Each time you run Grid-Stat or Point-Stat you
can
> > write the CNT output line type which contains stats like MSE, ME,
MAE,
> and
> > RMSE. And I'd recommend that you also write the SL1L2 line type
as
> well.
> >
> > Then you'd run a stat_analysis job like this:
> >
> > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
> > cnt_out.stat
> >
> > This job reads any .stat files it finds in "/path/to/stat/data",
reads
> the
> > SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV,
> and
> > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together and
> > write out the corresponding CNT line type to the output file named
> > cnt_out.stat.
> >
> > John
> >
> > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > So if I understand what you're saying correctly, then if I
wanted an
> > > average of 24 hour forecasts over a month long run, then I would
use
> the
> > > SL1L2 output to aggregate and produce this average? Whereas if
I used
> > CNT,
> > > this would just provide me ~30 individual (per day over a month)
24
> hour
> > > forecast verifications?
> > >
> > > On a side note, did we ever go over how to plot the SL1L2 MSE
and
> biases?
> > > I am forgetting if we used stat_analysis to produce a plot or if
the
> plot
> > > you showed me was just something you guys post processed using
python
> or
> > > whatnot.
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 30, 2019 8:47 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > We wrote the SL1L2 partial sums from Point-Stat because they can
be
> > > aggregated together by the stat-analysis tool over multiple days
or
> > cases.
> > >
> > > If you're interested in continuous statistics from Point-Stat,
I'd
> > > recommend writing the CNT line type (which has the stats
computed for
> > that
> > > single run) and the SL1L2 line type (so that you can aggregate
them
> > > together in stat-analysis or METviewer).
> > >
> > > The other alternative is looking at the average of the daily
statistics
> > > scores. For RMSE, the average of the daily RMSE is equal to the
> > aggregated
> > > score... as long as the number of matched pairs remains constant
day to
> > > day. But if today you have 98 matched pairs and tomorrow
you have
> > 105,
> > > then tomorrow's score will have slightly more weight. The SL1L2
lines
> > are
> > > aggregated as weighted averages, where the TOTAL column is the
weight.
> > And
> > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > scores. Generally, the statisticians recommend this method over
the
> mean
> > > of the daily scores. Neither is "wrong", they just give you
slightly
> > > different information.
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John.
> > > >
> > > > Sorry it's taken me such a long time to get to this. It's
nearing
> the
> > > end
> > > > of FY19 so I have been finalizing several transition projects
and
> > haven’t
> > > > had much time to work on MET recently. I just picked this
back up
> and
> > > have
> > > > loaded a couple new modules. Here is what I have to work with
now:
> > > >
> > > > 1) intel/xe_2013-sp1-u1
> > > > 2) netcdf-local/netcdf-met
> > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > 5) udunits/udunits-2.1.24
> > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > >
> > > >
> > > > Running
> > > > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v
3
> > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > >
> > > > I get many matched pairs. Here is a sample of what the log
file
> looks
> > > > like for one of the pressure ranges I am verifying on:
> > > >
> > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-
376, for
> > > > observation type radiosonde, over region FULL, for
interpolation
> method
> > > > NEAREST(1), using 98 pairs.
> > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=0,
> > > > observation filtering threshold >=0, and field logic UNION.
> > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> UNION.
> > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=0,
> > > > observation filtering threshold >=0, and field logic UNION.
> > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> UNION.
> > > > 15280 DEBUG 2:
> > > > 15281 DEBUG 2:
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > >
> > > > I am going to work on processing these point stat files to
create
> those
> > > > vertical raob plots we had a discussion about. I remember us
talking
> > > about
> > > > the partial sums file. Why did we choose to go the route of
> producing
> > > > partial sums then feeding that into series analysis to
generate bias
> > and
> > > > MSE? It looks like bias and MSE both exist within the CNT
line type
> > > (MBIAS
> > > > and MSE)?
> > > >
> > > >
> > > > Justin
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Great, thanks for sending me the sample data. Yes, I was able
to
> > > replicate
> > > > the segfault. The good news is that this is caused by a
simple typo
> > > that's
> > > > easy to fix. If you look in the "obs.field" entry of the
> relhumConfig
> > > > file, you'll see an empty string for the last field listed:
> > > >
> > > > *obs = { field = [*
> > > >
> > > >
> > > >
> > > > * ... {name = "dptd";level = ["P988-1006"];},
> > > {name =
> > > > "";level = ["P1007-1013"];} ];*
> > > > If you change that empty string to "dptd", the segfault will
go
> away:*
> > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion (in
> 2
> > > > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > > > pairs. They were discarded because of the valid times (seen
using
> -v 3
> > > > command line option to Point-Stat). The ob file you sent is
named "
> > > > raob_2015020412.nc" but the actual times in that file are for
> > > > "20190426_120000":
> > > >
> > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> http://raob_2015020412.nc
> > >*
> > > >
> > > > * hdr_vld_table = "20190426_120000" ;*
> > > >
> > > > So please be aware of that discrepancy. To just produce some
matched
> > > > pairs, I told Point-Stat to use the valid times of the data:
> > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > > -obs_valid_end 20190426_120000*
> > > >
> > > > But I still get 0 matched pairs. This time, it's because of
bad
> > forecast
> > > > values:
> > > > *DEBUG 3: Rejected: bad fcst value = 55*
> > > >
> > > > Taking a step back... let's run one of these fields through
> > > > plot_data_plane, which results in an error:
> > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
<http://plot.ps>
> > > > 'name="./read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > ERROR : DataPlane::two_to_one() -> range check error: (Nx,
Ny) =
> (97,
> > > 97),
> > > > (x, y) = (97, 0)
> > > >
> > > > While the numpy object is 97x97, the grid is specified as
being
> 118x118
> > > in
> > > > the python script ('nx': 118, 'ny': 118).
> > > >
> > > > Just to get something working, I modified the nx and ny in the
python
> > > > script:
> > > > 'nx':97,
> > > > 'ny':97,
> > > > Rerunning again, I still didn't get any matched pairs.
> > > >
> > > > So I'd suggest...
> > > > - Fix the typo in the config file.
> > > > - Figure out the discrepancy between the obs file name
timestamp and
> > the
> > > > data in that file.
> > > > - Make sure the grid information is consistent with the data
in the
> > > python
> > > > script.
> > > >
> > > > Obviously though, we don't want the code to be segfaulting in
any
> > > > condition. So next, I tested using met-8.1 with that empty
string.
> > This
> > > > time it does run with no segfault, but prints a warning about
the
> empty
> > > > string.
> > > >
> > > > Hope that helps.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Hey John,
> > > > >
> > > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > > >
> > > > > I am running met-8.0/met-8.0-with-grib2-support and have
provided
> > > > > everything
> > > > > on that list you've provided me. Let me know if you're able
to
> > > replicate
> > > > > it
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Well that doesn't seem to be very helpful of Point-Stat at
all.
> > There
> > > > > isn't much jumping out at me from the log messages you sent.
In
> > fact,
> > > I
> > > > > hunted around for the DEBUG(7) log message but couldn't find
where
> in
> > > the
> > > > > code it's being written. Are you able to send me some
sample data
> to
> > > > > replicate this behavior?
> > > > >
> > > > > I'd need to know...
> > > > > - What version of MET are you running.
> > > > > - A copy of your Point-Stat config file.
> > > > > - The python script that you're running.
> > > > > - The input file for that python script.
> > > > > - The NetCDF point observation file you're passing to Point-
Stat.
> > > > >
> > > > > If I can replicate the behavior here, it should be easy to
run it
> in
> > > the
> > > > > debugger and figure it out.
> > > > >
> > > > > You can post data to our anonymous ftp site as described in
"How to
> > > send
> > > > us
> > > > > data":
> > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted upon.
> > > > > > Transaction: Ticket created by justin.tsu at nrlmry.navy.mil
> > > > > > Queue: met_help
> > > > > > Subject: point_stat seg faulting
> > > > > > Owner: Nobody
> > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > Status: new
> > > > > > Ticket <URL:
> > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > >
> > > > > >
> > > > > >
> > > > > > Hey John,
> > > > > >
> > > > > >
> > > > > >
> > > > > > I'm trying to extrapolate the production of vertical raob
> > > verification
> > > > > > plots
> > > > > > using point_stat and stat_analysis like we did together
for winds
> > but
> > > > for
> > > > > > relative humidity now. But when I run point_stat, it seg
faults
> > > > without
> > > > > > much explanation
> > > > > >
> > > > > >
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > ----
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > >
> > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast levels, 0
> > climatology
> > > > > mean
> > > > > > levels, and 0 climatology standard deviation levels.
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > ----
> > > > > >
> > > > > > DEBUG 2:
> > > > > >
> > > > > > DEBUG 2: Searching 4680328 observations from 617 messages.
> > > > > >
> > > > > > DEBUG 7: tbl dims: messge_type: 1 station id: 617
> > valid_time: 1
> > > > > >
> > > > > > run_stats.sh: line 26: 40818 Segmentation fault
point_stat
> > > > > > PYTHON_NUMPY
> > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat -log
> > > > > > ./out/point_stat.log
> > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > From my log file:
> > > > > >
> > > > > > 607 DEBUG 2:
> > > > > >
> > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > >
> > > > > > 609 DEBUG 7: tbl dims: messge_type: 1 station id: 617
> > > > valid_time: 1
> > > > > >
> > > > > >
> > > > > >
> > > > > > Any help would be much appreciated
> > > > > >
> > > > > >
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > >
> > > > > >
> > > > > > Justin Tsu
> > > > > >
> > > > > > Marine Meteorology Division
> > > > > >
> > > > > > Data Assimilation/Mesoscale Modeling
> > > > > >
> > > > > > Building 704 Room 212
> > > > > >
> > > > > > Naval Research Laboratory, Code 7531
> > > > > >
> > > > > > 7 Grace Hopper Avenue
> > > > > >
> > > > > > Monterey, CA 93943-5502
> > > > > >
> > > > > >
> > > > > >
> > > > > > Ph. (831) 656-4111
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Fri Sep 06 14:15:47 2019
Invoking point_stat multiple times will create and replace the old
_cnt and _sl1l2 files right? At that point, I'll have a bunch of CNT
and SL1L2 files and then use stat_analysis to aggregate them?
Justin
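For reference, once those per-run .stat files have accumulated in one
directory, the aggregation is the single Stat-Analysis job described
earlier in the thread; a sketch (the -lookin path and the 95% alpha are
assumptions) is:

# Sketch: aggregate SL1L2 partial sums from all accumulated .stat files
# and recompute the CNT statistics per variable, level, and lead time.
# Adjust -lookin to wherever the Point-Stat -outdir points.
stat_analysis -lookin ./out/point_stat \
  -job aggregate_stat -line_type SL1L2 -out_line_type CNT \
  -by FCST_VAR,FCST_LEV,FCST_LEAD \
  -out_alpha 0.05 \
  -out_stat cnt_out.stat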
-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Friday, September 6, 2019 1:11 PM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
Justin,
Yes, that is a long list of fields, but I don't see an obvious way
of
shortening that. But to do multiple lead times, I'd just call Point-
Stat
multiple times, once for each lead time, and update the config file to
use
environment variables for the current time:
fcst = {
field = [
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
},
...
Where the calling script sets the ${INIT_TIME} and ${FCST_HR}
environment
variables.
John
On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Thanks John,
>
> I managed to scrape together some code to get RAOB stats from CNT
plotted
> with 95% CI. Working on Surface stats now.
>
> So my configuration file looks like this right now:
>
> fcst = {
> field = [
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
>
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> ];
> }
>
> obs = {
> field = [
> {name = "dptd";level = ["P0.86-1.5"];},
> {name = "dptd";level = ["P1.6-2.5"];},
> {name = "dptd";level = ["P2.6-3.5"];},
> {name = "dptd";level = ["P3.6-4.5"];},
> {name = "dptd";level = ["P4.6-6"];},
> {name = "dptd";level = ["P6.1-8"];},
> {name = "dptd";level = ["P9-15"];},
> {name = "dptd";level = ["P16-25"];},
> {name = "dptd";level = ["P26-40"];},
> {name = "dptd";level = ["P41-65"];},
> {name = "dptd";level = ["P66-85"];},
> {name = "dptd";level = ["P86-125"];},
> {name = "dptd";level = ["P126-175"];},
> {name = "dptd";level = ["P176-225"];},
> {name = "dptd";level = ["P226-275"];},
> {name = "dptd";level = ["P276-325"];},
> {name = "dptd";level = ["P326-375"];},
> {name = "dptd";level = ["P376-425"];},
> {name = "dptd";level = ["P426-475"];},
> {name = "dptd";level = ["P476-525"];},
> {name = "dptd";level = ["P526-575"];},
> {name = "dptd";level = ["P576-625"];},
> {name = "dptd";level = ["P626-675"];},
> {name = "dptd";level = ["P676-725"];},
> {name = "dptd";level = ["P726-775"];},
> {name = "dptd";level = ["P776-825"];},
> {name = "dptd";level = ["P826-875"];},
> {name = "dptd";level = ["P876-912"];},
> {name = "dptd";level = ["P913-936"];},
> {name = "dptd";level = ["P937-962"];},
> {name = "dptd";level = ["P963-987"];},
> {name = "dptd";level = ["P988-1006"];},
> {name = "dptd";level = ["P1007-1013"];}
>
> And I have the data:
>
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
>
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
>
> for a particular DTG and vertical level. If I want to run multiple
lead
> times, it seems like I'll have to copy that long list of fields for
each
> lead time in the fcst dict and then duplicate the obs dictionary so
that
> each forecast entry has a corresponding obs level matching range.
Is this
> correct or is there a shorter/better way to do this?
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Tuesday, September 3, 2019 8:36 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> I see that you're plotting RMSE and bias (called ME for Mean Error
in MET)
> in the plots you sent.
>
> Table 7.6 of the MET User's Guide (
>
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> )
> describes the contents of the CNT line type. Both the columns
for RMSE
> and ME are followed by _NCL and _NCU columns which give the
parametric
> approximation of the confidence interval for those scores. So yes,
you can
> run Stat-Analysis to aggregate SL1L2 lines together and write the
> corresponding CNT output line type.
>
> The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric
> confidence intervals for the RMSE statistic and ME_NCL and ME_NCU
columns
> for the ME statistic.
>
> You can change the alpha value for those confidence intervals by
setting:
> -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
>
> Thanks,
> John
>
>
> On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John,
> >
> > This all helps me greatly. One more question: is there any
information
> > in either the CNT or SL1L2 that could give me confidence
intervals for
> > each data point? I'm looking to replicate the attached plot.
Notice
> that
> > the individual points could have either a 99, 95 or 90 %
confidence.
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, August 30, 2019 12:46 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Sounds about right. Each time you run Grid-Stat or Point-Stat you
can
> > write the CNT output line type which contains stats like MSE, ME,
MAE,
> and
> > RMSE. And I'd recommend that you also write the SL1L2 line type
as
> well.
> >
> > Then you'd run a stat_analysis job like this:
> >
> > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
> > cnt_out.stat
> >
> > This job reads any .stat files it finds in "/path/to/stat/data",
reads
> the
> > SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV,
> and
> > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together and
> > write out the corresponding CNT line type to the output file named
> > cnt_out.stat.
> >
> > John
> >
> > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > So if I understand what you're saying correctly, then if I
wanted an
> > > average of 24 hour forecasts over a month long run, then I would
use
> the
> > > SL1L2 output to aggregate and produce this average? Whereas if
I used
> > CNT,
> > > this would just provide me ~30 individual (per day over a month)
24
> hour
> > > forecast verifications?
> > >
> > > On a side note, did we ever go over how to plot the SL1L2 MSE
and
> biases?
> > > I am forgetting if we used stat_analysis to produce a plot or if
the
> plot
> > > you showed me was just something you guys post processed using
python
> or
> > > whatnot.
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 30, 2019 8:47 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > We wrote the SL1L2 partial sums from Point-Stat because they can
be
> > > aggregated together by the stat-analysis tool over multiple days
or
> > cases.
> > >
> > > If you're interested in continuous statistics from Point-Stat,
I'd
> > > recommend writing the CNT line type (which has the stats
computed for
> > that
> > > single run) and the SL1L2 line type (so that you can aggregate
them
> > > together in stat-analysis or METviewer).
> > >
> > > The other alternative is looking at the average of the daily
statistics
> > > scores. For RMSE, the average of the daily RMSE is equal to the
> > aggregated
> > > score... as long as the number of matched pairs remains constant
day to
> > > day. But if today you have 98 matched pairs and tomorrow
you have
> > 105,
> > > then tomorrow's score will have slightly more weight. The SL1L2
lines
> > are
> > > aggregated as weighted averages, where the TOTAL column is the
weight.
> > And
> > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > scores. Generally, the statisticians recommend this method over
the
> mean
> > > of the daily scores. Neither is "wrong", they just give you
slightly
> > > different information.
> > >
> > > Thanks,
> > > John
> > >
> > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John.
> > > >
> > > > Sorry it's taken me such a long time to get to this. It's
nearing
> the
> > > end
> > > > of FY19 so I have been finalizing several transition projects
and
> > haven’t
> > > > had much time to work on MET recently. I just picked this
back up
> and
> > > have
> > > > loaded a couple new modules. Here is what I have to work with
now:
> > > >
> > > > 1) intel/xe_2013-sp1-u1
> > > > 2) netcdf-local/netcdf-met
> > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > 5) udunits/udunits-2.1.24
> > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > >
> > > >
> > > > Running
> > > > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v
3
> > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > >
> > > > I get many matched pairs. Here is a sample of what the log
file
> looks
> > > > like for one of the pressure ranges I am verifying on:
> > > >
> > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-
376, for
> > > > observation type radiosonde, over region FULL, for
interpolation
> method
> > > > NEAREST(1), using 98 pairs.
> > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=0,
> > > > observation filtering threshold >=0, and field logic UNION.
> > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> UNION.
> > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > >=0,
> > > > observation filtering threshold >=0, and field logic UNION.
> > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=5.0, observation filtering threshold >=5.0, and field logic
UNION.
> > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
threshold
> > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> UNION.
> > > > 15280 DEBUG 2:
> > > > 15281 DEBUG 2:
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > >
> > > > I am going to work on processing these point stat files to
create
> those
> > > > vertical raob plots we had a discussion about. I remember us
talking
> > > about
> > > > the partial sums file. Why did we choose to go the route of
> producing
> > > > partial sums then feeding that into series analysis to
generate bias
> > and
> > > > MSE? It looks like bias and MSE both exist within the CNT
line type
> > > (MBIAS
> > > > and MSE)?
> > > >
> > > >
> > > > Justin
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Great, thanks for sending me the sample data. Yes, I was able
to
> > > replicate
> > > > the segfault. The good news is that this is caused by a
simple typo
> > > that's
> > > > easy to fix. If you look in the "obs.field" entry of the
> relhumConfig
> > > > file, you'll see an empty string for the last field listed:
> > > >
> > > > *obs = { field = [*
> > > >
> > > >
> > > >
> > > > * ... {name = "dptd";level = ["P988-1006"];},
> > > {name =
> > > > "";level = ["P1007-1013"];} ];*
> > > > If you change that empty string to "dptd", the segfault will
go
> away:*
> > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion (in
> 2
> > > > minutes 48 seconds on my desktop machine), but it produced 0
matched
> > > > pairs. They were discarded because of the valid times (seen
using
> -v 3
> > > > command line option to Point-Stat). The ob file you sent is
named "
> > > > raob_2015020412.nc" but the actual times in that file are for
> > > > "20190426_120000":
> > > >
> > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> http://raob_2015020412.nc
> > >*
> > > >
> > > > * hdr_vld_table = "20190426_120000" ;*
> > > >
> > > > So please be aware of that discrepancy. To just produce some
matched
> > > > pairs, I told Point-Stat to use the valid times of the data:
> > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > > -obs_valid_end 20190426_120000*
> > > >
> > > > But I still get 0 matched pairs. This time, it's because of
bad
> > forecast
> > > > values:
> > > > *DEBUG 3: Rejected: bad fcst value = 55*
> > > >
> > > > Taking a step back... let's run one of these fields through
> > > > plot_data_plane, which results in an error:
> > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
<http://plot.ps>
> > > > 'name="./read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > ERROR : DataPlane::two_to_one() -> range check error: (Nx,
Ny) =
> (97,
> > > 97),
> > > > (x, y) = (97, 0)
> > > >
> > > > While the numpy object is 97x97, the grid is specified as
being
> 118x118
> > > in
> > > > the python script ('nx': 118, 'ny': 118).
> > > >
> > > > Just to get something working, I modified the nx and ny in the
python
> > > > script:
> > > > 'nx':97,
> > > > 'ny':97,
> > > > Rerunning again, I still didn't get any matched pairs.
> > > >
> > > > So I'd suggest...
> > > > - Fix the typo in the config file.
> > > > - Figure out the discrepancy between the obs file name
timestamp and
> > the
> > > > data in that file.
> > > > - Make sure the grid information is consistent with the data
in the
> > > python
> > > > script.
> > > >
> > > > Obviously though, we don't want the code to be segfaulting in
any
> > > > condition. So next, I tested using met-8.1 with that empty
string.
> > This
> > > > time it does run with no segfault, but prints a warning about
the
> empty
> > > > string.
> > > >
> > > > Hope that helps.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Hey John,
> > > > >
> > > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > > >
> > > > > I am running met-8.0/met-8.0-with-grib2-support and have
provided
> > > > > everything
> > > > > on that list you've provided me. Let me know if you're able
to
> > > replicate
> > > > > it
> > > > >
> > > > > Justin
> > > > >
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Sep 06 14:40:04 2019
Justin,
Here's a sample Point-Stat output file name:
point_stat_360000L_20070331_120000V.stat
The "360000L" indicates that this is output for a 36-hour forecast.
And
the "20070331_120000V" timestamp is the valid time.
If you run Point-Stat once for each forecast lead time, the timestamps
should be different and they should not clobber each other.
But let's say you don't want to run Point-Stat or Grid-Stat multiple
times
with the same timing info. The "output_prefix" config file entry is
used
to customize the output file names to prevent them from clobbering
each other. For example, setting:
output_prefix="RUN1";
Would result in files named "
point_stat_RUN1_360000L_20070331_120000V.stat".
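For example, a minimal sketch (RUN_ID is just a made-up name here, and this
assumes output_prefix can reference an environment variable, e.g.
output_prefix = "${RUN_ID}"; in the config, using the same ${...} expansion
as the forecast file names; ${OBFILE} and ${CONFIG} as in your run_stats.sh):

   RUN_ID=RUN1 point_stat PYTHON_NUMPY ${OBFILE} ${CONFIG} -outdir ./out
   RUN_ID=RUN2 point_stat PYTHON_NUMPY ${OBFILE} ${CONFIG} -outdir ./out

would write point_stat_RUN1_* and point_stat_RUN2_* files rather than
overwriting a single set.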
Make sense?
Thanks,
John
On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Invoking point_stat multiple times will create and replace the old
_cnt
> and _sl1l2 files right? At that point, I'll have a bunch of CNT and
SL1L2
> files and then use stat_analysis to aggregate them?
>
> Justin
>
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 6, 2019 1:11 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Yes, that is a long list of fields, but I don't see an obvious
way of
> shortening that. But to do multiple lead times, I'd just call
Point-Stat
> multiple times, once for each lead time, and update the config file
to use
> environment variables for the current time:
>
> fcst = {
> field = [
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> },
> ...
>
> Where the calling script sets the ${INIT_TIME} and ${FCST_HR}
environment
> variables.
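>
> For example, the calling script could be as simple as this sketch (the
> lead-time list is a placeholder, and ${OBFILE}/${CONFIG} are set as in
> your run_stats.sh):
>
>    #!/bin/bash
>    export INIT_TIME=2015080106
>    for FCST_HR in 00120000 00240000 00360000 00480000; do
>       export FCST_HR
>       point_stat PYTHON_NUMPY ${OBFILE} ${CONFIG} -v 3 \
>          -outdir ./out/point_stat -log ./out/point_stat_${FCST_HR}.log
>    done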
>
> John
>
> On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Thanks John,
> >
> > I managed to scrape together some code to get RAOB stats from CNT
plotted
> > with 95% CI. Working on Surface stats now.
> >
> > So my configuration file looks like this right now:
> >
> > fcst = {
> > field = [
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> >
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > ];
> > }
> >
> > obs = {
> > field = [
> > {name = "dptd";level = ["P0.86-1.5"];},
> > {name = "dptd";level = ["P1.6-2.5"];},
> > {name = "dptd";level = ["P2.6-3.5"];},
> > {name = "dptd";level = ["P3.6-4.5"];},
> > {name = "dptd";level = ["P4.6-6"];},
> > {name = "dptd";level = ["P6.1-8"];},
> > {name = "dptd";level = ["P9-15"];},
> > {name = "dptd";level = ["P16-25"];},
> > {name = "dptd";level = ["P26-40"];},
> > {name = "dptd";level = ["P41-65"];},
> > {name = "dptd";level = ["P66-85"];},
> > {name = "dptd";level = ["P86-125"];},
> > {name = "dptd";level = ["P126-175"];},
> > {name = "dptd";level = ["P176-225"];},
> > {name = "dptd";level = ["P226-275"];},
> > {name = "dptd";level = ["P276-325"];},
> > {name = "dptd";level = ["P326-375"];},
> > {name = "dptd";level = ["P376-425"];},
> > {name = "dptd";level = ["P426-475"];},
> > {name = "dptd";level = ["P476-525"];},
> > {name = "dptd";level = ["P526-575"];},
> > {name = "dptd";level = ["P576-625"];},
> > {name = "dptd";level = ["P626-675"];},
> > {name = "dptd";level = ["P676-725"];},
> > {name = "dptd";level = ["P726-775"];},
> > {name = "dptd";level = ["P776-825"];},
> > {name = "dptd";level = ["P826-875"];},
> > {name = "dptd";level = ["P876-912"];},
> > {name = "dptd";level = ["P913-936"];},
> > {name = "dptd";level = ["P937-962"];},
> > {name = "dptd";level = ["P963-987"];},
> > {name = "dptd";level = ["P988-1006"];},
> > {name = "dptd";level = ["P1007-1013"];}
> >
> > And I have the data:
> >
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> >
> > for a particular DTG and vertical level. If I want to run
multiple lead
> > times, it seems like I'll have to copy that long list of fields
for each
> > lead time in the fcst dict and then duplicate the obs dictionary
so that
> > each forecast entry has a corresponding obs entry with a matching level range.
Is
> this
> > correct or is there a shorter/better way to do this?
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Tuesday, September 3, 2019 8:36 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > I see that you're plotting RMSE and bias (called ME for Mean Error
in
> MET)
> > in the plots you sent.
> >
> > Table 7.6 of the MET User's Guide (
> >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > )
> > describes the contents of the CNT line type. Both the columns
for
> RMSE
> > and ME are followed by _NCL and _NCU columns which give the
parametric
> > approximation of the confidence interval for those scores. So
yes, you
> can
> > run Stat-Analysis to aggregate SL1L2 lines together and write the
> > corresponding CNT output line type.
> >
> > The RMSE_NCL and RMSE_NCU columns contain the lower and upper
parametric
> > confidence intervals for the RMSE statistic and ME_NCL and ME_NCU
columns
> > for the ME statistic.
> >
> > You can change the alpha value for those confidence intervals by
setting:
> > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
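> >
> > For example, adding that to the aggregation job from my earlier email
> > would look something like:
> >
> >    stat_analysis -lookin /path/to/stat/data -job aggregate_stat \
> >       -line_type SL1L2 -out_line_type CNT \
> >       -by FCST_VAR,FCST_LEV,FCST_LEAD \
> >       -out_alpha 0.05 -out_stat cnt_out.stat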
> >
> > Thanks,
> > John
> >
> >
> > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John,
> > >
> > > This all helps me greatly. One more question: is there any
> information
> > > in either the CNT or SL1L2 that could give me confidence
intervals for
> > > each data point? I'm looking to replicate the attached plot.
Notice
> > that
> > > the individual points could have either a 99, 95 or 90 %
confidence.
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, August 30, 2019 12:46 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Sounds about right. Each time you run Grid-Stat or Point-Stat
you can
> > > write the CNT output line type which contains stats like MSE,
ME, MAE,
> > and
> > > RMSE. And I'd recommend that you also write the SL1L2 line
type as
> > well.
> > >
> > > Then you'd run a stat_analysis job like this:
> > >
> > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
-line_type
> > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > cnt_out.stat
> > >
> > > This job reads any .stat files it finds in "/path/to/stat/data",
reads
> > the
> > > SL1L2 line type, and for each unique combination of FCST_VAR,
FCST_LEV,
> > and
> > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together
> and
> > > write out the corresponding CNT line type to the output file
named
> > > cnt_out.stat.
> > >
> > > John
> > >
> > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu
> > > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > So if I understand what you're saying correctly, then if I
wanted
> an
> > > > average of 24-hour forecasts over a month-long run, then I
would use
> > the
> > > > SL1L2 output to aggregate and produce this average? Whereas
if I
> used
> > > CNT,
> > > > this would just provide me ~30 individual (per day over a
month) 24
> > hour
> > > > forecast verifications?
> > > >
> > > > On a side note, did we ever go over how to plot the SL1L2 MSE
and
> > biases?
> > > > I am forgetting if we used stat_analysis to produce a plot or
if the
> > plot
> > > > you showed me was just something you guys post processed using
python
> > or
> > > > whatnot.
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > We wrote the SL1L2 partial sums from Point-Stat because they
can be
> > > > aggregated together by the stat-analysis tool over multiple
days or
> > > cases.
> > > >
> > > > If you're interested in continuous statistics from Point-Stat,
I'd
> > > > recommend writing the CNT line type (which has the stats
computed for
> > > that
> > > > single run) and the SL1L2 line type (so that you can aggregate
them
> > > > together in stat-analysis or METviewer).
> > > >
> > > > The other alternative is looking at the average of the daily
> statistics
> > > > scores. For RMSE, the average of the daily RMSE is equal to
the
> > > aggregated
> > > > score... as long as the number of matched pairs remains
constant day
> to
> > > > day. But if today you have 98 matched pairs and tomorrow
you
> have
> > > 105,
> > > > then tomorrow's score will have slightly more weight. The
SL1L2
> lines
> > > are
> > > > aggregated as weighted averages, where the TOTAL column is the
> weight.
> > > And
> > > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > > scores. Generally, the statisticians recommend this method
over the
> > mean
> > > > of the daily scores. Neither is "wrong", they just give you
slightly
> > > > different information.
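> > > >
> > > > As a toy illustration (made-up numbers): two days with (TOTAL, MSE) of
> > > > (98, 4.0) and (105, 9.0) aggregate to a TOTAL-weighted MSE, and RMSE is
> > > > recomputed from that:
> > > >
> > > >    echo "98 4.0 105 9.0" | awk '{ mse = ($1*$2 + $3*$4)/($1+$3);
> > > >       printf "MSE=%.3f RMSE=%.3f\n", mse, sqrt(mse) }'
> > > >
> > > > That gives an MSE of about 6.59, versus 6.50 for the simple mean of
> > > > the two daily MSE values.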
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Thanks John.
> > > > >
> > > > > Sorry it's taken me such a long time to get to this. It's
nearing
> > the
> > > > end
> > > > > of FY19 so I have been finalizing several transition
projects and
> > > haven’t
> > > > > had much time to work on MET recently. I just picked this
back up
> > and
> > > > have
> > > > > loaded a couple new modules. Here is what I have to work
with now:
> > > > >
> > > > > 1) intel/xe_2013-sp1-u1
> > > > > 2) netcdf-local/netcdf-met
> > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > 5) udunits/udunits-2.1.24
> > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > >
> > > > >
> > > > > Running
> > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig
-v 3
> > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > > >
> > > > > I get many matched pairs. Here is a sample of what the log
file
> > looks
> > > > > like for one of the pressure ranges I am verifying on:
> > > > >
> > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
dptd/P425-376,
> for
> > > > > observation type radiosonde, over region FULL, for
interpolation
> > method
> > > > > NEAREST(1), using 98 pairs.
> > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > >=0,
> > > > > observation filtering threshold >=0, and field logic UNION.
> > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> UNION.
> > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > UNION.
> > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > >=0,
> > > > > observation filtering threshold >=0, and field logic UNION.
> > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> UNION.
> > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> threshold
> > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > UNION.
> > > > > 15280 DEBUG 2:
> > > > > 15281 DEBUG 2:
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > >
> > > > > I am going to work on processing these point stat files to
create
> > those
> > > > > vertical raob plots we had a discussion about. I remember
us
> talking
> > > > about
> > > > > the partial sums file. Why did we choose to go the route of
> > producing
> > > > > partial sums then feeding that into series analysis to
generate
> bias
> > > and
> > > > > MSE? It looks like bias and MSE both exist within the CNT
line
> type
> > > > (MBIAS
> > > > > and MSE)?
> > > > >
> > > > >
> > > > > Justin
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Great, thanks for sending me the sample data. Yes, I was
able to
> > > > replicate
> > > > > the segfault. The good news is that this is caused by a
simple
> typo
> > > > that's
> > > > > easy to fix. If you look in the "obs.field" entry of the
> > relhumConfig
> > > > > file, you'll see an empty string for the last field listed:
> > > > >
> > > > > obs = {
> > > > >    field = [
> > > > >       ...
> > > > >       {name = "dptd";level = ["P988-1006"];},
> > > > >       {name = "";level = ["P1007-1013"];}
> > > > >    ];
> > > > >
> > > > > If you change that empty string to "dptd", the segfault will go
> > > > > away:
> > > > >    {name = "dptd";level = ["P1007-1013"];}
> > > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion
> (in
> > 2
> > > > > minutes 48 seconds on my desktop machine), but it produced 0
> matched
> > > > > pairs. They were discarded because of the valid times (seen
using
> > -v 3
> > > > > command line option to Point-Stat). The ob file you sent is
named
> "
> > > > > raob_2015020412.nc" but the actual times in that file are
for
> > > > > "20190426_120000":
> > > > >
> > > > > ncdump -v hdr_vld_table raob_2015020412.nc
> > > > >
> > > > >    hdr_vld_table = "20190426_120000" ;
> > > > >
> > > > > So please be aware of that discrepancy. To just produce
some
> matched
> > > > > pairs, I told Point-Stat to use the valid times of the data:
> > > > > met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc relhumConfig \
> > > > >    -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000 \
> > > > >    -obs_valid_end 20190426_120000
> > > > >
> > > > > But I still get 0 matched pairs. This time, it's because of
bad
> > > forecast
> > > > > values:
> > > > > DEBUG 3: Rejected: bad fcst value = 55
> > > > >
> > > > > Taking a step back... let's run one of these fields through
> > > > > plot_data_plane, which results in an error:
> > > > > met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps \
> > > > >    'name="./read_NRL_binary.py
> > > > > ./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'
> > > > > ERROR : DataPlane::two_to_one() -> range check error: (Nx, Ny) = (97, 97),
> > > > > (x, y) = (97, 0)
> > > > >
> > > > > While the numpy object is 97x97, the grid is specified as
being
> > 118x118
> > > > in
> > > > > the python script ('nx': 118, 'ny': 118).
> > > > >
> > > > > Just to get something working, I modified the nx and ny in
the
> python
> > > > > script:
> > > > > 'nx':97,
> > > > > 'ny':97,
> > > > > Rerunning again, I still didn't get any matched pairs.
> > > > >
> > > > > So I'd suggest...
> > > > > - Fix the typo in the config file.
> > > > > - Figure out the discrepancy between the obs file name
timestamp
> and
> > > the
> > > > > data in that file.
> > > > > - Make sure the grid information is consistent with the data
in the
> > > > python
> > > > > script.
> > > > >
> > > > > Obviously though, we don't want the code to be segfaulting under any
> > > > > condition. So next, I tested using met-8.1 with that empty
string.
> > > This
> > > > > time it does run with no segfault, but prints a warning
about the
> > empty
> > > > > string.
> > > > >
> > > > > Hope that helps.
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Hey John,
> > > > > >
> > > > > > I've put my data in tsu_data_20190815/ under met_help.
> > > > > >
> > > > > > I am running met-8.0/met-8.0-with-grib2-support and have
> provided
> > > > > > everything
> > > > > > on that list you've provided me. Let me know if you're
able to
> > > > replicate
> > > > > > it
> > > > > >
> > > > > > Justin
> > > > > >
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Mon Sep 09 16:56:17 2019
Hey John,
That makes sense. The way that I've set up my config file is as
follows:
fcst = {
field = [
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
];
}
obs = {
field = [
{name = "dptd";level = ["P${LEV1}-${LEV2}"];}
];
}
message_type = [ "${MSG_TYPE}" ];
The environment variables I'm setting in the wrapper script are LEV,
INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it seems
like I will only be able to run point_stat for a single elevation and
a single lead time. Do you recommend this? Or should I put all the
elevations for a single lead time in one pass of point_stat?
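For reference, with that setup each invocation ends up looking roughly like
this (values are just an example for one level / lead time; ${OBFILE} and
${CONFIG} as in run_stats.sh):

   export INIT_TIME=2015080106  FCST_HR=00120000
   export LEV=000400  LEV1=376  LEV2=425    # dwptdp_pre_000400 vs obs level P376-425
   export MSG_TYPE=radiosonde
   point_stat PYTHON_NUMPY ${OBFILE} ${CONFIG} -v 3 -outdir ./out/point_stat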
So my config file would look something like this...
fcst = {
field = [
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
{name = "/users/tsu/MET/work/read_NRL_binary.py
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
... etc.
];
}
Also, I am not sure what happened, but when I run point_stat now I am
getting this error again:
ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
abbreviation 'dptd' for table version 2
This makes me think that the obs_var name is wrong, but
ncdump -v obs_var raob_*.nc gives me obs_var =
"ws",
"wdir",
"t",
"dptd",
"pres",
"ght" ;
So clearly dptd exists.
Justin
> > > > > > >
> > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > >
> > > > > > > Building 704 Room 212
> > > > > > >
> > > > > > > Naval Research Laboratory, Code 7531
> > > > > > >
> > > > > > > 7 Grace Hopper Avenue
> > > > > > >
> > > > > > > Monterey, CA 93943-5502
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Ph. (831) 656-4111
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Sep 13 16:46:25 2019
Justin,
Sorry for the delay. I was in DC on travel this week until today.
It's really up to you how you'd like to configure it. Unless it's too
unwieldy, I do think I'd try verifying all levels at once in a single
call
to Point-Stat. All those observations are contained in the same point
observation file. If you verify each level in a separate call to
Point-Stat, you'll be looping through and processing those obs many,
many
times, which will be relatively slow. From a processing perspective,
it'd
be more efficient to process them all at once, in a single call to
Point-Stat.
But you have to balance runtime efficiency against ease of scripting and
configuration, and that's why it's up to you to decide which you
prefer.
Hope that helps.
Thanks,
John
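
If a single call covering every level is the goal, the long fcst.field and
obs.field lists seen later in this thread can be generated instead of
hand-typed.  A rough sketch (the level-to-pressure-range pairing below is
illustrative and incomplete; the file-name pattern and the "dptd" obs name
come from the configs quoted below):

# Sketch: print fcst.field and obs.field entries for a single Point-Stat call.
levels = [("000400", "P376-425"),
          ("000500", "P476-525"),
          ("001013", "P1007-1013")]
init_time = "2015080106"   # or "${INIT_TIME}" when a wrapper script sets it
fcst_hr = "00180000"       # or "${FCST_HR}"

fcst_entries = []
obs_entries = []
for lev, prange in levels:
    path = ("./dwptdp_data/dwptdp_pre_%s_000000_3a0118x0118_%s_%s_fcstfld"
            % (lev, init_time, fcst_hr))
    fcst_entries.append(
        '{name = "/users/tsu/MET/work/read_NRL_binary.py %s";}' % path)
    obs_entries.append('{name = "dptd";level = ["%s"];}' % prange)

print("fcst = {\n  field = [\n    " + ",\n    ".join(fcst_entries) + "\n  ];\n}")
print("obs = {\n  field = [\n    " + ",\n    ".join(obs_entries) + "\n  ];\n}")
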
On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hey John,
>
> That makes sense. The way that I've set up my config file is as
follows:
> fcst = {
> field = [
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> ];
> }
> obs = {
> field = [
> {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> ];
> }
> message_type = [ "${MSG_TYPE}" ];
>
> The environment variables I'm setting in the wrapper script are
LEV,
> INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it seems
like I
> will only be able to run point_stat for a single elevation and a
single
> lead time.  Do you recommend this?  Or should I put all the
elevations for a
> single lead time in one pass of point_stat?
>
> So my config file will look something like this...
> fcst = {
> field = [
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> ... etc.
> ];
> }
>
> Also, I am not sure what happened, but when I run point_stat now I am
> getting that error
> ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> abbreviation 'dptd' for table version 2
> again.  This makes me think that the obs_var name is wrong, but
ncdump -v
> obs_var raob_*.nc gives me obs_var =
> "ws",
> "wdir",
> "t",
> "dptd",
> "pres",
> "ght" ;
> So clearly dptd exists.
>
> Justin
>
>
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 6, 2019 1:40 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Here's a sample Point-Stat output file name:
> point_stat_360000L_20070331_120000V.stat
>
> The "360000L" indicates that this is output for a 36-hour forecast.
And
> the "20070331_120000V" timestamp is the valid time.
>
> If you run Point-Stat once for each forecast lead time, the
timestamps
> should be different and they should not clobber each other.
>
> But let's say you don't want to run Point-Stat or Grid-Stat multiple
times
> with the same timing info. The "output_prefix" config file entry is
used
> to customize the output file names to prevent them from clobbering
> each other.  For example, setting:
> output_prefix="RUN1";
> Would result in files named "
> point_stat_RUN1_360000L_20070331_120000V.stat".
>
> Make sense?
>
> Thanks,
> John
>
> On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Invoking point_stat multiple times will create and replace the old
_cnt
> > and _sl1l2 files right? At that point, I'll have a bunch of CNT
and
> SL1L2
> > files and then use stat_analysis to aggregate them?
> >
> > Justin
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, September 6, 2019 1:11 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Yes, that is a long list of fields, but I don't see an obvious
way of
> > shortening that. But to do multiple lead times, I'd just call
Point-Stat
> > multiple times, once for each lead time, and update the config
file to
> use
> > environment variables for the current time:
> >
> > fcst = {
> > field = [
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> > ...
> >
> > Where the calling script sets the ${INIT_TIME} and ${FCST_HR}
> environment
> > variables.
> >
> > John
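
A minimal sketch of the kind of wrapper script described above, assuming the
obs file name, config name, lead-time list, and output directory (all
placeholders here); it runs Point-Stat once per lead time with the
environment variables exported:

import os
import subprocess

init_time = "2015080106"                        # placeholder initialization time
lead_times = ["00000000", "00120000", "00240000", "00360000", "00480000"]

for fcst_hr in lead_times:
    # Export the variables referenced as ${INIT_TIME} and ${FCST_HR} in the config.
    env = dict(os.environ, INIT_TIME=init_time, FCST_HR=fcst_hr)
    # If desired, the config can also set output_prefix = "${FCST_HR}"; so that
    # repeated runs never clobber each other's output files.
    subprocess.run(["point_stat", "PYTHON_NUMPY", "raob_2015080106.nc",
                    "dwptdpConfig", "-outdir", "./out/point_stat", "-v", "3"],
                   env=env, check=True)
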
> >
> > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John,
> > >
> > > I managed to scrape together some code to get RAOB stats from CNT
> plotted
> > > with 95% CI.  Working on surface stats now.
> > >
> > > So my configuration file looks like this right now:
> > >
> > > fcst = {
> > > field = [
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > ];
> > > }
> > >
> > > obs = {
> > > field = [
> > > {name = "dptd";level = ["P0.86-1.5"];},
> > > {name = "dptd";level = ["P1.6-2.5"];},
> > > {name = "dptd";level = ["P2.6-3.5"];},
> > > {name = "dptd";level = ["P3.6-4.5"];},
> > > {name = "dptd";level = ["P4.6-6"];},
> > > {name = "dptd";level = ["P6.1-8"];},
> > > {name = "dptd";level = ["P9-15"];},
> > > {name = "dptd";level = ["P16-25"];},
> > > {name = "dptd";level = ["P26-40"];},
> > > {name = "dptd";level = ["P41-65"];},
> > > {name = "dptd";level = ["P66-85"];},
> > > {name = "dptd";level = ["P86-125"];},
> > > {name = "dptd";level = ["P126-175"];},
> > > {name = "dptd";level = ["P176-225"];},
> > > {name = "dptd";level = ["P226-275"];},
> > > {name = "dptd";level = ["P276-325"];},
> > > {name = "dptd";level = ["P326-375"];},
> > > {name = "dptd";level = ["P376-425"];},
> > > {name = "dptd";level = ["P426-475"];},
> > > {name = "dptd";level = ["P476-525"];},
> > > {name = "dptd";level = ["P526-575"];},
> > > {name = "dptd";level = ["P576-625"];},
> > > {name = "dptd";level = ["P626-675"];},
> > > {name = "dptd";level = ["P676-725"];},
> > > {name = "dptd";level = ["P726-775"];},
> > > {name = "dptd";level = ["P776-825"];},
> > > {name = "dptd";level = ["P826-875"];},
> > > {name = "dptd";level = ["P876-912"];},
> > > {name = "dptd";level = ["P913-936"];},
> > > {name = "dptd";level = ["P937-962"];},
> > > {name = "dptd";level = ["P963-987"];},
> > > {name = "dptd";level = ["P988-1006"];},
> > > {name = "dptd";level = ["P1007-1013"];}
> > >
> > > And I have the data:
> > >
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > >
> > > for a particular DTG and vertical level. If I want to run
multiple
> lead
> > > times, it seems like I'll have to copy that long list of fields
for
> each
> > > lead time in the fcst dict and then duplicate the obs dictionary
so
> that
> > > each forecast entry has a corresponding obs level matching
range. Is
> > this
> > > correct or is there a shorter/better way to do this?
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > I see that you're plotting RMSE and bias (called ME for Mean
Error in
> > MET)
> > > in the plots you sent.
> > >
> > > Table 7.6 of the MET User's Guide (
> > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > )
> > > describes the contents of the CNT line type.  Both the
columns for
> > RMSE
> > > and ME are followed by _NCL and _NCU columns which give the
parametric
> > > approximation of the confidence interval for those scores. So
yes, you
> > can
> > > run Stat-Analysis to aggregate SL1L2 lines together and write
the
> > > corresponding CNT output line type.
> > >
> > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper
> parametric
> > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> columns
> > > for the ME statistic.
> > >
> > > You can change the alpha value for those confidence intervals by
> setting:
> > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> > >
> > > Thanks,
> > > John
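
For the profile plot with confidence bars, those _NCL/_NCU columns map
directly onto asymmetric error bars.  A rough matplotlib sketch (every number
below is a placeholder standing in for values read from the aggregated CNT
lines):

# Sketch: plot ME per pressure layer with its parametric confidence bounds.
import matplotlib.pyplot as plt

pressure = [400, 500, 700, 850, 1000]      # layer midpoints in hPa (placeholder)
me       = [0.5, 0.3, 0.2, 0.4, 0.6]       # ME column (placeholder)
me_ncl   = [0.3, 0.1, 0.0, 0.2, 0.4]       # ME_NCL column (placeholder)
me_ncu   = [0.7, 0.5, 0.4, 0.6, 0.8]       # ME_NCU column (placeholder)

err = [[m - lo for m, lo in zip(me, me_ncl)],
       [hi - m for m, hi in zip(me, me_ncu)]]
plt.errorbar(me, pressure, xerr=err, fmt='o-')
plt.gca().invert_yaxis()                   # pressure decreases with height
plt.xlabel("ME (bias)")
plt.ylabel("Pressure (hPa)")
plt.savefig("me_profile.png")
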
> > >
> > >
> > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John,
> > > >
> > > > This all helps me greatly.  One more question: is there any
> > information
> > > > in either the CNT or SL1L2 that could give me confidence
intervals
> for
> > > > each data point? I'm looking to replicate the attached plot.
Notice
> > > that
> > > > the individual points could have either a 99, 95 or 90 %
confidence.
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Sounds about right. Each time you run Grid-Stat or Point-Stat
you
> can
> > > > write the CNT output line type which contains stats like MSE,
ME,
> MAE,
> > > and
> > > > RMSE.  And I'd recommend that you also write the SL1L2 line
type as
> > > well.
> > > >
> > > > Then you'd run a stat_analysis job like this:
> > > >
> > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
> -line_type
> > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > > cnt_out.stat
> > > >
> > > > This job reads any .stat files it finds in
"/path/to/stat/data",
> reads
> > > the
> > > > SL1L2 line type, and for each unique combination of FCST_VAR,
> FCST_LEV,
> > > and
> > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together
> > and
> > > > write out the corresponding CNT line type to the output file
named
> > > > cnt_out.stat.
> > > >
> > > > John
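
The same job, wrapped so a script can run it, with the -out_alpha option
mentioned earlier added to request 95% intervals; the -lookin path and the
output file name are placeholders:

# Sketch: run the aggregation job described above from a script.
import subprocess

subprocess.run(["stat_analysis",
                "-lookin", "/path/to/stat/data",
                "-job", "aggregate_stat",
                "-line_type", "SL1L2",
                "-out_line_type", "CNT",
                "-by", "FCST_VAR,FCST_LEV,FCST_LEAD",
                "-out_alpha", "0.05",
                "-out_stat", "cnt_out.stat"],
               check=True)
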
> > > >
> > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu
> > > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > So if I understand what you're saying correctly, then if I
wanted
> > > > > an average of 24-hour forecasts over a month-long run, then I
would
> use
> > > the
> > > > > SL1L2 output to aggregate and produce this average? Whereas
if I
> > used
> > > > CNT,
> > > > > this would just provide me ~30 individual (per day over a
month) 24
> > > hour
> > > > > forecast verifications?
> > > > >
> > > > > On a side note, did we ever go over how to plot the SL1L2
MSE and
> > > biases?
> > > > > I am forgetting if we used stat_analysis to produce a plot
or if
> the
> > > plot
> > > > > you showed me was just something you guys post processed
using
> python
> > > or
> > > > > whatnot.
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > We wrote the SL1L2 partial sums from Point-Stat because they
can be
> > > > > aggregated together by the stat-analysis tool over multiple
days or
> > > > cases.
> > > > >
> > > > > If you're interested in continuous statistics from Point-
Stat, I'd
> > > > > recommend writing the CNT line type (which has the stats
computed
> for
> > > > that
> > > > > single run) and the SL1L2 line type (so that you can
aggregate them
> > > > > together in stat-analysis or METviewer).
> > > > >
> > > > > The other alternative is looking at the average of the daily
> > statistics
> > > > > scores. For RMSE, the average of the daily RMSE is equal to
the
> > > > aggregated
> > > > > score... as long as the number of matched pairs remains
constant
> day
> > to
> > > > > day.  But if today you have 98 matched pairs and
tomorrow you
> > have
> > > > 105,
> > > > > then tomorrow's score will have slightly more weight. The
SL1L2
> > lines
> > > > are
> > > > > aggregated as weighted averages, where the TOTAL column is
the
> > weight.
> > > > And
> > > > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > > > scores. Generally, the statisticians recommend this method
over
> the
> > > mean
> > > > > of the daily scores. Neither is "wrong", they just give you
> slightly
> > > > > different information.
> > > > >
> > > > > Thanks,
> > > > > John
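
A small numeric illustration of the difference, reusing the 98- and 105-pair
example above (the daily MSE values are made up purely for illustration):

# Sketch: aggregated RMSE (SL1L2-style, weighted by the TOTAL column) versus
# the simple mean of the daily RMSEs.
import math

days = [(98, 4.0),     # (matched pairs, daily MSE); MSE values are placeholders
        (105, 9.0)]

total = sum(n for n, _ in days)
mse_agg = sum(n * mse for n, mse in days) / total
rmse_agg = math.sqrt(mse_agg)                                    # aggregated score
rmse_mean = sum(math.sqrt(mse) for _, mse in days) / len(days)   # mean of dailies

print("aggregated RMSE = %.3f, mean of daily RMSEs = %.3f" % (rmse_agg, rmse_mean))
# With unequal pair counts, the aggregation gives the larger day more weight,
# as described above.
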
> > > > >
> > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Thanks John.
> > > > > >
> > > > > > Sorry it's taken me such a long time to get to this. It's
> nearing
> > > the
> > > > > end
> > > > > > of FY19 so I have been finalizing several transition
projects and
> > > > haven’t
> > > > > > had much time to work on MET recently. I just picked this
back
> up
> > > and
> > > > > have
> > > > > > loaded a couple new modules. Here is what I have to work
with
> now:
> > > > > >
> > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > 2) netcdf-local/netcdf-met
> > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > 5) udunits/udunits-2.1.24
> > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > >
> > > > > >
> > > > > > Running
> > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig
-v 3
> > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > > > >
> > > > > > I get many matched pairs. Here is a sample of what the
log file
> > > looks
> > > > > > like for one of the pressure ranges I am verifying on:
> > > > > >
> > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
dptd/P425-376,
> > for
> > > > > > observation type radiosonde, over region FULL, for
interpolation
> > > method
> > > > > > NEAREST(1), using 98 pairs.
> > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > >=0,
> > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > UNION.
> > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > > UNION.
> > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > >=0,
> > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > UNION.
> > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > > UNION.
> > > > > > 15280 DEBUG 2:
> > > > > > 15281 DEBUG 2:
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > >
> > > > > > I am going to work on processing these point stat files to
create
> > > those
> > > > > > vertical raob plots we had a discussion about. I remember
us
> > talking
> > > > > about
> > > > > > the partial sums file. Why did we choose to go the route
of
> > > producing
> > > > > > partial sums then feeding that into series analysis to
generate
> > bias
> > > > and
> > > > > > MSE? It looks like bias and MSE both exist within the CNT
line
> > type
> > > > > (MBIAS
> > > > > > and MSE)?
> > > > > >
> > > > > >
> > > > > > Justin
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Great, thanks for sending me the sample data. Yes, I was
able to
> > > > > replicate
> > > > > > the segfault. The good news is that this is caused by a
simple
> > typo
> > > > > that's
> > > > > > easy to fix. If you look in the "obs.field" entry of the
> > > relhumConfig
> > > > > > file, you'll see an empty string for the last field
listed:
> > > > > >
> > > > > > *obs = { field = [*
> > > > > > *  ... {name = "dptd";level = ["P988-1006"];},*
> > > > > > *      {name = "";level = ["P1007-1013"];} ];*
> > > > > > If you change that empty string to "dptd", the segfault
will go
> > > away:*
> > > > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion
> > (in
> > > 2
> > > > > > minutes 48 seconds on my desktop machine), but it produced
0
> > matched
> > > > > > pairs. They were discarded because of the valid times
(seen
> using
> > > -v 3
> > > > > > command line option to Point-Stat). The ob file you sent
is
> named
> > "
> > > > > > raob_2015020412.nc" but the actual times in that file are
for
> > > > > > "20190426_120000":
> > > > > >
> > > > > > *ncdump -v hdr_vld_table raob_2015020412.nc*
> > > > > >
> > > > > > * hdr_vld_table = "20190426_120000" ;*
> > > > > >
> > > > > > So please be aware of that discrepancy. To just produce
some
> > matched
> > > > > > pairs, I told Point-Stat to use the valid times of the
data:
> > > > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > > > relhumConfig \*
> > > > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > > > > -obs_valid_end 20190426_120000*
> > > > > >
> > > > > > But I still get 0 matched pairs. This time, it's because
of bad
> > > > forecast
> > > > > > values:
> > > > > > *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > >
> > > > > > Taking a step back... let's run one of these fields
through
> > > > > > plot_data_plane, which results in an error:
> > > > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps
> > > > > > 'name="./read_NRL_binary.py
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > > > ERROR : DataPlane::two_to_one() -> range check error:
> > > > > > (Nx, Ny) = (97, 97), (x, y) = (97, 0)
> > > > > >
> > > > > > While the numpy object is 97x97, the grid is specified as
being
> > > 118x118
> > > > > in
> > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > >
> > > > > > Just to get something working, I modified the nx and ny in
the
> > python
> > > > > > script:
> > > > > > 'nx':97,
> > > > > > 'ny':97,
> > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > >
> > > > > > So I'd suggest...
> > > > > > - Fix the typo in the config file.
> > > > > > - Figure out the discrepancy between the obs file name
timestamp
> > and
> > > > the
> > > > > > data in that file.
> > > > > > - Make sure the grid information is consistent with the
data in
> the
> > > > > python
> > > > > > script.
> > > > > >
> > > > > > Obviously though, we don't want the code to be segfaulting
under any
> > > > > > condition. So next, I tested using met-8.1 with that
empty
> string.
> > > > This
> > > > > > time it does run with no segfault, but prints a warning
about the
> > > empty
> > > > > > string.
> > > > > >
> > > > > > Hope that helps.
> > > > > >
> > > > > > Thanks,
> > > > > > John
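
Related to the file-name versus valid-time discrepancy noted above, the valid
times stored in the point obs file can be listed before choosing
-obs_valid_beg and -obs_valid_end.  A sketch assuming the netCDF4 Python
module is available; hdr_vld_table is the variable shown in the ncdump
command above:

# Sketch: list the valid times stored in a MET point-observation file.
from netCDF4 import Dataset, chartostring

with Dataset("raob_2015020412.nc") as nc:
    vld = chartostring(nc.variables["hdr_vld_table"][:])
    print(sorted(set(str(v) for v in vld)))
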
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Hey John,
> > > > > > >
> > > > > > > I've put my data in tsu_data_20190815/ under met_help.
> > > > > > >
> > > > > > > I am running met-8.0/met-8.0-with-grib2-support and
have
> > provided
> > > > > > > everything
> > > > > > > on that list you've provided me. Let me know if you're
able to
> > > > > replicate
> > > > > > > it
> > > > > > >
> > > > > > > Justin
> > > > > > >
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Tue Oct 01 14:34:01 2019
Hi John,
Apologies for taking such a long time getting back to you.  End-of-fiscal-year
work has consumed much of my time, and I have not had a chance to work on
any of this.
Before planning how to call point_stat to handle the vertical levels, I
need to fix what is going on with my GRIB1 variables.  When I run
point_stat, I keep
getting this error:
DEBUG 1: Default Config File: /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
DEBUG 1: User Config File: dwptdpConfig
ERROR :
ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
abbreviation 'dptd' for table version 2
ERROR :
I remember getting this before but don't remember how we fixed it.
I am using met-8.1/met-8.1a-with-grib2-support
Justin
-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Friday, September 13, 2019 3:46 PM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
Justin,
Sorry for the delay. I was in DC on travel this week until today.
It's really up to you how you'd like to configure it. Unless it's too
unwieldy, I do think I'd try verifying all levels at once in a single
call
to Point-Stat. All those observations are contained in the same point
observation file. If you verify each level in a separate call to
Point-Stat, you'll be looping through and processing those obs many,
many
times, which will be relatively slow. From a processing perspective,
it'd
be more efficient to process them all at once, in a single call to
Point-Stat.
But you balance runtime efficiency versus ease of scripting and
configuration. And that's why it's up to you to decide which you
prefer.
Hope that helps.
Thanks,
John
On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hey John,
>
> That makes sense. The way that I've set up my config file is as
follows:
> fcst = {
> field = [
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> ];
> }
> obs = {
> field = [
> {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> ];
> }
> message_type = [ "${MSG_TYPE}" ];
>
> The environmental variables I'm setting in the wrapper script are
LEV,
> INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it seems
like I
> will only be able to run point_Stat for a single elevation and a
single
> lead time. Do you recommend this? Or Should I put all the
elevations for a
> single lead time in one pass of point_stat?
>
> So my config file will look like something like this...
> fcst = {
> field = [
> {name = "/users/tsu/MET/work/read_NRL_binary.py
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
>
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> ... etc.
> ];
> }
>
> Also, I am not sure what happened by when I run point_stat now I am
> getting that error
> ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> abbreviation 'dptd' for table version 2
> Again. This makes me think that the obs_var name is wrong, but
ncdump -v
> obs_var raob_*.nc gives me obs_var =
> "ws",
> "wdir",
> "t",
> "dptd",
> "pres",
> "ght" ;
> So clearly dptd exists.
>
> Justin
>
>
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 6, 2019 1:40 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Here's a sample Point-Stat output file name:
> point_stat_360000L_20070331_120000V.stat
>
> The "360000L" indicates that this is output for a 36-hour forecast.
And
> the "20070331_120000V" timestamp is the valid time.
>
> If you run Point-Stat once for each forecast lead time, the
timestamps
> should be different and they should not clobber eachother.
>
> But let's say you don't want to run Point-Stat or Grid-Stat multiple
times
> with the same timing info. The "output_prefix" config file entry is
used
> to customize the output file names to prevent them from clobbering
> eachother. For example, setting:
> output_prefix="RUN1";
> Would result in files named "
> point_stat_RUN1_360000L_20070331_120000V.stat".
>
> Make sense?
>
> Thanks,
> John
>
> On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Invoking point_stat multiple times will create and replace the old
_cnt
> > and _sl1l2 files right? At that point, I'll have a bunch of CNT
and
> SL1L2
> > files and then use stat_analysis to aggregate them?
> >
> > Justin
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, September 6, 2019 1:11 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Yes, that is a long list of fields, but I don't see a way obvious
way of
> > shortening that. But to do multiple lead times, I'd just call
Point-Stat
> > multiple times, once for each lead time, and update the config
file to
> use
> > environment variables for the current time:
> >
> > fcst = {
> > field = [
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > },
> > ...
> >
> > Where the calling scripts sets the ${INIT_TIME} and ${FCST_HR}
> environment
> > variables.
> >
> > John
> >
> > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Thanks John,
> > >
> > > I managed to scrap together some code to get RAOB stats from CNT
> plotted
> > > with 95% CI. Working on Surface stats now.
> > >
> > > So my configuration file looks like this right now:
> > >
> > > fcst = {
> > > field = [
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > >
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > ];
> > > }
> > >
> > > obs = {
> > > field = [
> > > {name = "dptd";level = ["P0.86-1.5"];},
> > > {name = "dptd";level = ["P1.6-2.5"];},
> > > {name = "dptd";level = ["P2.6-3.5"];},
> > > {name = "dptd";level = ["P3.6-4.5"];},
> > > {name = "dptd";level = ["P4.6-6"];},
> > > {name = "dptd";level = ["P6.1-8"];},
> > > {name = "dptd";level = ["P9-15"];},
> > > {name = "dptd";level = ["P16-25"];},
> > > {name = "dptd";level = ["P26-40"];},
> > > {name = "dptd";level = ["P41-65"];},
> > > {name = "dptd";level = ["P66-85"];},
> > > {name = "dptd";level = ["P86-125"];},
> > > {name = "dptd";level = ["P126-175"];},
> > > {name = "dptd";level = ["P176-225"];},
> > > {name = "dptd";level = ["P226-275"];},
> > > {name = "dptd";level = ["P276-325"];},
> > > {name = "dptd";level = ["P326-375"];},
> > > {name = "dptd";level = ["P376-425"];},
> > > {name = "dptd";level = ["P426-475"];},
> > > {name = "dptd";level = ["P476-525"];},
> > > {name = "dptd";level = ["P526-575"];},
> > > {name = "dptd";level = ["P576-625"];},
> > > {name = "dptd";level = ["P626-675"];},
> > > {name = "dptd";level = ["P676-725"];},
> > > {name = "dptd";level = ["P726-775"];},
> > > {name = "dptd";level = ["P776-825"];},
> > > {name = "dptd";level = ["P826-875"];},
> > > {name = "dptd";level = ["P876-912"];},
> > > {name = "dptd";level = ["P913-936"];},
> > > {name = "dptd";level = ["P937-962"];},
> > > {name = "dptd";level = ["P963-987"];},
> > > {name = "dptd";level = ["P988-1006"];},
> > > {name = "dptd";level = ["P1007-1013"];}
> > >
> > > And I have the data:
> > >
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > >
> > > for a particular DTG and vertical level. If I want to run
multiple
> lead
> > > times, it seems like I'll have to copy that long list of fields
for
> each
> > > lead time in the fcst dict and then duplicate the obs dictionary
so
> that
> > > each forecast entry has a corresponding obs level matching
range. Is
> > this
> > > correct or is there a shorter/better way to do this?
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > I see that you're plotting RMSE and bias (called ME for Mean
Error in
> > MET)
> > > in the plots you sent.
> > >
> > > Table 7.6 of the MET User's Guide (
> > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > )
> > > describes the contents of the CNT line type type. Bot the
columns for
> > RMSE
> > > and ME are followed by _NCL and _NCU columns which give the
parametric
> > > approximation of the confidence interval for those scores. So
yes, you
> > can
> > > run Stat-Analysis to aggregate SL1L2 lines together and write
the
> > > corresponding CNT output line type.
> > >
> > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper
> parametric
> > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> columns
> > > for the ME statistic.
> > >
> > > You can change the alpha value for those confidence intervals by
> setting:
> > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> > >
> > > Thanks,
> > > John
> > >
> > >
> > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John,
> > > >
> > > > This all helps me greatly. One more questions: is there any
> > information
> > > > in either the CNT or SL1L2 that could give me confidence
intervals
> for
> > > > each data point? I'm looking to replicate the attached plot.
Notice
> > > that
> > > > the individual points could have either a 99, 95 or 90 %
confidence.
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Sounds about right. Each time you run Grid-Stat or Point-Stat
you
> can
> > > > write the CNT output line type which contains stats like MSE,
ME,
> MAE,
> > > and
> > > > RMSE. And I'm recommended that you also write the SL1L2 line
type as
> > > well.
> > > >
> > > > Then you'd run a stat_analysis job like this:
> > > >
> > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
> -line_type
> > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > > cnt_out.stat
> > > >
> > > > This job reads any .stat files it finds in
"/path/to/stat/data",
> reads
> > > the
> > > > SL1L2 line type, and for each unique combination of FCST_VAR,
> FCST_LEV,
> > > and
> > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
together
> > and
> > > > write out the corresponding CNT line type to the output file
named
> > > > cnt_out.stat.
> > > >
> > > > John
> > > >
> > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu
> > > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > So if I understand what you're saying correctly, then if I
wanted
> to
> > an
> > > > > average of 24 hour forecasts over a month long run, then I
would
> use
> > > the
> > > > > SL1L2 output to aggregate and produce this average? Whereas
if I
> > used
> > > > CNT,
> > > > > this would just provide me ~30 individual (per day over a
month) 24
> > > hour
> > > > > forecast verifications?
> > > > >
> > > > > On a side note, did we ever go over how to plot the SL1L2
MSE and
> > > biases?
> > > > > I am forgetting if we used stat_analysis to produce a plot
or if
> the
> > > plot
> > > > > you showed me was just something you guys post processed
using
> python
> > > or
> > > > > whatnot.
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > We wrote the SL1L2 partial sums from Point-Stat because they
can be
> > > > > aggregated together by the stat-analysis tool over multiple
days or
> > > > cases.
> > > > >
> > > > > If you're interested in continuous statistics from Point-
Stat, I'd
> > > > > recommend writing the CNT line type (which has the stats
computed
> for
> > > > that
> > > > > single run) and the SL1L2 line type (so that you can
aggregate them
> > > > > together in stat-analysis or METviewer).
> > > > >
> > > > > The other alternative is looking at the average of the daily
> > statistics
> > > > > scores. For RMSE, the average of the daily RMSE is equal to
the
> > > > aggregated
> > > > > score... as long as the number of matched pairs remains
constant
> day
> > to
> > > > > day. But if today you have 98 matched pairs and
tomorrow you
> > have
> > > > 105,
> > > > > then tomorrow's score will have slightly more weight. The
SL1L2
> > lines
> > > > are
> > > > > aggregated as weighted averages, where the TOTAL column is
the
> > weight.
> > > > And
> > > > > then stats (like RMSE and MSE) are recomputed from those
aggregated
> > > > > scores. Generally, the statisticians recommend this method
over
> the
> > > mean
> > > > > of the daily scores. Neither is "wrong", they just give you
> slightly
> > > > > different information.
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Thanks John.
> > > > > >
> > > > > > Sorry it's taken me such a long time to get to this. It's
> nearing
> > > the
> > > > > end
> > > > > > of FY19 so I have been finalizing several transition
projects and
> > > > haven’t
> > > > > > had much time to work on MET recently. I just picked this
back
> up
> > > and
> > > > > have
> > > > > > loaded a couple new modules. Here is what I have to work
with
> now:
> > > > > >
> > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > 2) netcdf-local/netcdf-met
> > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > 5) udunits/udunits-2.1.24
> > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > >
> > > > > >
> > > > > > Running
> > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig
-v 3
> > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > > > >
> > > > > > I get many matched pairs. Here is a sample of what the
log file
> > > looks
> > > > > > like for one of the pressure ranges I am verifying on:
> > > > > >
> > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
dptd/P425-376,
> > for
> > > > > > observation type radiosonde, over region FULL, for
interpolation
> > > method
> > > > > > NEAREST(1), using 98 pairs.
> > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > >=0,
> > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > UNION.
> > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > > UNION.
> > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > >=0,
> > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > UNION.
> > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering
> > threshold
> > > > > > >=10.0, observation filtering threshold >=10.0, and field
logic
> > > UNION.
> > > > > > 15280 DEBUG 2:
> > > > > > 15281 DEBUG 2:
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > >
> > > > > > I am going to work on processing these point stat files to
create
> > > those
> > > > > > vertical raob plots we had a discussion about. I remember
us
> > talking
> > > > > about
> > > > > > the partial sums file. Why did we choose to go the route
of
> > > producing
> > > > > > partial sums then feeding that into series analysis to
generate
> > bias
> > > > and
> > > > > > MSE? It looks like bias and MSE both exist within the CNT
line
> > type
> > > > > (MBIAS
> > > > > > and MSE)?
> > > > > >
> > > > > >
> > > > > > Justin
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Great, thanks for sending me the sample data. Yes, I was
able to
> > > > > replicate
> > > > > > the segfault. The good news is that this is caused by a
simple
> > typo
> > > > > that's
> > > > > > easy to fix. If you look in the "obs.field" entry of the
> > > relhumConfig
> > > > > > file, you'll see an empty string for the last field
listed:
> > > > > >
> > > > > > *obs = { field = [*
> > > > > >
> > > > > >
> > > > > >
> > > > > > * ... {name = "dptd";level = ["P988-
1006"];},
> > > > > {name =
> > > > > > "";level = ["P1007-1013"];} ];*
> > > > > > If you change that empty string to "dptd", the segfault
will go
> > > away:*
> > > > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > > > Rerunning met-8.0 with that change, Point-Stat ran to
completion
> > (in
> > > 2
> > > > > > minutes 48 seconds on my desktop machine), but it produced
0
> > matched
> > > > > > pairs. They were discarded because of the valid times
(seen
> using
> > > -v 3
> > > > > > command line option to Point-Stat). The ob file you sent
is
> named
> > "
> > > > > > raob_2015020412.nc" but the actual times in that file are
for
> > > > > > "20190426_120000":
> > > > > >
> > > > > > *ncdump -v hdr_vld_table raob_2015020412.nc*
> > > > > >
> > > > > > * hdr_vld_table = "20190426_120000" ;*
> > > > > >
> > > > > > So please be aware of that discrepancy. To just produce
some
> > matched
> > > > > > pairs, I told Point-Stat to use the valid times of the
data:
> > > > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > > > relhumConfig \*
> > > > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
20190426_120000
> > > > > > -obs_valid_end 20190426_120000*
> > > > > >
> > > > > > But I still get 0 matched pairs. This time, it's because
of bad
> > > > forecast
> > > > > > values:
> > > > > > *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > >
> > > > > > Taking a step back... let's run one of these fields
through
> > > > > > plot_data_plane, which results in an error:
> > > > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps*
> > > > > > 'name="./read_NRL_binary.py
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > > > ERROR : DataPlane::two_to_one() -> range check error:
(Nx, Ny) =
> > > (97,
> > > > > 97),
> > > > > > (x, y) = (97, 0)
> > > > > >
> > > > > > While the numpy object is 97x97, the grid is specified as
being
> > > 118x118
> > > > > in
> > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > >
> > > > > > Just to get something working, I modified the nx and ny in
the
> > python
> > > > > > script:
> > > > > > 'nx':97,
> > > > > > 'ny':97,
> > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > >
> > > > > > So I'd suggest...
> > > > > > - Fix the typo in the config file.
> > > > > > - Figure out the discrepancy between the obs file name
timestamp
> > and
> > > > the
> > > > > > data in that file.
> > > > > > - Make sure the grid information is consistent with the
data in
> the
> > > > > python
> > > > > > script.
> > > > > >
> > > > > > Obviously though, we don't want the code to be segfaulting
in any
> > > > > > condition. So next, I tested using met-8.1 with that
empty
> string.
> > > > This
> > > > > > time it does run with no segfault, but prints a warning
about the
> > > empty
> > > > > > string.
> > > > > >
> > > > > > Hope that helps.
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Hey John,
> > > > > > >
> > > > > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > > > > >
> > > > > > > I am running met-8.0/met-8.0-with-grib2-support and
have
> > provided
> > > > > > > everything
> > > > > > > on that list you've provided me. Let me know if you're
able to
> > > > > replicate
> > > > > > > it
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > Well that doesn't seem to be very helpful of Point-Stat
at all.
> > > > There
> > > > > > > isn't much jumping out at me from the log messages you
sent.
> In
> > > > fact,
> > > > > I
> > > > > > > hunted around for the DEBUG(7) log message but couldn't
find
> > where
> > > in
> > > > > the
> > > > > > > code it's being written. Are you able to send me some
sample
> > data
> > > to
> > > > > > > replicate this behavior?
> > > > > > >
> > > > > > > I'd need to know...
> > > > > > > - What version of MET are you running.
> > > > > > > - A copy of your Point-Stat config file.
> > > > > > > - The python script that you're running.
> > > > > > > - The input file for that python script.
> > > > > > > - The NetCDF point observation file you're passing to
> Point-Stat.
> > > > > > >
> > > > > > > If I can replicate the behavior here, it should be easy
to run
> it
> > > in
> > > > > the
> > > > > > > debugger and figure it out.
> > > > > > >
> > > > > > > You can post data to our anonymous ftp site as described
in
> "How
> > to
> > > > > send
> > > > > > us
> > > > > > > data":
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted
upon.
> > > > > > > > Transaction: Ticket created by
justin.tsu at nrlmry.navy.mil
> > > > > > > > Queue: met_help
> > > > > > > > Subject: point_stat seg faulting
> > > > > > > > Owner: Nobody
> > > > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > > Status: new
> > > > > > > > Ticket <URL:
> > > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Hey John,
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > I'm trying to extrapolate the production of vertical
raob
> > > > > verification
> > > > > > > > plots
> > > > > > > > using point_stat and stat_analysis like we did
together for
> > winds
> > > > but
> > > > > > for
> > > > > > > > relative humidity now. But when I run point_stat, it
seg
> > faults
> > > > > > without
> > > > > > > > much explanation
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > ----
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > >
> > > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast
levels, 0
> > > > climatology
> > > > > > > mean
> > > > > > > > levels, and 0 climatology standard deviation levels.
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > ----
> > > > > > > >
> > > > > > > > DEBUG 2:
> > > > > > > >
> > > > > > > > DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > > > >
> > > > > > > > DEBUG 7: tbl dims: messge_type: 1 station id: 617
> > > > valid_time: 1
> > > > > > > >
> > > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
> point_stat
> > > > > > > > PYTHON_NUMPY
> > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat
-log
> > > > > > > > ./out/point_stat.log
> > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > From my log file:
> > > > > > > >
> > > > > > > > 607 DEBUG 2:
> > > > > > > >
> > > > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
> messages.
> > > > > > > >
> > > > > > > > 609 DEBUG 7: tbl dims: messge_type: 1 station id:
617
> > > > > > valid_time: 1
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Any help would be much appreciated
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Justin Tsu
> > > > > > > >
> > > > > > > > Marine Meteorology Division
> > > > > > > >
> > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > >
> > > > > > > > Building 704 Room 212
> > > > > > > >
> > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > >
> > > > > > > > 7 Grace Hopper Avenue
> > > > > > > >
> > > > > > > > Monterey, CA 93943-5502
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Ph. (831) 656-4111
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Wed Oct 02 12:13:38 2019
Justin,
This means that you're requesting a variable named "dptd" in the
Point-Stat
config file. MET looks for a definition of that string in its
default
GRIB1 tables:
grep dptd met-8.1/share/met/table_files/*
But that returns 0 matches. So this error message is telling you that
MET
doesn't know how to interpret that variable name.
Here's what I'd suggest:
(1) Run the input GRIB1 file through the "wgrib" utility. If "wgrib"
knows
about this variable, it will report the name... and most likely,
that's the
same name that MET will know. If so, switch from using "dptd" to
using
whatever name wgrib reports.
(2) If "wgrib" does NOT know about this variable, it'll just list out
the
corresponding GRIB1 codes instead. That means we'll need to go create
a
small GRIB table to define these strings. Take a look in:
met-8.1/share/met/table_files
We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt" where
CENTER is the number encoded in your GRIB file to define NRL and PTV
is the
parameter table version number used in your GRIB file. In that,
you'll
define the mapping of GRIB1 codes to strings (like "dptd"). And for
now,
we'll need to set the "MET_GRIB_TABLES" environment variable to the
location of that file. But in the long run, you can send me that
file, and
we'll add it to "table_files" directory to be included in the next
release
of MET.
If you have trouble creating a new GRIB table file, just let me know
and
send me a sample GRIB file.
Thanks,
John
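For reference, those two checks might look something like the
following on the command line (the GRIB file name "model_output.grb"
and the PTV/CENTER numbers in the table file name are placeholders,
not values taken from this ticket; the obs file and config names are
the ones used earlier in this thread):

  # List the GRIB1 inventory; wgrib reports the variable abbreviation
  # when it knows the code, otherwise just the numeric GRIB1 codes.
  wgrib model_output.grb | head -5

  # If a custom table is needed, point MET at it before running.
  export MET_GRIB_TABLES=/path/to/grib1_nrl_2_58.txt
  point_stat model_output.grb raob_2015020412.nc dwptdpConfig -v 3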
On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hi John,
>
> Apologies for taking such a long time getting back to you. End of
fiscal
> year things have consumed much of my time and I have not had much
time to
> work on any of this.
>
> Before proceeding to the planning process of determining how to call
> point_stat to deal with the vertical levels, I need to fix what is
going on
> with my GRIB1 variables. When I run point_stat, I keep getting this
error:
>
> DEBUG 1: Default Config File:
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> DEBUG 1: User Config File: dwptdpConfig
> ERROR :
> ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> abbreviation 'dptd' for table version 2
> ERROR :
>
> I remember getting this before but don't remember how we fixed it.
> I am using met-8.1/met-8.1a-with-grib2-support
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 13, 2019 3:46 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Sorry for the delay. I was in DC on travel this week until today.
>
> It's really up to you how you'd like to configure it. Unless it's
too
> unwieldy, I do think I'd try verifying all levels at once in a
single call
> to Point-Stat. All those observations are contained in the same
point
> observation file. If you verify each level in a separate call to
> Point-Stat, you'll be looping through and processing those obs many,
many
> times, which will be relatively slow. From a processing
perspective, it'd
> be more efficient to process them all at once, in a single call to
> Point-Stat.
>
> But you balance runtime efficiency versus ease of scripting and
> configuration. And that's why it's up to you to decide which you
prefer.
>
> Hope that helps.
>
> Thanks,
> John
>
> On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Hey John,
> >
> > That makes sense. The way that I've set up my config file is as
follows:
> > fcst = {
> > field = [
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> > ];
> > }
> > obs = {
> > field = [
> > {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> > ];
> > }
> > message_type = [ "${MSG_TYPE}" ];
> >
> > The environmental variables I'm setting in the wrapper script are
LEV,
> > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it
seems
> like I
> > will only be able to run point_Stat for a single elevation and a
single
> > lead time. Do you recommend this? Or Should I put all the
elevations
> for a
> > single lead time in one pass of point_stat?
> >
> > So my config file will look something like this...
> > fcst = {
> > field = [
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > ... etc.
> > ];
> > }
> >
> > Also, I am not sure what happened, but when I run point_stat now I
am
> > getting that error
> > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> > abbreviation 'dptd' for table version 2
> > Again. This makes me think that the obs_var name is wrong, but
ncdump
> -v
> > obs_var raob_*.nc gives me obs_var =
> > "ws",
> > "wdir",
> > "t",
> > "dptd",
> > "pres",
> > "ght" ;
> > So clearly dptd exists.
> >
> > Justin
> >
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, September 6, 2019 1:40 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Here's a sample Point-Stat output file name:
> > point_stat_360000L_20070331_120000V.stat
> >
> > The "360000L" indicates that this is output for a 36-hour
forecast. And
> > the "20070331_120000V" timestamp is the valid time.
> >
> > If you run Point-Stat once for each forecast lead time, the
timestamps
> > should be different and they should not clobber each other.
> >
> > But let's say you don't want to run Point-Stat or Grid-Stat
multiple
> times
> > with the same timing info. The "output_prefix" config file entry
is used
> > to customize the output file names to prevent them from clobbering
> > each other. For example, setting:
> > output_prefix="RUN1";
> > Would result in files named "
> > point_stat_RUN1_360000L_20070331_120000V.stat".
> >
> > Make sense?
> >
> > Thanks,
> > John
> >
> > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Invoking point_stat multiple times will create and replace the
old _cnt
> > > and _sl1l2 files right? At that point, I'll have a bunch of CNT
and
> > SL1L2
> > > files and then use stat_analysis to aggregate them?
> > >
> > > Justin
> > >
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, September 6, 2019 1:11 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Yes, that is a long list of fields, but I don't see an obvious
way
> of
> > > shortening that. But to do multiple lead times, I'd just call
> Point-Stat
> > > multiple times, once for each lead time, and update the config
file to
> > use
> > > environment variables for the current time:
> > >
> > > fcst = {
> > > field = [
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > ...
> > >
> > > Where the calling scripts sets the ${INIT_TIME} and ${FCST_HR}
> > environment
> > > variables.
> > >
> > > John
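For reference, a minimal wrapper along these lines could export those
variables before each call (the lead-time values here are illustrative
and the obs/config file names are taken from earlier in this thread):

  export INIT_TIME=2015080106
  for FCST_HR in 00000000 00120000 00240000 00360000; do
    export FCST_HR
    point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig \
      -outdir ./out/point_stat -v 3
  done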
> > >
> > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John,
> > > >
> > > > I managed to scrape together some code to get RAOB stats from
CNT
> > plotted
> > > > with 95% CI. Working on Surface stats now.
> > > >
> > > > So my configuration file looks like this right now:
> > > >
> > > > fcst = {
> > > > field = [
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > > ];
> > > > }
> > > >
> > > > obs = {
> > > > field = [
> > > > {name = "dptd";level = ["P0.86-1.5"];},
> > > > {name = "dptd";level = ["P1.6-2.5"];},
> > > > {name = "dptd";level = ["P2.6-3.5"];},
> > > > {name = "dptd";level = ["P3.6-4.5"];},
> > > > {name = "dptd";level = ["P4.6-6"];},
> > > > {name = "dptd";level = ["P6.1-8"];},
> > > > {name = "dptd";level = ["P9-15"];},
> > > > {name = "dptd";level = ["P16-25"];},
> > > > {name = "dptd";level = ["P26-40"];},
> > > > {name = "dptd";level = ["P41-65"];},
> > > > {name = "dptd";level = ["P66-85"];},
> > > > {name = "dptd";level = ["P86-125"];},
> > > > {name = "dptd";level = ["P126-175"];},
> > > > {name = "dptd";level = ["P176-225"];},
> > > > {name = "dptd";level = ["P226-275"];},
> > > > {name = "dptd";level = ["P276-325"];},
> > > > {name = "dptd";level = ["P326-375"];},
> > > > {name = "dptd";level = ["P376-425"];},
> > > > {name = "dptd";level = ["P426-475"];},
> > > > {name = "dptd";level = ["P476-525"];},
> > > > {name = "dptd";level = ["P526-575"];},
> > > > {name = "dptd";level = ["P576-625"];},
> > > > {name = "dptd";level = ["P626-675"];},
> > > > {name = "dptd";level = ["P676-725"];},
> > > > {name = "dptd";level = ["P726-775"];},
> > > > {name = "dptd";level = ["P776-825"];},
> > > > {name = "dptd";level = ["P826-875"];},
> > > > {name = "dptd";level = ["P876-912"];},
> > > > {name = "dptd";level = ["P913-936"];},
> > > > {name = "dptd";level = ["P937-962"];},
> > > > {name = "dptd";level = ["P963-987"];},
> > > > {name = "dptd";level = ["P988-1006"];},
> > > > {name = "dptd";level = ["P1007-1013"];}
> > > >
> > > > And I have the data:
> > > >
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > >
> > > > for a particular DTG and vertical level. If I want to run
multiple
> > lead
> > > > times, it seems like I'll have to copy that long list of
fields for
> > each
> > > > lead time in the fcst dict and then duplicate the obs
dictionary so
> > that
> > > > each forecast entry has a corresponding obs level matching
range. Is
> > > this
> > > > correct or is there a shorter/better way to do this?
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > I see that you're plotting RMSE and bias (called ME for Mean
Error in
> > > MET)
> > > > in the plots you sent.
> > > >
> > > > Table 7.6 of the MET User's Guide (
> > > >
> > > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > )
> > > > describes the contents of the CNT line type. Both the
columns for
> > > RMSE
> > > > and ME are followed by _NCL and _NCU columns which give the
> parametric
> > > > approximation of the confidence interval for those scores. So
yes,
> you
> > > can
> > > > run Stat-Analysis to aggregate SL1L2 lines together and write
the
> > > > corresponding CNT output line type.
> > > >
> > > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper
> > parametric
> > > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> > columns
> > > > for the ME statistic.
> > > >
> > > > You can change the alpha value for those confidence intervals
by
> > setting:
> > > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > >
> > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Thanks John,
> > > > >
> > > > > This all helps me greatly. One more questions: is there any
> > > information
> > > > > in either the CNT or SL1L2 that could give me confidence
intervals
> > for
> > > > > each data point? I'm looking to replicate the attached
plot.
> Notice
> > > > that
> > > > > the individual points could have either a 99, 95 or 90 %
> confidence.
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Sounds about right. Each time you run Grid-Stat or Point-
Stat you
> > can
> > > > > write the CNT output line type which contains stats like
MSE, ME,
> > MAE,
> > > > and
> > > > > RMSE. And I'd recommend that you also write the SL1L2
line type
> as
> > > > well.
> > > > >
> > > > > Then you'd run a stat_analysis job like this:
> > > > >
> > > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
> > -line_type
> > > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > > > cnt_out.stat
> > > > >
> > > > > This job reads any .stat files it finds in
"/path/to/stat/data",
> > reads
> > > > the
> > > > > SL1L2 line type, and for each unique combination of
FCST_VAR,
> > FCST_LEV,
> > > > and
> > > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
> together
> > > and
> > > > > write out the corresponding CNT line type to the output file
named
> > > > > cnt_out.stat.
> > > > >
> > > > > John
> > > > >
> > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu
> > > > > >
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > So if I understand what you're saying correctly, then if I
wanted
> > to get
> > > an
> > > > > > average of 24 hour forecasts over a month long run, then I
would
> > use
> > > > the
> > > > > > SL1L2 output to aggregate and produce this average?
Whereas if I
> > > used
> > > > > CNT,
> > > > > > this would just provide me ~30 individual (per day over a
month)
> 24
> > > > hour
> > > > > > forecast verifications?
> > > > > >
> > > > > > On a side note, did we ever go over how to plot the SL1L2
MSE and
> > > > biases?
> > > > > > I am forgetting if we used stat_analysis to produce a plot
or if
> > the
> > > > plot
> > > > > > you showed me was just something you guys post processed
using
> > python
> > > > or
> > > > > > whatnot.
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > We wrote the SL1L2 partial sums from Point-Stat because
they can
> be
> > > > > > aggregated together by the stat-analysis tool over
multiple days
> or
> > > > > cases.
> > > > > >
> > > > > > If you're interested in continuous statistics from Point-
Stat,
> I'd
> > > > > > recommend writing the CNT line type (which has the stats
computed
> > for
> > > > > that
> > > > > > single run) and the SL1L2 line type (so that you can
aggregate
> them
> > > > > > together in stat-analysis or METviewer).
> > > > > >
> > > > > > The other alternative is looking at the average of the
daily
> > > statistics
> > > > > > scores. For RMSE, the average of the daily RMSE is equal
to the
> > > > > aggregated
> > > > > > score... as long as the number of matched pairs remains
constant
> > day
> > > to
> > > > > > day. But if today you have 98 matched pairs and
tomorrow you
> > > have
> > > > > 105,
> > > > > > then tomorrow's score will have slightly more weight. The
SL1L2
> > > lines
> > > > > are
> > > > > > aggregated as weighted averages, where the TOTAL column is
the
> > > weight.
> > > > > And
> > > > > > then stats (like RMSE and MSE) are recomputed from those
> aggregated
> > > > > > scores. Generally, the statisticians recommend this
method over
> > the
> > > > mean
> > > > > > of the daily scores. Neither is "wrong", they just give
you
> > slightly
> > > > > > different information.
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Thanks John.
> > > > > > >
> > > > > > > Sorry it's taken me such a long time to get to this.
It's
> > nearing
> > > > the
> > > > > > end
> > > > > > > of FY19 so I have been finalizing several transition
projects
> and
> > > > > haven’t
> > > > > > > had much time to work on MET recently. I just picked
this back
> > up
> > > > and
> > > > > > have
> > > > > > > loaded a couple new modules. Here is what I have to
work with
> > now:
> > > > > > >
> > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > >
> > > > > > >
> > > > > > > Running
> > > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc
dwptdpConfig -v
> 3
> > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >>
log.out
> > > > > > >
> > > > > > > I get many matched pairs. Here is a sample of what the
log
> file
> > > > looks
> > > > > > > like for one of the pressure ranges I am verifying on:
> > > > > > >
> > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
> dptd/P425-376,
> > > for
> > > > > > > observation type radiosonde, over region FULL, for
> interpolation
> > > > method
> > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > >=0,
> > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > > UNION.
> > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=10.0, observation filtering threshold >=10.0, and
field logic
> > > > UNION.
> > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > >=0,
> > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > > UNION.
> > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=10.0, observation filtering threshold >=10.0, and
field logic
> > > > UNION.
> > > > > > > 15280 DEBUG 2:
> > > > > > > 15281 DEBUG 2:
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > > >
> > > > > > > I am going to work on processing these point stat files
to
> create
> > > > those
> > > > > > > vertical raob plots we had a discussion about. I
remember us
> > > talking
> > > > > > about
> > > > > > > the partial sums file. Why did we choose to go the
route of
> > > > producing
> > > > > > > partial sums then feeding that into series analysis to
generate
> > > bias
> > > > > and
> > > > > > > MSE? It looks like bias and MSE both exist within the
CNT line
> > > type
> > > > > > (MBIAS
> > > > > > > and MSE)?
> > > > > > >
> > > > > > >
> > > > > > > Justin
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > Great, thanks for sending me the sample data. Yes, I
was able
> to
> > > > > > replicate
> > > > > > > the segfault. The good news is that this is caused by a
simple
> > > typo
> > > > > > that's
> > > > > > > easy to fix. If you look in the "obs.field" entry of
the
> > > > relhumConfig
> > > > > > > file, you'll see an empty string for the last field
listed:
> > > > > > >
> > > > > > > *obs = { field = [*
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > * ... {name = "dptd";level = ["P988-
1006"];},
> > > > > > {name =
> > > > > > > "";level = ["P1007-1013"];} ];*
> > > > > > > If you change that empty string to "dptd", the segfault
will go
> > > > away:*
> > > > > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > > > > Rerunning met-8.0 with that change, Point-Stat ran to
> completion
> > > (in
> > > > 2
> > > > > > > minutes 48 seconds on my desktop machine), but it
produced 0
> > > matched
> > > > > > > pairs. They were discarded because of the valid times
(seen
> > using
> > > > -v 3
> > > > > > > command line option to Point-Stat). The ob file you
sent is
> > named
> > > "
> > > > > > > raob_2015020412.nc" but the actual times in that file
are for
> > > > > > > "20190426_120000":
> > > > > > >
> > > > > > > *ncdump -v hdr_vld_table raob_2015020412.nc*
> > > > > > >
> > > > > > > * hdr_vld_table = "20190426_120000" ;*
> > > > > > >
> > > > > > > So please be aware of that discrepancy. To just produce
some
> > > matched
> > > > > > > pairs, I told Point-Stat to use the valid times of the
data:
> > > > > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > > > > relhumConfig \*
> > > > > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
> 20190426_120000
> > > > > > > -obs_valid_end 20190426_120000*
> > > > > > >
> > > > > > > But I still get 0 matched pairs. This time, it's
because of
> bad
> > > > > forecast
> > > > > > > values:
> > > > > > > *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > > >
> > > > > > > Taking a step back... let's run one of these fields
through
> > > > > > > plot_data_plane, which results in an error:
> > > > > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps*
> > > > > > > 'name="./read_NRL_binary.py
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > > > > ERROR : DataPlane::two_to_one() -> range check error:
(Nx,
> Ny) =
> > > > (97,
> > > > > > 97),
> > > > > > > (x, y) = (97, 0)
> > > > > > >
> > > > > > > While the numpy object is 97x97, the grid is specified
as being
> > > > 118x118
> > > > > > in
> > > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > > >
> > > > > > > Just to get something working, I modified the nx and ny
in the
> > > python
> > > > > > > script:
> > > > > > > 'nx':97,
> > > > > > > 'ny':97,
> > > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > > >
> > > > > > > So I'd suggest...
> > > > > > > - Fix the typo in the config file.
> > > > > > > - Figure out the discrepancy between the obs file name
> timestamp
> > > and
> > > > > the
> > > > > > > data in that file.
> > > > > > > - Make sure the grid information is consistent with the
data in
> > the
> > > > > > python
> > > > > > > script.
> > > > > > >
> > > > > > > Obviously though, we don't want the code to be
segfaulting in
> any
> > > > > > > condition. So next, I tested using met-8.1 with that
empty
> > string.
> > > > > This
> > > > > > > time it does run with no segfault, but prints a warning
about
> the
> > > > empty
> > > > > > > string.
> > > > > > >
> > > > > > > Hope that helps.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > Hey John,
> > > > > > > >
> > > > > > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > > > > > >
> > > > > > > > I am running met-8.0/met-8.0-with-grib2-support and
have
> > > provided
> > > > > > > > everything
> > > > > > > > on that list you've provided me. Let me know if
you're able
> to
> > > > > > replicate
> > > > > > > > it
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> faulting
> > > > > > > >
> > > > > > > > Justin,
> > > > > > > >
> > > > > > > > Well that doesn't seem to be very helpful of Point-
Stat at
> all.
> > > > > There
> > > > > > > > isn't much jumping out at me from the log messages you
sent.
> > In
> > > > > fact,
> > > > > > I
> > > > > > > > hunted around for the DEBUG(7) log message but
couldn't find
> > > where
> > > > in
> > > > > > the
> > > > > > > > code it's being written. Are you able to send me some
sample
> > > data
> > > > to
> > > > > > > > replicate this behavior?
> > > > > > > >
> > > > > > > > I'd need to know...
> > > > > > > > - What version of MET are you running.
> > > > > > > > - A copy of your Point-Stat config file.
> > > > > > > > - The python script that you're running.
> > > > > > > > - The input file for that python script.
> > > > > > > > - The NetCDF point observation file you're passing to
> > Point-Stat.
> > > > > > > >
> > > > > > > > If I can replicate the behavior here, it should be
easy to
> run
> > it
> > > > in
> > > > > > the
> > > > > > > > debugger and figure it out.
> > > > > > > >
> > > > > > > > You can post data to our anonymous ftp site as
described in
> > "How
> > > to
> > > > > > send
> > > > > > > us
> > > > > > > > data":
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > John
> > > > > > > >
> > > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT
<
> > > > > > > met_help at ucar.edu>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted
upon.
> > > > > > > > > Transaction: Ticket created by
justin.tsu at nrlmry.navy.mil
> > > > > > > > > Queue: met_help
> > > > > > > > > Subject: point_stat seg faulting
> > > > > > > > > Owner: Nobody
> > > > > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > > > Status: new
> > > > > > > > > Ticket <URL:
> > > > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Hey John,
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > I'm trying to extrapolate the production of vertical
raob
> > > > > > verification
> > > > > > > > > plots
> > > > > > > > > using point_stat and stat_analysis like we did
together for
> > > winds
> > > > > but
> > > > > > > for
> > > > > > > > > relative humidity now. But when I run point_stat,
it seg
> > > faults
> > > > > > > without
> > > > > > > > > much explanation
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > ----
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > > >
> > > > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast
levels, 0
> > > > > climatology
> > > > > > > > mean
> > > > > > > > > levels, and 0 climatology standard deviation levels.
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > ----
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > > DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > > > > >
> > > > > > > > > DEBUG 7: tbl dims: messge_type: 1 station id:
617
> > > > > valid_time: 1
> > > > > > > > >
> > > > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
> > point_stat
> > > > > > > > > PYTHON_NUMPY
> > > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat
-log
> > > > > > > > > ./out/point_stat.log
> > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > From my log file:
> > > > > > > > >
> > > > > > > > > 607 DEBUG 2:
> > > > > > > > >
> > > > > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
> > messages.
> > > > > > > > >
> > > > > > > > > 609 DEBUG 7: tbl dims: messge_type: 1 station
id: 617
> > > > > > > valid_time: 1
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Any help would be much appreciated
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Justin
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Justin Tsu
> > > > > > > > >
> > > > > > > > > Marine Meteorology Division
> > > > > > > > >
> > > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > > >
> > > > > > > > > Building 704 Room 212
> > > > > > > > >
> > > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > > >
> > > > > > > > > 7 Grace Hopper Avenue
> > > > > > > > >
> > > > > > > > > Monterey, CA 93943-5502
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Ph. (831) 656-4111
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Wed Oct 16 16:46:27 2019
John,
I don't think this is a GRIB issue since I am using netCDF files
generated from ascii2nc. Therefore, I am able to specify the names of
the variables when I write the ascii file that goes into ascii2nc and
(I thought) as long as I am consistent with the variable names in the
netCDF file and the config file, it should work.
Justin
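For reference, the variable names actually stored in the ascii2nc
output can be confirmed directly, repeating the checks mentioned
earlier in this thread:

  ncdump -v obs_var raob_2015020412.nc
  ncdump -v hdr_vld_table raob_2015020412.nc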
-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Wednesday, October 2, 2019 11:14 AM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
Justin,
This means that you're requesting a variable named "dptd" in the
Point-Stat
config file. MET looks for a definition of that string in its
default
GRIB1 tables:
grep dptd met-8.1/share/met/table_files/*
But that returns 0 matches. So this error message is telling you that
MET
doesn't know how to interpret that variable name.
Here's what I'd suggest:
(1) Run the input GRIB1 file through the "wgrib" utility. If "wgrib"
knows
about this variable, it will report the name... and most likely,
that's the
same name that MET will know. If so, switch from using "dptd" to
using
whatever name wgrib reports.
(2) If "wgrib" does NOT know about this variable, it'll just list out
the
corresponding GRIB1 codes instead. That means we'll need to go create
a
small GRIB table to define these strings. Take a look in:
met-8.1/share/met/table_files
We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt" where
CENTER is the number encoded in your GRIB file to define NRL and PTV
is the
parameter table version number used in your GRIB file. In that,
you'll
define the mapping of GRIB1 codes to strings (like "dptd"). And for
now,
we'll need to set the "MET_GRIB_TABLES" environment variable to the
location of that file. But in the long run, you can send me that
file, and
we'll add it to "table_files" directory to be included in the next
release
of MET.
If you have trouble creating a new GRIB table file, just let me know
and
send me a sample GRIB file.
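Condensed into commands, the suggestion above might look like the following
sketch; the GRIB file name is a placeholder, and the table file name just
follows the {PTV}/{CENTER} pattern described above:

    # (1) see whether wgrib already reports a usable name for the field
    #     (file name is a placeholder)
    wgrib model_output.grib1 | head
    # (2) if not, build a small table file and point MET at it,
    #     then rerun point_stat as before
    export MET_GRIB_TABLES=/path/to/grib1_nrl_{PTV}_{CENTER}.txt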
Thanks,
John
On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hi John,
>
> Apologies for taking such a long time getting back to you. End of
fiscal
> year things have consumed much of my time and I have not had much
time to
> work on any of this.
>
> Before proceeding to the planning process of determining how to call
> point_stat to deal with the vertical levels, I need to fix what is
going on
> with my GRIB1 variables. When I run point_stat, I keep getting this
error:
>
> DEBUG 1: Default Config File:
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> DEBUG 1: User Config File: dwptdpConfig
> ERROR :
> ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> abbreviation 'dptd' for table version 2
> ERROR :
>
> I remember getting this before but don't remember how we fixed it.
> I am using met-8.1/met-8.1a-with-grib2-support
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 13, 2019 3:46 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Sorry for the delay. I was in DC on travel this week until today.
>
> It's really up to you how you'd like to configure it. Unless it's
too
> unwieldy, I do think I'd try verifying all levels at once in a
single call
> to Point-Stat. All those observations are contained in the same
point
> observation file. If you verify each level in a separate call to
> Point-Stat, you'll be looping through and processing those obs many,
many
> times, which will be relatively slow. From a processing
perspective, it'd
> be more efficient to process them all at once, in a single call to
> Point-Stat.
>
> But you balance runtime efficiency versus ease of scripting and
> configuration. And that's why it's up to you to decide which you
prefer.
>
> Hope that helps.
>
> Thanks,
> John
>
> On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Hey John,
> >
> > That makes sense. The way that I've set up my config file is as
follows:
> > fcst = {
> > field = [
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> > ];
> > }
> > obs = {
> > field = [
> > {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> > ];
> > }
> > message_type = [ "${MSG_TYPE}" ];
> >
> > The environmental variables I'm setting in the wrapper script are
LEV,
> > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it
seems
> like I
> > will only be able to run point_Stat for a single elevation and a
single
> > lead time. Do you recommend this? Or Should I put all the
elevations
> for a
> > single lead time in one pass of point_stat?
> >
> > So my config file will look like something like this...
> > fcst = {
> > field = [
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > ... etc.
> > ];
> > }
> >
> > Also, I am not sure what happened by when I run point_stat now I
am
> > getting that error
> > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> > abbreviation 'dptd' for table version 2
> > Again. This makes me think that the obs_var name is wrong, but
ncdump
> -v
> > obs_var raob_*.nc gives me obs_var =
> > "ws",
> > "wdir",
> > "t",
> > "dptd",
> > "pres",
> > "ght" ;
> > So clearly dptd exists.
> >
> > Justin
> >
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, September 6, 2019 1:40 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Here's a sample Point-Stat output file name:
> > point_stat_360000L_20070331_120000V.stat
> >
> > The "360000L" indicates that this is output for a 36-hour
forecast. And
> > the "20070331_120000V" timestamp is the valid time.
> >
> > If you run Point-Stat once for each forecast lead time, the
timestamps
> > should be different and they should not clobber eachother.
> >
> > But let's say you don't want to run Point-Stat or Grid-Stat
multiple
> times
> > with the same timing info. The "output_prefix" config file entry
is used
> > to customize the output file names to prevent them from clobbering
> > eachother. For example, setting:
> > output_prefix="RUN1";
> > Would result in files named "
> > point_stat_RUN1_360000L_20070331_120000V.stat".
> >
> > Make sense?
> >
> > Thanks,
> > John
> >
> > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Invoking point_stat multiple times will create and replace the
old _cnt
> > > and _sl1l2 files right? At that point, I'll have a bunch of CNT
and
> > SL1L2
> > > files and then use stat_analysis to aggregate them?
> > >
> > > Justin
> > >
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, September 6, 2019 1:11 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Yes, that is a long list of fields, but I don't see a way
obvious way
> of
> > > shortening that. But to do multiple lead times, I'd just call
> Point-Stat
> > > multiple times, once for each lead time, and update the config
file to
> > use
> > > environment variables for the current time:
> > >
> > > fcst = {
> > > field = [
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > ...
> > >
> > > Where the calling scripts sets the ${INIT_TIME} and ${FCST_HR}
> > environment
> > > variables.
> > >
> > > John
> > >
> > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John,
> > > >
> > > > I managed to scrap together some code to get RAOB stats from
CNT
> > plotted
> > > > with 95% CI. Working on Surface stats now.
> > > >
> > > > So my configuration file looks like this right now:
> > > >
> > > > fcst = {
> > > > field = [
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > > ];
> > > > }
> > > >
> > > > obs = {
> > > > field = [
> > > > {name = "dptd";level = ["P0.86-1.5"];},
> > > > {name = "dptd";level = ["P1.6-2.5"];},
> > > > {name = "dptd";level = ["P2.6-3.5"];},
> > > > {name = "dptd";level = ["P3.6-4.5"];},
> > > > {name = "dptd";level = ["P4.6-6"];},
> > > > {name = "dptd";level = ["P6.1-8"];},
> > > > {name = "dptd";level = ["P9-15"];},
> > > > {name = "dptd";level = ["P16-25"];},
> > > > {name = "dptd";level = ["P26-40"];},
> > > > {name = "dptd";level = ["P41-65"];},
> > > > {name = "dptd";level = ["P66-85"];},
> > > > {name = "dptd";level = ["P86-125"];},
> > > > {name = "dptd";level = ["P126-175"];},
> > > > {name = "dptd";level = ["P176-225"];},
> > > > {name = "dptd";level = ["P226-275"];},
> > > > {name = "dptd";level = ["P276-325"];},
> > > > {name = "dptd";level = ["P326-375"];},
> > > > {name = "dptd";level = ["P376-425"];},
> > > > {name = "dptd";level = ["P426-475"];},
> > > > {name = "dptd";level = ["P476-525"];},
> > > > {name = "dptd";level = ["P526-575"];},
> > > > {name = "dptd";level = ["P576-625"];},
> > > > {name = "dptd";level = ["P626-675"];},
> > > > {name = "dptd";level = ["P676-725"];},
> > > > {name = "dptd";level = ["P726-775"];},
> > > > {name = "dptd";level = ["P776-825"];},
> > > > {name = "dptd";level = ["P826-875"];},
> > > > {name = "dptd";level = ["P876-912"];},
> > > > {name = "dptd";level = ["P913-936"];},
> > > > {name = "dptd";level = ["P937-962"];},
> > > > {name = "dptd";level = ["P963-987"];},
> > > > {name = "dptd";level = ["P988-1006"];},
> > > > {name = "dptd";level = ["P1007-1013"];}
> > > >
> > > > And I have the data:
> > > >
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > >
> > > > for a particular DTG and vertical level. If I want to run
multiple
> > lead
> > > > times, it seems like I'll have to copy that long list of
fields for
> > each
> > > > lead time in the fcst dict and then duplicate the obs
dictionary so
> > that
> > > > each forecast entry has a corresponding obs level matching
range. Is
> > > this
> > > > correct or is there a shorter/better way to do this?
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > I see that you're plotting RMSE and bias (called ME for Mean
Error in
> > > MET)
> > > > in the plots you sent.
> > > >
> > > > Table 7.6 of the MET User's Guide (
> > > >
> > > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > )
> > > > describes the contents of the CNT line type type. Bot the
columns for
> > > RMSE
> > > > and ME are followed by _NCL and _NCU columns which give the
> parametric
> > > > approximation of the confidence interval for those scores. So
yes,
> you
> > > can
> > > > run Stat-Analysis to aggregate SL1L2 lines together and write
the
> > > > corresponding CNT output line type.
> > > >
> > > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper
> > parametric
> > > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> > columns
> > > > for the ME statistic.
> > > >
> > > > You can change the alpha value for those confidence intervals
by
> > setting:
> > > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> > > >
> > > > Thanks,
> > > > John
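Putting the two pieces of this exchange together, the aggregation job with an
explicit confidence level would look roughly like this; the -lookin path is a
placeholder:

    stat_analysis -lookin /path/to/stat/data \
      -job aggregate_stat -line_type SL1L2 -out_line_type CNT \
      -by FCST_VAR,FCST_LEV,FCST_LEAD \
      -out_alpha 0.05 -out_stat cnt_out.stat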
> > > >
> > > >
> > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Thanks John,
> > > > >
> > > > > This all helps me greatly. One more questions: is there any
> > > information
> > > > > in either the CNT or SL1L2 that could give me confidence
intervals
> > for
> > > > > each data point? I'm looking to replicate the attached
plot.
> Notice
> > > > that
> > > > > the individual points could have either a 99, 95 or 90 %
> confidence.
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Sounds about right. Each time you run Grid-Stat or Point-
Stat you
> > can
> > > > > write the CNT output line type which contains stats like
MSE, ME,
> > MAE,
> > > > and
> > > > > RMSE. And I'm recommended that you also write the SL1L2
line type
> as
> > > > well.
> > > > >
> > > > > Then you'd run a stat_analysis job like this:
> > > > >
> > > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
> > -line_type
> > > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > > > cnt_out.stat
> > > > >
> > > > > This job reads any .stat files it finds in
"/path/to/stat/data",
> > reads
> > > > the
> > > > > SL1L2 line type, and for each unique combination of
FCST_VAR,
> > FCST_LEV,
> > > > and
> > > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
> together
> > > and
> > > > > write out the corresponding CNT line type to the output file
named
> > > > > cnt_out.stat.
> > > > >
> > > > > John
> > > > >
> > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu
> > > > > >
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > So if I understand what you're saying correctly, then if I
wanted
> > to
> > > an
> > > > > > average of 24 hour forecasts over a month long run, then I
would
> > use
> > > > the
> > > > > > SL1L2 output to aggregate and produce this average?
Whereas if I
> > > used
> > > > > CNT,
> > > > > > this would just provide me ~30 individual (per day over a
month)
> 24
> > > > hour
> > > > > > forecast verifications?
> > > > > >
> > > > > > On a side note, did we ever go over how to plot the SL1L2
MSE and
> > > > biases?
> > > > > > I am forgetting if we used stat_analysis to produce a plot
or if
> > the
> > > > plot
> > > > > > you showed me was just something you guys post processed
using
> > python
> > > > or
> > > > > > whatnot.
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > We wrote the SL1L2 partial sums from Point-Stat because
they can
> be
> > > > > > aggregated together by the stat-analysis tool over
multiple days
> or
> > > > > cases.
> > > > > >
> > > > > > If you're interested in continuous statistics from Point-
Stat,
> I'd
> > > > > > recommend writing the CNT line type (which has the stats
computed
> > for
> > > > > that
> > > > > > single run) and the SL1L2 line type (so that you can
aggregate
> them
> > > > > > together in stat-analysis or METviewer).
> > > > > >
> > > > > > The other alternative is looking at the average of the
daily
> > > statistics
> > > > > > scores. For RMSE, the average of the daily RMSE is equal
to the
> > > > > aggregated
> > > > > > score... as long as the number of matched pairs remains
constant
> > day
> > > to
> > > > > > day. But if one today you have 98 matched pairs and
tomorrow you
> > > have
> > > > > 105,
> > > > > > then tomorrow's score will have slightly more weight. The
SL1L2
> > > lines
> > > > > are
> > > > > > aggregated as weighted averages, where the TOTAL column is
the
> > > weight.
> > > > > And
> > > > > > then stats (like RMSE and MSE) are recomputed from those
> aggregated
> > > > > > scores. Generally, the statisticians recommend this
method over
> > the
> > > > mean
> > > > > > of the daily scores. Neither is "wrong", they just give
you
> > slightly
> > > > > > different information.
> > > > > >
> > > > > > Thanks,
> > > > > > John
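For reference, the recomputation described above can be written out
explicitly. These are the standard SL1L2 relationships rather than formulas
quoted from this thread: each of FBAR, OBAR, FOBAR, FFBAR, and OOBAR is
aggregated as a TOTAL-weighted mean, sum(TOTAL_i * value_i) / sum(TOTAL_i),
and then

    ME   = FBAR - OBAR
    MSE  = FFBAR - 2*FOBAR + OOBAR
    RMSE = sqrt(MSE)

which is why the aggregated RMSE differs slightly from the mean of the daily
RMSE values whenever the number of matched pairs changes from day to day.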
> > > > > >
> > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Thanks John.
> > > > > > >
> > > > > > > Sorry it's taken me such a long time to get to this.
It's
> > nearing
> > > > the
> > > > > > end
> > > > > > > of FY19 so I have been finalizing several transition
projects
> and
> > > > > haven’t
> > > > > > > had much time to work on MET recently. I just picked
this back
> > up
> > > > and
> > > > > > have
> > > > > > > loaded a couple new modules. Here is what I have to
work with
> > now:
> > > > > > >
> > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > >
> > > > > > >
> > > > > > > Running
> > > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc
dwptdpConfig -v
> 3
> > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >>
log.out
> > > > > > >
> > > > > > > I get many matched pairs. Here is a sample of what the
log
> file
> > > > looks
> > > > > > > like for one of the pressure ranges I am verifying on:
> > > > > > >
> > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
> dptd/P425-376,
> > > for
> > > > > > > observation type radiosonde, over region FULL, for
> interpolation
> > > > method
> > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > >=0,
> > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > > UNION.
> > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=10.0, observation filtering threshold >=10.0, and
field logic
> > > > UNION.
> > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > >=0,
> > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > > UNION.
> > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=10.0, observation filtering threshold >=10.0, and
field logic
> > > > UNION.
> > > > > > > 15280 DEBUG 2:
> > > > > > > 15281 DEBUG 2:
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > > >
> > > > > > > I am going to work on processing these point stat files
to
> create
> > > > those
> > > > > > > vertical raob plots we had a discussion about. I
remember us
> > > talking
> > > > > > about
> > > > > > > the partial sums file. Why did we choose to go the
route of
> > > > producing
> > > > > > > partial sums then feeding that into series analysis to
generate
> > > bias
> > > > > and
> > > > > > > MSE? It looks like bias and MSE both exist within the
CNT line
> > > type
> > > > > > (MBIAS
> > > > > > > and MSE)?
> > > > > > >
> > > > > > >
> > > > > > > Justin
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > Great, thanks for sending me the sample data. Yes, I
was able
> to
> > > > > > replicate
> > > > > > > the segfault. The good news is that this is caused by a
simple
> > > typo
> > > > > > that's
> > > > > > > easy to fix. If you look in the "obs.field" entry of
the
> > > > relhumConfig
> > > > > > > file, you'll see an empty string for the last field
listed:
> > > > > > >
> > > > > > > *obs = { field = [*
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > * ... {name = "dptd";level = ["P988-
1006"];},
> > > > > > {name =
> > > > > > > "";level = ["P1007-1013"];} ];*
> > > > > > > If you change that empty string to "dptd", the segfault
will go
> > > > away:*
> > > > > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > > > > Rerunning met-8.0 with that change, Point-Stat ran to
> completion
> > > (in
> > > > 2
> > > > > > > minutes 48 seconds on my desktop machine), but it
produced 0
> > > matched
> > > > > > > pairs. They were discarded because of the valid times
(seen
> > using
> > > > -v 3
> > > > > > > command line option to Point-Stat). The ob file you
sent is
> > named
> > > "
> > > > > > > raob_2015020412.nc" but the actual times in that file
are for
> > > > > > > "20190426_120000":
> > > > > > >
> > > > > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> > > > http://raob_2015020412.nc
> > > > > >*
> > > > > > >
> > > > > > > * hdr_vld_table = "20190426_120000" ;*
> > > > > > >
> > > > > > > So please be aware of that discrepancy. To just produce
some
> > > matched
> > > > > > > pairs, I told Point-Stat to use the valid times of the
data:
> > > > > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
> 20190426_120000
> > > > > > > -obs_valid_end 20190426_120000*
> > > > > > >
> > > > > > > But I still get 0 matched pairs. This time, it's
because of
> bad
> > > > > forecast
> > > > > > > values:
> > > > > > > *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > > >
> > > > > > > Taking a step back... let's run one of these fields
through
> > > > > > > plot_data_plane, which results in an error:
> > > > > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps <
> > http://plot.ps>
> > > > > > > 'name="./read_NRL_binary.py
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > > > > ERROR : DataPlane::two_to_one() -> range check error:
(Nx,
> Ny) =
> > > > (97,
> > > > > > 97),
> > > > > > > (x, y) = (97, 0)
> > > > > > >
> > > > > > > While the numpy object is 97x97, the grid is specified
as being
> > > > 118x118
> > > > > > in
> > > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > > >
> > > > > > > Just to get something working, I modified the nx and ny
in the
> > > python
> > > > > > > script:
> > > > > > > 'nx':97,
> > > > > > > 'ny':97,
> > > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > > >
> > > > > > > So I'd suggest...
> > > > > > > - Fix the typo in the config file.
> > > > > > > - Figure out the discrepancy between the obs file name
> timestamp
> > > and
> > > > > the
> > > > > > > data in that file.
> > > > > > > - Make sure the grid information is consistent with the
data in
> > the
> > > > > > python
> > > > > > > script.
> > > > > > >
> > > > > > > Obviously though, we don't want to code to be
segfaulting in
> any
> > > > > > > condition. So next, I tested using met-8.1 with that
empty
> > string.
> > > > > This
> > > > > > > time it does run with no segfault, but prints a warning
about
> the
> > > > empty
> > > > > > > string.
> > > > > > >
> > > > > > > Hope that helps.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > Hey John,
> > > > > > > >
> > > > > > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > > > > > >
> > > > > > > > I am running met-8.0/met-8.0-with-grib2-support and
have
> > > provided
> > > > > > > > everything
> > > > > > > > on that list you've provided me. Let me know if
you're able
> to
> > > > > > replicate
> > > > > > > > it
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> faulting
> > > > > > > >
> > > > > > > > Justin,
> > > > > > > >
> > > > > > > > Well that doesn't seem to be very helpful of Point-
Stat at
> all.
> > > > > There
> > > > > > > > isn't much jumping out at me from the log messages you
sent.
> > In
> > > > > fact,
> > > > > > I
> > > > > > > > hunted around for the DEBUG(7) log message but
couldn't find
> > > where
> > > > in
> > > > > > the
> > > > > > > > code it's being written. Are you able to send me some
sample
> > > data
> > > > to
> > > > > > > > replicate this behavior?
> > > > > > > >
> > > > > > > > I'd need to know...
> > > > > > > > - What version of MET are you running.
> > > > > > > > - A copy of your Point-Stat config file.
> > > > > > > > - The python script that you're running.
> > > > > > > > - The input file for that python script.
> > > > > > > > - The NetCDF point observation file you're passing to
> > Point-Stat.
> > > > > > > >
> > > > > > > > If I can replicate the behavior here, it should be
easy to
> run
> > it
> > > > in
> > > > > > the
> > > > > > > > debugger and figure it out.
> > > > > > > >
> > > > > > > > You can post data to our anonymous ftp site as
described in
> > "How
> > > to
> > > > > > send
> > > > > > > us
> > > > > > > > data":
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > John
> > > > > > > >
> > > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT
<
> > > > > > > met_help at ucar.edu>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted
upon.
> > > > > > > > > Transaction: Ticket created by
justin.tsu at nrlmry.navy.mil
> > > > > > > > > Queue: met_help
> > > > > > > > > Subject: point_stat seg faulting
> > > > > > > > > Owner: Nobody
> > > > > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > > > Status: new
> > > > > > > > > Ticket <URL:
> > > > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Hey John,
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > I'm trying to extrapolate the production of vertical
raob
> > > > > > verification
> > > > > > > > > plots
> > > > > > > > > using point_stat and stat_analysis like we did
together for
> > > winds
> > > > > but
> > > > > > > for
> > > > > > > > > relative humidity now. But when I run point_stat,
it seg
> > > faults
> > > > > > > without
> > > > > > > > > much explanation
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > ----
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > > >
> > > > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast
levels, 0
> > > > > climatology
> > > > > > > > mean
> > > > > > > > > levels, and 0 climatology standard deviation levels.
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > ----
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > > DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > > > > >
> > > > > > > > > DEBUG 7: tbl dims: messge_type: 1 station id:
617
> > > > > valid_time: 1
> > > > > > > > >
> > > > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
> > point_stat
> > > > > > > > > PYTHON_NUMPY
> > > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat
-log
> > > > > > > > > ./out/point_stat.log
> > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > From my log file:
> > > > > > > > >
> > > > > > > > > 607 DEBUG 2:
> > > > > > > > >
> > > > > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
> > messages.
> > > > > > > > >
> > > > > > > > > 609 DEBUG 7: tbl dims: messge_type: 1 station
id: 617
> > > > > > > valid_time: 1
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Any help would be much appreciated
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Justin
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Justin Tsu
> > > > > > > > >
> > > > > > > > > Marine Meteorology Division
> > > > > > > > >
> > > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > > >
> > > > > > > > > Building 704 Room 212
> > > > > > > > >
> > > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > > >
> > > > > > > > > 7 Grace Hopper Avenue
> > > > > > > > >
> > > > > > > > > Monterey, CA 93943-5502
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Ph. (831) 656-4111
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Wed Oct 16 18:20:01 2019
Hi John,
I also created my own grib table file named grib1_nrl_v2_2.txt and
added the following:
[tsu at maury2 01_POINT_STAT_WORK]$ tail -5 grib1_nrl_v2_2.txt
256 128 98 -1 "wdir" "NRL WIND DIRECTION"
256 128 98 -1 "t" "NRL TEMPERATURE"
256 128 98 -1 "dptd" "NRL DEWPOINT DEPRESSION"
256 128 98 -1 "pres" "NRL PRESSURE"
256 128 98 -1 "ght" "NRL GEOPOTENTIAL"
These are the names of the variables I am using in my netCDF file.
After setting export MET_GRIB_TABLES=$(pwd) and then running point_stat, I get:
ERROR :
ERROR : get_filenames_from_dir() -> can't stat
"/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
ERROR :
Justin
-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Wednesday, October 2, 2019 11:14 AM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
Justin,
This means that you're requesting a variable named "dptd" in the Point-Stat
config file. MET looks for a definition of that string in its default
GRIB1 tables:

grep dptd met-8.1/share/met/table_files/*

But that returns 0 matches. So this error message is telling you that MET
doesn't know how to interpret that variable name.

Here's what I'd suggest:

(1) Run the input GRIB1 file through the "wgrib" utility. If "wgrib" knows
about this variable, it will report the name... and most likely, that's the
same name that MET will know. If so, switch from using "dptd" to using
whatever name wgrib reports.

(2) If "wgrib" does NOT know about this variable, it'll just list out the
corresponding GRIB1 codes instead. That means we'll need to create a small
GRIB table to define these strings. Take a look in:

met-8.1/share/met/table_files

We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt", where CENTER
is the number encoded in your GRIB file to identify NRL and PTV is the
parameter table version number used in your GRIB file. In that file, you'll
define the mapping of GRIB1 codes to strings (like "dptd"). And for now,
we'll need to set the "MET_GRIB_TABLES" environment variable to the location
of that file. But in the long run, you can send me that file, and we'll add
it to the "table_files" directory to be included in the next release of MET.

If you have trouble creating a new GRIB table file, just let me know and
send me a sample GRIB file.
Thanks,
John
On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hi John,
>
> Apologies for taking such a long time getting back to you. End of
fiscal
> year things have consumed much of my time and I have not had much
time to
> work on any of this.
>
> Before proceeding to the planning process of determining how to call
> point_stat to deal with the vertical levels, I need to fix what is
going on
> with my GRIB1 variables. When I run point_stat, I keep getting this
error:
>
> DEBUG 1: Default Config File:
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> DEBUG 1: User Config File: dwptdpConfig
> ERROR :
> ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> abbreviation 'dptd' for table version 2
> ERROR :
>
> I remember getting this before but don't remember how we fixed it.
> I am using met-8.1/met-8.1a-with-grib2-support
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Friday, September 13, 2019 3:46 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Sorry for the delay. I was in DC on travel this week until today.
>
> It's really up to you how you'd like to configure it. Unless it's
too
> unwieldy, I do think I'd try verifying all levels at once in a
single call
> to Point-Stat. All those observations are contained in the same
point
> observation file. If you verify each level in a separate call to
> Point-Stat, you'll be looping through and processing those obs many,
many
> times, which will be relatively slow. From a processing
perspective, it'd
> be more efficient to process them all at once, in a single call to
> Point-Stat.
>
> But you balance runtime efficiency versus ease of scripting and
> configuration. And that's why it's up to you to decide which you
prefer.
>
> Hope that helps.
>
> Thanks,
> John
>
> On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Hey John,
> >
> > That makes sense. The way that I've set up my config file is as
follows:
> > fcst = {
> > field = [
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> > ];
> > }
> > obs = {
> > field = [
> > {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> > ];
> > }
> > message_type = [ "${MSG_TYPE}" ];
> >
> > The environmental variables I'm setting in the wrapper script are
LEV,
> > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it
seems
> like I
> > will only be able to run point_Stat for a single elevation and a
single
> > lead time. Do you recommend this? Or Should I put all the
elevations
> for a
> > single lead time in one pass of point_stat?
> >
> > So my config file will look like something like this...
> > fcst = {
> > field = [
> > {name = "/users/tsu/MET/work/read_NRL_binary.py
> >
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > ... etc.
> > ];
> > }
> >
> > Also, I am not sure what happened by when I run point_stat now I
am
> > getting that error
> > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> > abbreviation 'dptd' for table version 2
> > Again. This makes me think that the obs_var name is wrong, but
ncdump
> -v
> > obs_var raob_*.nc gives me obs_var =
> > "ws",
> > "wdir",
> > "t",
> > "dptd",
> > "pres",
> > "ght" ;
> > So clearly dptd exists.
> >
> > Justin
> >
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, September 6, 2019 1:40 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Here's a sample Point-Stat output file name:
> > point_stat_360000L_20070331_120000V.stat
> >
> > The "360000L" indicates that this is output for a 36-hour
forecast. And
> > the "20070331_120000V" timestamp is the valid time.
> >
> > If you run Point-Stat once for each forecast lead time, the
timestamps
> > should be different and they should not clobber eachother.
> >
> > But let's say you don't want to run Point-Stat or Grid-Stat
multiple
> times
> > with the same timing info. The "output_prefix" config file entry
is used
> > to customize the output file names to prevent them from clobbering
> > eachother. For example, setting:
> > output_prefix="RUN1";
> > Would result in files named "
> > point_stat_RUN1_360000L_20070331_120000V.stat".
> >
> > Make sense?
> >
> > Thanks,
> > John
> >
> > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Invoking point_stat multiple times will create and replace the
old _cnt
> > > and _sl1l2 files right? At that point, I'll have a bunch of CNT
and
> > SL1L2
> > > files and then use stat_analysis to aggregate them?
> > >
> > > Justin
> > >
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, September 6, 2019 1:11 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Yes, that is a long list of fields, but I don't see a way
obvious way
> of
> > > shortening that. But to do multiple lead times, I'd just call
> Point-Stat
> > > multiple times, once for each lead time, and update the config
file to
> > use
> > > environment variables for the current time:
> > >
> > > fcst = {
> > > field = [
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > },
> > > ...
> > >
> > > Where the calling scripts sets the ${INIT_TIME} and ${FCST_HR}
> > environment
> > > variables.
> > >
> > > John
> > >
> > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Thanks John,
> > > >
> > > > I managed to scrap together some code to get RAOB stats from
CNT
> > plotted
> > > > with 95% CI. Working on Surface stats now.
> > > >
> > > > So my configuration file looks like this right now:
> > > >
> > > > fcst = {
> > > > field = [
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > >
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > > ];
> > > > }
> > > >
> > > > obs = {
> > > > field = [
> > > > {name = "dptd";level = ["P0.86-1.5"];},
> > > > {name = "dptd";level = ["P1.6-2.5"];},
> > > > {name = "dptd";level = ["P2.6-3.5"];},
> > > > {name = "dptd";level = ["P3.6-4.5"];},
> > > > {name = "dptd";level = ["P4.6-6"];},
> > > > {name = "dptd";level = ["P6.1-8"];},
> > > > {name = "dptd";level = ["P9-15"];},
> > > > {name = "dptd";level = ["P16-25"];},
> > > > {name = "dptd";level = ["P26-40"];},
> > > > {name = "dptd";level = ["P41-65"];},
> > > > {name = "dptd";level = ["P66-85"];},
> > > > {name = "dptd";level = ["P86-125"];},
> > > > {name = "dptd";level = ["P126-175"];},
> > > > {name = "dptd";level = ["P176-225"];},
> > > > {name = "dptd";level = ["P226-275"];},
> > > > {name = "dptd";level = ["P276-325"];},
> > > > {name = "dptd";level = ["P326-375"];},
> > > > {name = "dptd";level = ["P376-425"];},
> > > > {name = "dptd";level = ["P426-475"];},
> > > > {name = "dptd";level = ["P476-525"];},
> > > > {name = "dptd";level = ["P526-575"];},
> > > > {name = "dptd";level = ["P576-625"];},
> > > > {name = "dptd";level = ["P626-675"];},
> > > > {name = "dptd";level = ["P676-725"];},
> > > > {name = "dptd";level = ["P726-775"];},
> > > > {name = "dptd";level = ["P776-825"];},
> > > > {name = "dptd";level = ["P826-875"];},
> > > > {name = "dptd";level = ["P876-912"];},
> > > > {name = "dptd";level = ["P913-936"];},
> > > > {name = "dptd";level = ["P937-962"];},
> > > > {name = "dptd";level = ["P963-987"];},
> > > > {name = "dptd";level = ["P988-1006"];},
> > > > {name = "dptd";level = ["P1007-1013"];}
> > > >
> > > > And I have the data:
> > > >
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > >
> > > > for a particular DTG and vertical level. If I want to run
multiple
> > lead
> > > > times, it seems like I'll have to copy that long list of
fields for
> > each
> > > > lead time in the fcst dict and then duplicate the obs
dictionary so
> > that
> > > > each forecast entry has a corresponding obs level matching
range. Is
> > > this
> > > > correct or is there a shorter/better way to do this?
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > I see that you're plotting RMSE and bias (called ME for Mean
Error in
> > > MET)
> > > > in the plots you sent.
> > > >
> > > > Table 7.6 of the MET User's Guide (
> > > >
> > > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > )
> > > > describes the contents of the CNT line type type. Bot the
columns for
> > > RMSE
> > > > and ME are followed by _NCL and _NCU columns which give the
> parametric
> > > > approximation of the confidence interval for those scores. So
yes,
> you
> > > can
> > > > run Stat-Analysis to aggregate SL1L2 lines together and write
the
> > > > corresponding CNT output line type.
> > > >
> > > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper
> > parametric
> > > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> > columns
> > > > for the ME statistic.
> > > >
> > > > You can change the alpha value for those confidence intervals
by
> > setting:
> > > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > >
> > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu>
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Thanks John,
> > > > >
> > > > > This all helps me greatly. One more questions: is there any
> > > information
> > > > > in either the CNT or SL1L2 that could give me confidence
intervals
> > for
> > > > > each data point? I'm looking to replicate the attached
plot.
> Notice
> > > > that
> > > > > the individual points could have either a 99, 95 or 90 %
> confidence.
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Sounds about right. Each time you run Grid-Stat or Point-
Stat you
> > can
> > > > > write the CNT output line type which contains stats like
MSE, ME,
> > MAE,
> > > > and
> > > > > RMSE. And I'm recommended that you also write the SL1L2
line type
> as
> > > > well.
> > > > >
> > > > > Then you'd run a stat_analysis job like this:
> > > > >
> > > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat
> > -line_type
> > > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
-out_stat
> > > > > cnt_out.stat
> > > > >
> > > > > This job reads any .stat files it finds in
"/path/to/stat/data",
> > reads
> > > > the
> > > > > SL1L2 line type, and for each unique combination of
FCST_VAR,
> > FCST_LEV,
> > > > and
> > > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial sums
> together
> > > and
> > > > > write out the corresponding CNT line type to the output file
named
> > > > > cnt_out.stat.
> > > > >
> > > > > John
> > > > >
> > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu
> > > > > >
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > So if I understand what you're saying correctly, then if I
wanted
> > to
> > > an
> > > > > > average of 24 hour forecasts over a month long run, then I
would
> > use
> > > > the
> > > > > > SL1L2 output to aggregate and produce this average?
Whereas if I
> > > used
> > > > > CNT,
> > > > > > this would just provide me ~30 individual (per day over a
month)
> 24
> > > > hour
> > > > > > forecast verifications?
> > > > > >
> > > > > > On a side note, did we ever go over how to plot the SL1L2
MSE and
> > > > biases?
> > > > > > I am forgetting if we used stat_analysis to produce a plot
or if
> > the
> > > > plot
> > > > > > you showed me was just something you guys post processed
using
> > python
> > > > or
> > > > > > whatnot.
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > We wrote the SL1L2 partial sums from Point-Stat because
they can
> be
> > > > > > aggregated together by the stat-analysis tool over
multiple days
> or
> > > > > cases.
> > > > > >
> > > > > > If you're interested in continuous statistics from Point-
Stat,
> I'd
> > > > > > recommend writing the CNT line type (which has the stats
computed
> > for
> > > > > that
> > > > > > single run) and the SL1L2 line type (so that you can
aggregate
> them
> > > > > > together in stat-analysis or METviewer).
> > > > > >
> > > > > > The other alternative is looking at the average of the
daily
> > > statistics
> > > > > > scores. For RMSE, the average of the daily RMSE is equal
to the
> > > > > aggregated
> > > > > > score... as long as the number of matched pairs remains
constant
> > day
> > > to
> > > > > > day. But if one today you have 98 matched pairs and
tomorrow you
> > > have
> > > > > 105,
> > > > > > then tomorrow's score will have slightly more weight. The
SL1L2
> > > lines
> > > > > are
> > > > > > aggregated as weighted averages, where the TOTAL column is
the
> > > weight.
> > > > > And
> > > > > > then stats (like RMSE and MSE) are recomputed from those
> aggregated
> > > > > > scores. Generally, the statisticians recommend this
method over
> > the
> > > > mean
> > > > > > of the daily scores. Neither is "wrong", they just give
you
> > slightly
> > > > > > different information.
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Thanks John.
> > > > > > >
> > > > > > > Sorry it's taken me such a long time to get to this.
It's
> > nearing
> > > > the
> > > > > > end
> > > > > > > of FY19 so I have been finalizing several transition
projects
> and
> > > > > haven’t
> > > > > > > had much time to work on MET recently. I just picked
this back
> > up
> > > > and
> > > > > > have
> > > > > > > loaded a couple new modules. Here is what I have to
work with
> > now:
> > > > > > >
> > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > >
> > > > > > >
> > > > > > > Running
> > > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc
dwptdpConfig -v
> 3
> > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >>
log.out
> > > > > > >
> > > > > > > I get many matched pairs. Here is a sample of what the
log
> file
> > > > looks
> > > > > > > like for one of the pressure ranges I am verifying on:
> > > > > > >
> > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
> dptd/P425-376,
> > > for
> > > > > > > observation type radiosonde, over region FULL, for
> interpolation
> > > > method
> > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > >=0,
> > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > > UNION.
> > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=10.0, observation filtering threshold >=10.0, and
field logic
> > > > UNION.
> > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > >=0,
> > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=5.0, observation filtering threshold >=5.0, and field
logic
> > > UNION.
> > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > threshold
> > > > > > > >=10.0, observation filtering threshold >=10.0, and
field logic
> > > > UNION.
> > > > > > > 15280 DEBUG 2:
> > > > > > > 15281 DEBUG 2:
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > > >
> > > > > > > I am going to work on processing these point stat files
to
> create
> > > > those
> > > > > > > vertical raob plots we had a discussion about. I
remember us
> > > talking
> > > > > > about
> > > > > > > the partial sums file. Why did we choose to go the
route of
> > > > producing
> > > > > > > partial sums then feeding that into series analysis to
generate
> > > bias
> > > > > and
> > > > > > > MSE? It looks like bias and MSE both exist within the
CNT line
> > > type
> > > > > > (MBIAS
> > > > > > > and MSE)?
> > > > > > >
> > > > > > >
> > > > > > > Justin
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > Great, thanks for sending me the sample data. Yes, I
was able
> to
> > > > > > replicate
> > > > > > > the segfault. The good news is that this is caused by a
simple
> > > typo
> > > > > > that's
> > > > > > > easy to fix. If you look in the "obs.field" entry of
the
> > > > relhumConfig
> > > > > > > file, you'll see an empty string for the last field
listed:
> > > > > > >
> > > > > > > *obs = { field = [*
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > * ... {name = "dptd";level = ["P988-
1006"];},
> > > > > > {name =
> > > > > > > "";level = ["P1007-1013"];} ];*
> > > > > > > If you change that empty string to "dptd", the segfault
will go
> > > > away:*
> > > > > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > > > > Rerunning met-8.0 with that change, Point-Stat ran to
> completion
> > > (in
> > > > 2
> > > > > > > minutes 48 seconds on my desktop machine), but it
produced 0
> > > matched
> > > > > > > pairs. They were discarded because of the valid times
(seen
> > using
> > > > -v 3
> > > > > > > command line option to Point-Stat). The ob file you
sent is
> > named
> > > "
> > > > > > > raob_2015020412.nc" but the actual times in that file
are for
> > > > > > > "20190426_120000":
> > > > > > >
> > > > > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> > > > http://raob_2015020412.nc
> > > > > >*
> > > > > > >
> > > > > > > * hdr_vld_table = "20190426_120000" ;*
> > > > > > >
> > > > > > > So please be aware of that discrepancy. To just produce
some
> > > matched
> > > > > > > pairs, I told Point-Stat to use the valid times of the
data:
> > > > > > > *met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc
> > > > > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
> 20190426_120000
> > > > > > > -obs_valid_end 20190426_120000*
> > > > > > >
> > > > > > > But I still get 0 matched pairs. This time, it's
because of
> bad
> > > > > forecast
> > > > > > > values:
> > > > > > > *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > > >
> > > > > > > Taking a step back... let's run one of these fields
through
> > > > > > > plot_data_plane, which results in an error:
> > > > > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps <
> > http://plot.ps>
> > > > > > > 'name="./read_NRL_binary.py
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > > > > ERROR : DataPlane::two_to_one() -> range check error:
(Nx,
> Ny) =
> > > > (97,
> > > > > > 97),
> > > > > > > (x, y) = (97, 0)
> > > > > > >
> > > > > > > While the numpy object is 97x97, the grid is specified
as being
> > > > 118x118
> > > > > > in
> > > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > > >
> > > > > > > Just to get something working, I modified the nx and ny
in the
> > > python
> > > > > > > script:
> > > > > > > 'nx':97,
> > > > > > > 'ny':97,
> > > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > > >
> > > > > > > So I'd suggest...
> > > > > > > - Fix the typo in the config file.
> > > > > > > - Figure out the discrepancy between the obs file name
> timestamp
> > > and
> > > > > the
> > > > > > > data in that file.
> > > > > > > - Make sure the grid information is consistent with the
data in
> > the
> > > > > > python
> > > > > > > script.
> > > > > > >
> > > > > > > Obviously though, we don't want to code to be
segfaulting in
> any
> > > > > > > condition. So next, I tested using met-8.1 with that
empty
> > string.
> > > > > This
> > > > > > > time it does run with no segfault, but prints a warning
about
> the
> > > > empty
> > > > > > > string.
> > > > > > >
> > > > > > > Hope that helps.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > Hey John,
> > > > > > > >
> > > > > > > > Ive put my data in tsu_data_20190815/ under met_help.
> > > > > > > >
> > > > > > > > I am running met-8.0/met-8.0-with-grib2-support and
have
> > > provided
> > > > > > > > everything
> > > > > > > > on that list you've provided me. Let me know if
you're able
> to
> > > > > > replicate
> > > > > > > > it
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> faulting
> > > > > > > >
> > > > > > > > Justin,
> > > > > > > >
> > > > > > > > Well that doesn't seem to be very helpful of Point-
Stat at
> all.
> > > > > There
> > > > > > > > isn't much jumping out at me from the log messages you
sent.
> > In
> > > > > fact,
> > > > > > I
> > > > > > > > hunted around for the DEBUG(7) log message but
couldn't find
> > > where
> > > > in
> > > > > > the
> > > > > > > > code it's being written. Are you able to send me some
sample
> > > data
> > > > to
> > > > > > > > replicate this behavior?
> > > > > > > >
> > > > > > > > I'd need to know...
> > > > > > > > - What version of MET are you running.
> > > > > > > > - A copy of your Point-Stat config file.
> > > > > > > > - The python script that you're running.
> > > > > > > > - The input file for that python script.
> > > > > > > > - The NetCDF point observation file you're passing to
> > Point-Stat.
> > > > > > > >
> > > > > > > > If I can replicate the behavior here, it should be
easy to
> run
> > it
> > > > in
> > > > > > the
> > > > > > > > debugger and figure it out.
> > > > > > > >
> > > > > > > > You can post data to our anonymous ftp site as
described in
> > "How
> > > to
> > > > > > send
> > > > > > > us
> > > > > > > > data":
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > John
> > > > > > > >
> > > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via RT
<
> > > > > > > met_help at ucar.edu>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted
upon.
> > > > > > > > > Transaction: Ticket created by
justin.tsu at nrlmry.navy.mil
> > > > > > > > > Queue: met_help
> > > > > > > > > Subject: point_stat seg faulting
> > > > > > > > > Owner: Nobody
> > > > > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > > > Status: new
> > > > > > > > > Ticket <URL:
> > > > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Hey John,
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > I'm trying to extrapolate the production of vertical
raob
> > > > > > verification
> > > > > > > > > plots
> > > > > > > > > using point_stat and stat_analysis like we did
together for
> > > winds
> > > > > but
> > > > > > > for
> > > > > > > > > relative humidity now. But when I run point_stat,
it seg
> > > faults
> > > > > > > without
> > > > > > > > > much explanation
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > ----
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > > >
> > > > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast
levels, 0
> > > > > climatology
> > > > > > > > mean
> > > > > > > > > levels, and 0 climatology standard deviation levels.
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > ----
> > > > > > > > >
> > > > > > > > > DEBUG 2:
> > > > > > > > >
> > > > > > > > > DEBUG 2: Searching 4680328 observations from 617
messages.
> > > > > > > > >
> > > > > > > > > DEBUG 7: tbl dims: messge_type: 1 station id:
617
> > > > > valid_time: 1
> > > > > > > > >
> > > > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
> > point_stat
> > > > > > > > > PYTHON_NUMPY
> > > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat
-log
> > > > > > > > > ./out/point_stat.log
> > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > From my log file:
> > > > > > > > >
> > > > > > > > > 607 DEBUG 2:
> > > > > > > > >
> > > > > > > > > 608 DEBUG 2: Searching 4680328 observations from 617
> > messages.
> > > > > > > > >
> > > > > > > > > 609 DEBUG 7: tbl dims: messge_type: 1 station
id: 617
> > > > > > > valid_time: 1
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Any help would be much appreciated
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Justin
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Justin Tsu
> > > > > > > > >
> > > > > > > > > Marine Meteorology Division
> > > > > > > > >
> > > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > > >
> > > > > > > > > Building 704 Room 212
> > > > > > > > >
> > > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > > >
> > > > > > > > > 7 Grace Hopper Avenue
> > > > > > > > >
> > > > > > > > > Monterey, CA 93943-5502
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Ph. (831) 656-4111
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Thu Oct 17 09:26:06 2019
Justin,
When MET_GRIB_TABLES is set to a directory, MET tries to process all files
in that directory. Please instead set it explicitly to your single filename:
setenv MET_GRIB_TABLES `pwd`/grib1_nrl_v2_2.txt
... or ...
export MET_GRIB_TABLES=`pwd`/grib1_nrl_v2_2.txt
Does that work any better?
Thanks,
John
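
For reference, here is a minimal sketch of that setup as a bash snippet. It
reuses the observation file and config names that appear earlier in this
ticket (raob_2015020412.nc, dwptdpConfig), so adjust those for your own run:

  # Point MET at the single custom GRIB1 table, not its parent directory.
  export MET_GRIB_TABLES=`pwd`/grib1_nrl_v2_2.txt
  # Sanity check: the variable should name a regular file, not a directory.
  ls -l "$MET_GRIB_TABLES"
  # Then run Point-Stat as before.
  point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3 \
    -obs_valid_beg 20010101 -obs_valid_end 20200101

With MET_GRIB_TABLES naming that one file, MET should read just that table
rather than scanning the whole working directory, which appears to be what
tripped the get_filenames_from_dir() error quoted below.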
On Wed, Oct 16, 2019 at 6:20 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hi John,
>
> I also created my own grib table file named grib1_nrl_v2_2.txt and added
> the following:
>
> [tsu at maury2 01_POINT_STAT_WORK]$ tail -5 grib1_nrl_v2_2.txt
> 256 128 98 -1 "wdir" "NRL WIND DIRECTION"
> 256 128 98 -1 "t" "NRL TEMPERATURE"
> 256 128 98 -1 "dptd" "NRL DEWPOINT DEPRESSION"
> 256 128 98 -1 "pres" "NRL PRESSURE"
> 256 128 98 -1 "ght" "NRL GEOPOTENTIAL"
>
> These are the names of the variables I am using in my netcdf file.
> Setting export MET_GRIB_TABLES=$(pwd) and then running point_stat, I get:
>
> ERROR :
> ERROR : get_filenames_from_dir() -> can't stat
> "/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
> ERROR :
>
> Justin
>
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Wednesday, October 2, 2019 11:14 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> This means that you're requesting a variable named "dpdt" in the
> Point-Stat config file. MET looks for a definition of that string in its
> default GRIB1 tables:
> grep dpdt met-8.1/share/met/table_files/*
>
> But that returns 0 matches. So this error message is telling you that MET
> doesn't know how to interpret that variable name.
>
> Here's what I'd suggest:
> (1) Run the input GRIB1 file through the "wgrib" utility. If "wgrib" knows
> about this variable, it will report the name... and most likely, that's
> the same name that MET will know. If so, switch from using "dpdt" to using
> whatever name wgrib reports.
>
> (2) If "wgrib" does NOT know about this variable, it'll just list out the
> corresponding GRIB1 codes instead. That means we'll need to go create a
> small GRIB table to define these strings. Take a look in:
> met-8.1/share/met/table_files
>
> We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt" where
> CENTER is the number encoded in your GRIB file to define NRL and PTV is
> the parameter table version number used in your GRIB file. In that, you'll
> define the mapping of GRIB1 codes to strings (like "dpdt"). And for now,
> we'll need to set the "MET_GRIB_TABLES" environment variable to the
> location of that file. But in the long run, you can send me that file, and
> we'll add it to the "table_files" directory to be included in the next
> release of MET.
>
> If you have trouble creating a new GRIB table file, just let me know and
> send me a sample GRIB file.
>
> Thanks,
> John
>
>
> On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Hi John,
> >
> > Apologies for taking such a long time getting back to you. End of
fiscal
> > year things have consumed much of my time and I have not had much
time to
> > work on any of this.
> >
> > Before proceeding to the planning process of determining how to
call
> > point_stat to deal with the vertical levels, I need to fix what is
going
> on
> > with my GRIB1 variables. When I run point_stat, I keep getting
this
> error:
> >
> > DEBUG 1: Default Config File:
> >
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> > DEBUG 1: User Config File: dwptdpConfig
> > ERROR :
> > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> > abbreviation 'dptd' for table version 2
> > ERROR :
> >
> > I remember getting this before but don't remember how we fixed it.
> > I am using met-8.1/met-8.1a-with-grib2-support
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, September 13, 2019 3:46 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Sorry for the delay. I was in DC on travel this week until today.
> >
> > It's really up to you how you'd like to configure it. Unless it's
too
> > unwieldy, I do think I'd try verifying all levels at once in a
single
> call
> > to Point-Stat. All those observations are contained in the same
point
> > observation file. If you verify each level in a separate call to
> > Point-Stat, you'll be looping through and processing those obs
many, many
> > times, which will be relatively slow. From a processing
perspective,
> it'd
> > be more efficient to process them all at once, in a single call to
> > Point-Stat.
> >
> > But you have to balance runtime efficiency against ease of scripting and
> > configuration, and that's why it's up to you to decide which you prefer.
> >
> > Hope that helps.
> >
> > Thanks,
> > John
> >
> > On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Hey John,
> > >
> > > That makes sense. The way that I've set up my config file is as
> follows:
> > > fcst = {
> > > field = [
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> > > ];
> > > }
> > > obs = {
> > > field = [
> > > {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> > > ];
> > > }
> > > message_type = [ "${MSG_TYPE}" ];
> > >
> > > The environmental variables I'm setting in the wrapper script
are LEV,
> > > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it
seems
> > like I
> > > will only be able to run point_Stat for a single elevation and a
single
> > > lead time. Do you recommend this? Or Should I put all the
elevations
> > for a
> > > single lead time in one pass of point_stat?
> > >
> > > So my config file will look like something like this...
> > > fcst = {
> > > field = [
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > ... etc.
> > > ];
> > > }
> > >
> > > Also, I am not sure what happened, but when I run point_stat now I am
> > > getting that error
> > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > abbreviation 'dptd' for table version 2
> > > Again. This makes me think that the obs_var name is wrong, but
ncdump
> > -v
> > > obs_var raob_*.nc gives me obs_var =
> > > "ws",
> > > "wdir",
> > > "t",
> > > "dptd",
> > > "pres",
> > > "ght" ;
> > > So clearly dptd exists.
> > >
> > > Justin
> > >
> > >
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, September 6, 2019 1:40 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Here's a sample Point-Stat output file name:
> > > point_stat_360000L_20070331_120000V.stat
> > >
> > > The "360000L" indicates that this is output for a 36-hour
forecast.
> And
> > > the "20070331_120000V" timestamp is the valid time.
> > >
> > > If you run Point-Stat once for each forecast lead time, the
timestamps
> > > should be different and they should not clobber each other.
> > >
> > > But let's say you don't want to run Point-Stat or Grid-Stat
multiple
> > times
> > > with the same timing info. The "output_prefix" config file
entry is
> used
> > > to customize the output file names to prevent them from
clobbering
> > > each other. For example, setting:
> > > output_prefix="RUN1";
> > > Would result in files named "
> > > point_stat_RUN1_360000L_20070331_120000V.stat".
> > >
> > > Make sense?
> > >
> > > Thanks,
> > > John
> > >
> > > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Invoking point_stat multiple times will create and replace the
old
> _cnt
> > > > and _sl1l2 files right? At that point, I'll have a bunch of
CNT and
> > > SL1L2
> > > > files and then use stat_analysis to aggregate them?
> > > >
> > > > Justin
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, September 6, 2019 1:11 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Yes, that is a long list of fields, but I don't see a way
obvious way
> > of
> > > > shortening that. But to do multiple lead times, I'd just call
> > Point-Stat
> > > > multiple times, once for each lead time, and update the config
file
> to
> > > use
> > > > environment variables for the current time:
> > > >
> > > > fcst = {
> > > > field = [
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > },
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > },
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > },
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > },
> > > > ...
> > > >
> > > > Where the calling scripts sets the ${INIT_TIME} and ${FCST_HR}
> > > environment
> > > > variables.
> > > >
> > > > John
> > > >
> > > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu
> > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Thanks John,
> > > > >
> > > > > I managed to scrap together some code to get RAOB stats from
CNT
> > > plotted
> > > > > with 95% CI. Working on Surface stats now.
> > > > >
> > > > > So my configuration file looks like this right now:
> > > > >
> > > > > fcst = {
> > > > > field = [
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > >
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > > > ];
> > > > > }
> > > > >
> > > > > obs = {
> > > > > field = [
> > > > > {name = "dptd";level = ["P0.86-1.5"];},
> > > > > {name = "dptd";level = ["P1.6-2.5"];},
> > > > > {name = "dptd";level = ["P2.6-3.5"];},
> > > > > {name = "dptd";level = ["P3.6-4.5"];},
> > > > > {name = "dptd";level = ["P4.6-6"];},
> > > > > {name = "dptd";level = ["P6.1-8"];},
> > > > > {name = "dptd";level = ["P9-15"];},
> > > > > {name = "dptd";level = ["P16-25"];},
> > > > > {name = "dptd";level = ["P26-40"];},
> > > > > {name = "dptd";level = ["P41-65"];},
> > > > > {name = "dptd";level = ["P66-85"];},
> > > > > {name = "dptd";level = ["P86-125"];},
> > > > > {name = "dptd";level = ["P126-175"];},
> > > > > {name = "dptd";level = ["P176-225"];},
> > > > > {name = "dptd";level = ["P226-275"];},
> > > > > {name = "dptd";level = ["P276-325"];},
> > > > > {name = "dptd";level = ["P326-375"];},
> > > > > {name = "dptd";level = ["P376-425"];},
> > > > > {name = "dptd";level = ["P426-475"];},
> > > > > {name = "dptd";level = ["P476-525"];},
> > > > > {name = "dptd";level = ["P526-575"];},
> > > > > {name = "dptd";level = ["P576-625"];},
> > > > > {name = "dptd";level = ["P626-675"];},
> > > > > {name = "dptd";level = ["P676-725"];},
> > > > > {name = "dptd";level = ["P726-775"];},
> > > > > {name = "dptd";level = ["P776-825"];},
> > > > > {name = "dptd";level = ["P826-875"];},
> > > > > {name = "dptd";level = ["P876-912"];},
> > > > > {name = "dptd";level = ["P913-936"];},
> > > > > {name = "dptd";level = ["P937-962"];},
> > > > > {name = "dptd";level = ["P963-987"];},
> > > > > {name = "dptd";level = ["P988-1006"];},
> > > > > {name = "dptd";level = ["P1007-1013"];}
> > > > >
> > > > > And I have the data:
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > > >
> > > > > for a particular DTG and vertical level. If I want to run
multiple
> > > lead
> > > > > times, it seems like I'll have to copy that long list of
fields for
> > > each
> > > > > lead time in the fcst dict and then duplicate the obs
dictionary so
> > > that
> > > > > each forecast entry has a corresponding obs level matching
range.
> Is
> > > > this
> > > > > correct or is there a shorter/better way to do this?
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > I see that you're plotting RMSE and bias (called ME for Mean
Error
> in
> > > > MET)
> > > > > in the plots you sent.
> > > > >
> > > > > Table 7.6 of the MET User's Guide (
> > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > > )
> > > > > describes the contents of the CNT line type type. Bot the
columns
> for
> > > > RMSE
> > > > > and ME are followed by _NCL and _NCU columns which give the
> > parametric
> > > > > approximation of the confidence interval for those scores.
So yes,
> > you
> > > > can
> > > > > run Stat-Analysis to aggregate SL1L2 lines together and
write the
> > > > > corresponding CNT output line type.
> > > > >
> > > > > The RMSE_NCL and RMSE_NCU columns contain the lower and
upper
> > > parametric
> > > > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> > > columns
> > > > > for the ME statistic.
> > > > >
> > > > > You can change the alpha value for those confidence
intervals by
> > > setting:
> > > > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95%
CI).
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > >
> > > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Thanks John,
> > > > > >
> > > > > > This all helps me greatly. One more questions: is there
any
> > > > information
> > > > > > in either the CNT or SL1L2 that could give me confidence
> intervals
> > > for
> > > > > > each data point? I'm looking to replicate the attached
plot.
> > Notice
> > > > > that
> > > > > > the individual points could have either a 99, 95 or 90 %
> > confidence.
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Sounds about right. Each time you run Grid-Stat or Point-
Stat
> you
> > > can
> > > > > > write the CNT output line type which contains stats like
MSE, ME,
> > > MAE,
> > > > > and
> > > > > > RMSE. And I'm recommended that you also write the SL1L2
line
> type
> > as
> > > > > well.
> > > > > >
> > > > > > Then you'd run a stat_analysis job like this:
> > > > > >
> > > > > > stat_analysis -lookin /path/to/stat/data -job
aggregate_stat
> > > -line_type
> > > > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
> -out_stat
> > > > > > cnt_out.stat
> > > > > >
> > > > > > This job reads any .stat files it finds in
"/path/to/stat/data",
> > > reads
> > > > > the
> > > > > > SL1L2 line type, and for each unique combination of
FCST_VAR,
> > > FCST_LEV,
> > > > > and
> > > > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial
sums
> > together
> > > > and
> > > > > > write out the corresponding CNT line type to the output
file
> named
> > > > > > cnt_out.stat.
> > > > > >
> > > > > > John
> > > > > >
> > > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu
> > > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > So if I understand what you're saying correctly, then if
I
> wanted
> > > to
> > > > an
> > > > > > > average of 24 hour forecasts over a month long run, then
I
> would
> > > use
> > > > > the
> > > > > > > SL1L2 output to aggregate and produce this average?
Whereas
> if I
> > > > used
> > > > > > CNT,
> > > > > > > this would just provide me ~30 individual (per day over
a
> month)
> > 24
> > > > > hour
> > > > > > > forecast verifications?
> > > > > > >
> > > > > > > On a side note, did we ever go over how to plot the
SL1L2 MSE
> and
> > > > > biases?
> > > > > > > I am forgetting if we used stat_analysis to produce a
plot or
> if
> > > the
> > > > > plot
> > > > > > > you showed me was just something you guys post processed
using
> > > python
> > > > > or
> > > > > > > whatnot.
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > We wrote the SL1L2 partial sums from Point-Stat because
they
> can
> > be
> > > > > > > aggregated together by the stat-analysis tool over
multiple
> days
> > or
> > > > > > cases.
> > > > > > >
> > > > > > > If you're interested in continuous statistics from
Point-Stat,
> > I'd
> > > > > > > recommend writing the CNT line type (which has the stats
> computed
> > > for
> > > > > > that
> > > > > > > single run) and the SL1L2 line type (so that you can
aggregate
> > them
> > > > > > > together in stat-analysis or METviewer).
> > > > > > >
> > > > > > > The other alternative is looking at the average of the
daily
> > > > statistics
> > > > > > > scores. For RMSE, the average of the daily RMSE is
equal to
> the
> > > > > > aggregated
> > > > > > > score... as long as the number of matched pairs remains
> constant
> > > day
> > > > to
> > > > > > > day. But if one today you have 98 matched pairs and
tomorrow
> you
> > > > have
> > > > > > 105,
> > > > > > > then tomorrow's score will have slightly more weight.
The
> SL1L2
> > > > lines
> > > > > > are
> > > > > > > aggregated as weighted averages, where the TOTAL column
is the
> > > > weight.
> > > > > > And
> > > > > > > then stats (like RMSE and MSE) are recomputed from those
> > aggregated
> > > > > > > scores. Generally, the statisticians recommend this
method
> over
> > > the
> > > > > mean
> > > > > > > of the daily scores. Neither is "wrong", they just give
you
> > > slightly
> > > > > > > different information.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > Thanks John.
> > > > > > > >
> > > > > > > > Sorry it's taken me such a long time to get to this.
It's
> > > nearing
> > > > > the
> > > > > > > end
> > > > > > > > of FY19 so I have been finalizing several transition
projects
> > and
> > > > > > haven’t
> > > > > > > > had much time to work on MET recently. I just picked
this
> back
> > > up
> > > > > and
> > > > > > > have
> > > > > > > > loaded a couple new modules. Here is what I have to
work
> with
> > > now:
> > > > > > > >
> > > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > > >
> > > > > > > >
> > > > > > > > Running
> > > > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc
dwptdpConfig
> -v
> > 3
> > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >>
log.out
> > > > > > > >
> > > > > > > > I get many matched pairs. Here is a sample of what
the log
> > file
> > > > > looks
> > > > > > > > like for one of the pressure ranges I am verifying on:
> > > > > > > >
> > > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
> > dptd/P425-376,
> > > > for
> > > > > > > > observation type radiosonde, over region FULL, for
> > interpolation
> > > > > method
> > > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > >=0,
> > > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > > > >=5.0, observation filtering threshold >=5.0, and
field logic
> > > > UNION.
> > > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > > > >=10.0, observation filtering threshold >=10.0, and
field
> logic
> > > > > UNION.
> > > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > >=0,
> > > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > > > >=5.0, observation filtering threshold >=5.0, and
field logic
> > > > UNION.
> > > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > > > >=10.0, observation filtering threshold >=10.0, and
field
> logic
> > > > > UNION.
> > > > > > > > 15280 DEBUG 2:
> > > > > > > > 15281 DEBUG 2:
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > > > >
> > > > > > > > I am going to work on processing these point stat
files to
> > create
> > > > > those
> > > > > > > > vertical raob plots we had a discussion about. I
remember us
> > > > talking
> > > > > > > about
> > > > > > > > the partial sums file. Why did we choose to go the
route of
> > > > > producing
> > > > > > > > partial sums then feeding that into series analysis to
> generate
> > > > bias
> > > > > > and
> > > > > > > > MSE? It looks like bias and MSE both exist within the
CNT
> line
> > > > type
> > > > > > > (MBIAS
> > > > > > > > and MSE)?
> > > > > > > >
> > > > > > > >
> > > > > > > > Justin
> > > > > > > > -----Original Message-----
> > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> faulting
> > > > > > > >
> > > > > > > > Justin,
> > > > > > > >
> > > > > > > > Great, thanks for sending me the sample data. Yes, I
was
> able
> > to
> > > > > > > replicate
> > > > > > > > the segfault. The good news is that this is caused by
a
> simple
> > > > typo
> > > > > > > that's
> > > > > > > > easy to fix. If you look in the "obs.field" entry of
the
> > > > > relhumConfig
> > > > > > > > file, you'll see an empty string for the last field
listed:
> > > > > > > >
> > > > > > > > *obs = { field = [*
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > * ... {name = "dptd";level = ["P988-
1006"];},
> > > > > > > {name =
> > > > > > > > "";level = ["P1007-1013"];} ];*
> > > > > > > > If you change that empty string to "dptd", the
segfault will
> go
> > > > > away:*
> > > > > > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > > > > > Rerunning met-8.0 with that change, Point-Stat ran to
> > completion
> > > > (in
> > > > > 2
> > > > > > > > minutes 48 seconds on my desktop machine), but it
produced 0
> > > > matched
> > > > > > > > pairs. They were discarded because of the valid times
(seen
> > > using
> > > > > -v 3
> > > > > > > > command line option to Point-Stat). The ob file you
sent is
> > > named
> > > > "
> > > > > > > > raob_2015020412.nc" but the actual times in that file
are
> for
> > > > > > > > "20190426_120000":
> > > > > > > >
> > > > > > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> > > > > http://raob_2015020412.nc
> > > > > > >*
> > > > > > > >
> > > > > > > > * hdr_vld_table = "20190426_120000" ;*
> > > > > > > >
> > > > > > > > So please be aware of that discrepancy. To just
produce some
> > > > matched
> > > > > > > > pairs, I told Point-Stat to use the valid times of the
data:
> > > > > > > > *met-8.0/bin/point_stat PYTHON_NUMPY
raob_2015020412.nc
> > > > > > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > > > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
> > 20190426_120000
> > > > > > > > -obs_valid_end 20190426_120000*
> > > > > > > >
> > > > > > > > But I still get 0 matched pairs. This time, it's
because of
> > bad
> > > > > > forecast
> > > > > > > > values:
> > > > > > > > *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > > > >
> > > > > > > > Taking a step back... let's run one of these fields
through
> > > > > > > > plot_data_plane, which results in an error:
> > > > > > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps <
> > > http://plot.ps>
> > > > > > > > 'name="./read_NRL_binary.py
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > > > > > ERROR : DataPlane::two_to_one() -> range check error:
(Nx,
> > Ny) =
> > > > > (97,
> > > > > > > 97),
> > > > > > > > (x, y) = (97, 0)
> > > > > > > >
> > > > > > > > While the numpy object is 97x97, the grid is specified
as
> being
> > > > > 118x118
> > > > > > > in
> > > > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > > > >
> > > > > > > > Just to get something working, I modified the nx and
ny in
> the
> > > > python
> > > > > > > > script:
> > > > > > > > 'nx':97,
> > > > > > > > 'ny':97,
> > > > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > > > >
> > > > > > > > So I'd suggest...
> > > > > > > > - Fix the typo in the config file.
> > > > > > > > - Figure out the discrepancy between the obs file name
> > timestamp
> > > > and
> > > > > > the
> > > > > > > > data in that file.
> > > > > > > > - Make sure the grid information is consistent with
the data
> in
> > > the
> > > > > > > python
> > > > > > > > script.
> > > > > > > >
> > > > > > > > Obviously though, we don't want to code to be
segfaulting in
> > any
> > > > > > > > condition. So next, I tested using met-8.1 with that
empty
> > > string.
> > > > > > This
> > > > > > > > time it does run with no segfault, but prints a
warning about
> > the
> > > > > empty
> > > > > > > > string.
> > > > > > > >
> > > > > > > > Hope that helps.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > John
> > > > > > > >
> > > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT
<
> > > > > > > met_help at ucar.edu>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > <URL:
> > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > >
> > > > > > > > >
> > > > > > > > > Hey John,
> > > > > > > > >
> > > > > > > > > Ive put my data in tsu_data_20190815/ under
met_help.
> > > > > > > > >
> > > > > > > > > I am running met-8.0/met-8.0-with-grib2-support and
have
> > > > provided
> > > > > > > > > everything
> > > > > > > > > on that list you've provided me. Let me know if
you're
> able
> > to
> > > > > > > replicate
> > > > > > > > > it
> > > > > > > > >
> > > > > > > > > Justin
> > > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> > faulting
> > > > > > > > >
> > > > > > > > > Justin,
> > > > > > > > >
> > > > > > > > > Well that doesn't seem to be very helpful of Point-
Stat at
> > all.
> > > > > > There
> > > > > > > > > isn't much jumping out at me from the log messages
you
> sent.
> > > In
> > > > > > fact,
> > > > > > > I
> > > > > > > > > hunted around for the DEBUG(7) log message but
couldn't
> find
> > > > where
> > > > > in
> > > > > > > the
> > > > > > > > > code it's being written. Are you able to send me
some
> sample
> > > > data
> > > > > to
> > > > > > > > > replicate this behavior?
> > > > > > > > >
> > > > > > > > > I'd need to know...
> > > > > > > > > - What version of MET are you running.
> > > > > > > > > - A copy of your Point-Stat config file.
> > > > > > > > > - The python script that you're running.
> > > > > > > > > - The input file for that python script.
> > > > > > > > > - The NetCDF point observation file you're passing
to
> > > Point-Stat.
> > > > > > > > >
> > > > > > > > > If I can replicate the behavior here, it should be
easy to
> > run
> > > it
> > > > > in
> > > > > > > the
> > > > > > > > > debugger and figure it out.
> > > > > > > > >
> > > > > > > > > You can post data to our anonymous ftp site as
described in
> > > "How
> > > > to
> > > > > > > send
> > > > > > > > us
> > > > > > > > > data":
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > John
> > > > > > > > >
> > > > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via
RT <
> > > > > > > > met_help at ucar.edu>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted
upon.
> > > > > > > > > > Transaction: Ticket created by
> justin.tsu at nrlmry.navy.mil
> > > > > > > > > > Queue: met_help
> > > > > > > > > > Subject: point_stat seg faulting
> > > > > > > > > > Owner: Nobody
> > > > > > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > > > > Status: new
> > > > > > > > > > Ticket <URL:
> > > > > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Hey John,
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > I'm trying to extrapolate the production of
vertical raob
> > > > > > > verification
> > > > > > > > > > plots
> > > > > > > > > > using point_stat and stat_analysis like we did
together
> for
> > > > winds
> > > > > > but
> > > > > > > > for
> > > > > > > > > > relative humidity now. But when I run point_stat,
it seg
> > > > faults
> > > > > > > > without
> > > > > > > > > > much explanation
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > DEBUG 2:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > ----
> > > > > > > > > >
> > > > > > > > > > DEBUG 2:
> > > > > > > > > >
> > > > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > > > >
> > > > > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast
levels, 0
> > > > > > climatology
> > > > > > > > > mean
> > > > > > > > > > levels, and 0 climatology standard deviation
levels.
> > > > > > > > > >
> > > > > > > > > > DEBUG 2:
> > > > > > > > > >
> > > > > > > > > > DEBUG 2:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > ----
> > > > > > > > > >
> > > > > > > > > > DEBUG 2:
> > > > > > > > > >
> > > > > > > > > > DEBUG 2: Searching 4680328 observations from 617
> messages.
> > > > > > > > > >
> > > > > > > > > > DEBUG 7: tbl dims: messge_type: 1 station id:
617
> > > > > > valid_time: 1
> > > > > > > > > >
> > > > > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
> > > point_stat
> > > > > > > > > > PYTHON_NUMPY
> > > > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat
-log
> > > > > > > > > > ./out/point_stat.log
> > > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > From my log file:
> > > > > > > > > >
> > > > > > > > > > 607 DEBUG 2:
> > > > > > > > > >
> > > > > > > > > > 608 DEBUG 2: Searching 4680328 observations from
617
> > > messages.
> > > > > > > > > >
> > > > > > > > > > 609 DEBUG 7: tbl dims: messge_type: 1 station
id:
> 617
> > > > > > > > valid_time: 1
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Any help would be much appreciated
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Justin
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Justin Tsu
> > > > > > > > > >
> > > > > > > > > > Marine Meteorology Division
> > > > > > > > > >
> > > > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > > > >
> > > > > > > > > > Building 704 Room 212
> > > > > > > > > >
> > > > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > > > >
> > > > > > > > > > 7 Grace Hopper Avenue
> > > > > > > > > >
> > > > > > > > > > Monterey, CA 93943-5502
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Ph. (831) 656-4111
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Thu Oct 17 11:33:33 2019
Unfortunately this did not fix it
[tsu at maury2 01_POINT_STAT_WORK]$ echo $MET_GRIB_TABLES
/users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
DEBUG 1: Reading user-defined grib1 MET_GRIB_TABLES file:
/users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
DEBUG 1: Default Config File: /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
DEBUG 1: User Config File: dwptdpConfig
ERROR :
ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
abbreviation 'dptd' for table version 2
ERROR :
Could it be an issue between GRIB 1 and GRIB 2? What about the fact
that I am using netCDF as my input data format?
Justin
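A quick check that may help here, sketched with the paths shown in this thread (the default-table location is inferred from the met-8.1a install path above, so treat it as an assumption): see where, if anywhere, 'dptd' is actually defined.

# Is 'dptd' defined in the user-supplied table?
grep '"dptd"' "$MET_GRIB_TABLES"
# Is it defined in any default GRIB1 table shipped with MET?
grep -l '"dptd"' /software/depot/met-8.1a/met-8.1a/share/met/table_files/grib1_*.txt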
-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Thursday, October 17, 2019 8:26 AM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
Justin,
When MET_GRIB_TABLES is set to a directory, MET tries to process all
files
in that directory. Please try to instead set it explicitly to your
single
filename:
setenv MET_GRIB_TABLES `pwd`/grib1_nrl_v2_2.txt
... or ...
export MET_GRIB_TABLES=`pwd`/grib1_nrl_v2_2.txt
Does that work any better?
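A small sanity check along the same lines, sketched in shell with the file name used in this thread:

export MET_GRIB_TABLES=`pwd`/grib1_nrl_v2_2.txt
# The variable should name the table file itself, not the directory holding it.
test -f "$MET_GRIB_TABLES" && echo "OK: regular file" || echo "check MET_GRIB_TABLES"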
Thanks,
John
On Wed, Oct 16, 2019 at 6:20 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hi John,
>
> I also created my own grib table file named grib1_nrl_v2_2.txt and
added
> the following:
>
> [tsu at maury2 01_POINT_STAT_WORK]$ tail -5 grib1_nrl_v2_2.txt
> 256 128 98 -1 "wdir" "NRL WIND DIRECTION"
> 256 128 98 -1 "t" "NRL TEMPERATURE"
> 256 128 98 -1 "dptd" "NRL DEWPOINT DEPRESSION"
> 256 128 98 -1 "pres" "NRL PRESSURE"
> 256 128 98 -1 "ght" "NRL GEOPOTENTIAL"
>
> Which are the names of the variables I am using in my netcdf file.
> Setting export MET_GRIB_TABLES=$(pwd) then running point_stat I get:
>
> ERROR :
> ERROR : get_filenames_from_dir() -> can't stat
> "/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
> ERROR :
>
> Justin
>
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Wednesday, October 2, 2019 11:14 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> This means that you're requesting a variable named "dptd" in the
Point-Stat
> config file.  MET looks for a definition of that string in its
default
> GRIB1 tables:
> grep dptd met-8.1/share/met/table_files/*
>
> But that returns 0 matches. So this error message is telling you
that MET
> doesn't know how to interpret that variable name.
>
> Here's what I'd suggest:
> (1) Run the input GRIB1 file through the "wgrib" utility. If
"wgrib" knows
> about this variable, it will report the name... and most likely,
that's the
> same name that MET will know.  If so, switch from using "dptd" to
using
> whatever name wgrib reports.
>
> (2) If "wgrib" does NOT know about this variable, it'll just list
out the
> corresponding GRIB1 codes instead. That means we'll need to go
create a
> small GRIB table to define these strings. Take a look in:
> met-8.1/share/met/table_files
>
> We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt"
where
> CENTER is the number encoded in your GRIB file to define NRL and PTV
is the
> parameter table version number used in your GRIB file. In that,
you'll
> define the mapping of GRIB1 codes to strings (like "dptd").  And for
now,
> we'll need to set the "MET_GRIB_TABLES" environment variable to the
> location of that file. But in the long run, you can send me that
file, and
> we'll add it to "table_files" directory to be included in the next
release
> of MET.
>
> If you have trouble creating a new GRIB table file, just let me know
and
> send me a sample GRIB file.
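For step (1), a minimal sketch of the wgrib check; the GRIB file name is only a placeholder:

# Print the first few inventory records.  If wgrib knows the parameter, its
# abbreviation shows up in each record; otherwise only the GRIB1 codes
# (kpds5/kpds6/kpds7) are listed.
wgrib some_nrl_forecast.grb | head -5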
>
> Thanks,
> John
>
>
> On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Hi John,
> >
> > Apologies for taking such a long time getting back to you. End of
fiscal
> > year things have consumed much of my time and I have not had much
time to
> > work on any of this.
> >
> > Before proceeding to the planning process of determining how to
call
> > point_stat to deal with the vertical levels, I need to fix what is
going
> on
> > with my GRIB1 variables. When I run point_stat, I keep getting
this
> error:
> >
> > DEBUG 1: Default Config File:
> >
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> > DEBUG 1: User Config File: dwptdpConfig
> > ERROR :
> > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> > abbreviation 'dptd' for table version 2
> > ERROR :
> >
> > I remember getting this before but don't remember how we fixed it.
> > I am using met-8.1/met-8.1a-with-grib2-support
> >
> > Justin
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Friday, September 13, 2019 3:46 PM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > Sorry for the delay. I was in DC on travel this week until today.
> >
> > It's really up to you how you'd like to configure it. Unless it's
too
> > unwieldy, I do think I'd try verifying all levels at once in a
single
> call
> > to Point-Stat. All those observations are contained in the same
point
> > observation file. If you verify each level in a separate call to
> > Point-Stat, you'll be looping through and processing those obs
many, many
> > times, which will be relatively slow. From a processing
perspective,
> it'd
> > be more efficient to process them all at once, in a single call to
> > Point-Stat.
> >
> > But you balance runtime efficiency versus ease of scripting and
> > configuration. And that's why it's up to you to decide which you
prefer.
> >
> > Hope that helps.
> >
> > Thanks,
> > John
> >
> > On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Hey John,
> > >
> > > That makes sense. The way that I've set up my config file is as
> follows:
> > > fcst = {
> > > field = [
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> > > ];
> > > }
> > > obs = {
> > > field = [
> > > {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> > > ];
> > > }
> > > message_type = [ "${MSG_TYPE}" ];
> > >
> > > The environmental variables I'm setting in the wrapper script
are LEV,
> > > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it
seems
> > like I
> > > will only be able to run point_Stat for a single elevation and a
single
> > > lead time. Do you recommend this? Or Should I put all the
elevations
> > for a
> > > single lead time in one pass of point_stat?
> > >
> > > So my config file will look like something like this...
> > > fcst = {
> > > field = [
> > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > ... etc.
> > > ];
> > > }
> > >
> > > Also, I am not sure what happened by when I run point_stat now I
am
> > > getting that error
> > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > abbreviation 'dptd' for table version 2
> > > Again. This makes me think that the obs_var name is wrong, but
ncdump
> > -v
> > > obs_var raob_*.nc gives me obs_var =
> > > "ws",
> > > "wdir",
> > > "t",
> > > "dptd",
> > > "pres",
> > > "ght" ;
> > > So clearly dptd exists.
> > >
> > > Justin
> > >
> > >
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, September 6, 2019 1:40 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Here's a sample Point-Stat output file name:
> > > point_stat_360000L_20070331_120000V.stat
> > >
> > > The "360000L" indicates that this is output for a 36-hour
forecast.
> And
> > > the "20070331_120000V" timestamp is the valid time.
> > >
> > > If you run Point-Stat once for each forecast lead time, the
timestamps
> > > should be different and they should not clobber each other.
> > >
> > > But let's say you don't want to run Point-Stat or Grid-Stat
multiple
> > times
> > > with the same timing info. The "output_prefix" config file
entry is
> used
> > > to customize the output file names to prevent them from
clobbering
> > > each other.  For example, setting:
> > > output_prefix="RUN1";
> > > Would result in files named "
> > > point_stat_RUN1_360000L_20070331_120000V.stat".
> > >
> > > Make sense?
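One hedged way to script that, assuming the config file pulls output_prefix from an environment variable (MET config files expand ${VAR} references, as the ${LEV} and ${INIT_TIME} examples elsewhere in this thread show); OUTPUT_PREFIX is an illustrative name:

# In the config: output_prefix = "${OUTPUT_PREFIX}";
export OUTPUT_PREFIX=RUN1
point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -outdir out -v 3
# Output files then carry the prefix, e.g. point_stat_RUN1_<lead>L_<valid>V.stat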
> > >
> > > Thanks,
> > > John
> > >
> > > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Invoking point_stat multiple times will create and replace the
old
> _cnt
> > > > and _sl1l2 files right? At that point, I'll have a bunch of
CNT and
> > > SL1L2
> > > > files and then use stat_analysis to aggregate them?
> > > >
> > > > Justin
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, September 6, 2019 1:11 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Yes, that is a long list of fields, but I don't see an obvious
> > way of
> > > > shortening that. But to do multiple lead times, I'd just call
> > Point-Stat
> > > > multiple times, once for each lead time, and update the config
file
> to
> > > use
> > > > environment variables for the current time:
> > > >
> > > > fcst = {
> > > > field = [
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > },
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > },
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > },
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > },
> > > > ...
> > > >
> > > > Where the calling script sets the ${INIT_TIME} and ${FCST_HR}
> > > environment
> > > > variables.
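A minimal sketch of such a wrapper; the observation file name, lead times, and output directory below are illustrative:

#!/bin/sh
# Run Point-Stat once per forecast lead time for a single initialization,
# exporting the variables referenced inside the config file.
export INIT_TIME=2015080106
for FCST_HR in 00060000 00120000 00180000 00240000; do
  export FCST_HR
  point_stat PYTHON_NUMPY raob_${INIT_TIME}.nc dwptdpConfig \
    -outdir ./out/point_stat -v 3
done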
> > > >
> > > > John
> > > >
> > > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu
> > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Thanks John,
> > > > >
> > > > > I managed to scrape together some code to get RAOB stats from
CNT
> > > plotted
> > > > > with 95% CI. Working on Surface stats now.
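One hedged way to pull those numbers out for plotting, assuming the Point-Stat *_cnt.txt output files (whose first row names the columns):

# Print level, RMSE, and its parametric CI bounds, keyed off the header names.
awk 'FNR==1 {for (i=1; i<=NF; i++) col[$i]=i; next}
     {print $(col["FCST_LEV"]), $(col["RMSE"]), $(col["RMSE_NCL"]), $(col["RMSE_NCU"])}' \
    out/point_stat_*_cnt.txt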
> > > > >
> > > > > So my configuration file looks like this right now:
> > > > >
> > > > > fcst = {
> > > > > field = [
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > >
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > > > ];
> > > > > }
> > > > >
> > > > > obs = {
> > > > > field = [
> > > > > {name = "dptd";level = ["P0.86-1.5"];},
> > > > > {name = "dptd";level = ["P1.6-2.5"];},
> > > > > {name = "dptd";level = ["P2.6-3.5"];},
> > > > > {name = "dptd";level = ["P3.6-4.5"];},
> > > > > {name = "dptd";level = ["P4.6-6"];},
> > > > > {name = "dptd";level = ["P6.1-8"];},
> > > > > {name = "dptd";level = ["P9-15"];},
> > > > > {name = "dptd";level = ["P16-25"];},
> > > > > {name = "dptd";level = ["P26-40"];},
> > > > > {name = "dptd";level = ["P41-65"];},
> > > > > {name = "dptd";level = ["P66-85"];},
> > > > > {name = "dptd";level = ["P86-125"];},
> > > > > {name = "dptd";level = ["P126-175"];},
> > > > > {name = "dptd";level = ["P176-225"];},
> > > > > {name = "dptd";level = ["P226-275"];},
> > > > > {name = "dptd";level = ["P276-325"];},
> > > > > {name = "dptd";level = ["P326-375"];},
> > > > > {name = "dptd";level = ["P376-425"];},
> > > > > {name = "dptd";level = ["P426-475"];},
> > > > > {name = "dptd";level = ["P476-525"];},
> > > > > {name = "dptd";level = ["P526-575"];},
> > > > > {name = "dptd";level = ["P576-625"];},
> > > > > {name = "dptd";level = ["P626-675"];},
> > > > > {name = "dptd";level = ["P676-725"];},
> > > > > {name = "dptd";level = ["P726-775"];},
> > > > > {name = "dptd";level = ["P776-825"];},
> > > > > {name = "dptd";level = ["P826-875"];},
> > > > > {name = "dptd";level = ["P876-912"];},
> > > > > {name = "dptd";level = ["P913-936"];},
> > > > > {name = "dptd";level = ["P937-962"];},
> > > > > {name = "dptd";level = ["P963-987"];},
> > > > > {name = "dptd";level = ["P988-1006"];},
> > > > > {name = "dptd";level = ["P1007-1013"];}
> > > > >
> > > > > And I have the data:
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > > >
> > > > > for a particular DTG and vertical level. If I want to run
multiple
> > > lead
> > > > > times, it seems like I'll have to copy that long list of
fields for
> > > each
> > > > > lead time in the fcst dict and then duplicate the obs
dictionary so
> > > that
> > > > > each forecast entry has a corresponding obs level matching
range.
> Is
> > > > this
> > > > > correct or is there a shorter/better way to do this?
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > I see that you're plotting RMSE and bias (called ME for Mean
Error
> in
> > > > MET)
> > > > > in the plots you sent.
> > > > >
> > > > > Table 7.6 of the MET User's Guide (
> > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > > )
> > > > > describes the contents of the CNT line type type. Bot the
columns
> for
> > > > RMSE
> > > > > and ME are followed by _NCL and _NCU columns which give the
> > parametric
> > > > > approximation of the confidence interval for those scores.
So yes,
> > you
> > > > can
> > > > > run Stat-Analysis to aggregate SL1L2 lines together and
write the
> > > > > corresponding CNT output line type.
> > > > >
> > > > > The RMSE_NCL and RMSE_NCU columns contain the lower and
upper
> > > parametric
> > > > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> > > columns
> > > > > for the ME statistic.
> > > > >
> > > > > You can change the alpha value for those confidence
intervals by
> > > setting:
> > > > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95%
CI).
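Combined with the aggregation job described further down in this thread, that flag fits in roughly like this (the -lookin path is illustrative):

# Aggregate SL1L2 partial sums into CNT statistics with 95% confidence intervals.
stat_analysis -lookin ./out/point_stat \
  -job aggregate_stat -line_type SL1L2 -out_line_type CNT \
  -by FCST_VAR,FCST_LEV,FCST_LEAD \
  -out_alpha 0.05 -out_stat cnt_out.stat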
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > >
> > > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu>
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Thanks John,
> > > > > >
> > > > > > This all helps me greatly. One more questions: is there
any
> > > > information
> > > > > > in either the CNT or SL1L2 that could give me confidence
> intervals
> > > for
> > > > > > each data point? I'm looking to replicate the attached
plot.
> > Notice
> > > > > that
> > > > > > the individual points could have either a 99, 95 or 90 %
> > confidence.
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Sounds about right. Each time you run Grid-Stat or Point-
Stat
> you
> > > can
> > > > > > write the CNT output line type which contains stats like
MSE, ME,
> > > MAE,
> > > > > and
> > > > > > RMSE. And I'm recommended that you also write the SL1L2
line
> type
> > as
> > > > > well.
> > > > > >
> > > > > > Then you'd run a stat_analysis job like this:
> > > > > >
> > > > > > stat_analysis -lookin /path/to/stat/data -job
aggregate_stat
> > > -line_type
> > > > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
> -out_stat
> > > > > > cnt_out.stat
> > > > > >
> > > > > > This job reads any .stat files it finds in
"/path/to/stat/data",
> > > reads
> > > > > the
> > > > > > SL1L2 line type, and for each unique combination of
FCST_VAR,
> > > FCST_LEV,
> > > > > and
> > > > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial
sums
> > together
> > > > and
> > > > > > write out the corresponding CNT line type to the output
file
> named
> > > > > > cnt_out.stat.
> > > > > >
> > > > > > John
> > > > > >
> > > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu
> > > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > So if I understand what you're saying correctly, then if
I
> wanted
> > > to
> > > > an
> > > > > > > average of 24 hour forecasts over a month long run, then
I
> would
> > > use
> > > > > the
> > > > > > > SL1L2 output to aggregate and produce this average?
Whereas
> if I
> > > > used
> > > > > > CNT,
> > > > > > > this would just provide me ~30 individual (per day over
a
> month)
> > 24
> > > > > hour
> > > > > > > forecast verifications?
> > > > > > >
> > > > > > > On a side note, did we ever go over how to plot the
SL1L2 MSE
> and
> > > > > biases?
> > > > > > > I am forgetting if we used stat_analysis to produce a
plot or
> if
> > > the
> > > > > plot
> > > > > > > you showed me was just something you guys post processed
using
> > > python
> > > > > or
> > > > > > > whatnot.
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > We wrote the SL1L2 partial sums from Point-Stat because
they
> can
> > be
> > > > > > > aggregated together by the stat-analysis tool over
multiple
> days
> > or
> > > > > > cases.
> > > > > > >
> > > > > > > If you're interested in continuous statistics from
Point-Stat,
> > I'd
> > > > > > > recommend writing the CNT line type (which has the stats
> computed
> > > for
> > > > > > that
> > > > > > > single run) and the SL1L2 line type (so that you can
aggregate
> > them
> > > > > > > together in stat-analysis or METviewer).
> > > > > > >
> > > > > > > The other alternative is looking at the average of the
daily
> > > > statistics
> > > > > > > scores. For RMSE, the average of the daily RMSE is
equal to
> the
> > > > > > aggregated
> > > > > > > score... as long as the number of matched pairs remains
> constant
> > > day
> > > > to
> > > > > > > day.  But if today you have 98 matched pairs and
tomorrow
> you
> > > > have
> > > > > > 105,
> > > > > > > then tomorrow's score will have slightly more weight.
The
> SL1L2
> > > > lines
> > > > > > are
> > > > > > > aggregated as weighted averages, where the TOTAL column
is the
> > > > weight.
> > > > > > And
> > > > > > > then stats (like RMSE and MSE) are recomputed from those
> > aggregated
> > > > > > > scores. Generally, the statisticians recommend this
method
> over
> > > the
> > > > > mean
> > > > > > > of the daily scores. Neither is "wrong", they just give
you
> > > slightly
> > > > > > > different information.
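A small illustrative example of that weighting (the MSE values are made up; the pair counts are the ones mentioned above): if day 1 has TOTAL=98 pairs with MSE=4.0 and day 2 has TOTAL=105 pairs with MSE=5.0, the aggregated MSE is (98*4.0 + 105*5.0) / (98+105) = 917/203, about 4.52, slightly closer to day 2's value, while the simple mean of the two daily MSEs is 4.5.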
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > Thanks John.
> > > > > > > >
> > > > > > > > Sorry it's taken me such a long time to get to this.
It's
> > > nearing
> > > > > the
> > > > > > > end
> > > > > > > > of FY19 so I have been finalizing several transition
projects
> > and
> > > > > > haven’t
> > > > > > > > had much time to work on MET recently. I just picked
this
> back
> > > up
> > > > > and
> > > > > > > have
> > > > > > > > loaded a couple new modules. Here is what I have to
work
> with
> > > now:
> > > > > > > >
> > > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > > >
> > > > > > > >
> > > > > > > > Running
> > > > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc
dwptdpConfig
> -v
> > 3
> > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >>
log.out
> > > > > > > >
> > > > > > > > I get many matched pairs. Here is a sample of what
the log
> > file
> > > > > looks
> > > > > > > > like for one of the pressure ranges I am verifying on:
> > > > > > > >
> > > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
> > dptd/P425-376,
> > > > for
> > > > > > > > observation type radiosonde, over region FULL, for
> > interpolation
> > > > > method
> > > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > >=0,
> > > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > > > >=5.0, observation filtering threshold >=5.0, and
field logic
> > > > UNION.
> > > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > > > >=10.0, observation filtering threshold >=10.0, and
field
> logic
> > > > > UNION.
> > > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > >=0,
> > > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > > > >=5.0, observation filtering threshold >=5.0, and
field logic
> > > > UNION.
> > > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > threshold
> > > > > > > > >=10.0, observation filtering threshold >=10.0, and
field
> logic
> > > > > UNION.
> > > > > > > > 15280 DEBUG 2:
> > > > > > > > 15281 DEBUG 2:
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > > > >
> > > > > > > > I am going to work on processing these point stat
files to
> > create
> > > > > those
> > > > > > > > vertical raob plots we had a discussion about. I
remember us
> > > > talking
> > > > > > > about
> > > > > > > > the partial sums file. Why did we choose to go the
route of
> > > > > producing
> > > > > > > > partial sums then feeding that into series analysis to
> generate
> > > > bias
> > > > > > and
> > > > > > > > MSE? It looks like bias and MSE both exist within the
CNT
> line
> > > > type
> > > > > > > (MBIAS
> > > > > > > > and MSE)?
> > > > > > > >
> > > > > > > >
> > > > > > > > Justin
> > > > > > > > -----Original Message-----
> > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> faulting
> > > > > > > >
> > > > > > > > Justin,
> > > > > > > >
> > > > > > > > Great, thanks for sending me the sample data. Yes, I
was
> able
> > to
> > > > > > > replicate
> > > > > > > > the segfault. The good news is that this is caused by
a
> simple
> > > > typo
> > > > > > > that's
> > > > > > > > easy to fix. If you look in the "obs.field" entry of
the
> > > > > relhumConfig
> > > > > > > > file, you'll see an empty string for the last field
listed:
> > > > > > > >
> > > > > > > > *obs = { field = [*
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > * ... {name = "dptd";level = ["P988-
1006"];},
> > > > > > > {name =
> > > > > > > > "";level = ["P1007-1013"];} ];*
> > > > > > > > If you change that empty string to "dptd", the
segfault will
> go
> > > > > away:*
> > > > > > > > {name = "dpdt";level = ["P1007-1013"];}*
> > > > > > > > Rerunning met-8.0 with that change, Point-Stat ran to
> > completion
> > > > (in
> > > > > 2
> > > > > > > > minutes 48 seconds on my desktop machine), but it
produced 0
> > > > matched
> > > > > > > > pairs. They were discarded because of the valid times
(seen
> > > using
> > > > > -v 3
> > > > > > > > command line option to Point-Stat). The ob file you
sent is
> > > named
> > > > "
> > > > > > > > raob_2015020412.nc" but the actual times in that file
are
> for
> > > > > > > > "20190426_120000":
> > > > > > > >
> > > > > > > > *ncdump -v hdr_vld_table raob_2015020412.nc <
> > > > > http://raob_2015020412.nc
> > > > > > >*
> > > > > > > >
> > > > > > > > * hdr_vld_table = "20190426_120000" ;*
> > > > > > > >
> > > > > > > > So please be aware of that discrepancy. To just
produce some
> > > > matched
> > > > > > > > pairs, I told Point-Stat to use the valid times of the
data:
> > > > > > > > *met-8.0/bin/point_stat PYTHON_NUMPY
raob_2015020412.nc
> > > > > > > > <http://raob_2015020412.nc> relhumConfig \*
> > > > > > > > * -outdir out -v 3 -log run_ps.log -obs_valid_beg
> > 20190426_120000
> > > > > > > > -obs_valid_end 20190426_120000*
> > > > > > > >
> > > > > > > > But I still get 0 matched pairs. This time, it's
because of
> > bad
> > > > > > forecast
> > > > > > > > values:
> > > > > > > > *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > > > >
> > > > > > > > Taking a step back... let's run one of these fields
through
> > > > > > > > plot_data_plane, which results in an error:
> > > > > > > > *met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps <
> > > http://plot.ps>
> > > > > > > > 'name="./read_NRL_binary.py
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'*
> > > > > > > > ERROR : DataPlane::two_to_one() -> range check error:
(Nx,
> > Ny) =
> > > > > (97,
> > > > > > > 97),
> > > > > > > > (x, y) = (97, 0)
> > > > > > > >
> > > > > > > > While the numpy object is 97x97, the grid is specified
as
> being
> > > > > 118x118
> > > > > > > in
> > > > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > > > >
> > > > > > > > Just to get something working, I modified the nx and
ny in
> the
> > > > python
> > > > > > > > script:
> > > > > > > > 'nx':97,
> > > > > > > > 'ny':97,
> > > > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > > > >
> > > > > > > > So I'd suggest...
> > > > > > > > - Fix the typo in the config file.
> > > > > > > > - Figure out the discrepancy between the obs file name
> > timestamp
> > > > and
> > > > > > the
> > > > > > > > data in that file.
> > > > > > > > - Make sure the grid information is consistent with
the data
> in
> > > the
> > > > > > > python
> > > > > > > > script.
> > > > > > > >
> > > > > > > > Obviously though, we don't want the code to be
segfaulting in
> > any
> > > > > > > > condition. So next, I tested using met-8.1 with that
empty
> > > string.
> > > > > > This
> > > > > > > > time it does run with no segfault, but prints a
warning about
> > the
> > > > > empty
> > > > > > > > string.
> > > > > > > >
> > > > > > > > Hope that helps.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > John
> > > > > > > >
> > > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT
<
> > > > > > > met_help at ucar.edu>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > <URL:
> > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > >
> > > > > > > > >
> > > > > > > > > Hey John,
> > > > > > > > >
> > > > > > > > > I've put my data in tsu_data_20190815/ under
met_help.
> > > > > > > > >
> > > > > > > > > I am running met-8.0/met-8.0-with-grib2-support and
have
> > > > provided
> > > > > > > > > everything
> > > > > > > > > on that list you've provided me. Let me know if
you're
> able
> > to
> > > > > > > replicate
> > > > > > > > > it
> > > > > > > > >
> > > > > > > > > Justin
> > > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> > faulting
> > > > > > > > >
> > > > > > > > > Justin,
> > > > > > > > >
> > > > > > > > > Well that doesn't seem to be very helpful of Point-
Stat at
> > all.
> > > > > > There
> > > > > > > > > isn't much jumping out at me from the log messages
you
> sent.
> > > In
> > > > > > fact,
> > > > > > > I
> > > > > > > > > hunted around for the DEBUG(7) log message but
couldn't
> find
> > > > where
> > > > > in
> > > > > > > the
> > > > > > > > > code it's being written. Are you able to send me
some
> sample
> > > > data
> > > > > to
> > > > > > > > > replicate this behavior?
> > > > > > > > >
> > > > > > > > > I'd need to know...
> > > > > > > > > - What version of MET are you running.
> > > > > > > > > - A copy of your Point-Stat config file.
> > > > > > > > > - The python script that you're running.
> > > > > > > > > - The input file for that python script.
> > > > > > > > > - The NetCDF point observation file you're passing
to
> > > Point-Stat.
> > > > > > > > >
> > > > > > > > > If I can replicate the behavior here, it should be
easy to
> > run
> > > it
> > > > > in
> > > > > > > the
> > > > > > > > > debugger and figure it out.
> > > > > > > > >
> > > > > > > > > You can post data to our anonymous ftp site as
described in
> > > "How
> > > > to
> > > > > > > send
> > > > > > > > us
> > > > > > > > > data":
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > John
> > > > > > > > >
> > > > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin via
RT <
> > > > > > > > met_help at ucar.edu>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was acted
upon.
> > > > > > > > > > Transaction: Ticket created by
> justin.tsu at nrlmry.navy.mil
> > > > > > > > > > Queue: met_help
> > > > > > > > > > Subject: point_stat seg faulting
> > > > > > > > > > Owner: Nobody
> > > > > > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > > > > Status: new
> > > > > > > > > > Ticket <URL:
> > > > > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Hey John,
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > I'm trying to extrapolate the production of
vertical raob
> > > > > > > verification
> > > > > > > > > > plots
> > > > > > > > > > using point_stat and stat_analysis like we did
together
> for
> > > > winds
> > > > > > but
> > > > > > > > for
> > > > > > > > > > relative humidity now. But when I run point_stat,
it seg
> > > > faults
> > > > > > > > without
> > > > > > > > > > much explanation
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > DEBUG 2:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > ----
> > > > > > > > > >
> > > > > > > > > > DEBUG 2:
> > > > > > > > > >
> > > > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > > > >
> > > > > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast
levels, 0
> > > > > > climatology
> > > > > > > > > mean
> > > > > > > > > > levels, and 0 climatology standard deviation
levels.
> > > > > > > > > >
> > > > > > > > > > DEBUG 2:
> > > > > > > > > >
> > > > > > > > > > DEBUG 2:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > ----
> > > > > > > > > >
> > > > > > > > > > DEBUG 2:
> > > > > > > > > >
> > > > > > > > > > DEBUG 2: Searching 4680328 observations from 617
> messages.
> > > > > > > > > >
> > > > > > > > > > DEBUG 7: tbl dims: messge_type: 1 station id:
617
> > > > > > valid_time: 1
> > > > > > > > > >
> > > > > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
> > > point_stat
> > > > > > > > > > PYTHON_NUMPY
> > > > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir ./out/point_stat
-log
> > > > > > > > > > ./out/point_stat.log
> > > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > From my log file:
> > > > > > > > > >
> > > > > > > > > > 607 DEBUG 2:
> > > > > > > > > >
> > > > > > > > > > 608 DEBUG 2: Searching 4680328 observations from
617
> > > messages.
> > > > > > > > > >
> > > > > > > > > > 609 DEBUG 7: tbl dims: messge_type: 1 station
id:
> 617
> > > > > > > > valid_time: 1
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Any help would be much appreciated
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Justin
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Justin Tsu
> > > > > > > > > >
> > > > > > > > > > Marine Meteorology Division
> > > > > > > > > >
> > > > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > > > >
> > > > > > > > > > Building 704 Room 212
> > > > > > > > > >
> > > > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > > > >
> > > > > > > > > > 7 Grace Hopper Avenue
> > > > > > > > > >
> > > > > > > > > > Monterey, CA 93943-5502
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Ph. (831) 656-4111
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Thu Oct 17 12:50:38 2019
Justin,
It looks like that change in setting MET_GRIB_TABLES did fix the
immediate
problem:
ERROR : get_filenames_from_dir() -> can't stat
"/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
Now, we just need to get the GRIB table lookup working as expected.
Perhaps it'd be more efficient for you to send me sample data so I can
replicate the problem here and then debug it. You could post data to
our
ftp site following these instructions:
https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk#ftp
I'd need the input files for Point-Stat (forecast file or python
embedding
script/data, NetCDF observation file, Point-Stat config file, and your
custom GRIB table (grib1_nrl_v2_2.txt).
As for why GRIB would be involved... in earlier versions of MET, we
interpreted point data using the GRIB1 conventions. We have since
shifted
away from that and process point observation variables by their name,
rather than referring to the GRIB1 conventions.  But that could explain
why a
GRIB table lookup is being performed.
Thanks,
John
On Thu, Oct 17, 2019 at 11:34 AM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Unfortunately this did not fix it
>
> [tsu at maury2 01_POINT_STAT_WORK]$ echo $MET_GRIB_TABLES
> /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
>
> DEBUG 1: Reading user-defined grib1 MET_GRIB_TABLES file:
> /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
> DEBUG 1: Default Config File:
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> DEBUG 1: User Config File: dwptdpConfig
> ERROR :
> ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> abbreviation 'dptd' for table version 2
> ERROR :
>
> Could it be an issue between GRIB 1 and GRIB 2? What about the fact
that I
> am using netCDF as my input data format?
>
> Justin
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Thursday, October 17, 2019 8:26 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> When MET_GRIB_TABLES is set to a directory, MET tries to process all
files
> in that directory. Please try to instead set it explicitly to your
single
> filename:
>
> setenv MET_GRIB_TABLES `pwd`/grib1_nrl_v2_2.txt
> ... or ...
> export MET_GRIB_TABLES=`pwd`/grib1_nrl_v2_2.txt
>
> Does that work any better?
>
> Thanks,
> John
>
> On Wed, Oct 16, 2019 at 6:20 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Hi John,
> >
> > I also created my own grib table file named grib1_nrl_v2_2.txt
and added
> > the following:
> >
> > [tsu at maury2 01_POINT_STAT_WORK]$ tail -5 grib1_nrl_v2_2.txt
> > 256 128 98 -1 "wdir" "NRL WIND DIRECTION"
> > 256 128 98 -1 "t" "NRL TEMPERATURE"
> > 256 128 98 -1 "dptd" "NRL DEWPOINT DEPRESSION"
> > 256 128 98 -1 "pres" "NRL PRESSURE"
> > 256 128 98 -1 "ght" "NRL GEOPOTENTIAL"
> >
> > Which are the names of the variables I am using in my netcdf file.
> > Setting export MET_GRIB_TABLES=$(pwd) then running point_stat I
get:
> >
> > ERROR :
> > ERROR : get_filenames_from_dir() -> can't stat
> > "/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
> > ERROR :
> >
> > Justin
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Wednesday, October 2, 2019 11:14 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > This means that you're requesting a variable named "dptd" in the
> Point-Stat
> > config file.  MET looks for a definition of that string in its
default
> > GRIB1 tables:
> > grep dptd met-8.1/share/met/table_files/*
> >
> > But that returns 0 matches. So this error message is telling you
that
> MET
> > doesn't know how to interpret that variable name.
> >
> > Here's what I'd suggest:
> > (1) Run the input GRIB1 file through the "wgrib" utility. If
"wgrib"
> knows
> > about this variable, it will report the name... and most likely,
that's
> the
> > same name that MET will know.  If so, switch from using "dptd" to
using
> > whatever name wgrib reports.
> >
> > (2) If "wgrib" does NOT know about this variable, it'll just list
out the
> > corresponding GRIB1 codes instead. That means we'll need to go
create a
> > small GRIB table to define these strings. Take a look in:
> > met-8.1/share/met/table_files
> >
> > We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt"
where
> > CENTER is the number encoded in your GRIB file to define NRL and
PTV is
> the
> > parameter table version number used in your GRIB file. In that,
you'll
> > define the mapping of GRIB1 codes to strings (like "dptd").  And
for now,
> > we'll need to set the "MET_GRIB_TABLES" environment variable to
the
> > location of that file. But in the long run, you can send me that
file,
> and
> > we'll add it to "table_files" directory to be included in the next
> release
> > of MET.
> >
> > If you have trouble creating a new GRIB table file, just let me
know and
> > send me a sample GRIB file.
> >
> > Thanks,
> > John
> >
> >
> > On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Hi John,
> > >
> > > Apologies for taking such a long time getting back to you. End
of
> fiscal
> > > year things have consumed much of my time and I have not had
much time
> to
> > > work on any of this.
> > >
> > > Before proceeding to the planning process of determining how to
call
> > > point_stat to deal with the vertical levels, I need to fix what
is
> going
> > on
> > > with my GRIB1 variables. When I run point_stat, I keep getting
this
> > error:
> > >
> > > DEBUG 1: Default Config File:
> > >
> >
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> > > DEBUG 1: User Config File: dwptdpConfig
> > > ERROR :
> > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > abbreviation 'dptd' for table version 2
> > > ERROR :
> > >
> > > I remember getting this before but don't remember how we fixed
it.
> > > I am using met-8.1/met-8.1a-with-grib2-support
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, September 13, 2019 3:46 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Sorry for the delay. I was in DC on travel this week until
today.
> > >
> > > It's really up to you how you'd like to configure it. Unless
it's too
> > > unwieldy, I do think I'd try verifying all levels at once in a
single
> > call
> > > to Point-Stat. All those observations are contained in the same
point
> > > observation file. If you verify each level in a separate call
to
> > > Point-Stat, you'll be looping through and processing those obs
many,
> many
> > > times, which will be relatively slow. From a processing
perspective,
> > it'd
> > > be more efficient to process them all at once, in a single call
to
> > > Point-Stat.
> > >
> > > But you balance runtime efficiency versus ease of scripting and
> > > configuration. And that's why it's up to you to decide which
you
> prefer.
> > >
> > > Hope that helps.
> > >
> > > Thanks,
> > > John
> > >
> > > On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Hey John,
> > > >
> > > > That makes sense. The way that I've set up my config file is
as
> > follows:
> > > > fcst = {
> > > > field = [
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> > > > ];
> > > > }
> > > > obs = {
> > > > field = [
> > > > {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> > > > ];
> > > > }
> > > > message_type = [ "${MSG_TYPE}" ];
> > > >
> > > > The environmental variables I'm setting in the wrapper script
are
> LEV,
> > > > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it
seems
> > > like I
> > > > will only be able to run point_Stat for a single elevation and
a
> single
> > > > lead time. Do you recommend this? Or Should I put all the
elevations
> > > for a
> > > > single lead time in one pass of point_stat?
> > > >
> > > > So my config file will look like something like this...
> > > > fcst = {
> > > > field = [
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > ... etc.
> > > > ];
> > > > }
> > > >
> > > > Also, I am not sure what happened, but when I run point_stat now I am
> > > > getting that error
> > > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > > abbreviation 'dptd' for table version 2
> > > > Again. This makes me think that the obs_var name is wrong,
but
> ncdump
> > > -v
> > > > obs_var raob_*.nc gives me obs_var =
> > > > "ws",
> > > > "wdir",
> > > > "t",
> > > > "dptd",
> > > > "pres",
> > > > "ght" ;
> > > > So clearly dptd exists.
> > > >
> > > > Justin
> > > >
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, September 6, 2019 1:40 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Here's a sample Point-Stat output file name:
> > > > point_stat_360000L_20070331_120000V.stat
> > > >
> > > > The "360000L" indicates that this is output for a 36-hour
forecast.
> > And
> > > > the "20070331_120000V" timestamp is the valid time.
> > > >
> > > > If you run Point-Stat once for each forecast lead time, the
> timestamps
> > > > should be different and they should not clobber each other.
> > > >
> > > > But let's say you don't want to run Point-Stat or Grid-Stat
multiple
> > > times
> > > > with the same timing info. The "output_prefix" config file
entry is
> > used
> > > > to customize the output file names to prevent them from
clobbering
> > > > each other. For example, setting:
> > > > output_prefix="RUN1";
> > > > Would result in files named "
> > > > point_stat_RUN1_360000L_20070331_120000V.stat".
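> > > >
> > > > So a rough wrapper sketch (untested; the lead-time list and the obs file
> > > > variable are placeholders for your setup) would be:
> > > >
> > > > export INIT_TIME=2015080106
> > > > for FCST_HR in 00060000 00120000 00180000 00240000; do
> > > >    export FCST_HR
> > > >    point_stat PYTHON_NUMPY ${OBFILE} dwptdpConfig \
> > > >       -v 3 -outdir ./out/point_stat
> > > > done
> > > >
> > > > with something like output_prefix = "${FCST_HR}"; in the config if you
> > > > also want the lead time spelled out in the output file names.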
> > > >
> > > > Make sense?
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu
> > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Invoking point_stat multiple times will create and replace
the old
> > _cnt
> > > > > and _sl1l2 files right? At that point, I'll have a bunch of
CNT
> and
> > > > SL1L2
> > > > > files and then use stat_analysis to aggregate them?
> > > > >
> > > > > Justin
> > > > >
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, September 6, 2019 1:11 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Yes, that is a long list of fields, but I don't see an obvious way of
> > > > > shortening that. But to do multiple lead times, I'd just
call
> > > Point-Stat
> > > > > multiple times, once for each lead time, and update the
config file
> > to
> > > > use
> > > > > environment variables for the current time:
> > > > >
> > > > > fcst = {
> > > > > field = [
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > },
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > },
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > },
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > },
> > > > > ...
> > > > >
> > > > > Where the calling scripts sets the ${INIT_TIME} and
${FCST_HR}
> > > > environment
> > > > > variables.
> > > > >
> > > > > John
> > > > >
> > > > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu
> > > > >
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Thanks John,
> > > > > >
> > > > > > I managed to scrap together some code to get RAOB stats
from CNT
> > > > plotted
> > > > > > with 95% CI. Working on Surface stats now.
> > > > > >
> > > > > > So my configuration file looks like this right now:
> > > > > >
> > > > > > fcst = {
> > > > > > field = [
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > >
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > > > > ];
> > > > > > }
> > > > > >
> > > > > > obs = {
> > > > > > field = [
> > > > > > {name = "dptd";level = ["P0.86-1.5"];},
> > > > > > {name = "dptd";level = ["P1.6-2.5"];},
> > > > > > {name = "dptd";level = ["P2.6-3.5"];},
> > > > > > {name = "dptd";level = ["P3.6-4.5"];},
> > > > > > {name = "dptd";level = ["P4.6-6"];},
> > > > > > {name = "dptd";level = ["P6.1-8"];},
> > > > > > {name = "dptd";level = ["P9-15"];},
> > > > > > {name = "dptd";level = ["P16-25"];},
> > > > > > {name = "dptd";level = ["P26-40"];},
> > > > > > {name = "dptd";level = ["P41-65"];},
> > > > > > {name = "dptd";level = ["P66-85"];},
> > > > > > {name = "dptd";level = ["P86-125"];},
> > > > > > {name = "dptd";level = ["P126-175"];},
> > > > > > {name = "dptd";level = ["P176-225"];},
> > > > > > {name = "dptd";level = ["P226-275"];},
> > > > > > {name = "dptd";level = ["P276-325"];},
> > > > > > {name = "dptd";level = ["P326-375"];},
> > > > > > {name = "dptd";level = ["P376-425"];},
> > > > > > {name = "dptd";level = ["P426-475"];},
> > > > > > {name = "dptd";level = ["P476-525"];},
> > > > > > {name = "dptd";level = ["P526-575"];},
> > > > > > {name = "dptd";level = ["P576-625"];},
> > > > > > {name = "dptd";level = ["P626-675"];},
> > > > > > {name = "dptd";level = ["P676-725"];},
> > > > > > {name = "dptd";level = ["P726-775"];},
> > > > > > {name = "dptd";level = ["P776-825"];},
> > > > > > {name = "dptd";level = ["P826-875"];},
> > > > > > {name = "dptd";level = ["P876-912"];},
> > > > > > {name = "dptd";level = ["P913-936"];},
> > > > > > {name = "dptd";level = ["P937-962"];},
> > > > > > {name = "dptd";level = ["P963-987"];},
> > > > > > {name = "dptd";level = ["P988-1006"];},
> > > > > > {name = "dptd";level = ["P1007-1013"];}
> > > > > >
> > > > > > And I have the data:
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > > > >
> > > > > > for a particular DTG and vertical level. If I want to run
> multiple
> > > > lead
> > > > > > times, it seems like I'll have to copy that long list of
fields
> for
> > > > each
> > > > > > lead time in the fcst dict and then duplicate the obs
dictionary
> so
> > > > that
> > > > > > each forecast entry has a corresponding obs level matching
range.
> > Is
> > > > > this
> > > > > > correct or is there a shorter/better way to do this?
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > I see that you're plotting RMSE and bias (called ME for
Mean
> Error
> > in
> > > > > MET)
> > > > > > in the plots you sent.
> > > > > >
> > > > > > Table 7.6 of the MET User's Guide (
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > > > )
> > > > > > describes the contents of the CNT line type. Both the columns for
> > > > > > RMSE
> > > > > > and ME are followed by _NCL and _NCU columns which give
the
> > > parametric
> > > > > > approximation of the confidence interval for those scores.
So
> yes,
> > > you
> > > > > can
> > > > > > run Stat-Analysis to aggregate SL1L2 lines together and
write the
> > > > > > corresponding CNT output line type.
> > > > > >
> > > > > > The RMSE_NCL and RMSE_NCU columns contain the lower and
upper
> > > > parametric
> > > > > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> > > > columns
> > > > > > for the ME statistic.
> > > > > >
> > > > > > You can change the alpha value for those confidence
intervals by
> > > > setting:
> > > > > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95%
CI).
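> > > > > >
> > > > > > For example, adding that to the aggregation job from my earlier
> > > > > > email (adjust -lookin to wherever your .stat files live):
> > > > > >
> > > > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat \
> > > > > >    -line_type SL1L2 -out_line_type CNT \
> > > > > >    -by FCST_VAR,FCST_LEV,FCST_LEAD \
> > > > > >    -out_alpha 0.05 -out_stat cnt_out.stat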
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > >
> > > > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Thanks John,
> > > > > > >
> > > > > > > This all helps me greatly. One more question: is there
any
> > > > > information
> > > > > > > in either the CNT or SL1L2 that could give me
confidence
> > intervals
> > > > for
> > > > > > > each data point? I'm looking to replicate the attached
plot.
> > > Notice
> > > > > > that
> > > > > > > the individual points could have either a 99, 95 or 90 %
> > > confidence.
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > Sounds about right. Each time you run Grid-Stat or
Point-Stat
> > you
> > > > can
> > > > > > > write the CNT output line type which contains stats like
MSE,
> ME,
> > > > MAE,
> > > > > > and
> > > > > > > RMSE. And I'd recommend that you also write the SL1L2 line type
> > > > > > > as well.
> > > > > > >
> > > > > > > Then you'd run a stat_analysis job like this:
> > > > > > >
> > > > > > > stat_analysis -lookin /path/to/stat/data -job
aggregate_stat
> > > > -line_type
> > > > > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
> > -out_stat
> > > > > > > cnt_out.stat
> > > > > > >
> > > > > > > This job reads any .stat files it finds in
> "/path/to/stat/data",
> > > > reads
> > > > > > the
> > > > > > > SL1L2 line type, and for each unique combination of
FCST_VAR,
> > > > FCST_LEV,
> > > > > > and
> > > > > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial
sums
> > > together
> > > > > and
> > > > > > > write out the corresponding CNT line type to the output
file
> > named
> > > > > > > cnt_out.stat.
> > > > > > >
> > > > > > > John
> > > > > > >
> > > > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT
<
> > > > > > met_help at ucar.edu
> > > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > So if I understand what you're saying correctly, if I wanted an
> > > > > > > > average of 24-hour forecasts over a month-long run,
then I
> > would
> > > > use
> > > > > > the
> > > > > > > > SL1L2 output to aggregate and produce this average?
Whereas
> > if I
> > > > > used
> > > > > > > CNT,
> > > > > > > > this would just provide me ~30 individual (per day
over a
> > month)
> > > 24
> > > > > > hour
> > > > > > > > forecast verifications?
> > > > > > > >
> > > > > > > > On a side note, did we ever go over how to plot the
SL1L2 MSE
> > and
> > > > > > biases?
> > > > > > > > I am forgetting if we used stat_analysis to produce a
plot or
> > if
> > > > the
> > > > > > plot
> > > > > > > > you showed me was just something you guys post
processed
> using
> > > > python
> > > > > > or
> > > > > > > > whatnot.
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> faulting
> > > > > > > >
> > > > > > > > Justin,
> > > > > > > >
> > > > > > > > We wrote the SL1L2 partial sums from Point-Stat
because they
> > can
> > > be
> > > > > > > > aggregated together by the stat-analysis tool over
multiple
> > days
> > > or
> > > > > > > cases.
> > > > > > > >
> > > > > > > > If you're interested in continuous statistics from
> Point-Stat,
> > > I'd
> > > > > > > > recommend writing the CNT line type (which has the
stats
> > computed
> > > > for
> > > > > > > that
> > > > > > > > single run) and the SL1L2 line type (so that you can
> aggregate
> > > them
> > > > > > > > together in stat-analysis or METviewer).
> > > > > > > >
> > > > > > > > The other alternative is looking at the average of the
daily
> > > > > statistics
> > > > > > > > scores. For RMSE, the average of the daily RMSE is
equal to
> > the
> > > > > > > aggregated
> > > > > > > > score... as long as the number of matched pairs
remains
> > constant
> > > > day
> > > > > to
> > > > > > > > > day. But if today you have 98 matched pairs and
tomorrow
> > you
> > > > > have
> > > > > > > 105,
> > > > > > > > then tomorrow's score will have slightly more weight.
The
> > SL1L2
> > > > > lines
> > > > > > > are
> > > > > > > > aggregated as weighted averages, where the TOTAL
column is
> the
> > > > > weight.
> > > > > > > And
> > > > > > > > then stats (like RMSE and MSE) are recomputed from
those
> > > aggregated
> > > > > > > > scores. Generally, the statisticians recommend this
method
> > over
> > > > the
> > > > > > mean
> > > > > > > > of the daily scores. Neither is "wrong", they just
give you
> > > > slightly
> > > > > > > > different information.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > John
> > > > > > > >
> > > > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT
<
> > > > > > > met_help at ucar.edu>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > <URL:
> > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > >
> > > > > > > > >
> > > > > > > > > Thanks John.
> > > > > > > > >
> > > > > > > > > Sorry it's taken me such a long time to get to this.
It's
> > > > nearing
> > > > > > the
> > > > > > > > end
> > > > > > > > > of FY19 so I have been finalizing several transition
> projects
> > > and
> > > > > > > haven’t
> > > > > > > > > had much time to work on MET recently. I just
picked this
> > back
> > > > up
> > > > > > and
> > > > > > > > have
> > > > > > > > > loaded a couple new modules. Here is what I have to
work
> > with
> > > > now:
> > > > > > > > >
> > > > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Running
> > > > > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc
dwptdpConfig
> > -v
> > > 3
> > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >>
log.out
> > > > > > > > >
> > > > > > > > > I get many matched pairs. Here is a sample of what
the log
> > > file
> > > > > > looks
> > > > > > > > > like for one of the pressure ranges I am verifying
on:
> > > > > > > > >
> > > > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
> > > dptd/P425-376,
> > > > > for
> > > > > > > > > observation type radiosonde, over region FULL, for
> > > interpolation
> > > > > > method
> > > > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > >=0,
> > > > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > > > >=5.0, observation filtering threshold >=5.0, and
field
> logic
> > > > > UNION.
> > > > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > > > >=10.0, observation filtering threshold >=10.0, and
field
> > logic
> > > > > > UNION.
> > > > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > >=0,
> > > > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > > > >=5.0, observation filtering threshold >=5.0, and
field
> logic
> > > > > UNION.
> > > > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > > > >=10.0, observation filtering threshold >=10.0, and
field
> > logic
> > > > > > UNION.
> > > > > > > > > 15280 DEBUG 2:
> > > > > > > > > 15281 DEBUG 2:
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > > > > >
> > > > > > > > > I am going to work on processing these point stat
files to
> > > create
> > > > > > those
> > > > > > > > > vertical raob plots we had a discussion about. I
remember
> us
> > > > > talking
> > > > > > > > about
> > > > > > > > > the partial sums file. Why did we choose to go the
route
> of
> > > > > > producing
> > > > > > > > > partial sums then feeding that into series analysis
to
> > generate
> > > > > bias
> > > > > > > and
> > > > > > > > > MSE? It looks like bias and MSE both exist within
the CNT
> > line
> > > > > type
> > > > > > > > (MBIAS
> > > > > > > > > and MSE)?
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Justin
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> > faulting
> > > > > > > > >
> > > > > > > > > Justin,
> > > > > > > > >
> > > > > > > > > Great, thanks for sending me the sample data. Yes,
I was
> > able
> > > to
> > > > > > > > replicate
> > > > > > > > > the segfault. The good news is that this is caused
by a
> > simple
> > > > > typo
> > > > > > > > that's
> > > > > > > > > easy to fix. If you look in the "obs.field" entry
of the
> > > > > > relhumConfig
> > > > > > > > > file, you'll see an empty string for the last field
listed:
> > > > > > > > >
> > > > > > > > > obs = { field = [
> > > > > > > > >    ...
> > > > > > > > >    {name = "dptd";level = ["P988-1006"];},
> > > > > > > > >    {name = "";level = ["P1007-1013"];} ];
> > > > > > > > >
> > > > > > > > > If you change that empty string to "dptd", the segfault will go
> > > > > > > > > away:
> > > > > > > > > {name = "dptd";level = ["P1007-1013"];}
> > > > > > > > > Rerunning met-8.0 with that change, Point-Stat ran
to
> > > completion
> > > > > (in
> > > > > > 2
> > > > > > > > > minutes 48 seconds on my desktop machine), but it
produced
> 0
> > > > > matched
> > > > > > > > > pairs. They were discarded because of the valid
times
> (seen
> > > > using
> > > > > > -v 3
> > > > > > > > > command line option to Point-Stat). The ob file you sent is
> > > > > > > > > named "raob_2015020412.nc" but the actual times in that file
> > > > > > > > > are for "20190426_120000":
> > > > > > > > >
> > > > > > > > > ncdump -v hdr_vld_table raob_2015020412.nc
> > > > > > > > >
> > > > > > > > >   hdr_vld_table = "20190426_120000" ;
> > > > > > > > >
> > > > > > > > > So please be aware of that discrepancy. To just
produce
> some
> > > > > matched
> > > > > > > > > pairs, I told Point-Stat to use the valid times of
the
> data:
> > > > > > > > > met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc relhumConfig \
> > > > > > > > >   -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000 \
> > > > > > > > >   -obs_valid_end 20190426_120000
> > > > > > > > >
> > > > > > > > > But I still get 0 matched pairs. This time, it's
because
> of
> > > bad
> > > > > > > forecast
> > > > > > > > > values:
> > > > > > > > > DEBUG 3: Rejected: bad fcst value = 55
> > > > > > > > >
> > > > > > > > > Taking a step back... let's run one of these fields
through
> > > > > > > > > plot_data_plane, which results in an error:
> > > > > > > > > met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps \
> > > > > > > > >   'name="./read_NRL_binary.py ./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'
> > > > > > > > > ERROR : DataPlane::two_to_one() -> range check error:
> > > > > > > > >   (Nx, Ny) = (97, 97), (x, y) = (97, 0)
> > > > > > > > >
> > > > > > > > > While the numpy object is 97x97, the grid is
specified as
> > being
> > > > > > 118x118
> > > > > > > > in
> > > > > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > > > > >
> > > > > > > > > Just to get something working, I modified the nx and
ny in
> > the
> > > > > python
> > > > > > > > > script:
> > > > > > > > > 'nx':97,
> > > > > > > > > 'ny':97,
> > > > > > > > > Rerunning again, I still didn't get any matched
pairs.
> > > > > > > > >
> > > > > > > > > So I'd suggest...
> > > > > > > > > - Fix the typo in the config file.
> > > > > > > > > - Figure out the discrepancy between the obs file name
> > > > > > > > >   timestamp and the data in that file.
> > > > > > > > > - Make sure the grid information is consistent with the data
> > > > > > > > >   in the python script (see the quick checks below).
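> > > > > > > > >
> > > > > > > > > After making those changes, rerunning the same two commands
> > > > > > > > > from above is a quick way to confirm things before going back
> > > > > > > > > to Point-Stat:
> > > > > > > > >
> > > > > > > > > met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps \
> > > > > > > > >   'name="./read_NRL_binary.py ./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'
> > > > > > > > > ncdump -v hdr_vld_table raob_2015020412.nc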
> > > > > > > > >
> > > > > > > > > Obviously though, we don't want the code to be segfaulting in any
> > > > > > > > > condition. So next, I tested using met-8.1 with
that empty
> > > > string.
> > > > > > > This
> > > > > > > > > time it does run with no segfault, but prints a
warning
> about
> > > the
> > > > > > empty
> > > > > > > > > string.
> > > > > > > > >
> > > > > > > > > Hope that helps.
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > John
> > > > > > > > >
> > > > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via
RT <
> > > > > > > > met_help at ucar.edu>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > <URL:
> > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > >
> > > > > > > > > >
> > > > > > > > > > Hey John,
> > > > > > > > > >
> > > > > > > > > > I've put my data in tsu_data_20190815/ under
met_help.
> > > > > > > > > >
> > > > > > > > > > I am running met-8.0/met-8.0-with-grib2-support
and have
> > > > > provided
> > > > > > > > > > everything
> > > > > > > > > > on that list you've provided me. Let me know if
you're
> > able
> > > to
> > > > > > > > replicate
> > > > > > > > > > it
> > > > > > > > > >
> > > > > > > > > > Justin
> > > > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: John Halley Gotway via RT [mailto:
> met_help at ucar.edu]
> > > > > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat
seg
> > > faulting
> > > > > > > > > >
> > > > > > > > > > Justin,
> > > > > > > > > >
> > > > > > > > > > Well that doesn't seem to be very helpful of
Point-Stat
> at
> > > all.
> > > > > > > There
> > > > > > > > > > isn't much jumping out at me from the log messages
you
> > sent.
> > > > In
> > > > > > > fact,
> > > > > > > > I
> > > > > > > > > > hunted around for the DEBUG(7) log message but
couldn't
> > find
> > > > > where
> > > > > > in
> > > > > > > > the
> > > > > > > > > > code it's being written. Are you able to send me
some
> > sample
> > > > > data
> > > > > > to
> > > > > > > > > > replicate this behavior?
> > > > > > > > > >
> > > > > > > > > > I'd need to know...
> > > > > > > > > > - What version of MET are you running.
> > > > > > > > > > - A copy of your Point-Stat config file.
> > > > > > > > > > - The python script that you're running.
> > > > > > > > > > - The input file for that python script.
> > > > > > > > > > - The NetCDF point observation file you're passing
to
> > > > Point-Stat.
> > > > > > > > > >
> > > > > > > > > > If I can replicate the behavior here, it should be
easy
> to
> > > run
> > > > it
> > > > > > in
> > > > > > > > the
> > > > > > > > > > debugger and figure it out.
> > > > > > > > > >
> > > > > > > > > > You can post data to our anonymous ftp site as
described
> in
> > > > "How
> > > > > to
> > > > > > > > send
> > > > > > > > > us
> > > > > > > > > > data":
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > John
> > > > > > > > > >
> > > > > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin
via RT <
> > > > > > > > > met_help at ucar.edu>
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was
acted upon.
> > > > > > > > > > > Transaction: Ticket created by
> > justin.tsu at nrlmry.navy.mil
> > > > > > > > > > > Queue: met_help
> > > > > > > > > > > Subject: point_stat seg faulting
> > > > > > > > > > > Owner: Nobody
> > > > > > > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > > > > > Status: new
> > > > > > > > > > > Ticket <URL:
> > > > > > > >
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Hey John,
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > I'm trying to extrapolate the production of
vertical
> raob
> > > > > > > > verification
> > > > > > > > > > > plots
> > > > > > > > > > > using point_stat and stat_analysis like we did
together
> > for
> > > > > winds
> > > > > > > but
> > > > > > > > > for
> > > > > > > > > > > relative humidity now. But when I run
point_stat, it
> seg
> > > > > faults
> > > > > > > > > without
> > > > > > > > > > > much explanation
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > > ----
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast
> levels, 0
> > > > > > > climatology
> > > > > > > > > > mean
> > > > > > > > > > > levels, and 0 climatology standard deviation
levels.
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > > ----
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2: Searching 4680328 observations from 617
> > messages.
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 7: tbl dims: messge_type: 1 station
id: 617
> > > > > > > valid_time: 1
> > > > > > > > > > >
> > > > > > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
> > > > point_stat
> > > > > > > > > > > PYTHON_NUMPY
> > > > > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir
./out/point_stat -log
> > > > > > > > > > > ./out/point_stat.log
> > > > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > From my log file:
> > > > > > > > > > >
> > > > > > > > > > > 607 DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > > 608 DEBUG 2: Searching 4680328 observations from
617
> > > > messages.
> > > > > > > > > > >
> > > > > > > > > > > 609 DEBUG 7: tbl dims: messge_type: 1
station id:
> > 617
> > > > > > > > > valid_time: 1
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Any help would be much appreciated
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Justin
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Justin Tsu
> > > > > > > > > > >
> > > > > > > > > > > Marine Meteorology Division
> > > > > > > > > > >
> > > > > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > > > > >
> > > > > > > > > > > Building 704 Room 212
> > > > > > > > > > >
> > > > > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > > > > >
> > > > > > > > > > > 7 Grace Hopper Avenue
> > > > > > > > > > >
> > > > > > > > > > > Monterey, CA 93943-5502
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Ph. (831) 656-4111
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Thu Oct 17 13:01:48 2019
John,
Sounds good. I've put the data on the ftp. This is the same exact
data that I worked with you on before (when we were using MET 8.0).
Point-Stat has worked on this data previously, but I guess with the new
GRIB conventions and new MET code (using MET 8.1A now), things have
broken.
Justin
-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Thursday, October 17, 2019 11:51 AM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
Justin,
It looks like that change in setting MET_GRIB_TABLES did fix the
immediate
problem:
ERROR : get_filenames_from_dir() -> can't stat
"/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
Now, we just need to get the GRIB table lookup working as expected.
Perhaps it'd be more efficient for you to send me sample data so I can
replicate the problem here and then debug it. You could post data to
our
ftp site following these instructions:
https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk#ftp
I'd need the input files for Point-Stat (forecast file or python
embedding
script/data, NetCDF observation file, Point-Stat config file, and your
custom GRIB table (grib1_nrl_v2_2.txt).
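
For example, a single tarball with everything in it works well (the file
names here are just placeholders based on what you've described):

tar -czf point_stat_case.tar.gz read_NRL_binary.py dwptdp_data/ \
    raob_2015020412.nc dwptdpConfig grib1_nrl_v2_2.txt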
As for why GRIB would be involved... in earlier versions of MET, we
interpreted point data using the GRIB1 conventions. We have since
shifted
away from that and process point observation variables by their name,
rather than referring to the GRIB1 conventions. But that could explain
why a
GRIB table lookup is being performed.
Thanks,
John
On Thu, Oct 17, 2019 at 11:34 AM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Unfortunately this did not fix it
>
> [tsu at maury2 01_POINT_STAT_WORK]$ echo $MET_GRIB_TABLES
> /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
>
> DEBUG 1: Reading user-defined grib1 MET_GRIB_TABLES file:
> /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
> DEBUG 1: Default Config File:
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> DEBUG 1: User Config File: dwptdpConfig
> ERROR :
> ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> abbreviation 'dptd' for table version 2
> ERROR :
>
> Could it be an issue between GRIB 1 and GRIB 2? What about the fact
that I
> am using netCDF as my input data format?
>
> Justin
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Thursday, October 17, 2019 8:26 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> When MET_GRIB_TABLES is set to a directory, MET tries to process all
files
> in that directory. Please try to instead set it explicitly to your
single
> filename:
>
> setenv MET_GRIB_TABLES `pwd`/grib1_nrl_v2_2.txt
> ... or ...
> export MET_GRIB_TABLES=`pwd`/grib1_nrl_v2_2.txt
>
> Does that work any better?
>
> Thanks,
> John
>
> On Wed, Oct 16, 2019 at 6:20 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Hi John,
> >
> > I also created my own grib table file named grib1_nrl_v2_2.txt
and added
> > the following:
> >
> > [tsu at maury2 01_POINT_STAT_WORK]$ tail -5 grib1_nrl_v2_2.txt
> > 256 128 98 -1 "wdir" "NRL WIND DIRECTION"
> > 256 128 98 -1 "t" "NRL TEMPERATURE"
> > 256 128 98 -1 "dptd" "NRL DEWPOINT DEPRESSION"
> > 256 128 98 -1 "pres" "NRL PRESSURE"
> > 256 128 98 -1 "ght" "NRL GEOPOTENTIAL"
> >
> > Which are the names of the variables I am using in my netcdf file.
> > Setting export MET_GRIB_TABLES=$(pwd) then running point_stat I
get:
> >
> > ERROR :
> > ERROR : get_filenames_from_dir() -> can't stat
> > "/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
> > ERROR :
> >
> > Justin
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Wednesday, October 2, 2019 11:14 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > This means that you're requesting a variable named "dptd" in the Point-Stat
> > config file. MET looks for a definition of that string in its default
> > GRIB1 tables:
> > grep dptd met-8.1/share/met/table_files/*
> >
> > But that returns 0 matches. So this error message is telling you
that
> MET
> > doesn't know how to interpret that variable name.
> >
> > Here's what I'd suggest:
> > (1) Run the input GRIB1 file through the "wgrib" utility. If
"wgrib"
> knows
> > about this variable, it will report the name... and most likely,
that's
> the
> > same name that MET will know. If so, switch from using "dptd" to using
> > whatever name wgrib reports.
> >
> > (2) If "wgrib" does NOT know about this variable, it'll just list
out the
> > corresponding GRIB1 codes instead. That means we'll need to go
create a
> > small GRIB table to define these strings. Take a look in:
> > met-8.1/share/met/table_files
> >
> > We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt"
where
> > CENTER is the number encoded in your GRIB file to define NRL and
PTV is
> the
> > parameter table version number used in your GRIB file. In that,
you'll
> > define the mapping of GRIB1 codes to strings (like "dptd"). And
for now,
> > we'll need to set the "MET_GRIB_TABLES" environment variable to
the
> > location of that file. But in the long run, you can send me that
file,
> and
> > we'll add it to "table_files" directory to be included in the next
> release
> > of MET.
> >
> > If you have trouble creating a new GRIB table file, just let me
know and
> > send me a sample GRIB file.
> >
> > Thanks,
> > John
> >
> >
> > On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Hi John,
> > >
> > > Apologies for taking such a long time getting back to you. End
of
> fiscal
> > > year things have consumed much of my time and I have not had
much time
> to
> > > work on any of this.
> > >
> > > Before proceeding to the planning process of determining how to
call
> > > point_stat to deal with the vertical levels, I need to fix what
is
> going
> > on
> > > with my GRIB1 variables. When I run point_stat, I keep getting
this
> > error:
> > >
> > > DEBUG 1: Default Config File:
> > >
> >
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> > > DEBUG 1: User Config File: dwptdpConfig
> > > ERROR :
> > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > abbreviation 'dptd' for table version 2
> > > ERROR :
> > >
> > > I remember getting this before but don't remember how we fixed
it.
> > > I am using met-8.1/met-8.1a-with-grib2-support
> > >
> > > Justin
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Friday, September 13, 2019 3:46 PM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > Sorry for the delay. I was in DC on travel this week until
today.
> > >
> > > It's really up to you how you'd like to configure it. Unless
it's too
> > > unwieldy, I do think I'd try verifying all levels at once in a
single
> > call
> > > to Point-Stat. All those observations are contained in the same
point
> > > observation file. If you verify each level in a separate call
to
> > > Point-Stat, you'll be looping through and processing those obs
many,
> many
> > > times, which will be relatively slow. From a processing
perspective,
> > it'd
> > > be more efficient to process them all at once, in a single call
to
> > > Point-Stat.
> > >
> > > But you balance runtime efficiency versus ease of scripting and
> > > configuration. And that's why it's up to you to decide which
you
> prefer.
> > >
> > > Hope that helps.
> > >
> > > Thanks,
> > > John
> > >
> > > On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Hey John,
> > > >
> > > > That makes sense. The way that I've set up my config file is
as
> > follows:
> > > > fcst = {
> > > > field = [
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> > > > ];
> > > > }
> > > > obs = {
> > > > field = [
> > > > {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> > > > ];
> > > > }
> > > > message_type = [ "${MSG_TYPE}" ];
> > > >
> > > > The environmental variables I'm setting in the wrapper script
are
> LEV,
> > > > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it
seems
> > > like I
> > > > will only be able to run point_Stat for a single elevation and
a
> single
> > > > lead time. Do you recommend this? Or Should I put all the
elevations
> > > for a
> > > > single lead time in one pass of point_stat?
> > > >
> > > > So my config file will look like something like this...
> > > > fcst = {
> > > > field = [
> > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > ... etc.
> > > > ];
> > > > }
> > > >
> > > > Also, I am not sure what happened, but when I run point_stat now I am
> > > > getting that error
> > > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > > abbreviation 'dptd' for table version 2
> > > > Again. This makes me think that the obs_var name is wrong,
but
> ncdump
> > > -v
> > > > obs_var raob_*.nc gives me obs_var =
> > > > "ws",
> > > > "wdir",
> > > > "t",
> > > > "dptd",
> > > > "pres",
> > > > "ght" ;
> > > > So clearly dptd exists.
> > > >
> > > > Justin
> > > >
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, September 6, 2019 1:40 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Here's a sample Point-Stat output file name:
> > > > point_stat_360000L_20070331_120000V.stat
> > > >
> > > > The "360000L" indicates that this is output for a 36-hour
forecast.
> > And
> > > > the "20070331_120000V" timestamp is the valid time.
> > > >
> > > > If you run Point-Stat once for each forecast lead time, the
> timestamps
> > > > should be different and they should not clobber each other.
> > > >
> > > > But let's say you don't want to run Point-Stat or Grid-Stat
multiple
> > > times
> > > > with the same timing info. The "output_prefix" config file
entry is
> > used
> > > > to customize the output file names to prevent them from
clobbering
> > > > each other. For example, setting:
> > > > output_prefix="RUN1";
> > > > Would result in files named "
> > > > point_stat_RUN1_360000L_20070331_120000V.stat".
> > > >
> > > > Make sense?
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu
> > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Invoking point_stat multiple times will create and replace
the old
> > _cnt
> > > > > and _sl1l2 files right? At that point, I'll have a bunch of
CNT
> and
> > > > SL1L2
> > > > > files and then use stat_analysis to aggregate them?
> > > > >
> > > > > Justin
> > > > >
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, September 6, 2019 1:11 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Yes, that is a long list of fields, but I don't see an obvious way of
> > > > > shortening that. But to do multiple lead times, I'd just
call
> > > Point-Stat
> > > > > multiple times, once for each lead time, and update the
config file
> > to
> > > > use
> > > > > environment variables for the current time:
> > > > >
> > > > > fcst = {
> > > > > field = [
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > },
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > },
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > },
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > },
> > > > > ...
> > > > >
> > > > > Where the calling scripts sets the ${INIT_TIME} and
${FCST_HR}
> > > > environment
> > > > > variables.
> > > > >
> > > > > John
> > > > >
> > > > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu
> > > > >
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Thanks John,
> > > > > >
> > > > > > I managed to scrap together some code to get RAOB stats
from CNT
> > > > plotted
> > > > > > with 95% CI. Working on Surface stats now.
> > > > > >
> > > > > > So my configuration file looks like this right now:
> > > > > >
> > > > > > fcst = {
> > > > > > field = [
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > >
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > > > > ];
> > > > > > }
> > > > > >
> > > > > > obs = {
> > > > > > field = [
> > > > > > {name = "dptd";level = ["P0.86-1.5"];},
> > > > > > {name = "dptd";level = ["P1.6-2.5"];},
> > > > > > {name = "dptd";level = ["P2.6-3.5"];},
> > > > > > {name = "dptd";level = ["P3.6-4.5"];},
> > > > > > {name = "dptd";level = ["P4.6-6"];},
> > > > > > {name = "dptd";level = ["P6.1-8"];},
> > > > > > {name = "dptd";level = ["P9-15"];},
> > > > > > {name = "dptd";level = ["P16-25"];},
> > > > > > {name = "dptd";level = ["P26-40"];},
> > > > > > {name = "dptd";level = ["P41-65"];},
> > > > > > {name = "dptd";level = ["P66-85"];},
> > > > > > {name = "dptd";level = ["P86-125"];},
> > > > > > {name = "dptd";level = ["P126-175"];},
> > > > > > {name = "dptd";level = ["P176-225"];},
> > > > > > {name = "dptd";level = ["P226-275"];},
> > > > > > {name = "dptd";level = ["P276-325"];},
> > > > > > {name = "dptd";level = ["P326-375"];},
> > > > > > {name = "dptd";level = ["P376-425"];},
> > > > > > {name = "dptd";level = ["P426-475"];},
> > > > > > {name = "dptd";level = ["P476-525"];},
> > > > > > {name = "dptd";level = ["P526-575"];},
> > > > > > {name = "dptd";level = ["P576-625"];},
> > > > > > {name = "dptd";level = ["P626-675"];},
> > > > > > {name = "dptd";level = ["P676-725"];},
> > > > > > {name = "dptd";level = ["P726-775"];},
> > > > > > {name = "dptd";level = ["P776-825"];},
> > > > > > {name = "dptd";level = ["P826-875"];},
> > > > > > {name = "dptd";level = ["P876-912"];},
> > > > > > {name = "dptd";level = ["P913-936"];},
> > > > > > {name = "dptd";level = ["P937-962"];},
> > > > > > {name = "dptd";level = ["P963-987"];},
> > > > > > {name = "dptd";level = ["P988-1006"];},
> > > > > > {name = "dptd";level = ["P1007-1013"];}
> > > > > >
> > > > > > And I have the data:
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > > > >
> > > > > > for a particular DTG and vertical level. If I want to run
> multiple
> > > > lead
> > > > > > times, it seems like I'll have to copy that long list of
fields
> for
> > > > each
> > > > > > lead time in the fcst dict and then duplicate the obs
dictionary
> so
> > > > that
> > > > > > each forecast entry has a corresponding obs level matching
range.
> > Is
> > > > > this
> > > > > > correct or is there a shorter/better way to do this?
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > I see that you're plotting RMSE and bias (called ME for
Mean
> Error
> > in
> > > > > MET)
> > > > > > in the plots you sent.
> > > > > >
> > > > > > Table 7.6 of the MET User's Guide (
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/sites/default/files/community-
code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > > > )
> > > > > > describes the contents of the CNT line type. Both the
columns
> > for
> > > > > RMSE
> > > > > > and ME are followed by _NCL and _NCU columns which give
the
> > > parametric
> > > > > > approximation of the confidence interval for those scores.
So
> yes,
> > > you
> > > > > can
> > > > > > run Stat-Analysis to aggregate SL1L2 lines together and
write the
> > > > > > corresponding CNT output line type.
> > > > > >
> > > > > > The RMSE_NCL and RMSE_NCU columns contain the lower and
upper
> > > > parametric
> > > > > > confidence intervals for the RMSE statistic and ME_NCL and
ME_NCU
> > > > columns
> > > > > > for the ME statistic.
> > > > > >
> > > > > > You can change the alpha value for those confidence
intervals by
> > > > setting:
> > > > > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95%
CI).
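[A small illustration, not from the original exchange: once RMSE and its RMSE_NCL/RMSE_NCU bounds have been pulled out of the CNT output, the confidence interval can be drawn as asymmetric error bars. All numbers below are made up, and numpy/matplotlib are assumed to be available.]

# Sketch: plot RMSE per pressure layer with its RMSE_NCL/RMSE_NCU interval.
import numpy as np
import matplotlib.pyplot as plt

levels   = np.array([1000, 850, 700, 500, 300])   # made-up pressure levels (hPa)
rmse     = np.array([2.1, 2.4, 2.9, 3.6, 4.2])    # RMSE column (made-up)
rmse_ncl = np.array([1.8, 2.1, 2.5, 3.1, 3.6])    # RMSE_NCL column (made-up)
rmse_ncu = np.array([2.4, 2.8, 3.4, 4.2, 4.9])    # RMSE_NCU column (made-up)

# errorbar() wants the bar lengths, not the bounds themselves
xerr = np.vstack([rmse - rmse_ncl, rmse_ncu - rmse])

plt.errorbar(rmse, levels, xerr=xerr, fmt="o-", capsize=3)
plt.gca().invert_yaxis()                          # pressure decreases with height
plt.xlabel("RMSE")
plt.ylabel("Pressure (hPa)")
plt.savefig("rmse_ci.png")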
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > >
> > > > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu>
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Thanks John,
> > > > > > >
> > > > > > > This all helps me greatly. One more question: is there
any
> > > > > information
> > > > > > > in either the CNT or SL1L2 that could give me
confidence
> > intervals
> > > > for
> > > > > > > each data point? I'm looking to replicate the attached
plot.
> > > Notice
> > > > > > that
> > > > > > > the individual points could have either a 99, 95 or 90 %
> > > confidence.
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > Sounds about right. Each time you run Grid-Stat or
Point-Stat
> > you
> > > > can
> > > > > > > write the CNT output line type which contains stats like
MSE,
> ME,
> > > > MAE,
> > > > > > and
> > > > > > > RMSE. And I'd recommend that you also write the SL1L2
line
> > type
> > > as
> > > > > > well.
> > > > > > >
> > > > > > > Then you'd run a stat_analysis job like this:
> > > > > > >
> > > > > > > stat_analysis -lookin /path/to/stat/data -job
aggregate_stat
> > > > -line_type
> > > > > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD
> > -out_stat
> > > > > > > cnt_out.stat
> > > > > > >
> > > > > > > This job reads any .stat files it finds in
> "/path/to/stat/data",
> > > > reads
> > > > > > the
> > > > > > > SL1L2 line type, and for each unique combination of
FCST_VAR,
> > > > FCST_LEV,
> > > > > > and
> > > > > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial
sums
> > > together
> > > > > and
> > > > > > > write out the corresponding CNT line type to the output
file
> > named
> > > > > > > cnt_out.stat.
> > > > > > >
> > > > > > > John
> > > > > > >
> > > > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT
<
> > > > > > met_help at ucar.edu
> > > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > So if I understand what you're saying correctly, then if I wanted to do an
> > > > > > > > average of 24 hour forecasts over a month long run,
then I
> > would
> > > > use
> > > > > > the
> > > > > > > > SL1L2 output to aggregate and produce this average?
Whereas
> > if I
> > > > > used
> > > > > > > CNT,
> > > > > > > > this would just provide me ~30 individual (per day
over a
> > month)
> > > 24
> > > > > > hour
> > > > > > > > forecast verifications?
> > > > > > > >
> > > > > > > > On a side note, did we ever go over how to plot the
SL1L2 MSE
> > and
> > > > > > biases?
> > > > > > > > I am forgetting if we used stat_analysis to produce a
plot or
> > if
> > > > the
> > > > > > plot
> > > > > > > > you showed me was just something you guys post
processed
> using
> > > > python
> > > > > > or
> > > > > > > > whatnot.
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> faulting
> > > > > > > >
> > > > > > > > Justin,
> > > > > > > >
> > > > > > > > We wrote the SL1L2 partial sums from Point-Stat
because they
> > can
> > > be
> > > > > > > > aggregated together by the stat-analysis tool over
multiple
> > days
> > > or
> > > > > > > cases.
> > > > > > > >
> > > > > > > > If you're interested in continuous statistics from
> Point-Stat,
> > > I'd
> > > > > > > > recommend writing the CNT line type (which has the
stats
> > computed
> > > > for
> > > > > > > that
> > > > > > > > single run) and the SL1L2 line type (so that you can
> aggregate
> > > them
> > > > > > > > together in stat-analysis or METviewer).
> > > > > > > >
> > > > > > > > The other alternative is looking at the average of the
daily
> > > > > statistics
> > > > > > > > scores. For RMSE, the average of the daily RMSE is
equal to
> > the
> > > > > > > aggregated
> > > > > > > > score... as long as the number of matched pairs
remains
> > constant
> > > > day
> > > > > to
> > > > > > > > day. But if today you have 98 matched pairs and
tomorrow
> > you
> > > > > have
> > > > > > > 105,
> > > > > > > > then tomorrow's score will have slightly more weight.
The
> > SL1L2
> > > > > lines
> > > > > > > are
> > > > > > > > aggregated as weighted averages, where the TOTAL
column is
> the
> > > > > weight.
> > > > > > > And
> > > > > > > > then stats (like RMSE and MSE) are recomputed from
those
> > > aggregated
> > > > > > > > scores. Generally, the statisticians recommend this
method
> > over
> > > > the
> > > > > > mean
> > > > > > > > of the daily scores. Neither is "wrong", they just
give you
> > > > slightly
> > > > > > > > different information.
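[An illustration added here, not part of the original message: the weighted aggregation described above, written out with the standard SL1L2 partial-sum columns (TOTAL, FBAR, OBAR, FOBAR, FFBAR, OOBAR) and the usual formulas ME = FBAR - OBAR and MSE = FFBAR - 2*FOBAR + OOBAR. The two days of partial sums are invented, using pair counts of 98 and 105 as in the example.]

# Sketch: aggregate two days of SL1L2 partial sums with TOTAL as the weight,
# then recompute ME, MSE, and RMSE from the aggregated sums.
import math

day1 = dict(total=98,  fbar=5.2, obar=4.9, fobar=26.0, ffbar=28.1, oobar=24.6)
day2 = dict(total=105, fbar=4.7, obar=5.0, fobar=24.0, ffbar=23.5, oobar=25.3)

def aggregate(lines):
    n = sum(l["total"] for l in lines)
    agg = {k: sum(l[k] * l["total"] for l in lines) / n
           for k in ("fbar", "obar", "fobar", "ffbar", "oobar")}
    agg["total"] = n
    return agg

def stats(l):
    me  = l["fbar"] - l["obar"]
    mse = l["ffbar"] - 2.0 * l["fobar"] + l["oobar"]
    return me, mse, math.sqrt(mse)

for label, line in (("day 1", day1), ("day 2", day2),
                    ("aggregated", aggregate([day1, day2]))):
    me, mse, rmse = stats(line)
    print(f"{label:10s}  ME={me:6.3f}  MSE={mse:6.3f}  RMSE={rmse:6.3f}")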
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > John
> > > > > > > >
> > > > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT
<
> > > > > > > met_help at ucar.edu>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > <URL:
> > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > >
> > > > > > > > >
> > > > > > > > > Thanks John.
> > > > > > > > >
> > > > > > > > > Sorry it's taken me such a long time to get to this.
It's
> > > > nearing
> > > > > > the
> > > > > > > > end
> > > > > > > > > of FY19 so I have been finalizing several transition
> projects
> > > and
> > > > > > > haven’t
> > > > > > > > > had much time to work on MET recently. I just
picked this
> > back
> > > > up
> > > > > > and
> > > > > > > > have
> > > > > > > > > loaded a couple new modules. Here is what I have to
work
> > with
> > > > now:
> > > > > > > > >
> > > > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Running
> > > > > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc
dwptdpConfig
> > -v
> > > 3
> > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >>
log.out
> > > > > > > > >
> > > > > > > > > I get many matched pairs. Here is a sample of what
the log
> > > file
> > > > > > looks
> > > > > > > > > like for one of the pressure ranges I am verifying
on:
> > > > > > > > >
> > > > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
> > > dptd/P425-376,
> > > > > for
> > > > > > > > > observation type radiosonde, over region FULL, for
> > > interpolation
> > > > > > method
> > > > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > >=0,
> > > > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > > > >=5.0, observation filtering threshold >=5.0, and
field
> logic
> > > > > UNION.
> > > > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > > > >=10.0, observation filtering threshold >=10.0, and
field
> > logic
> > > > > > UNION.
> > > > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > >=0,
> > > > > > > > > observation filtering threshold >=0, and field logic
UNION.
> > > > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > > > >=5.0, observation filtering threshold >=5.0, and
field
> logic
> > > > > UNION.
> > > > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast
filtering
> > > > > threshold
> > > > > > > > > >=10.0, observation filtering threshold >=10.0, and
field
> > logic
> > > > > > UNION.
> > > > > > > > > 15280 DEBUG 2:
> > > > > > > > > 15281 DEBUG 2:
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > > > > >
> > > > > > > > > I am going to work on processing these point stat
files to
> > > create
> > > > > > those
> > > > > > > > > vertical raob plots we had a discussion about. I
remember
> us
> > > > > talking
> > > > > > > > about
> > > > > > > > > the partial sums file. Why did we choose to go the
route
> of
> > > > > > producing
> > > > > > > > > partial sums then feeding that into series analysis
to
> > generate
> > > > > bias
> > > > > > > and
> > > > > > > > > MSE? It looks like bias and MSE both exist within
the CNT
> > line
> > > > > type
> > > > > > > > (MBIAS
> > > > > > > > > and MSE)?
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Justin
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> > faulting
> > > > > > > > >
> > > > > > > > > Justin,
> > > > > > > > >
> > > > > > > > > Great, thanks for sending me the sample data. Yes,
I was
> > able
> > > to
> > > > > > > > replicate
> > > > > > > > > the segfault. The good news is that this is caused
by a
> > simple
> > > > > typo
> > > > > > > > that's
> > > > > > > > > easy to fix. If you look in the "obs.field" entry
of the
> > > > > > relhumConfig
> > > > > > > > > file, you'll see an empty string for the last field
listed:
> > > > > > > > >
> > > > > > > > > obs = { field = [
> > > > > > > > >    ... {name = "dptd";level = ["P988-1006"];},
> > > > > > > > >        {name = "";level = ["P1007-1013"];} ];
> > > > > > > > >
> > > > > > > > > If you change that empty string to "dptd", the segfault will
> > > > > > > > > go away:
> > > > > > > > > {name = "dptd";level = ["P1007-1013"];}
> > > > > > > > > Rerunning met-8.0 with that change, Point-Stat ran
to
> > > completion
> > > > > (in
> > > > > > 2
> > > > > > > > > minutes 48 seconds on my desktop machine), but it
produced
> 0
> > > > > matched
> > > > > > > > > pairs. They were discarded because of the valid
times
> (seen
> > > > using
> > > > > > -v 3
> > > > > > > > > command line option to Point-Stat). The ob file you
sent
> is
> > > > named
> > > > > "
> > > > > > > > > raob_2015020412.nc" but the actual times in that
file are
> > for
> > > > > > > > > "20190426_120000":
> > > > > > > > >
> > > > > > > > > ncdump -v hdr_vld_table raob_2015020412.nc
> > > > > > > > >
> > > > > > > > >   hdr_vld_table = "20190426_120000" ;
> > > > > > > > >
> > > > > > > > > So please be aware of that discrepancy. To just
produce
> some
> > > > > matched
> > > > > > > > > pairs, I told Point-Stat to use the valid times of
the
> data:
> > > > > > > > > met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc relhumConfig \
> > > > > > > > >   -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000 \
> > > > > > > > >   -obs_valid_end 20190426_120000
> > > > > > > > >
> > > > > > > > > But I still get 0 matched pairs. This time, it's
because
> of
> > > bad
> > > > > > > forecast
> > > > > > > > > values:
> > > > > > > > > DEBUG 3: Rejected: bad fcst value = 55
> > > > > > > > >
> > > > > > > > > Taking a step back... let's run one of these fields
through
> > > > > > > > > plot_data_plane, which results in an error:
> > > > > > > > > met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps \
> > > > > > > > > 'name="./read_NRL_binary.py
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'
> > > > > > > > > ERROR : DataPlane::two_to_one() -> range check
error: (Nx,
> > > Ny) =
> > > > > > (97,
> > > > > > > > 97),
> > > > > > > > > (x, y) = (97, 0)
> > > > > > > > >
> > > > > > > > > While the numpy object is 97x97, the grid is
specified as
> > being
> > > > > > 118x118
> > > > > > > > in
> > > > > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > > > > >
> > > > > > > > > Just to get something working, I modified the nx and
ny in
> > the
> > > > > python
> > > > > > > > > script:
> > > > > > > > > 'nx':97,
> > > > > > > > > 'ny':97,
> > > > > > > > > Rerunning again, I still didn't get any matched
pairs.
> > > > > > > > >
> > > > > > > > > So I'd suggest...
> > > > > > > > > - Fix the typo in the config file.
> > > > > > > > > - Figure out the discrepancy between the obs file
name
> > > timestamp
> > > > > and
> > > > > > > the
> > > > > > > > > data in that file.
> > > > > > > > > - Make sure the grid information is consistent with
the
> data
> > in
> > > > the
> > > > > > > > python
> > > > > > > > > script.
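[A short sketch tied to that last suggestion; it is not from the original message. The zero-filled array stands in for the decoded field, and the grid-dictionary entries other than 'nx'/'ny' are stand-ins for whatever read_NRL_binary.py really builds.]

# Sketch: derive the 'nx'/'ny' grid entries from the numpy array itself so
# the grid spec in the embedding script cannot disagree with the data.
import numpy as np

met_data = np.zeros((97, 97), dtype=float)      # stand-in for the decoded field

grid_attrs = {
    # ... projection entries from the real script would go here ...
    'nx': met_data.shape[1],                    # number of columns
    'ny': met_data.shape[0],                    # number of rows
}

assert grid_attrs['nx'] * grid_attrs['ny'] == met_data.size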
> > > > > > > > >
> > > > > > > > > Obviously though, we don't want the code to be
segfaulting
> in
> > > any
> > > > > > > > > condition. So next, I tested using met-8.1 with
that empty
> > > > string.
> > > > > > > This
> > > > > > > > > time it does run with no segfault, but prints a
warning
> about
> > > the
> > > > > > empty
> > > > > > > > > string.
> > > > > > > > >
> > > > > > > > > Hope that helps.
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > John
> > > > > > > > >
> > > > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via
RT <
> > > > > > > > met_help at ucar.edu>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > <URL:
> > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > >
> > > > > > > > > >
> > > > > > > > > > Hey John,
> > > > > > > > > >
> > > > > > > > > > I've put my data in tsu_data_20190815/ under
met_help.
> > > > > > > > > >
> > > > > > > > > > I am running met-8.0/met-8.0-with-grib2-support
and have
> > > > > provided
> > > > > > > > > > everything
> > > > > > > > > > on that list you've provided me. Let me know if
you're
> > able
> > > to
> > > > > > > > replicate
> > > > > > > > > > it
> > > > > > > > > >
> > > > > > > > > > Justin
> > > > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: John Halley Gotway via RT [mailto:
> met_help at ucar.edu]
> > > > > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat
seg
> > > faulting
> > > > > > > > > >
> > > > > > > > > > Justin,
> > > > > > > > > >
> > > > > > > > > > Well that doesn't seem to be very helpful of
Point-Stat
> at
> > > all.
> > > > > > > There
> > > > > > > > > > isn't much jumping out at me from the log messages
you
> > sent.
> > > > In
> > > > > > > fact,
> > > > > > > > I
> > > > > > > > > > hunted around for the DEBUG(7) log message but
couldn't
> > find
> > > > > where
> > > > > > in
> > > > > > > > the
> > > > > > > > > > code it's being written. Are you able to send me
some
> > sample
> > > > > data
> > > > > > to
> > > > > > > > > > replicate this behavior?
> > > > > > > > > >
> > > > > > > > > > I'd need to know...
> > > > > > > > > > - What version of MET are you running.
> > > > > > > > > > - A copy of your Point-Stat config file.
> > > > > > > > > > - The python script that you're running.
> > > > > > > > > > - The input file for that python script.
> > > > > > > > > > - The NetCDF point observation file you're passing
to
> > > > Point-Stat.
> > > > > > > > > >
> > > > > > > > > > If I can replicate the behavior here, it should be
easy
> to
> > > run
> > > > it
> > > > > > in
> > > > > > > > the
> > > > > > > > > > debugger and figure it out.
> > > > > > > > > >
> > > > > > > > > > You can post data to our anonymous ftp site as
described
> in
> > > > "How
> > > > > to
> > > > > > > > send
> > > > > > > > > us
> > > > > > > > > > data":
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > John
> > > > > > > > > >
> > > > > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin
via RT <
> > > > > > > > > met_help at ucar.edu>
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was
acted upon.
> > > > > > > > > > > Transaction: Ticket created by
> > justin.tsu at nrlmry.navy.mil
> > > > > > > > > > > Queue: met_help
> > > > > > > > > > > Subject: point_stat seg faulting
> > > > > > > > > > > Owner: Nobody
> > > > > > > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > > > > > Status: new
> > > > > > > > > > > Ticket <URL:
> > > > > > > >
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Hey John,
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > I'm trying to extrapolate the production of
vertical
> raob
> > > > > > > > verification
> > > > > > > > > > > plots
> > > > > > > > > > > using point_stat and stat_analysis like we did
together
> > for
> > > > > winds
> > > > > > > but
> > > > > > > > > for
> > > > > > > > > > > relative humidity now. But when I run
point_stat, it
> seg
> > > > > faults
> > > > > > > > > without
> > > > > > > > > > > much explanation
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > > ----
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2: For relhum/pre_001013 found 1 forecast
> levels, 0
> > > > > > > climatology
> > > > > > > > > > mean
> > > > > > > > > > > levels, and 0 climatology standard deviation
levels.
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > > ----
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 2: Searching 4680328 observations from 617
> > messages.
> > > > > > > > > > >
> > > > > > > > > > > DEBUG 7: tbl dims: messge_type: 1 station
id: 617
> > > > > > > valid_time: 1
> > > > > > > > > > >
> > > > > > > > > > > run_stats.sh: line 26: 40818 Segmentation fault
> > > > point_stat
> > > > > > > > > > > PYTHON_NUMPY
> > > > > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir
./out/point_stat -log
> > > > > > > > > > > ./out/point_stat.log
> > > > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > From my log file:
> > > > > > > > > > >
> > > > > > > > > > > 607 DEBUG 2:
> > > > > > > > > > >
> > > > > > > > > > > 608 DEBUG 2: Searching 4680328 observations from
617
> > > > messages.
> > > > > > > > > > >
> > > > > > > > > > > 609 DEBUG 7: tbl dims: messge_type: 1
station id:
> > 617
> > > > > > > > > valid_time: 1
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Any help would be much appreciated
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Justin
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Justin Tsu
> > > > > > > > > > >
> > > > > > > > > > > Marine Meteorology Division
> > > > > > > > > > >
> > > > > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > > > > >
> > > > > > > > > > > Building 704 Room 212
> > > > > > > > > > >
> > > > > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > > > > >
> > > > > > > > > > > 7 Grace Hopper Avenue
> > > > > > > > > > >
> > > > > > > > > > > Monterey, CA 93943-5502
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Ph. (831) 656-4111
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Thu Oct 17 13:55:03 2019
Justin,
Thanks for sending the sample data. I ran into a few issues, but
worked
around them.
1. I didn't have any of your data files (
dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld).
So I just used a sample temperature data file instead (
trpres_sfc_0000.0_0000.0_glob360x181...).
2. There is a mismatch between the name of the point observation
file
you sent and its contents. The file raob_2015020412.nc actually
contains data for 20190426_12:
ncdump -v hdr_vld_table raob_2015020412.nc
hdr_vld_table =
"20190426_120000" ;
So I just changed the "trpres_sfc" file name to use the
20190426_120000
timestamp to get matches.
And instead of trying to process 38 fields, I just did one.
But running met-8.1.1, it all ran fine without error. I got 111
matched
pairs. Of course they're bogus because the data types and times don't
match up, but the code is successfully producing matches.
So I'm not able to replicate the problems you're having. In fact, I
didn't
even need to set the MET_GRIB_TABLES environment variable. I ran met-
8.1.1
through the debugger and it doesn't even step into the
VarInfoGrib::add_grib_code() function which is producing the error.
Hmmm, can you please run "point_stat --version" and tell me what it
says?
Also, please check to see if you have the MET_BASE environment
variable
set. If you do, please try unsetting it.
Thanks,
John
On Thu, Oct 17, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> John,
>
> Sounds good. I've put the data on the ftp. This is the same exact
data
> that I worked with you on before (when we were using MET 8.0).
Point stat
> has worked on this data previously but I guess with the new GRIB
> conventions and new MET code (using MET 8.1A now), things have
broken.
>
> Justin
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Thursday, October 17, 2019 11:51 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> It looks like that change in setting MET_GRIB_TABLES did fix the
immediate
> problem:
> ERROR : get_filenames_from_dir() -> can't stat
> "/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
>
> Now, we just need to get the GRIB table lookup working as expected.
> Perhaps it'd be more efficient for you to send me sample data so I
can
> replicate the problem here and then debug it. You could post data
to our
> ftp site following these instructions:
>
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk#ftp
>
> I'd need the input files for Point-Stat (forecast file or python
embedding
> script/data, NetCDF observation file, Point-Stat config file, and
your
> custom GRIB table (grib1_nrl_v2_2.txt).
>
> As for why GRIB would be involved... in earlier versions of MET, we
> interpreted point data using the GRIB1 conventions. We have since
shifted
> away from that and process point observation variables by their
name,
> rather than referring to the GRIB1 conventions. But that could explain
why a
> GRIB table lookup is being performed.
>
> Thanks,
> John
>
> On Thu, Oct 17, 2019 at 11:34 AM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Unfortunately this did not fix it
> >
> > [tsu at maury2 01_POINT_STAT_WORK]$ echo $MET_GRIB_TABLES
> > /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
> >
> > DEBUG 1: Reading user-defined grib1 MET_GRIB_TABLES file:
> > /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
> > DEBUG 1: Default Config File:
> >
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> > DEBUG 1: User Config File: dwptdpConfig
> > ERROR :
> > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> > abbreviation 'dptd' for table version 2
> > ERROR :
> >
> > Could it be an issue between GRIB 1 and GRIB 2? What about the
fact that
> I
> > am using netCDF as my input data format?
> >
> > Justin
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Thursday, October 17, 2019 8:26 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > When MET_GRIB_TABLES is set to a directory, MET tries to process
all
> files
> > in that directory. Please try to instead set it explicitly to
your
> single
> > filename:
> >
> > setenv MET_GRIB_TABLES `pwd`/grib1_nrl_v2_2.txt
> > ... or ...
> > export MET_GRIB_TABLES=`pwd`/grib1_nrl_v2_2.txt
> >
> > Does that work any better?
> >
> > Thanks,
> > John
> >
> > On Wed, Oct 16, 2019 at 6:20 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Hi John,
> > >
> > > I also created my own grib table file named grib1_nrl_v2_2.txt
and
> added
> > > the following:
> > >
> > > [tsu at maury2 01_POINT_STAT_WORK]$ tail -5 grib1_nrl_v2_2.txt
> > > 256 128 98 -1 "wdir" "NRL WIND DIRECTION"
> > > 256 128 98 -1 "t" "NRL TEMPERATURE"
> > > 256 128 98 -1 "dptd" "NRL DEWPOINT DEPRESSION"
> > > 256 128 98 -1 "pres" "NRL PRESSURE"
> > > 256 128 98 -1 "ght" "NRL GEOPOTENTIAL"
> > >
> > > Which are the names of the variables I am using in my netcdf
file.
> > > Setting export MET_GRIB_TABLES=$(pwd) then running point_stat I
get:
> > >
> > > ERROR :
> > > ERROR : get_filenames_from_dir() -> can't stat
> > > "/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
> > > ERROR :
> > >
> > > Justin
> > >
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Wednesday, October 2, 2019 11:14 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > This means that you're requesting a variable named "dptd" in the
> > Point-Stat
> > > config file. MET looks for a definition of that string in its
default
> > > GRIB1 tables:
> > > grep dptd met-8.1/share/met/table_files/*
> > >
> > > But that returns 0 matches. So this error message is telling
you that
> > MET
> > > doesn't know how to interpret that variable name.
> > >
> > > Here's what I'd suggest:
> > > (1) Run the input GRIB1 file through the "wgrib" utility. If
"wgrib"
> > knows
> > > about this variable, it will report the name... and most likely,
that's
> > the
> > > same name that MET will know. If so, switch from using "dptd"
to using
> > > whatever name wgrib reports.
> > >
> > > (2) If "wgrib" does NOT know about this variable, it'll just
list out
> the
> > > corresponding GRIB1 codes instead. That means we'll need to go
create
> a
> > > small GRIB table to define these strings. Take a look in:
> > > met-8.1/share/met/table_files
> > >
> > > We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt"
where
> > > CENTER is the number encoded in your GRIB file to define NRL and
PTV is
> > the
> > > parameter table version number used in your GRIB file. In that,
you'll
> > > define the mapping of GRIB1 codes to strings (like "dptd"). And
for
> now,
> > > we'll need to set the "MET_GRIB_TABLES" environment variable to
the
> > > location of that file. But in the long run, you can send me
that file,
> > and
> > > we'll add it to "table_files" directory to be included in the
next
> > release
> > > of MET.
> > >
> > > If you have trouble creating a new GRIB table file, just let me
know
> and
> > > send me a sample GRIB file.
> > >
> > > Thanks,
> > > John
> > >
> > >
> > > On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Hi John,
> > > >
> > > > Apologies for taking such a long time getting back to you.
End of
> > fiscal
> > > > year things have consumed much of my time and I have not had
much
> time
> > to
> > > > work on any of this.
> > > >
> > > > Before proceeding to the planning process of determining how
to call
> > > > point_stat to deal with the vertical levels, I need to fix
what is
> > going
> > > on
> > > > with my GRIB1 variables. When I run point_stat, I keep
getting this
> > > error:
> > > >
> > > > DEBUG 1: Default Config File:
> > > >
> > >
> >
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> > > > DEBUG 1: User Config File: dwptdpConfig
> > > > ERROR :
> > > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > > abbreviation 'dptd' for table version 2
> > > > ERROR :
> > > >
> > > > I remember getting this before but don't remember how we fixed
it.
> > > > I am using met-8.1/met-8.1a-with-grib2-support
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, September 13, 2019 3:46 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Sorry for the delay. I was in DC on travel this week until
today.
> > > >
> > > > It's really up to you how you'd like to configure it. Unless
it's
> too
> > > > unwieldy, I do think I'd try verifying all levels at once in a
single
> > > call
> > > > to Point-Stat. All those observations are contained in the
same
> point
> > > > observation file. If you verify each level in a separate call
to
> > > > Point-Stat, you'll be looping through and processing those obs
many,
> > many
> > > > times, which will be relatively slow. From a processing
perspective,
> > > it'd
> > > > be more efficient to process them all at once, in a single
call to
> > > > Point-Stat.
> > > >
> > > > But you balance runtime efficiency versus ease of scripting
and
> > > > configuration. And that's why it's up to you to decide which
you
> > prefer.
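[One possible middle ground, added as an illustration and not part of the original reply: generate the repeated fcst/obs field entries from a single list of levels instead of maintaining them by hand, then paste or substitute the printed text into the config. The level tags, pressure ranges, and file-name template below are a trimmed subset of the examples shown earlier in this thread.]

# Sketch: print paired fcst/obs field entries for a Point-Stat config.
pairs = [
    ("000850", "P826-875"),
    ("000900", "P876-912"),
    ("000925", "P913-936"),
    ("000950", "P937-962"),
]

script = "/users/tsu/MET/work/read_NRL_binary.py"
tmpl = ("./dwptdp_data/dwptdp_pre_{tag}_000000_3a0118x0118_"
        "${{INIT_TIME}}_${{FCST_HR}}_fcstfld")

fcst_entries = [f'{{name = "{script} {tmpl.format(tag=tag)}";}}' for tag, _ in pairs]
obs_entries  = [f'{{name = "dptd";level = ["{rng}"];}}' for _, rng in pairs]

print("fcst field entries:")
print(",\n".join(fcst_entries))
print()
print("obs field entries:")
print(",\n".join(obs_entries))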
> > > >
> > > > Hope that helps.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu
> > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Hey John,
> > > > >
> > > > > That makes sense. The way that I've set up my config file
is as
> > > follows:
> > > > > fcst = {
> > > > > field = [
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> > > > > ];
> > > > > }
> > > > > obs = {
> > > > > field = [
> > > > > {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> > > > > ];
> > > > > }
> > > > > message_type = [ "${MSG_TYPE}" ];
> > > > >
> > > > > The environmental variables I'm setting in the wrapper
script are
> > LEV,
> > > > > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way,
it
> seems
> > > > like I
> > > > > will only be able to run point_stat for a single elevation
and a
> > single
> > > > > lead time. Do you recommend this? Or should I put all the
> elevations
> > > > for a
> > > > > single lead time in one pass of point_stat?
> > > > >
> > > > > So my config file will look like something like this...
> > > > > fcst = {
> > > > > field = [
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > ... etc.
> > > > > ];
> > > > > }
> > > > >
> > > > > Also, I am not sure what happened, but when I run point_stat
now I am
> > > > > getting that error
> > > > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > > > abbreviation 'dptd' for table version 2
> > > > > Again. This makes me think that the obs_var name is wrong,
but
> > ncdump
> > > > -v
> > > > > obs_var raob_*.nc gives me obs_var =
> > > > > "ws",
> > > > > "wdir",
> > > > > "t",
> > > > > "dptd",
> > > > > "pres",
> > > > > "ght" ;
> > > > > So clearly dptd exists.
> > > > >
> > > > > Justin
> > > > >
> > > > >
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, September 6, 2019 1:40 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Here's a sample Point-Stat output file name:
> > > > > point_stat_360000L_20070331_120000V.stat
> > > > >
> > > > > The "360000L" indicates that this is output for a 36-hour
forecast.
> > > And
> > > > > the "20070331_120000V" timestamp is the valid time.
> > > > >
> > > > > If you run Point-Stat once for each forecast lead time, the
> > timestamps
> > > > > should be different and they should not clobber each other.
> > > > >
> > > > > But let's say you don't want to run Point-Stat or Grid-Stat
> multiple
> > > > times
> > > > > with the same timing info. The "output_prefix" config file
entry
> is
> > > used
> > > > > to customize the output file names to prevent them from
clobbering
> > > > > > each other. For example, setting:
> > > > > output_prefix="RUN1";
> > > > > Would result in files named "
> > > > > point_stat_RUN1_360000L_20070331_120000V.stat".
> > > > >
> > > > > Make sense?
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu
> > > > >
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Invoking point_stat multiple times will create and replace
the
> old
> > > _cnt
> > > > > > and _sl1l2 files right? At that point, I'll have a bunch
of CNT
> > and
> > > > > SL1L2
> > > > > > files and then use stat_analysis to aggregate them?
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, September 6, 2019 1:11 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Yes, that is a long list of fields, but I don't see an obvious way of
> > > > > > shortening that. But to do multiple lead times, I'd just
call
> > > > Point-Stat
> > > > > > multiple times, once for each lead time, and update the
config
> file
> > > to
> > > > > use
> > > > > > environment variables for the current time:
> > > > > >
> > > > > > fcst = {
> > > > > > field = [
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > > },
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > > },
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > > },
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";
> > > > > > },
> > > > > > ...
> > > > > >
> > > > > > Where the calling script sets the ${INIT_TIME} and
${FCST_HR}
> > > > > environment
> > > > > > variables.
> > > > > >
> > > > > > John
> > > > > >
> > > > > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Thanks John,
> > > > > > >
> > > > > > > I managed to scrap together some code to get RAOB stats
from
> CNT
> > > > > plotted
> > > > > > > with 95% CI. Working on Surface stats now.
> > > > > > >
> > > > > > > So my configuration file looks like this right now:
> > > > > > >
> > > > > > > fcst = {
> > > > > > > field = [
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > >
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > > > > > ];
> > > > > > > }
> > > > > > >
> > > > > > > obs = {
> > > > > > > field = [
> > > > > > > {name = "dptd";level = ["P0.86-1.5"];},
> > > > > > > {name = "dptd";level = ["P1.6-2.5"];},
> > > > > > > {name = "dptd";level = ["P2.6-3.5"];},
> > > > > > > {name = "dptd";level = ["P3.6-4.5"];},
> > > > > > > {name = "dptd";level = ["P4.6-6"];},
> > > > > > > {name = "dptd";level = ["P6.1-8"];},
> > > > > > > {name = "dptd";level = ["P9-15"];},
> > > > > > > {name = "dptd";level = ["P16-25"];},
> > > > > > > {name = "dptd";level = ["P26-40"];},
> > > > > > > {name = "dptd";level = ["P41-65"];},
> > > > > > > {name = "dptd";level = ["P66-85"];},
> > > > > > > {name = "dptd";level = ["P86-125"];},
> > > > > > > {name = "dptd";level = ["P126-175"];},
> > > > > > > {name = "dptd";level = ["P176-225"];},
> > > > > > > {name = "dptd";level = ["P226-275"];},
> > > > > > > {name = "dptd";level = ["P276-325"];},
> > > > > > > {name = "dptd";level = ["P326-375"];},
> > > > > > > {name = "dptd";level = ["P376-425"];},
> > > > > > > {name = "dptd";level = ["P426-475"];},
> > > > > > > {name = "dptd";level = ["P476-525"];},
> > > > > > > {name = "dptd";level = ["P526-575"];},
> > > > > > > {name = "dptd";level = ["P576-625"];},
> > > > > > > {name = "dptd";level = ["P626-675"];},
> > > > > > > {name = "dptd";level = ["P676-725"];},
> > > > > > > {name = "dptd";level = ["P726-775"];},
> > > > > > > {name = "dptd";level = ["P776-825"];},
> > > > > > > {name = "dptd";level = ["P826-875"];},
> > > > > > > {name = "dptd";level = ["P876-912"];},
> > > > > > > {name = "dptd";level = ["P913-936"];},
> > > > > > > {name = "dptd";level = ["P937-962"];},
> > > > > > > {name = "dptd";level = ["P963-987"];},
> > > > > > > {name = "dptd";level = ["P988-1006"];},
> > > > > > > {name = "dptd";level = ["P1007-1013"];}
> > > > > > >
> > > > > > > And I have the data:
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > > > > >
> > > > > > > for a particular DTG and vertical level. If I want to run multiple lead
> > > > > > > times, it seems like I'll have to copy that long list of fields for each
> > > > > > > lead time in the fcst dict and then duplicate the obs dictionary so that
> > > > > > > each forecast entry has a corresponding obs level matching range. Is this
> > > > > > > correct or is there a shorter/better way to do this?
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > I see that you're plotting RMSE and bias (called ME for Mean Error in MET)
> > > > > > > in the plots you sent.
> > > > > > >
> > > > > > > Table 7.6 of the MET User's Guide (
> > > > > > > https://dtcenter.org/sites/default/files/community-code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > > > > ) describes the contents of the CNT line type. Both the columns for RMSE
> > > > > > > and ME are followed by _NCL and _NCU columns, which give the parametric
> > > > > > > approximation of the confidence interval for those scores. So yes, you can
> > > > > > > run Stat-Analysis to aggregate SL1L2 lines together and write the
> > > > > > > corresponding CNT output line type.
> > > > > > >
> > > > > > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper parametric
> > > > > > > confidence intervals for the RMSE statistic, and the ME_NCL and ME_NCU
> > > > > > > columns for the ME statistic.
> > > > > > >
> > > > > > > You can change the alpha value for those confidence intervals by setting
> > > > > > > -out_alpha 0.01 (for 99% CI) or -out_alpha 0.05 (for 95% CI).
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > Thanks John,
> > > > > > > >
> > > > > > > > This all helps me greatly. One more question: is
there any
> > > > > > information
> > > > > > > > in either the CNT or SL1L2 that could give me
confidence
> > > intervals
> > > > > for
> > > > > > > > each data point? I'm looking to replicate the
attached plot.
> > > > Notice
> > > > > > > that
> > > > > > > > the individual points could have either a 99, 95 or 90
%
> > > > confidence.
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> faulting
> > > > > > > >
> > > > > > > > Justin,
> > > > > > > >
> > > > > > > > Sounds about right. Each time you run Grid-Stat or Point-Stat you can
> > > > > > > > write the CNT output line type, which contains stats like MSE, ME, MAE,
> > > > > > > > and RMSE. And I'd recommend that you also write the SL1L2 line type as
> > > > > > > > well.
> > > > > > > >
> > > > > > > > Then you'd run a stat_analysis job like this:
> > > > > > > >
> > > > > > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat -line_type SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat cnt_out.stat
> > > > > > > >
> > > > > > > > This job reads any .stat files it finds in
> > "/path/to/stat/data",
> > > > > reads
> > > > > > > the
> > > > > > > > SL1L2 line type, and for each unique combination of
FCST_VAR,
> > > > > FCST_LEV,
> > > > > > > and
> > > > > > > > FCST_LEAD columns, it'll aggregate those SL1L2 partial
sums
> > > > together
> > > > > > and
> > > > > > > > write out the corresponding CNT line type to the
output file
> > > named
> > > > > > > > cnt_out.stat.
> > > > > > > >
> > > > > > > > John
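For reference, here is a minimal sketch (assuming pandas is available, and assuming the .stat file written by -out_stat carries named CNT columns in its header row) of pulling the RMSE/ME values and their confidence bounds back out of cnt_out.stat for plotting:

    import pandas as pd

    # cnt_out.stat as written by the stat_analysis job above: whitespace-delimited,
    # with one header row naming the columns (an assumption about the -out_stat format)
    df = pd.read_csv("cnt_out.stat", sep=r"\s+")

    # keep the CNT rows and only the columns needed for an RMSE/ME plot with CIs
    cols = ["FCST_VAR", "FCST_LEV", "FCST_LEAD",
            "RMSE", "RMSE_NCL", "RMSE_NCU", "ME", "ME_NCL", "ME_NCU"]
    cnt = df[df["LINE_TYPE"] == "CNT"][cols]
    print(cnt.to_string(index=False))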
> > > > > > > >
> > > > > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via
RT <
> > > > > > > met_help at ucar.edu
> > > > > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > <URL:
> > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > >
> > > > > > > > >
> > > > > > > > > So if I understand what you're saying correctly, then if I wanted an
> > > > > > > > > average of 24 hour forecasts over a month long run, then I would use the
> > > > > > > > > SL1L2 output to aggregate and produce this average? Whereas if I used
> > > > > > > > > CNT, this would just provide me ~30 individual (per day over a month)
> > > > > > > > > 24 hour forecast verifications?
> > > > > > > > >
> > > > > > > > > On a side note, did we ever go over how to plot the
SL1L2
> MSE
> > > and
> > > > > > > biases?
> > > > > > > > > I am forgetting if we used stat_analysis to produce
a plot
> or
> > > if
> > > > > the
> > > > > > > plot
> > > > > > > > > you showed me was just something you guys post
processed
> > using
> > > > > python
> > > > > > > or
> > > > > > > > > whatnot.
> > > > > > > > >
> > > > > > > > > Justin
> > > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> > faulting
> > > > > > > > >
> > > > > > > > > Justin,
> > > > > > > > >
> > > > > > > > > We wrote the SL1L2 partial sums from Point-Stat
because
> they
> > > can
> > > > be
> > > > > > > > > aggregated together by the stat-analysis tool over
multiple
> > > days
> > > > or
> > > > > > > > cases.
> > > > > > > > >
> > > > > > > > > If you're interested in continuous statistics from
> > Point-Stat,
> > > > I'd
> > > > > > > > > recommend writing the CNT line type (which has the
stats
> > > computed
> > > > > for
> > > > > > > > that
> > > > > > > > > single run) and the SL1L2 line type (so that you can
> > aggregate
> > > > them
> > > > > > > > > together in stat-analysis or METviewer).
> > > > > > > > >
> > > > > > > > > The other alternative is looking at the average of
the
> daily
> > > > > > statistics
> > > > > > > > > scores. For RMSE, the average of the daily RMSE is
equal
> to
> > > the
> > > > > > > > aggregated
> > > > > > > > > score... as long as the number of matched pairs
remains
> > > constant
> > > > > day
> > > > > > to
> > > > > > > > > day. But if today you have 98 matched pairs and tomorrow you have 105,
> > > > > > > > > then tomorrow's score will have slightly more
weight. The
> > > SL1L2
> > > > > > lines
> > > > > > > > are
> > > > > > > > > aggregated as weighted averages, where the TOTAL
column is
> > the
> > > > > > weight.
> > > > > > > > And
> > > > > > > > > then stats (like RMSE and MSE) are recomputed from
those
> > > > aggregated
> > > > > > > > > scores. Generally, the statisticians recommend this
method
> > > over
> > > > > the
> > > > > > > mean
> > > > > > > > > of the daily scores. Neither is "wrong", they just
give
> you
> > > > > slightly
> > > > > > > > > different information.
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > John
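To make that weighted aggregation concrete, here is a rough sketch with made-up partial sums, using the standard SL1L2 columns (TOTAL, FBAR, OBAR, FOBAR, FFBAR, OOBAR). It only illustrates the arithmetic that Stat-Analysis performs when it recomputes ME, MSE, and RMSE from aggregated sums; it is not a substitute for running the tool:

    import numpy as np

    # two days of SL1L2 partial sums; the numbers are invented purely for illustration
    days = [
        dict(total=98,  fbar=2.1, obar=1.9, fobar=4.3, ffbar=5.0, oobar=4.1),
        dict(total=105, fbar=2.4, obar=2.0, fobar=5.1, ffbar=6.2, oobar=4.6),
    ]

    # aggregate as weighted means, with the TOTAL column as the weight
    w = np.array([d["total"] for d in days], dtype=float)
    agg = {k: np.average([d[k] for d in days], weights=w)
           for k in ("fbar", "obar", "fobar", "ffbar", "oobar")}

    # recompute continuous stats from the aggregated partial sums
    me   = agg["fbar"] - agg["obar"]                         # mean error (bias)
    mse  = agg["ffbar"] - 2.0 * agg["fobar"] + agg["oobar"]  # mean squared error
    rmse = np.sqrt(mse)
    print(me, mse, rmse)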
> > > > > > > > >
> > > > > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via
RT <
> > > > > > > > met_help at ucar.edu>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > <URL:
> > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks John.
> > > > > > > > > >
> > > > > > > > > > Sorry it's taken me such a long time to get to
this.
> It's
> > > > > nearing
> > > > > > > the
> > > > > > > > > end
> > > > > > > > > > of FY19 so I have been finalizing several
transition
> > projects
> > > > and
> > > > > > > > haven’t
> > > > > > > > > > had much time to work on MET recently. I just
picked
> this
> > > back
> > > > > up
> > > > > > > and
> > > > > > > > > have
> > > > > > > > > > loaded a couple new modules. Here is what I have
to work
> > > with
> > > > > now:
> > > > > > > > > >
> > > > > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Running
> > > > > > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3 -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > > > > > > > >
> > > > > > > > > > I get many matched pairs. Here is a sample of
what the
> log
> > > > file
> > > > > > > looks
> > > > > > > > > > like for one of the pressure ranges I am verifying
on:
> > > > > > > > > >
> > > > > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus
> > > > dptd/P425-376,
> > > > > > for
> > > > > > > > > > observation type radiosonde, over region FULL, for
> > > > interpolation
> > > > > > > method
> > > > > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast
> filtering
> > > > > > threshold
> > > > > > > > >=0,
> > > > > > > > > > observation filtering threshold >=0, and field
logic
> UNION.
> > > > > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast
> filtering
> > > > > > threshold
> > > > > > > > > > >=5.0, observation filtering threshold >=5.0, and
field
> > logic
> > > > > > UNION.
> > > > > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast
> filtering
> > > > > > threshold
> > > > > > > > > > >=10.0, observation filtering threshold >=10.0,
and field
> > > logic
> > > > > > > UNION.
> > > > > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast
> filtering
> > > > > > threshold
> > > > > > > > >=0,
> > > > > > > > > > observation filtering threshold >=0, and field
logic
> UNION.
> > > > > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast
> filtering
> > > > > > threshold
> > > > > > > > > > >=5.0, observation filtering threshold >=5.0, and
field
> > logic
> > > > > > UNION.
> > > > > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast
> filtering
> > > > > > threshold
> > > > > > > > > > >=10.0, observation filtering threshold >=10.0,
and field
> > > logic
> > > > > > > UNION.
> > > > > > > > > > 15280 DEBUG 2:
> > > > > > > > > > 15281 DEBUG 2:
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--------------------------------------------------------------------------------
> > > > > > > > > >
> > > > > > > > > > I am going to work on processing these point stat
files
> to
> > > > create
> > > > > > > those
> > > > > > > > > > vertical raob plots we had a discussion about. I
> remember
> > us
> > > > > > talking
> > > > > > > > > about
> > > > > > > > > > the partial sums file. Why did we choose to go
the route
> > of
> > > > > > > producing
> > > > > > > > > > partial sums then feeding that into series
analysis to
> > > generate
> > > > > > bias
> > > > > > > > and
> > > > > > > > > > MSE? It looks like bias and MSE both exist within
the
> CNT
> > > line
> > > > > > type
> > > > > > > > > (MBIAS
> > > > > > > > > > and MSE)?
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Justin
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: John Halley Gotway via RT [mailto:
> met_help at ucar.edu]
> > > > > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat
seg
> > > faulting
> > > > > > > > > >
> > > > > > > > > > Justin,
> > > > > > > > > >
> > > > > > > > > > Great, thanks for sending me the sample data.
Yes, I was
> > > able
> > > > to
> > > > > > > > > replicate
> > > > > > > > > > the segfault. The good news is that this is
caused by a
> > > simple
> > > > > > typo
> > > > > > > > > that's
> > > > > > > > > > easy to fix. If you look in the "obs.field" entry
of the
> > > > > > > relhumConfig
> > > > > > > > > > file, you'll see an empty string for the last
field
> listed:
> > > > > > > > > >
> > > > > > > > > > obs = { field = [
> > > > > > > > > >    ... {name = "dptd";level = ["P988-1006"];},
> > > > > > > > > >    {name = "";level = ["P1007-1013"];} ];
> > > > > > > > > > If you change that empty string to "dptd", the segfault will go away:
> > > > > > > > > >    {name = "dptd";level = ["P1007-1013"];}
> > > > > > > > > > Rerunning met-8.0 with that change, Point-Stat ran
to
> > > > completion
> > > > > > (in
> > > > > > > 2
> > > > > > > > > > minutes 48 seconds on my desktop machine), but it
> produced
> > 0
> > > > > > matched
> > > > > > > > > > pairs. They were discarded because of the valid
times
> > (seen
> > > > > using
> > > > > > > -v 3
> > > > > > > > > > command line option to Point-Stat). The ob file
you sent
> > is
> > > > > named
> > > > > > "
> > > > > > > > > > raob_2015020412.nc" but the actual times in that
file
> are
> > > for
> > > > > > > > > > "20190426_120000":
> > > > > > > > > >
> > > > > > > > > > ncdump -v hdr_vld_table raob_2015020412.nc
> > > > > > > > > >
> > > > > > > > > >   hdr_vld_table = "20190426_120000" ;
> > > > > > > > > >
> > > > > > > > > > So please be aware of that discrepancy. To just
produce
> > some
> > > > > > matched
> > > > > > > > > > pairs, I told Point-Stat to use the valid times of
the
> > data:
> > > > > > > > > > met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc relhumConfig \
> > > > > > > > > >   -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000 \
> > > > > > > > > >   -obs_valid_end 20190426_120000
> > > > > > > > > >
> > > > > > > > > > But I still get 0 matched pairs. This time, it's
because
> > of
> > > > bad
> > > > > > > > forecast
> > > > > > > > > > values:
> > > > > > > > > > *DEBUG 3: Rejected: bad fcst value = 55*
> > > > > > > > > >
> > > > > > > > > > Taking a step back... let's run one of these
fields
> through
> > > > > > > > > > plot_data_plane, which results in an error:
> > > > > > > > > > met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps \
> > > > > > > > > >   'name="./read_NRL_binary.py ./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'
> > > > > > > > > > ERROR : DataPlane::two_to_one() -> range check error: (Nx, Ny) = (97, 97), (x, y) = (97, 0)
> > > > > > > > > >
> > > > > > > > > > While the numpy object is 97x97, the grid is specified as being
> > > > > > > > > > 118x118 in the python script ('nx': 118, 'ny': 118).
> > > > > > > > > >
> > > > > > > > > > Just to get something working, I modified the nx
and ny
> in
> > > the
> > > > > > python
> > > > > > > > > > script:
> > > > > > > > > > 'nx':97,
> > > > > > > > > > 'ny':97,
> > > > > > > > > > Rerunning again, I still didn't get any matched
pairs.
> > > > > > > > > >
> > > > > > > > > > So I'd suggest...
> > > > > > > > > > - Fix the typo in the config file.
> > > > > > > > > > - Figure out the discrepancy between the obs file
name
> > > > timestamp
> > > > > > and
> > > > > > > > the
> > > > > > > > > > data in that file.
> > > > > > > > > > - Make sure the grid information is consistent
with the
> > data
> > > in
> > > > > the
> > > > > > > > > python
> > > > > > > > > > script.
> > > > > > > > > >
> > > > > > > > > > Obviously though, we don't want to code to be
segfaulting
> > in
> > > > any
> > > > > > > > > > condition. So next, I tested using met-8.1 with
that
> empty
> > > > > string.
> > > > > > > > This
> > > > > > > > > > time it does run with no segfault, but prints a
warning
> > about
> > > > the
> > > > > > > empty
> > > > > > > > > > string.
> > > > > > > > > >
> > > > > > > > > > Hope that helps.
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > John
> > > > > > > > > >
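One way to avoid the nx/ny mismatch described above is to derive the grid dimensions from the data array itself instead of hard-coding them. A minimal sketch (the array here is synthetic, and the 'nx'/'ny' keys are simply the ones mentioned in this thread, not a complete description of what the embedding script returns):

    import numpy as np

    # stand-in for the field that read_NRL_binary.py reads from a *_fcstfld flat file
    met_data = np.zeros((97, 97), dtype=np.float64)

    # take nx/ny from the array so the grid description can never disagree with the data
    ny, nx = met_data.shape
    grid_info = {"nx": nx, "ny": ny}  # merged into whatever grid dict the script builds
    assert met_data.shape == (grid_info["ny"], grid_info["nx"])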
> > > > > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin
via RT <
> > > > > > > > > met_help at ucar.edu>
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > <URL:
> > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Hey John,
> > > > > > > > > > >
> > > > > > > > > > > Ive put my data in tsu_data_20190815/ under
met_help.
> > > > > > > > > > >
> > > > > > > > > > > I am running met-8.0/met-8.0-with-grib2-support
and
> have
> > > > > > provided
> > > > > > > > > > > everything
> > > > > > > > > > > on that list you've provided me. Let me know if
you're
> > > able
> > > > to
> > > > > > > > > replicate
> > > > > > > > > > > it
> > > > > > > > > > >
> > > > > > > > > > > Justin
> > > > > > > > > > >
> > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > From: John Halley Gotway via RT [mailto:
> > met_help at ucar.edu]
> > > > > > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat
seg
> > > > faulting
> > > > > > > > > > >
> > > > > > > > > > > Justin,
> > > > > > > > > > >
> > > > > > > > > > > Well that doesn't seem to be very helpful of
Point-Stat
> > at
> > > > all.
> > > > > > > > There
> > > > > > > > > > > isn't much jumping out at me from the log
messages you
> > > sent.
> > > > > In
> > > > > > > > fact,
> > > > > > > > > I
> > > > > > > > > > > hunted around for the DEBUG(7) log message but
couldn't
> > > find
> > > > > > where
> > > > > > > in
> > > > > > > > > the
> > > > > > > > > > > code it's being written. Are you able to send
me some
> > > sample
> > > > > > data
> > > > > > > to
> > > > > > > > > > > replicate this behavior?
> > > > > > > > > > >
> > > > > > > > > > > I'd need to know...
> > > > > > > > > > > - What version of MET are you running.
> > > > > > > > > > > - A copy of your Point-Stat config file.
> > > > > > > > > > > - The python script that you're running.
> > > > > > > > > > > - The input file for that python script.
> > > > > > > > > > > - The NetCDF point observation file you're
passing to
> > > > > Point-Stat.
> > > > > > > > > > >
> > > > > > > > > > > If I can replicate the behavior here, it should
be easy
> > to
> > > > run
> > > > > it
> > > > > > > in
> > > > > > > > > the
> > > > > > > > > > > debugger and figure it out.
> > > > > > > > > > >
> > > > > > > > > > > You can post data to our anonymous ftp site as
> described
> > in
> > > > > "How
> > > > > > to
> > > > > > > > > send
> > > > > > > > > > us
> > > > > > > > > > > data":
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk
> > > > > > > > > > >
> > > > > > > > > > > Thanks,
> > > > > > > > > > > John
> > > > > > > > > > >
> > > > > > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin
via RT
> <
> > > > > > > > > > met_help at ucar.edu>
> > > > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was
acted
> upon.
> > > > > > > > > > > > Transaction: Ticket created by
> > > justin.tsu at nrlmry.navy.mil
> > > > > > > > > > > > Queue: met_help
> > > > > > > > > > > > Subject: point_stat seg faulting
> > > > > > > > > > > > Owner: Nobody
> > > > > > > > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > > > > > > Status: new
> > > > > > > > > > > > Ticket <URL:
> > > > > > > > >
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Hey John,
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > I'm trying to extrapolate the production of
vertical
> > raob
> > > > > > > > > verification
> > > > > > > > > > > > plots
> > > > > > > > > > > > using point_stat and stat_analysis like we did
> together
> > > for
> > > > > > winds
> > > > > > > > but
> > > > > > > > > > for
> > > > > > > > > > > > relative humidity now. But when I run
point_stat, it
> > seg
> > > > > > faults
> > > > > > > > > > without
> > > > > > > > > > > > much explanation
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > > > ----
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2: For relhum/pre_001013 found 1
forecast
> > levels, 0
> > > > > > > > climatology
> > > > > > > > > > > mean
> > > > > > > > > > > > levels, and 0 climatology standard deviation
levels.
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > > > ----
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2: Searching 4680328 observations from
617
> > > messages.
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 7: tbl dims: messge_type: 1 station
id:
> 617
> > > > > > > > valid_time: 1
> > > > > > > > > > > >
> > > > > > > > > > > > run_stats.sh: line 26: 40818 Segmentation
fault
> > > > > point_stat
> > > > > > > > > > > > PYTHON_NUMPY
> > > > > > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir
./out/point_stat
> -log
> > > > > > > > > > > > ./out/point_stat.log
> > > > > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end
20200101
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > From my log file:
> > > > > > > > > > > >
> > > > > > > > > > > > 607 DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > > 608 DEBUG 2: Searching 4680328 observations
from 617
> > > > > messages.
> > > > > > > > > > > >
> > > > > > > > > > > > 609 DEBUG 7: tbl dims: messge_type: 1
station
> id:
> > > 617
> > > > > > > > > > valid_time: 1
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Any help would be much appreciated
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Justin
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Justin Tsu
> > > > > > > > > > > >
> > > > > > > > > > > > Marine Meteorology Division
> > > > > > > > > > > >
> > > > > > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > > > > > >
> > > > > > > > > > > > Building 704 Room 212
> > > > > > > > > > > >
> > > > > > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > > > > > >
> > > > > > > > > > > > 7 Grace Hopper Avenue
> > > > > > > > > > > >
> > > > > > > > > > > > Monterey, CA 93943-5502
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Ph. (831) 656-4111
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Thu Oct 17 15:02:39 2019
Hey John,
[tsu at maury2 02_INNOVATION]$ echo $MET_BASE
[tsu at maury2 02_INNOVATION]$ point_stat --version
DEBUG 1: Reading user-defined grib1 MET_GRIB_TABLES file:
/users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
MET Version: V8.1
Repository: https://svn-met-dev.cgd.ucar.edu/tags/met/met-8.1
Revision: 6381
Change Date: 2019-05-03 17:12:28 -0600 (Fri, 03 May 2019)
Yeah, I forgot why I changed the ob file name to have a different date.
I guess I just wanted to see whether point_stat derived the date from
the file name or from the internal data. I am surprised you didn't
receive the data. I put it in the irap directory, named data.tar.
Justin
-----Original Message-----
From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
Sent: Thursday, October 17, 2019 12:55 PM
To: Tsu, Mr. Justin
Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
Justin,
Thanks for sending the sample data. I ran into a few issues, but
worked
around them.
1. I didn't have any of your data files (
dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld).
So I just used a sample temperature data file instead (
trpres_sfc_0000.0_0000.0_glob360x181...).
2. There is a mismatch between the name of the point observation file
you sent and its contents. The file raob_2015020412.nc actually
contains data for 20190426_12:
ncdump -v hdr_vld_table raob_2015020412.nc
hdr_vld_table =
"20190426_120000" ;
So I just changed the "trpres_sfc" file name to use the
20190426_120000
timestamp to get matches.
And instead of trying to process 38 fields, I just did one.
But running met-8.1.1, it all ran fine without error. I got 111
matched
pairs. Of course they're bogus because the data types and times don't
match up, but the code is successfully producing matches.
So I'm not able to replicate the problems you're having. In fact, I
didn't
even need to set the MET_GRIB_TABLES environment variable. I ran met-
8.1.1
through the debugger and it doesn't even step into the
VarInfoGrib::add_grib_code() function which is producing the error.
Hmmm, can you please run "point_stat --version" and tell me what it
says?
Also, please check to see if you have the MET_BASE environment
variable
set. If you do, please try unsetting it.
Thanks,
John
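
For what it's worth, that check can also be scripted. A small sketch (assuming the netCDF4 Python module is available and that hdr_vld_table is stored as a character array, as the ncdump output above suggests):

    from netCDF4 import Dataset, chartostring

    # print the valid times stored in the point observation file, mirroring
    # "ncdump -v hdr_vld_table raob_2015020412.nc"
    with Dataset("raob_2015020412.nc") as nc:
        print(chartostring(nc.variables["hdr_vld_table"][:]))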
On Thu, Oct 17, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> John,
>
> Sounds good. Ive put the data on the ftp. This is the same exact
data
> that I worked with you on before (when we were using MET 8.0).
Point stat
> has worked on this data previously but I guess with the new GRIB
> conventions and new MET code (using MET 8.1A now), things have
broken.
>
> Justin
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Thursday, October 17, 2019 11:51 AM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> It looks like that change in setting MET_GRIB_TABLES did fix the
immediate
> problem:
> ERROR : get_filenames_from_dir() -> can't stat
> "/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
>
> Now, we just need to get the GRIB table lookup working as expected.
> Perhaps it'd be more efficient for you to send me sample data so I
can
> replicate the problem here and then debug it. You could post data
to our
> ftp site following these instructions:
>
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk#ftp
>
> I'd need the input files for Point-Stat (forecast file or python
embedding
> script/data, NetCDF observation file, Point-Stat config file, and
your
> custom GRIB table (grib1_nrl_v2_2.txt).
>
> As for why GRIB would be involved... in earlier versions of MET, we
> interpreted point data using the GRIB1 conventions. We have since
shifted
> away from that and process point observation variables by their
name,
> rather than referring the GRIB1 conventions. But that could explain
why a
> GRIB table lookup is being performed.
>
> Thanks,
> John
>
> On Thu, Oct 17, 2019 at 11:34 AM Tsu, Mr. Justin via RT
<met_help at ucar.edu
> >
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > Unfortunately this did not fix it
> >
> > [tsu at maury2 01_POINT_STAT_WORK]$ echo $MET_GRIB_TABLES
> > /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
> >
> > DEBUG 1: Reading user-defined grib1 MET_GRIB_TABLES file:
> > /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
> > DEBUG 1: Default Config File:
> >
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> > DEBUG 1: User Config File: dwptdpConfig
> > ERROR :
> > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> > abbreviation 'dptd' for table version 2
> > ERROR :
> >
> > Could it be an issue between GRIB 1 and GRIB 2? What about the
fact that
> I
> > am using netCDF as my input data format?
> >
> > Justin
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Thursday, October 17, 2019 8:26 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > When MET_GRIB_TABLES is set to a directory, MET tries to process
all
> files
> > in that directory. Please try to instead set it explicitly to
your
> single
> > filename:
> >
> > setenv MET_GRIB_TABLES `pwd`/grib1_nrl_v2_2.txt
> > ... or ...
> > export MET_GRIB_TABLES=`pwd`/grib1_nrl_v2_2.txt
> >
> > Does that work any better?
> >
> > Thanks,
> > John
> >
> > On Wed, Oct 16, 2019 at 6:20 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu>
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Hi John,
> > >
> > > I also created my own grib table file named grib1_nrl_v2_2.txt
and
> added
> > > the following:
> > >
> > > [tsu at maury2 01_POINT_STAT_WORK]$ tail -5 grib1_nrl_v2_2.txt
> > > 256 128 98 -1 "wdir" "NRL WIND DIRECTION"
> > > 256 128 98 -1 "t" "NRL TEMPERATURE"
> > > 256 128 98 -1 "dptd" "NRL DEWPOINT DEPRESSION"
> > > 256 128 98 -1 "pres" "NRL PRESSURE"
> > > 256 128 98 -1 "ght" "NRL GEOPOTENTIAL"
> > >
> > > Which are the names of the variables I am using in my netcdf
file.
> > > Setting export MET_GRIB_TABLES=$(pwd) then running point_stat I
get:
> > >
> > > ERROR :
> > > ERROR : get_filenames_from_dir() -> can't stat
> > > "/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
> > > ERROR :
> > >
> > > Justin
> > >
> > >
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Wednesday, October 2, 2019 11:14 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > This means that you're requesting a variable named "dptd" in the
> > Point-Stat
> > > config file. MET looks for a definition of that string in it's
default
> > > GRIB1 tables:
> > > grep dptd met-8.1/share/met/table_files/*
> > >
> > > But that returns 0 matches. So this error message is telling
you that
> > MET
> > > doesn't know how to interpret that variable name.
> > >
> > > Here's what I'd suggest:
> > > (1) Run the input GRIB1 file through the "wgrib" utility. If
"wgrib"
> > knows
> > > about this variable, it will report the name... and most likely,
that's
> > the
> > > same name that MET will know. If so, switch from using "dptd" to using
> > > whatever name wgrib reports.
> > >
> > > (2) If "wgrib" does NOT know about this variable, it'll just
list out
> the
> > > corresponding GRIB1 codes instead. That means we'll need to go
create
> a
> > > small GRIB table to define these strings. Take a look in:
> > > met-8.1/share/met/table_files
> > >
> > > We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt"
where
> > > CENTER is the number encoded in your GRIB file to define NRL and
PTV is
> > the
> > > parameter table version number used in your GRIB file. In that,
you'll
> > > define the mapping of GRIB1 codes to strings (like "dpdt"). And
for
> now,
> > > we'll need to set the "MET_GRIB_TABLES" environment variable to
the
> > > location of that file. But in the long run, you can send me
that file,
> > and
> > > we'll add it to "table_files" directory to be included in the
next
> > release
> > > of MET.
> > >
> > > If you have trouble creating a new GRIB table file, just let me
know
> and
> > > send me a sample GRIB file.
> > >
> > > Thanks,
> > > John
> > >
> > >
> > > On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Hi John,
> > > >
> > > > Apologies for taking such a long time getting back to you.
End of
> > fiscal
> > > > year things have consumed much of my time and I have not had
much
> time
> > to
> > > > work on any of this.
> > > >
> > > > Before proceeding to the planning process of determining how
to call
> > > > point_stat to deal with the vertical levels, I need to fix
what is
> > going
> > > on
> > > > with my GRIB1 variables. When I run point_stat, I keep
getting this
> > > error:
> > > >
> > > > DEBUG 1: Default Config File:
> > > >
> > >
> >
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> > > > DEBUG 1: User Config File: dwptdpConfig
> > > > ERROR :
> > > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > > abbreviation 'dptd' for table version 2
> > > > ERROR :
> > > >
> > > > I remember getting this before but don't remember how we fixed
it.
> > > > I am using met-8.1/met-8.1a-with-grib2-support
> > > >
> > > > Justin
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Friday, September 13, 2019 3:46 PM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > Sorry for the delay. I was in DC on travel this week until
today.
> > > >
> > > > It's really up to you how you'd like to configure it. Unless
it's
> too
> > > > unwieldy, I do think I'd try verifying all levels at once in a
single
> > > call
> > > > to Point-Stat. All those observations are contained in the
same
> point
> > > > observation file. If you verify each level in a separate call
to
> > > > Point-Stat, you'll be looping through and processing those obs
many,
> > many
> > > > times, which will be relatively slow. From a processing
perspective,
> > > it'd
> > > > be more efficient to process them all at once, in a single
call to
> > > > Point-Stat.
> > > >
> > > > But you balance runtime efficiency versus ease of scripting
and
> > > > configuration. And that's why it's up to you to decide which
you
> > prefer.
> > > >
> > > > Hope that helps.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > > On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu
> > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Hey John,
> > > > >
> > > > > That makes sense. The way that I've set up my config file
is as
> > > follows:
> > > > > fcst = {
> > > > > field = [
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> > > > > ];
> > > > > }
> > > > > obs = {
> > > > > field = [
> > > > > {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> > > > > ];
> > > > > }
> > > > > message_type = [ "${MSG_TYPE}" ];
> > > > >
> > > > > The environmental variables I'm setting in the wrapper
script are
> > LEV,
> > > > > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way,
it
> seems
> > > > like I
> > > > > will only be able to run point_Stat for a single elevation
and a
> > single
> > > > > lead time. Do you recommend this? Or Should I put all the
> elevations
> > > > for a
> > > > > single lead time in one pass of point_stat?
> > > > >
> > > > > So my config file will look like something like this...
> > > > > fcst = {
> > > > > field = [
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > ... etc.
> > > > > ];
> > > > > }
> > > > >
> > > > > Also, I am not sure what happened by when I run point_stat
now I am
> > > > > getting that error
> > > > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > > > abbreviation 'dptd' for table version 2
> > > > > Again. This makes me think that the obs_var name is wrong,
but
> > ncdump
> > > > -v
> > > > > obs_var raob_*.nc gives me obs_var =
> > > > > "ws",
> > > > > "wdir",
> > > > > "t",
> > > > > "dptd",
> > > > > "pres",
> > > > > "ght" ;
> > > > > So clearly dptd exists.
> > > > >
> > > > > Justin
> > > > >
> > > > >
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, September 6, 2019 1:40 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Here's a sample Point-Stat output file name:
> > > > > point_stat_360000L_20070331_120000V.stat
> > > > >
> > > > > The "360000L" indicates that this is output for a 36-hour
forecast.
> > > And
> > > > > the "20070331_120000V" timestamp is the valid time.
> > > > >
> > > > > If you run Point-Stat once for each forecast lead time, the
> > timestamps
> > > > > should be different and they should not clobber eachother.
> > > > >
> > > > > But let's say you don't want to run Point-Stat or Grid-Stat
> multiple
> > > > times
> > > > > with the same timing info. The "output_prefix" config file
entry
> is
> > > used
> > > > > to customize the output file names to prevent them from
clobbering
> > > > > eachother. For example, setting:
> > > > > output_prefix="RUN1";
> > > > > Would result in files named "
> > > > > point_stat_RUN1_360000L_20070331_120000V.stat".
> > > > >
> > > > > Make sense?
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu
> > > > >
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Invoking point_stat multiple times will create and replace
the
> old
> > > _cnt
> > > > > > and _sl1l2 files right? At that point, I'll have a bunch
of CNT
> > and
> > > > > SL1L2
> > > > > > files and then use stat_analysis to aggregate them?
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, September 6, 2019 1:11 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Yes, that is a long list of fields, but I don't see an obvious way of
> > > > > > shortening that. But to do multiple lead times, I'd just call Point-Stat
> > > > > > multiple times, once for each lead time, and update the config file to
> > > > > > use environment variables for the current time:
> > > > > > environment variables for the current time:
> > > > > >
> > > > > > fcst = {
> > > > > > field = [
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";},
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";},
> > > > > > ...
> > > > > >
> > > > > > Where the calling script sets the ${INIT_TIME} and ${FCST_HR} environment variables.
> > > > > >
> > > > > > John
> > > > > >
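A minimal wrapper sketch of that approach (the observation file, config name, and lead-time strings below are placeholders borrowed from elsewhere in this thread): it exports INIT_TIME and FCST_HR before each Point-Stat call so that the ${INIT_TIME} and ${FCST_HR} references in the config file resolve.

    import os
    import subprocess

    init_time = "2015080106"
    fcst_hrs = ["00060000", "00120000", "00180000"]  # lead times in the file-name style above

    for fcst_hr in fcst_hrs:
        env = dict(os.environ, INIT_TIME=init_time, FCST_HR=fcst_hr)
        subprocess.run(
            ["point_stat", "PYTHON_NUMPY", "raob_2015020412.nc", "dwptdpConfig",
             "-outdir", "./out/point_stat", "-v", "3"],
            env=env, check=True)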
> > > > > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Thanks John,
> > > > > > >
> > > > > > > I managed to scrap together some code to get RAOB stats
from
> CNT
> > > > > plotted
> > > > > > > with 95% CI. Working on Surface stats now.
> > > > > > >
> > > > > > > So my configuration file looks like this right now:
> > > > > > >
> > > > > > > fcst = {
> > > > > > > field = [
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > > > > > ];
> > > > > > > }
> > > > > > >
> > > > > > > obs = {
> > > > > > > field = [
> > > > > > > {name = "dptd";level = ["P0.86-1.5"];},
> > > > > > > {name = "dptd";level = ["P1.6-2.5"];},
> > > > > > > {name = "dptd";level = ["P2.6-3.5"];},
> > > > > > > {name = "dptd";level = ["P3.6-4.5"];},
> > > > > > > {name = "dptd";level = ["P4.6-6"];},
> > > > > > > {name = "dptd";level = ["P6.1-8"];},
> > > > > > > {name = "dptd";level = ["P9-15"];},
> > > > > > > {name = "dptd";level = ["P16-25"];},
> > > > > > > {name = "dptd";level = ["P26-40"];},
> > > > > > > {name = "dptd";level = ["P41-65"];},
> > > > > > > {name = "dptd";level = ["P66-85"];},
> > > > > > > {name = "dptd";level = ["P86-125"];},
> > > > > > > {name = "dptd";level = ["P126-175"];},
> > > > > > > {name = "dptd";level = ["P176-225"];},
> > > > > > > {name = "dptd";level = ["P226-275"];},
> > > > > > > {name = "dptd";level = ["P276-325"];},
> > > > > > > {name = "dptd";level = ["P326-375"];},
> > > > > > > {name = "dptd";level = ["P376-425"];},
> > > > > > > {name = "dptd";level = ["P426-475"];},
> > > > > > > {name = "dptd";level = ["P476-525"];},
> > > > > > > {name = "dptd";level = ["P526-575"];},
> > > > > > > {name = "dptd";level = ["P576-625"];},
> > > > > > > {name = "dptd";level = ["P626-675"];},
> > > > > > > {name = "dptd";level = ["P676-725"];},
> > > > > > > {name = "dptd";level = ["P726-775"];},
> > > > > > > {name = "dptd";level = ["P776-825"];},
> > > > > > > {name = "dptd";level = ["P826-875"];},
> > > > > > > {name = "dptd";level = ["P876-912"];},
> > > > > > > {name = "dptd";level = ["P913-936"];},
> > > > > > > {name = "dptd";level = ["P937-962"];},
> > > > > > > {name = "dptd";level = ["P963-987"];},
> > > > > > > {name = "dptd";level = ["P988-1006"];},
> > > > > > > {name = "dptd";level = ["P1007-1013"];}
> > > > > > >
> > > > > > > And I have the data:
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > > > > >
> > > > > > > for a particular DTG and vertical level. If I want to run multiple lead
> > > > > > > times, it seems like I'll have to copy that long list of fields for each
> > > > > > > lead time in the fcst dict and then duplicate the obs dictionary so that
> > > > > > > each forecast entry has a corresponding obs level matching range. Is this
> > > > > > > correct or is there a shorter/better way to do this?
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > I see that you're plotting RMSE and bias (called ME for Mean Error in MET)
> > > > > > > in the plots you sent.
> > > > > > >
> > > > > > > Table 7.6 of the MET User's Guide (
> > > > > > > https://dtcenter.org/sites/default/files/community-code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > > > > )
> > > > > > > describes the contents of the CNT line type. Both the columns for RMSE
> > > > > > > and ME are followed by _NCL and _NCU columns which give the parametric
> > > > > > > approximation of the confidence interval for those scores. So yes, you
> > > > > > > can run Stat-Analysis to aggregate SL1L2 lines together and write the
> > > > > > > corresponding CNT output line type.
> > > > > > >
> > > > > > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper parametric
> > > > > > > confidence intervals for the RMSE statistic, and the ME_NCL and ME_NCU
> > > > > > > columns do the same for the ME statistic.
> > > > > > >
> > > > > > > You can change the alpha value for those confidence intervals by setting
> > > > > > > -out_alpha 0.01 (for a 99% CI) or -out_alpha 0.05 (for a 95% CI).
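As a concrete illustration (the input path and output file name are placeholders), an aggregation job that also requests 95% intervals on the recomputed CNT statistics might look like:

stat_analysis -lookin /path/to/stat/data -job aggregate_stat -line_type SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_alpha 0.05 -out_stat cnt_95ci.stat

The CNT lines written by that job would then carry RMSE_NCL/RMSE_NCU and ME_NCL/ME_NCU bounds at the 95% level.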
> > > > > > >
> > > > > > > Thanks,
> > > > > > > John
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <
> > > > > > met_help at ucar.edu>
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > Thanks John,
> > > > > > > >
> > > > > > > > This all helps me greatly. One more question: is there any information
> > > > > > > > in either the CNT or SL1L2 that could give me confidence intervals for
> > > > > > > > each data point? I'm looking to replicate the attached plot. Notice that
> > > > > > > > the individual points could have either a 99, 95 or 90% confidence.
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> faulting
> > > > > > > >
> > > > > > > > Justin,
> > > > > > > >
> > > > > > > > Sounds about right. Each time you run Grid-Stat or Point-Stat you can
> > > > > > > > write the CNT output line type, which contains stats like MSE, ME, MAE,
> > > > > > > > and RMSE. And I recommend that you also write the SL1L2 line type as well.
> > > > > > > >
> > > > > > > > Then you'd run a stat_analysis job like this:
> > > > > > > >
> > > > > > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat -line_type SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat cnt_out.stat
> > > > > > > >
> > > > > > > > This job reads any .stat files it finds in "/path/to/stat/data", reads the
> > > > > > > > SL1L2 line type, and for each unique combination of the FCST_VAR, FCST_LEV,
> > > > > > > > and FCST_LEAD columns, it'll aggregate those SL1L2 partial sums together
> > > > > > > > and write out the corresponding CNT line type to the output file named
> > > > > > > > cnt_out.stat.
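Once CNT output like this exists, plotting the scores with their confidence bounds is straightforward. A minimal sketch, assuming pandas and matplotlib are available and reading one of the whitespace-delimited, header-named Point-Stat *_cnt.txt files (the file name below is hypothetical; the ME, ME_NCL, ME_NCU, and FCST_LEV column names come from the CNT line type):

# Sketch only: plot ME with its parametric confidence bounds per vertical level.
import pandas as pd
import matplotlib.pyplot as plt

cnt = pd.read_csv("out/point_stat/point_stat_180000L_20150801_120000V_cnt.txt",
                  delim_whitespace=True)
# If several thresholds/interp methods are present, filter first so each
# level appears once.
levels = cnt["FCST_LEV"]
me     = cnt["ME"]
lo, hi = cnt["ME_NCL"], cnt["ME_NCU"]

plt.errorbar(me, range(len(levels)), xerr=[me - lo, hi - me], fmt="o")
plt.yticks(range(len(levels)), levels)
plt.xlabel("ME (bias)")
plt.ylabel("Pressure layer")
plt.tight_layout()
plt.savefig("me_profile.png")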
> > > > > > > >
> > > > > > > > John
> > > > > > > >
> > > > > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via
RT <
> > > > > > > met_help at ucar.edu
> > > > > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > <URL:
> > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > >
> > > > > > > > >
> > > > > > > > > So if I understand what you're saying correctly, if I wanted an average
> > > > > > > > > of 24 hour forecasts over a month long run, then I would use the SL1L2
> > > > > > > > > output to aggregate and produce this average? Whereas if I used CNT, this
> > > > > > > > > would just provide me ~30 individual (per day over a month) 24 hour
> > > > > > > > > forecast verifications?
> > > > > > > > >
> > > > > > > > > On a side note, did we ever go over how to plot the SL1L2 MSE and
> > > > > > > > > biases? I am forgetting if we used stat_analysis to produce a plot or if
> > > > > > > > > the plot you showed me was just something you guys post-processed using
> > > > > > > > > python or whatnot.
> > > > > > > > >
> > > > > > > > > Justin
> > > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
> > faulting
> > > > > > > > >
> > > > > > > > > Justin,
> > > > > > > > >
> > > > > > > > > We wrote the SL1L2 partial sums from Point-Stat because they can be
> > > > > > > > > aggregated together by the stat-analysis tool over multiple days or cases.
> > > > > > > > >
> > > > > > > > > If you're interested in continuous statistics from Point-Stat, I'd
> > > > > > > > > recommend writing the CNT line type (which has the stats computed for
> > > > > > > > > that single run) and the SL1L2 line type (so that you can aggregate them
> > > > > > > > > together in stat-analysis or METviewer).
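In the Point-Stat config file, writing both line types amounts to requesting them in the output_flag dictionary, assuming the standard output_flag mechanism; a fragment only, with the other entries left at their defaults:

   output_flag = {
      ...
      cnt   = BOTH;
      sl1l2 = BOTH;
      ...
   }

Here STAT writes the lines only to the .stat file, while BOTH also writes the per-line-type .txt files.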
> > > > > > > > >
> > > > > > > > > The other alternative is looking at the average of the daily statistics
> > > > > > > > > scores. For RMSE, the average of the daily RMSE is equal to the aggregated
> > > > > > > > > score... as long as the number of matched pairs remains constant day to
> > > > > > > > > day. But if today you have 98 matched pairs and tomorrow you have 105,
> > > > > > > > > then tomorrow's score will have slightly more weight. The SL1L2 lines are
> > > > > > > > > aggregated as weighted averages, where the TOTAL column is the weight. And
> > > > > > > > > then stats (like RMSE and MSE) are recomputed from those aggregated sums.
> > > > > > > > > Generally, the statisticians recommend this method over the mean of the
> > > > > > > > > daily scores. Neither is "wrong", they just give you slightly different
> > > > > > > > > information.
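A small numerical sketch of that weighting (the partial-sum values below are made up) shows why the two answers differ when the pair counts change:

# Made-up SL1L2 partial sums for two days; TOTAL is the number of pairs.
import math

days = [
    {"TOTAL": 98,  "FOBAR": 1.1, "FFBAR": 1.6, "OOBAR": 1.2},
    {"TOTAL": 105, "FOBAR": 0.9, "FFBAR": 1.4, "OOBAR": 1.5},
]

def rmse(s):
    # MSE = mean(f^2) - 2*mean(f*o) + mean(o^2) = FFBAR - 2*FOBAR + OOBAR
    return math.sqrt(s["FFBAR"] - 2.0 * s["FOBAR"] + s["OOBAR"])

# Mean of the daily RMSE values: each day counts equally.
mean_of_daily = sum(rmse(d) for d in days) / len(days)

# Aggregated RMSE: average the partial sums first, weighted by TOTAL,
# then recompute the statistic from the combined sums.
n = sum(d["TOTAL"] for d in days)
agg = {k: sum(d[k] * d["TOTAL"] for d in days) / n
       for k in ("FOBAR", "FFBAR", "OOBAR")}

print(mean_of_daily)   # ~0.91
print(rmse(agg))       # ~0.93 -- day 2 gets slightly more weight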
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > John
> > > > > > > > >
> > > > > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via
RT <
> > > > > > > > met_help at ucar.edu>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > <URL:
> > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks John.
> > > > > > > > > >
> > > > > > > > > > Sorry it's taken me such a long time to get to this. It's nearing the
> > > > > > > > > > end of FY19 so I have been finalizing several transition projects and
> > > > > > > > > > haven't had much time to work on MET recently. I just picked this back
> > > > > > > > > > up and have loaded a couple new modules. Here is what I have to work
> > > > > > > > > > with now:
> > > > > > > > > >
> > > > > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Running
> > > > > > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3 -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > > > > > > > >
> > > > > > > > > > I get many matched pairs. Here is a sample of what the log file looks
> > > > > > > > > > like for one of the pressure ranges I am verifying on:
> > > > > > > > > >
> > > > > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-376, for
> > > > > > > > > > observation type radiosonde, over region FULL, for interpolation method
> > > > > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > > > > 15258 DEBUG 3: Number of matched pairs = 98
> > > > > > > > > > 15259 DEBUG 3: Observations processed = 4680328
> > > > > > > > > > 15260 DEBUG 3: Rejected: SID exclusion = 0
> > > > > > > > > > 15261 DEBUG 3: Rejected: obs type = 3890030
> > > > > > > > > > 15262 DEBUG 3: Rejected: valid time = 0
> > > > > > > > > > 15263 DEBUG 3: Rejected: bad obs value = 0
> > > > > > > > > > 15264 DEBUG 3: Rejected: off the grid = 786506
> > > > > > > > > > 15265 DEBUG 3: Rejected: topography = 0
> > > > > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > > > > 15268 DEBUG 3: Rejected: message type = 0
> > > > > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > > > > 15271 DEBUG 3: Rejected: duplicates = 0
> > > > > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=0, observation filtering threshold >=0, and field logic UNION.
> > > > > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=5.0, observation filtering threshold >=5.0, and field logic UNION.
> > > > > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=10.0, observation filtering threshold >=10.0, and field logic UNION.
> > > > > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=0, observation filtering threshold >=0, and field logic UNION.
> > > > > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=5.0, observation filtering threshold >=5.0, and field logic UNION.
> > > > > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=10.0, observation filtering threshold >=10.0, and field logic UNION.
> > > > > > > > > > 15280 DEBUG 2:
> > > > > > > > > > 15281 DEBUG 2:
> > > > > > > > > > --------------------------------------------------------------------------------
> > > > > > > > > >
> > > > > > > > > > I am going to work on processing these point stat files to create those
> > > > > > > > > > vertical raob plots we had a discussion about. I remember us talking
> > > > > > > > > > about the partial sums file. Why did we choose to go the route of
> > > > > > > > > > producing partial sums then feeding that into series analysis to
> > > > > > > > > > generate bias and MSE? It looks like bias and MSE both exist within the
> > > > > > > > > > CNT line type (MBIAS and MSE)?
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Justin
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: John Halley Gotway via RT [mailto:
> met_help at ucar.edu]
> > > > > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat
seg
> > > faulting
> > > > > > > > > >
> > > > > > > > > > Justin,
> > > > > > > > > >
> > > > > > > > > > Great, thanks for sending me the sample data.
Yes, I was
> > > able
> > > > to
> > > > > > > > > replicate
> > > > > > > > > > the segfault. The good news is that this is
caused by a
> > > simple
> > > > > > typo
> > > > > > > > > that's
> > > > > > > > > > easy to fix. If you look in the "obs.field" entry
of the
> > > > > > > relhumConfig
> > > > > > > > > > file, you'll see an empty string for the last
field
> listed:
> > > > > > > > > >
> > > > > > > > > > obs = { field = [
> > > > > > > > > >    ...
> > > > > > > > > >    {name = "dptd";level = ["P988-1006"];},
> > > > > > > > > >    {name = "";level = ["P1007-1013"];} ];
> > > > > > > > > >
> > > > > > > > > > If you change that empty string to "dptd", the segfault will go away:
> > > > > > > > > >    {name = "dptd";level = ["P1007-1013"];}
> > > > > > > > > > Rerunning met-8.0 with that change, Point-Stat ran to completion (in
> > > > > > > > > > 2 minutes 48 seconds on my desktop machine), but it produced 0 matched
> > > > > > > > > > pairs. They were discarded because of the valid times (seen using the
> > > > > > > > > > -v 3 command line option to Point-Stat). The ob file you sent is named
> > > > > > > > > > "raob_2015020412.nc" but the actual times in that file are for
> > > > > > > > > > "20190426_120000":
> > > > > > > > > >
> > > > > > > > > > ncdump -v hdr_vld_table raob_2015020412.nc
> > > > > > > > > >   hdr_vld_table = "20190426_120000" ;
> > > > > > > > > >
> > > > > > > > > > So please be aware of that discrepancy. To just produce some matched
> > > > > > > > > > pairs, I told Point-Stat to use the valid times of the data:
> > > > > > > > > >
> > > > > > > > > > met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc relhumConfig \
> > > > > > > > > >    -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000 -obs_valid_end 20190426_120000
> > > > > > > > > >
> > > > > > > > > > But I still get 0 matched pairs. This time, it's because of bad
> > > > > > > > > > forecast values:
> > > > > > > > > > DEBUG 3: Rejected: bad fcst value = 55
> > > > > > > > > >
> > > > > > > > > > Taking a step back... let's run one of these fields through
> > > > > > > > > > plot_data_plane, which results in an error:
> > > > > > > > > >
> > > > > > > > > > met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps 'name="./read_NRL_binary.py ./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'
> > > > > > > > > >
> > > > > > > > > > ERROR : DataPlane::two_to_one() -> range check error: (Nx, Ny) = (97, 97), (x, y) = (97, 0)
> > > > > > > > > >
> > > > > > > > > > While the numpy object is 97x97, the grid is specified as being
> > > > > > > > > > 118x118 in the python script ('nx': 118, 'ny': 118).
> > > > > > > > > >
> > > > > > > > > > Just to get something working, I modified the nx and ny in the python
> > > > > > > > > > script:
> > > > > > > > > >    'nx': 97,
> > > > > > > > > >    'ny': 97,
> > > > > > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > > > > > >
> > > > > > > > > > So I'd suggest...
> > > > > > > > > > - Fix the typo in the config file.
> > > > > > > > > > - Figure out the discrepancy between the obs file name timestamp and
> > > > > > > > > >   the data in that file.
> > > > > > > > > > - Make sure the grid information is consistent with the data in the
> > > > > > > > > >   python script.
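One way to keep the grid and the data consistent is to derive the grid dimensions from the array the script actually read, rather than hard-coding them. A rough sketch, assuming the met_data/attrs convention that MET's Python embedding expects; the dtype, file layout, and the omitted attrs entries are placeholders, not a drop-in replacement for read_NRL_binary.py:

# Sketch only: read a flat binary field and report grid dimensions taken
# from the array shape, so 'nx'/'ny' can never disagree with the data.
import sys
import numpy as np

ny, nx = 97, 97                              # placeholder; ideally read from a header
raw = np.fromfile(sys.argv[1], dtype=">f4")  # placeholder dtype/byte order
met_data = raw.reshape((ny, nx)).astype(float)

attrs = {
    # 'valid', 'init', 'lead', 'name', 'level', 'units' entries omitted here
    "grid": {
        # projection entries omitted here
        "nx": met_data.shape[1],
        "ny": met_data.shape[0],
    },
}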
> > > > > > > > > >
> > > > > > > > > > Obviously though, we don't want the code to be segfaulting under any
> > > > > > > > > > condition. So next, I tested using met-8.1 with that empty string. This
> > > > > > > > > > time it does run with no segfault, but prints a warning about the empty
> > > > > > > > > > string.
> > > > > > > > > >
> > > > > > > > > > Hope that helps.
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > John
> > > > > > > > > >
> > > > > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin
via RT <
> > > > > > > > > met_help at ucar.edu>
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > <URL:
> > > > https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Hey John,
> > > > > > > > > > >
> > > > > > > > > > > Ive put my data in tsu_data_20190815/ under
met_help.
> > > > > > > > > > >
> > > > > > > > > > > I am running met-8.0/met-8.0-with-grib2-support
and
> have
> > > > > > provided
> > > > > > > > > > > everything
> > > > > > > > > > > on that list you've provided me. Let me know if
you're
> > > able
> > > > to
> > > > > > > > > replicate
> > > > > > > > > > > it
> > > > > > > > > > >
> > > > > > > > > > > Justin
> > > > > > > > > > >
> > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > From: John Halley Gotway via RT [mailto:
> > met_help at ucar.edu]
> > > > > > > > > > > Sent: Thursday, August 15, 2019 4:08 PM
> > > > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat
seg
> > > > faulting
> > > > > > > > > > >
> > > > > > > > > > > Justin,
> > > > > > > > > > >
> > > > > > > > > > > Well that doesn't seem to be very helpful of
Point-Stat
> > at
> > > > all.
> > > > > > > > There
> > > > > > > > > > > isn't much jumping out at me from the log
messages you
> > > sent.
> > > > > In
> > > > > > > > fact,
> > > > > > > > > I
> > > > > > > > > > > hunted around for the DEBUG(7) log message but
couldn't
> > > find
> > > > > > where
> > > > > > > in
> > > > > > > > > the
> > > > > > > > > > > code it's being written. Are you able to send
me some
> > > sample
> > > > > > data
> > > > > > > to
> > > > > > > > > > > replicate this behavior?
> > > > > > > > > > >
> > > > > > > > > > > I'd need to know...
> > > > > > > > > > > - What version of MET are you running.
> > > > > > > > > > > - A copy of your Point-Stat config file.
> > > > > > > > > > > - The python script that you're running.
> > > > > > > > > > > - The input file for that python script.
> > > > > > > > > > > - The NetCDF point observation file you're
passing to
> > > > > Point-Stat.
> > > > > > > > > > >
> > > > > > > > > > > If I can replicate the behavior here, it should
be easy
> > to
> > > > run
> > > > > it
> > > > > > > in
> > > > > > > > > the
> > > > > > > > > > > debugger and figure it out.
> > > > > > > > > > >
> > > > > > > > > > > You can post data to our anonymous ftp site as
> described
> > in
> > > > > "How
> > > > > > to
> > > > > > > > > send
> > > > > > > > > > us
> > > > > > > > > > > data":
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk
> > > > > > > > > > >
> > > > > > > > > > > Thanks,
> > > > > > > > > > > John
> > > > > > > > > > >
> > > > > > > > > > > On Thu, Aug 15, 2019 at 3:57 PM Tsu, Mr. Justin
via RT
> <
> > > > > > > > > > met_help at ucar.edu>
> > > > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Thu Aug 15 15:57:29 2019: Request 91544 was
acted
> upon.
> > > > > > > > > > > > Transaction: Ticket created by
> > > justin.tsu at nrlmry.navy.mil
> > > > > > > > > > > > Queue: met_help
> > > > > > > > > > > > Subject: point_stat seg faulting
> > > > > > > > > > > > Owner: Nobody
> > > > > > > > > > > > Requestors: justin.tsu at nrlmry.navy.mil
> > > > > > > > > > > > Status: new
> > > > > > > > > > > > Ticket <URL:
> > > > > > > > >
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Hey John,
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > I'm trying to extrapolate the production of
vertical
> > raob
> > > > > > > > > verification
> > > > > > > > > > > > plots
> > > > > > > > > > > > using point_stat and stat_analysis like we did
> together
> > > for
> > > > > > winds
> > > > > > > > but
> > > > > > > > > > for
> > > > > > > > > > > > relative humidity now. But when I run
point_stat, it
> > seg
> > > > > > faults
> > > > > > > > > > without
> > > > > > > > > > > > much explanation
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > > > ----
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2: Reading data for relhum/pre_001013.
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2: For relhum/pre_001013 found 1
forecast
> > levels, 0
> > > > > > > > climatology
> > > > > > > > > > > mean
> > > > > > > > > > > > levels, and 0 climatology standard deviation
levels.
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
----------------------------------------------------------------------------
> > > > > > > > > > > > ----
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 2: Searching 4680328 observations from
617
> > > messages.
> > > > > > > > > > > >
> > > > > > > > > > > > DEBUG 7: tbl dims: messge_type: 1 station
id:
> 617
> > > > > > > > valid_time: 1
> > > > > > > > > > > >
> > > > > > > > > > > > run_stats.sh: line 26: 40818 Segmentation
fault
> > > > > point_stat
> > > > > > > > > > > > PYTHON_NUMPY
> > > > > > > > > > > > ${OBFILE} ${CONFIG} -v 10 -outdir
./out/point_stat
> -log
> > > > > > > > > > > > ./out/point_stat.log
> > > > > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end
20200101
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > From my log file:
> > > > > > > > > > > >
> > > > > > > > > > > > 607 DEBUG 2:
> > > > > > > > > > > >
> > > > > > > > > > > > 608 DEBUG 2: Searching 4680328 observations
from 617
> > > > > messages.
> > > > > > > > > > > >
> > > > > > > > > > > > 609 DEBUG 7: tbl dims: messge_type: 1
station
> id:
> > > 617
> > > > > > > > > > valid_time: 1
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Any help would be much appreciated
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Justin
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Justin Tsu
> > > > > > > > > > > >
> > > > > > > > > > > > Marine Meteorology Division
> > > > > > > > > > > >
> > > > > > > > > > > > Data Assimilation/Mesoscale Modeling
> > > > > > > > > > > >
> > > > > > > > > > > > Building 704 Room 212
> > > > > > > > > > > >
> > > > > > > > > > > > Naval Research Laboratory, Code 7531
> > > > > > > > > > > >
> > > > > > > > > > > > 7 Grace Hopper Avenue
> > > > > > > > > > > >
> > > > > > > > > > > > Monterey, CA 93943-5502
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Ph. (831) 656-4111
> > > > > > > > > > > >
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Fri Oct 18 09:16:44 2019
Justin,
Thanks for sending that data. I was able to pull it down, but I'm out
of
the office on vacation today and Monday. Will take a look on Tuesday.
Thanks,
John
On Thu, Oct 17, 2019 at 3:03 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
>
> Hey John,
>
> [tsu at maury2 02_INNOVATION]$ echo $MET_BASE
>
> [tsu at maury2 02_INNOVATION]$ point_stat --version
> DEBUG 1: Reading user-defined grib1 MET_GRIB_TABLES file:
> /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
>
> MET Version: V8.1
> Repository: https://svn-met-dev.cgd.ucar.edu/tags/met/met-8.1
> Revision: 6381
> Change Date: 2019-05-03 17:12:28 -0600 (Fri, 03 May 2019)
>
> Yeah I forgot why I changed the ob file name to have a different
date. I
> guess I just wanted to see whether or not point_stat derived date
data from
> the file name or from the internal data. I am surprised you didn't
receive
> the data. I put it in the irap directory named data.tar
>
> Justin
>
> -----Original Message-----
> From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> Sent: Thursday, October 17, 2019 12:55 PM
> To: Tsu, Mr. Justin
> Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
>
> Justin,
>
> Thanks for sending the sample data. I ran into a few issues, but
worked
> around them.
>
> 1. I didn't have any of your data files
>    (dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld).
>    So I just used a sample temperature data file instead
>    (trpres_sfc_0000.0_0000.0_glob360x181...).
> 2. There is a mismatch between the name of the point observation file you
>    sent and its contents. The file raob_2015020412.nc actually contains data
>    for 20190426_12:
>
>    ncdump -v hdr_vld_table raob_2015020412.nc
>      hdr_vld_table = "20190426_120000" ;
>
> So I just changed the "trpres_sfc" file name to use the 20190426_120000
> timestamp to get matches.
> And instead of trying to process 38 fields, I just did one.
>
> But running met-8.1.1, it all ran fine without error. I got 111
matched
> pairs. Of course they're bogus because the data types and times
don't
> match up, but the code is successfully producing matches.
>
> So I'm not able to replicate the problems you're having. In fact, I
didn't
> even need to set the MET_GRIB_TABLES environment variable. I ran
met-8.1.1
> through the debugger and it doesn't even step into the
> VarInfoGrib::add_grib_code() function which is producing the error.
>
> Hmmm, can you please run "point_stat --version" and tell me what it
says?
>
> Also, please check to see if you have the MET_BASE environment
variable
> set. If you do, please try unsetting it.
>
> Thanks,
> John
>
>
> On Thu, Oct 17, 2019 at 1:02 PM Tsu, Mr. Justin via RT
<met_help at ucar.edu>
> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> >
> > John,
> >
> > Sounds good. I've put the data on the ftp. This is the same exact data
> > that I worked with you on before (when we were using MET 8.0). Point stat
> > has worked on this data previously but I guess with the new GRIB
> > conventions and new MET code (using MET 8.1A now), things have broken.
> >
> > Justin
> > -----Original Message-----
> > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > Sent: Thursday, October 17, 2019 11:51 AM
> > To: Tsu, Mr. Justin
> > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> >
> > Justin,
> >
> > It looks like that change in setting MET_GRIB_TABLES did fix the
> immediate
> > problem:
> > ERROR : get_filenames_from_dir() -> can't stat
> > "/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
> >
> > Now, we just need to get the GRIB table lookup working as
expected.
> > Perhaps it'd be more efficient for you to send me sample data so I
can
> > replicate the problem here and then debug it. You could post data
to our
> > ftp site following these instructions:
> >
> >
> https://dtcenter.org/community-code/model-evaluation-tools-met/met-
help-desk#ftp
> >
> > I'd need the input files for Point-Stat (forecast file or python
> embedding
> > script/data, NetCDF observation file, Point-Stat config file, and
your
> > custom GRIB table (grib1_nrl_v2_2.txt).
> >
> > As for why GRIB would be involved... in earlier versions of MET, we
> > interpreted point data using the GRIB1 conventions. We have since shifted
> > away from that and process point observation variables by their name,
> > rather than referring to the GRIB1 conventions. But that could explain why
> > a GRIB table lookup is being performed.
> >
> > Thanks,
> > John
> >
> > On Thu, Oct 17, 2019 at 11:34 AM Tsu, Mr. Justin via RT <
> met_help at ucar.edu
> > >
> > wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > >
> > > Unfortunately this did not fix it
> > >
> > > [tsu at maury2 01_POINT_STAT_WORK]$ echo $MET_GRIB_TABLES
> > > /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
> > >
> > > DEBUG 1: Reading user-defined grib1 MET_GRIB_TABLES file:
> > > /users/tsu/MET/work/01_POINT_STAT_WORK/grib1_nrl_v2_2.txt
> > > DEBUG 1: Default Config File:
> > >
> >
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> > > DEBUG 1: User Config File: dwptdpConfig
> > > ERROR :
> > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > abbreviation 'dptd' for table version 2
> > > ERROR :
> > >
> > > Could it be an issue between GRIB 1 and GRIB 2? What about the
fact
> that
> > I
> > > am using netCDF as my input data format?
> > >
> > > Justin
> > > -----Original Message-----
> > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > Sent: Thursday, October 17, 2019 8:26 AM
> > > To: Tsu, Mr. Justin
> > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > >
> > > Justin,
> > >
> > > When MET_GRIB_TABLES is set to a directory, MET tries to process
all
> > files
> > > in that directory. Please try to instead set it explicitly to
your
> > single
> > > filename:
> > >
> > > setenv MET_GRIB_TABLES `pwd`/grib1_nrl_v2_2.txt
> > > ... or ...
> > > export MET_GRIB_TABLES=`pwd`/grib1_nrl_v2_2.txt
> > >
> > > Does that work any better?
> > >
> > > Thanks,
> > > John
> > >
> > > On Wed, Oct 16, 2019 at 6:20 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu>
> > > wrote:
> > >
> > > >
> > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
>
> > > >
> > > > Hi John,
> > > >
> > > > I also created my own grib table file named
grib1_nrl_v2_2.txt and
> > added
> > > > the following:
> > > >
> > > > [tsu at maury2 01_POINT_STAT_WORK]$ tail -5 grib1_nrl_v2_2.txt
> > > > 256 128 98 -1 "wdir" "NRL WIND DIRECTION"
> > > > 256 128 98 -1 "t" "NRL TEMPERATURE"
> > > > 256 128 98 -1 "dptd" "NRL DEWPOINT DEPRESSION"
> > > > 256 128 98 -1 "pres" "NRL PRESSURE"
> > > > 256 128 98 -1 "ght" "NRL GEOPOTENTIAL"
> > > >
> > > > Which are the names of the variables I am using in my netcdf
file.
> > > > Setting export MET_GRIB_TABLES=$(pwd) then running point_stat
I get:
> > > >
> > > > ERROR :
> > > > ERROR : get_filenames_from_dir() -> can't stat
> > > > "/users/tsu/MET/work/01_POINT_STAT_WORK/data/data"
> > > > ERROR :
> > > >
> > > > Justin
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > Sent: Wednesday, October 2, 2019 11:14 AM
> > > > To: Tsu, Mr. Justin
> > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > >
> > > > Justin,
> > > >
> > > > This means that you're requesting a variable named "dptd" in the
> > > > Point-Stat config file. MET looks for a definition of that string in
> > > > its default GRIB1 tables:
> > > >    grep dptd met-8.1/share/met/table_files/*
> > > >
> > > > But that returns 0 matches. So this error message is telling you that
> > > > MET doesn't know how to interpret that variable name.
> > > >
> > > > Here's what I'd suggest:
> > > > (1) Run the input GRIB1 file through the "wgrib" utility. If "wgrib"
> > > > knows about this variable, it will report the name... and most likely,
> > > > that's the same name that MET will know. If so, switch from using
> > > > "dptd" to using whatever name wgrib reports.
> > > >
> > > > (2) If "wgrib" does NOT know about this variable, it'll just list out
> > > > the corresponding GRIB1 codes instead. That means we'll need to go
> > > > create a small GRIB table to define these strings. Take a look in:
> > > >    met-8.1/share/met/table_files
> > > >
> > > > We could create a new file named "grib1_nrl_{PTV}_{CENTER}.txt" where
> > > > CENTER is the number encoded in your GRIB file to define NRL and PTV is
> > > > the parameter table version number used in your GRIB file. In that,
> > > > you'll define the mapping of GRIB1 codes to strings (like "dptd"). And
> > > > for now, we'll need to set the "MET_GRIB_TABLES" environment variable
> > > > to the location of that file. But in the long run, you can send me that
> > > > file, and we'll add it to the "table_files" directory to be included in
> > > > the next release of MET.
> > > >
> > > > If you have trouble creating a new GRIB table file, just let
me know
> > and
> > > > send me a sample GRIB file.
> > > >
> > > > Thanks,
> > > > John
> > > >
> > > >
> > > > On Tue, Oct 1, 2019 at 2:34 PM Tsu, Mr. Justin via RT <
> > met_help at ucar.edu
> > > >
> > > > wrote:
> > > >
> > > > >
> > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > >
> > > > > Hi John,
> > > > >
> > > > > Apologies for taking such a long time getting back to you.
End of
> > > fiscal
> > > > > year things have consumed much of my time and I have not had
much
> > time
> > > to
> > > > > work on any of this.
> > > > >
> > > > > Before proceeding to the planning process of determining how
to
> call
> > > > > point_stat to deal with the vertical levels, I need to fix
what is
> > > going
> > > > on
> > > > > with my GRIB1 variables. When I run point_stat, I keep
getting
> this
> > > > error:
> > > > >
> > > > > DEBUG 1: Default Config File:
> > > > >
> > > >
> > >
> >
> /software/depot/met-8.1a/met-
8.1a/share/met/config/PointStatConfig_default
> > > > > DEBUG 1: User Config File: dwptdpConfig
> > > > > ERROR :
> > > > > ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1
field
> > > > > abbreviation 'dptd' for table version 2
> > > > > ERROR :
> > > > >
> > > > > I remember getting this before but don't remember how we
fixed it.
> > > > > I am using met-8.1/met-8.1a-with-grib2-support
> > > > >
> > > > > Justin
> > > > >
> > > > > -----Original Message-----
> > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > Sent: Friday, September 13, 2019 3:46 PM
> > > > > To: Tsu, Mr. Justin
> > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > >
> > > > > Justin,
> > > > >
> > > > > Sorry for the delay. I was in DC on travel this week until
today.
> > > > >
> > > > > It's really up to you how you'd like to configure it.
Unless it's
> > too
> > > > > unwieldy, I do think I'd try verifying all levels at once in
a
> single
> > > > call
> > > > > to Point-Stat. All those observations are contained in the
same
> > point
> > > > > observation file. If you verify each level in a separate
call to
> > > > > Point-Stat, you'll be looping through and processing those
obs
> many,
> > > many
> > > > > times, which will be relatively slow. From a processing
> perspective,
> > > > it'd
> > > > > be more efficient to process them all at once, in a single
call to
> > > > > Point-Stat.
> > > > >
> > > > > But you balance runtime efficiency versus ease of scripting
and
> > > > > configuration. And that's why it's up to you to decide
which you
> > > prefer.
> > > > >
> > > > > Hope that helps.
> > > > >
> > > > > Thanks,
> > > > > John
> > > > >
> > > > > On Mon, Sep 9, 2019 at 4:56 PM Tsu, Mr. Justin via RT <
> > > met_help at ucar.edu
> > > > >
> > > > > wrote:
> > > > >
> > > > > >
> > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > >
> > > > > > Hey John,
> > > > > >
> > > > > > That makes sense. The way that I've set up my config file
is as
> > > > follows:
> > > > > > fcst = {
> > > > > > field = [
> > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_${LEV}_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";}
> > > > > > ];
> > > > > > }
> > > > > > obs = {
> > > > > > field = [
> > > > > > {name = "dptd";level = ["P${LEV1}-${LEV2}"];}
> > > > > > ];
> > > > > > }
> > > > > > message_type = [ "${MSG_TYPE}" ];
> > > > > >
> > > > > > The environmental variables I'm setting in the wrapper script are LEV,
> > > > > > INIT_TIME, FCST_HR, LEV1, LEV2, and MSG_TYPE. In this way, it seems like I
> > > > > > will only be able to run point_stat for a single elevation and a single
> > > > > > lead time. Do you recommend this? Or should I put all the elevations for a
> > > > > > single lead time in one pass of point_stat?
> > > > > >
> > > > > > So my config file will look something like this...
> > > > > > fcst = {
> > > > > >    field = [
> > > > > >       {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000.10_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > >       {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000.20_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > >       {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000.40_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > >       {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000.50_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > >       {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000.60_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > >       {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}_fcstfld";},
> > > > > >       ... etc.
> > > > > > ];
> > > > > > }
> > > > > >
> > > > > > Also, I am not sure what happened, but when I run point_stat now I am
> > > > > > getting that error
> > > > > >    ERROR : VarInfoGrib::add_grib_code() -> unrecognized GRIB1 field
> > > > > >    abbreviation 'dptd' for table version 2
> > > > > > again. This makes me think that the obs_var name is wrong, but ncdump -v
> > > > > > obs_var raob_*.nc gives me
> > > > > >    obs_var =
> > > > > >       "ws",
> > > > > >       "wdir",
> > > > > >       "t",
> > > > > >       "dptd",
> > > > > >       "pres",
> > > > > >       "ght" ;
> > > > > > So clearly dptd exists.
> > > > > >
> > > > > > Justin
> > > > > >
> > > > > >
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > Sent: Friday, September 6, 2019 1:40 PM
> > > > > > To: Tsu, Mr. Justin
> > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > >
> > > > > > Justin,
> > > > > >
> > > > > > Here's a sample Point-Stat output file name:
> > > > > > point_stat_360000L_20070331_120000V.stat
> > > > > >
> > > > > > The "360000L" indicates that this is output for a 36-hour
> forecast.
> > > > And
> > > > > > the "20070331_120000V" timestamp is the valid time.
> > > > > >
> > > > > > If you run Point-Stat once for each forecast lead time, the timestamps
> > > > > > should be different and they should not clobber each other.
> > > > > >
> > > > > > But let's say you don't want to run Point-Stat or Grid-Stat multiple times
> > > > > > with the same timing info. The "output_prefix" config file entry is used
> > > > > > to customize the output file names to prevent them from clobbering each
> > > > > > other. For example, setting:
> > > > > >    output_prefix = "RUN1";
> > > > > > would result in files named
> > > > > >    "point_stat_RUN1_360000L_20070331_120000V.stat".
> > > > > >
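Because the config file expands environment variables (as the ${INIT_TIME} and ${FCST_HR} entries elsewhere in this ticket do), the prefix itself can be built from them so that each invocation writes distinct files, for example:

   output_prefix = "${INIT_TIME}_${FCST_HR}";

with one value per Point-Stat run, set by the calling script.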
> > > > > > Make sense?
> > > > > >
> > > > > > Thanks,
> > > > > > John
> > > > > >
> > > > > > On Fri, Sep 6, 2019 at 2:16 PM Tsu, Mr. Justin via RT <
> > > > met_help at ucar.edu
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> >
> > > > > > >
> > > > > > > Invoking point_stat multiple times will create and
replace the
> > old
> > > > _cnt
> > > > > > > and _sl1l2 files right? At that point, I'll have a
bunch of
> CNT
> > > and
> > > > > > SL1L2
> > > > > > > files and then use stat_analysis to aggregate them?
> > > > > > >
> > > > > > > Justin
> > > > > > >
> > > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: John Halley Gotway via RT
[mailto:met_help at ucar.edu]
> > > > > > > Sent: Friday, September 6, 2019 1:11 PM
> > > > > > > To: Tsu, Mr. Justin
> > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg
faulting
> > > > > > >
> > > > > > > Justin,
> > > > > > >
> > > > > > > Yes, that is a long list of fields, but I don't see a
way
> obvious
> > > way
> > > > > of
> > > > > > > shortening that. But to do multiple lead times, I'd
just call
> > > > > Point-Stat
> > > > > > > multiple times, once for each lead time, and update the
config
> > file
> > > > to
> > > > > > use
> > > > > > > environment variables for the current time:
> > > > > > >
> > > > > > > fcst = {
> > > > > > >    field = [
> > > > > > >       {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";},
> > > > > > >       {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";},
> > > > > > >       {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";},
> > > > > > >       {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_${INIT_TIME}_${FCST_HR}";},
> > > > > > >       ...
> > > > > > >
> > > > > > > Where the calling script sets the ${INIT_TIME} and ${FCST_HR} environment
> > > > > > > variables.
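A calling script would then just loop over the lead times, doing the equivalent of:

   export INIT_TIME=2015080106
   export FCST_HR=00180000
   point_stat PYTHON_NUMPY ${OBFILE} ${CONFIG} -outdir ./out/point_stat -v 3

once per lead time. The variable names here simply mirror the run_stats.sh invocation quoted earlier in this ticket; the loop construct itself is left to the script.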
> > > > > > >
> > > > > > > John
> > > > > > >
> > > > > > > On Fri, Sep 6, 2019 at 1:02 PM Tsu, Mr. Justin via RT <
> > > > > met_help at ucar.edu
> > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > <URL:
> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544
> > >
> > > > > > > >
> > > > > > > > Thanks John,
> > > > > > > >
> > > > > > > > I managed to scrap together some code to get RAOB
stats from
> > CNT
> > > > > > plotted
> > > > > > > > with 95% CI. Working on Surface stats now.
> > > > > > > >
> > > > > > > > So my configuration file looks like this right now:
> > > > > > > >
> > > > > > > > fcst = {
> > > > > > > > field = [
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000002_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000003_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000004_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000005_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000007_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000010_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000020_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000030_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000050_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000070_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000100_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000150_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000200_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000250_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000300_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000350_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000400_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000450_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000500_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000550_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000600_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000650_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000700_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000750_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000800_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000850_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000900_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000925_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000950_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_000975_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_001000_000000_3a0118x0118_2015080106_00180000_fcstfld";},
> > > > > > > > {name = "/users/tsu/MET/work/read_NRL_binary.py ./dwptdp_data/dwptdp_pre_001013_000000_3a0118x0118_2015080106_00180000_fcstfld";}
> > > > > > > > ];
> > > > > > > > }
> > > > > > > >
> > > > > > > > obs = {
> > > > > > > > field = [
> > > > > > > > {name = "dptd";level = ["P0.86-1.5"];},
> > > > > > > > {name = "dptd";level = ["P1.6-2.5"];},
> > > > > > > > {name = "dptd";level = ["P2.6-3.5"];},
> > > > > > > > {name = "dptd";level = ["P3.6-4.5"];},
> > > > > > > > {name = "dptd";level = ["P4.6-6"];},
> > > > > > > > {name = "dptd";level = ["P6.1-8"];},
> > > > > > > > {name = "dptd";level = ["P9-15"];},
> > > > > > > > {name = "dptd";level = ["P16-25"];},
> > > > > > > > {name = "dptd";level = ["P26-40"];},
> > > > > > > > {name = "dptd";level = ["P41-65"];},
> > > > > > > > {name = "dptd";level = ["P66-85"];},
> > > > > > > > {name = "dptd";level = ["P86-125"];},
> > > > > > > > {name = "dptd";level = ["P126-175"];},
> > > > > > > > {name = "dptd";level = ["P176-225"];},
> > > > > > > > {name = "dptd";level = ["P226-275"];},
> > > > > > > > {name = "dptd";level = ["P276-325"];},
> > > > > > > > {name = "dptd";level = ["P326-375"];},
> > > > > > > > {name = "dptd";level = ["P376-425"];},
> > > > > > > > {name = "dptd";level = ["P426-475"];},
> > > > > > > > {name = "dptd";level = ["P476-525"];},
> > > > > > > > {name = "dptd";level = ["P526-575"];},
> > > > > > > > {name = "dptd";level = ["P576-625"];},
> > > > > > > > {name = "dptd";level = ["P626-675"];},
> > > > > > > > {name = "dptd";level = ["P676-725"];},
> > > > > > > > {name = "dptd";level = ["P726-775"];},
> > > > > > > > {name = "dptd";level = ["P776-825"];},
> > > > > > > > {name = "dptd";level = ["P826-875"];},
> > > > > > > > {name = "dptd";level = ["P876-912"];},
> > > > > > > > {name = "dptd";level = ["P913-936"];},
> > > > > > > > {name = "dptd";level = ["P937-962"];},
> > > > > > > > {name = "dptd";level = ["P963-987"];},
> > > > > > > > {name = "dptd";level = ["P988-1006"];},
> > > > > > > > {name = "dptd";level = ["P1007-1013"];}
> > > > > > > >
> > > > > > > > And I have the data:
> > > > > > > >
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00000000_fcstfld
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00030000_fcstfld
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00060000_fcstfld
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00090000_fcstfld
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00120000_fcstfld
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00180000_fcstfld
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00240000_fcstfld
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00300000_fcstfld
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00360000_fcstfld
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00420000_fcstfld
> > > > > > > > dwptdp_data/dwptdp_pre_000001_000000_3a0118x0118_2015080106_00480000_fcstfld
> > > > > > > >
> > > > > > > > for a particular DTG and vertical level. If I want to run multiple lead
> > > > > > > > times, it seems like I'll have to copy that long list of fields for each
> > > > > > > > lead time in the fcst dict and then duplicate the obs dictionary so that
> > > > > > > > each forecast entry has a corresponding obs level matching range. Is this
> > > > > > > > correct or is there a shorter/better way to do this?
> > > > > > > >
> > > > > > > > Justin
> > > > > > > >
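One way to avoid copying that long field list by hand is to generate the
config text with a short script and paste the result into the Point-Stat
config. The sketch below is illustrative only: the lead-time tags, the
pairing of level tags with obs pressure ranges, and the file-name pattern
are assumptions standing in for the real dwptdp layout.

    # Illustrative sketch: build matching fcst/obs field entries for several
    # lead times instead of copying them by hand. The lead-time tags, the
    # (level tag, obs range) pairs, and the file-name pattern are assumptions.
    script = "/users/tsu/MET/work/read_NRL_binary.py"
    pattern = ("./dwptdp_data/dwptdp_pre_{tag}_000000_"
               "3a0118x0118_2015080106_{lead}_fcstfld")

    leads = ["00000000", "00060000", "00120000", "00180000"]
    levels = [("001000", "P988-1006"), ("001013", "P1007-1013")]

    fcst_entries, obs_entries = [], []
    for lead in leads:
        for tag, prange in levels:
            data_file = pattern.format(tag=tag, lead=lead)
            fcst_entries.append('{name = "%s %s";}' % (script, data_file))
            obs_entries.append('{name = "dptd";level = ["%s"];}' % prange)

    print("fcst = {\n   field = [\n      "
          + ",\n      ".join(fcst_entries) + "\n   ];\n}")
    print("obs = {\n   field = [\n      "
          + ",\n      ".join(obs_entries) + "\n   ];\n}")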
> > > > > > > > -----Original Message-----
> > > > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > > > Sent: Tuesday, September 3, 2019 8:36 AM
> > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > > > > > >
> > > > > > > > Justin,
> > > > > > > >
> > > > > > > > I see that you're plotting RMSE and bias (called ME for Mean Error in MET)
> > > > > > > > in the plots you sent.
> > > > > > > >
> > > > > > > > Table 7.6 of the MET User's Guide (
> > > > > > > > https://dtcenter.org/sites/default/files/community-code/met/docs/user-guide/MET_Users_Guide_v8.1.1.pdf
> > > > > > > > ) describes the contents of the CNT line type. Both the RMSE and ME columns
> > > > > > > > are followed by _NCL and _NCU columns which give the parametric
> > > > > > > > approximation of the confidence interval for those scores. So yes, you can
> > > > > > > > run Stat-Analysis to aggregate SL1L2 lines together and write the
> > > > > > > > corresponding CNT output line type.
> > > > > > > >
> > > > > > > > The RMSE_NCL and RMSE_NCU columns contain the lower and upper parametric
> > > > > > > > confidence intervals for the RMSE statistic, and the ME_NCL and ME_NCU
> > > > > > > > columns for the ME statistic.
> > > > > > > >
> > > > > > > > You can change the alpha value for those confidence intervals by setting
> > > > > > > > -out_alpha 0.01 (for a 99% CI) or -out_alpha 0.05 (for a 95% CI).
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > John
> > > > > > > >
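For reference, the kind of interval reported in the ME_NCL/ME_NCU columns is
a standard normal-approximation confidence interval around the mean error.
The sketch below shows that textbook calculation; it is not a copy of MET's
internal code, and the z-values are simply the usual ones for 90/95/99%
two-sided intervals.

    # Sketch: normal-approximation confidence interval for the mean error (ME).
    # Generic textbook formula, not MET's internal implementation.
    import math

    def mean_error_ci(fcst, obs, alpha=0.05):
        """Return (me, lower, upper) for the mean forecast-minus-obs error."""
        errors = [f - o for f, o in zip(fcst, obs)]
        n = len(errors)
        me = sum(errors) / n
        var = sum((e - me) ** 2 for e in errors) / (n - 1)   # sample variance
        stderr = math.sqrt(var / n)
        z = {0.10: 1.645, 0.05: 1.960, 0.01: 2.576}[alpha]   # common z-values
        return me, me - z * stderr, me + z * stderr

    print(mean_error_ci([2.1, 1.8, 2.5, 2.0], [2.0, 2.0, 2.2, 1.9]))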
> > > > > > > >
> > > > > > > > On Fri, Aug 30, 2019 at 5:11 PM Tsu, Mr. Justin via RT <met_help at ucar.edu>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > > > > >
> > > > > > > > > Thanks John,
> > > > > > > > >
> > > > > > > > > This all helps me greatly. One more question: is there any information in
> > > > > > > > > either the CNT or SL1L2 that could give me confidence intervals for each
> > > > > > > > > data point? I'm looking to replicate the attached plot. Notice that the
> > > > > > > > > individual points could have either a 99, 95 or 90% confidence.
> > > > > > > > >
> > > > > > > > > Justin
> > > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > > > > Sent: Friday, August 30, 2019 12:46 PM
> > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > > > > > > >
> > > > > > > > > Justin,
> > > > > > > > >
> > > > > > > > > Sounds about right. Each time you run Grid-Stat or Point-Stat you can
> > > > > > > > > write the CNT output line type which contains stats like MSE, ME, MAE,
> > > > > > > > > and RMSE. And I'd recommend that you also write the SL1L2 line type as
> > > > > > > > > well.
> > > > > > > > >
> > > > > > > > > Then you'd run a stat_analysis job like this:
> > > > > > > > >
> > > > > > > > > stat_analysis -lookin /path/to/stat/data -job aggregate_stat -line_type
> > > > > > > > > SL1L2 -out_line_type CNT -by FCST_VAR,FCST_LEV,FCST_LEAD -out_stat
> > > > > > > > > cnt_out.stat
> > > > > > > > >
> > > > > > > > > This job reads any .stat files it finds in "/path/to/stat/data", reads the
> > > > > > > > > SL1L2 line type, and for each unique combination of the FCST_VAR, FCST_LEV,
> > > > > > > > > and FCST_LEAD columns, it'll aggregate those SL1L2 partial sums together
> > > > > > > > > and write out the corresponding CNT line type to the output file named
> > > > > > > > > cnt_out.stat.
> > > > > > > > >
> > > > > > > > > John
> > > > > > > > >
> > > > > > > > > On Fri, Aug 30, 2019 at 12:36 PM Tsu, Mr. Justin via RT <met_help at ucar.edu>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > > > > > >
> > > > > > > > > > So if I understand what you're saying correctly, then if I wanted an
> > > > > > > > > > average of 24 hour forecasts over a month long run, I would use the
> > > > > > > > > > SL1L2 output to aggregate and produce this average? Whereas if I used
> > > > > > > > > > CNT, this would just provide me ~30 individual (per day over a month)
> > > > > > > > > > 24 hour forecast verifications?
> > > > > > > > > >
> > > > > > > > > > On a side note, did we ever go over how to plot the SL1L2 MSE and biases?
> > > > > > > > > > I am forgetting if we used stat_analysis to produce a plot or if the plot
> > > > > > > > > > you showed me was just something you guys post processed using python or
> > > > > > > > > > whatnot.
> > > > > > > > > >
> > > > > > > > > > Justin
> > > > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > > > > > Sent: Friday, August 30, 2019 8:47 AM
> > > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > > > > > > > >
> > > > > > > > > > Justin,
> > > > > > > > > >
> > > > > > > > > > We wrote the SL1L2 partial sums from Point-Stat because they can be
> > > > > > > > > > aggregated together by the stat-analysis tool over multiple days or cases.
> > > > > > > > > >
> > > > > > > > > > If you're interested in continuous statistics from Point-Stat, I'd
> > > > > > > > > > recommend writing the CNT line type (which has the stats computed for that
> > > > > > > > > > single run) and the SL1L2 line type (so that you can aggregate them
> > > > > > > > > > together in stat-analysis or METviewer).
> > > > > > > > > >
> > > > > > > > > > The other alternative is looking at the average of the daily statistics
> > > > > > > > > > scores. For RMSE, the average of the daily RMSE is equal to the aggregated
> > > > > > > > > > score... as long as the number of matched pairs remains constant day to
> > > > > > > > > > day. But if today you have 98 matched pairs and tomorrow you have 105,
> > > > > > > > > > then tomorrow's score will have slightly more weight. The SL1L2 lines are
> > > > > > > > > > aggregated as weighted averages, where the TOTAL column is the weight. And
> > > > > > > > > > then stats (like RMSE and MSE) are recomputed from those aggregated sums.
> > > > > > > > > > Generally, the statisticians recommend this method over the mean of the
> > > > > > > > > > daily scores. Neither is "wrong", they just give you slightly different
> > > > > > > > > > information.
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > John
> > > > > > > > > >
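As a concrete illustration of that weighted aggregation, the sketch below
combines two days of SL1L2 partial sums using the TOTAL column as the weight
and recomputes ME, MSE, and RMSE from the aggregate. The column names and
formulas follow the usual SL1L2 conventions (ME = FBAR - OBAR,
MSE = FFBAR - 2*FOBAR + OOBAR); the numbers are made up, and the exact
definitions should be checked against the User's Guide for your MET version.

    # Sketch: weighted aggregation of SL1L2 partial sums, TOTAL as the weight.
    import math

    def aggregate_sl1l2(lines):
        """lines: dicts with TOTAL, FBAR, OBAR, FOBAR, FFBAR, OOBAR columns."""
        total = sum(l["TOTAL"] for l in lines)
        wavg = lambda col: sum(l[col] * l["TOTAL"] for l in lines) / total
        me = wavg("FBAR") - wavg("OBAR")
        mse = wavg("FFBAR") - 2.0 * wavg("FOBAR") + wavg("OOBAR")
        return {"TOTAL": total, "ME": me, "MSE": mse, "RMSE": math.sqrt(mse)}

    day1 = {"TOTAL": 98,  "FBAR": 2.0, "OBAR": 1.8,
            "FOBAR": 3.7, "FFBAR": 4.3, "OOBAR": 3.5}
    day2 = {"TOTAL": 105, "FBAR": 1.9, "OBAR": 2.0,
            "FOBAR": 3.9, "FFBAR": 4.0, "OOBAR": 4.2}
    print(aggregate_sl1l2([day1, day2]))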
> > > > > > > > > > On Thu, Aug 29, 2019 at 5:07 PM Tsu, Mr. Justin via RT <met_help at ucar.edu>
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > > > > > > >
> > > > > > > > > > > Thanks John.
> > > > > > > > > > >
> > > > > > > > > > > Sorry it's taken me such a long time to get to this. It's nearing the end
> > > > > > > > > > > of FY19 so I have been finalizing several transition projects and haven't
> > > > > > > > > > > had much time to work on MET recently. I just picked this back up and have
> > > > > > > > > > > loaded a couple new modules. Here is what I have to work with now:
> > > > > > > > > > >
> > > > > > > > > > > 1) intel/xe_2013-sp1-u1
> > > > > > > > > > > 2) netcdf-local/netcdf-met
> > > > > > > > > > > 3) met-8.1/met-8.1a-with-grib2-support
> > > > > > > > > > > 4) ncview-2.1.5/ncview-2.1.5
> > > > > > > > > > > 5) udunits/udunits-2.1.24
> > > > > > > > > > > 6) gcc-6.3.0/gcc-6.3.0
> > > > > > > > > > > 7) ImageMagicK/ImageMagick-6.9.0-10
> > > > > > > > > > > 8) python/anaconda-7-15-15-save.6.6.2017
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Running
> > > > > > > > > > > > point_stat PYTHON_NUMPY raob_2015020412.nc dwptdpConfig -v 3
> > > > > > > > > > > -obs_valid_beg 20010101 -obs_valid_end 20200101 >> log.out
> > > > > > > > > > >
> > > > > > > > > > > I get many matched pairs. Here is a sample of what the log file looks
> > > > > > > > > > > like for one of the pressure ranges I am verifying on:
> > > > > > > > > > >
> > > > > > > > > > > 15257 DEBUG 2: Processing dwptdp/pre_000400 versus dptd/P425-376, for
> > > > > > > > > > > observation type radiosonde, over region FULL, for interpolation method
> > > > > > > > > > > NEAREST(1), using 98 pairs.
> > > > > > > > > > > 15258 DEBUG 3: Number of matched pairs  = 98
> > > > > > > > > > > 15259 DEBUG 3: Observations processed   = 4680328
> > > > > > > > > > > 15260 DEBUG 3: Rejected: SID exclusion  = 0
> > > > > > > > > > > 15261 DEBUG 3: Rejected: obs type       = 3890030
> > > > > > > > > > > 15262 DEBUG 3: Rejected: valid time     = 0
> > > > > > > > > > > 15263 DEBUG 3: Rejected: bad obs value  = 0
> > > > > > > > > > > 15264 DEBUG 3: Rejected: off the grid   = 786506
> > > > > > > > > > > 15265 DEBUG 3: Rejected: topography     = 0
> > > > > > > > > > > 15266 DEBUG 3: Rejected: level mismatch = 3694
> > > > > > > > > > > 15267 DEBUG 3: Rejected: quality marker = 0
> > > > > > > > > > > 15268 DEBUG 3: Rejected: message type   = 0
> > > > > > > > > > > 15269 DEBUG 3: Rejected: masking region = 0
> > > > > > > > > > > 15270 DEBUG 3: Rejected: bad fcst value = 0
> > > > > > > > > > > 15271 DEBUG 3: Rejected: duplicates     = 0
> > > > > > > > > > > 15272 DEBUG 2: Computing Continuous Statistics.
> > > > > > > > > > > 15273 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=0,
> > > > > > > > > > > observation filtering threshold >=0, and field logic UNION.
> > > > > > > > > > > 15274 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold
> > > > > > > > > > > >=5.0, observation filtering threshold >=5.0, and field logic UNION.
> > > > > > > > > > > 15275 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold
> > > > > > > > > > > >=10.0, observation filtering threshold >=10.0, and field logic UNION.
> > > > > > > > > > > 15276 DEBUG 2: Computing Scalar Partial Sums.
> > > > > > > > > > > 15277 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold >=0,
> > > > > > > > > > > observation filtering threshold >=0, and field logic UNION.
> > > > > > > > > > > 15278 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold
> > > > > > > > > > > >=5.0, observation filtering threshold >=5.0, and field logic UNION.
> > > > > > > > > > > 15279 DEBUG 3: Using 98 of 98 pairs for forecast filtering threshold
> > > > > > > > > > > >=10.0, observation filtering threshold >=10.0, and field logic UNION.
> > > > > > > > > > > 15280 DEBUG 2:
> > > > > > > > > > > 15281 DEBUG 2:
> > > > > > > > > > > --------------------------------------------------------------------------------
> > > > > > > > > > >
> > > > > > > > > > > I am going to work on processing these point stat files to create those
> > > > > > > > > > > vertical raob plots we had a discussion about. I remember us talking about
> > > > > > > > > > > the partial sums file. Why did we choose to go the route of producing
> > > > > > > > > > > partial sums then feeding that into series analysis to generate bias and
> > > > > > > > > > > MSE? It looks like bias and MSE both exist within the CNT line type (MBIAS
> > > > > > > > > > > and MSE)?
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Justin
> > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > From: John Halley Gotway via RT [mailto:met_help at ucar.edu]
> > > > > > > > > > > Sent: Friday, August 16, 2019 12:16 PM
> > > > > > > > > > > To: Tsu, Mr. Justin
> > > > > > > > > > > Subject: Re: [rt.rap.ucar.edu #91544] point_stat seg faulting
> > > > > > > > > > >
> > > > > > > > > > > Justin,
> > > > > > > > > > >
> > > > > > > > > > > Great, thanks for sending me the sample data. Yes, I was able to replicate
> > > > > > > > > > > the segfault. The good news is that this is caused by a simple typo that's
> > > > > > > > > > > easy to fix. If you look in the "obs.field" entry of the relhumConfig
> > > > > > > > > > > file, you'll see an empty string for the last field listed:
> > > > > > > > > > >
> > > > > > > > > > > obs = {
> > > > > > > > > > >    field = [
> > > > > > > > > > >       ...
> > > > > > > > > > >       {name = "dptd";level = ["P988-1006"];},
> > > > > > > > > > >       {name = "";level = ["P1007-1013"];}
> > > > > > > > > > >    ];
> > > > > > > > > > >
> > > > > > > > > > > If you change that empty string to "dptd", the segfault will go away:
> > > > > > > > > > >    {name = "dptd";level = ["P1007-1013"];}
> > > > > > > > > > >
> > > > > > > > > > > Rerunning met-8.0 with that change, Point-Stat ran to completion (in 2
> > > > > > > > > > > minutes 48 seconds on my desktop machine), but it produced 0 matched
> > > > > > > > > > > pairs. They were discarded because of the valid times (seen using the -v 3
> > > > > > > > > > > command line option to Point-Stat). The ob file you sent is named
> > > > > > > > > > > "raob_2015020412.nc" but the actual times in that file are for
> > > > > > > > > > > "20190426_120000":
> > > > > > > > > > >
> > > > > > > > > > > ncdump -v hdr_vld_table raob_2015020412.nc
> > > > > > > > > > >    hdr_vld_table = "20190426_120000" ;
> > > > > > > > > > >
> > > > > > > > > > > So please be aware of that discrepancy. To just produce some matched
> > > > > > > > > > > pairs, I told Point-Stat to use the valid times of the data:
> > > > > > > > > > >
> > > > > > > > > > > met-8.0/bin/point_stat PYTHON_NUMPY raob_2015020412.nc relhumConfig \
> > > > > > > > > > >    -outdir out -v 3 -log run_ps.log -obs_valid_beg 20190426_120000 \
> > > > > > > > > > >    -obs_valid_end 20190426_120000
> > > > > > > > > > >
> > > > > > > > > > > But I still get 0 matched pairs. This time, it's because of bad forecast
> > > > > > > > > > > values:
> > > > > > > > > > >    DEBUG 3: Rejected: bad fcst value = 55
> > > > > > > > > > >
> > > > > > > > > > > Taking a step back... let's run one of these fields through
> > > > > > > > > > > plot_data_plane, which results in an error:
> > > > > > > > > > >
> > > > > > > > > > > met-8.0/bin/plot_data_plane PYTHON_NUMPY plot.ps 'name="./read_NRL_binary.py
> > > > > > > > > > > ./relhum_data/relhum_pre_000001_000000_2a0097x0097_2015012100_00180000_fcstfld";'
> > > > > > > > > > > ERROR : DataPlane::two_to_one() -> range check error: (Nx, Ny) = (97, 97),
> > > > > > > > > > > (x, y) = (97, 0)
> > > > > > > > > > >
> > > > > > > > > > > While the numpy object is 97x97, the grid is specified as being 118x118 in
> > > > > > > > > > > the python script ('nx': 118, 'ny': 118).
> > > > > > > > > > >
> > > > > > > > > > > Just to get something working, I modified the nx and ny in the python
> > > > > > > > > > > script:
> > > > > > > > > > >    'nx': 97,
> > > > > > > > > > >    'ny': 97,
> > > > > > > > > > > Rerunning again, I still didn't get any matched pairs.
> > > > > > > > > > >
> > > > > > > > > > > So I'd suggest...
> > > > > > > > > > > - Fix the typo in the config file.
> > > > > > > > > > > - Figure out the discrepancy between the obs file name timestamp and the
> > > > > > > > > > > data in that file.
> > > > > > > > > > > - Make sure the grid information is consistent with the data in the python
> > > > > > > > > > > script.
> > > > > > > > > > >
> > > > > > > > > > > Obviously though, we don't want the code to be segfaulting in any
> > > > > > > > > > > condition. So next, I tested using met-8.1 with that empty string. This
> > > > > > > > > > > time it does run with no segfault, but prints a warning about the empty
> > > > > > > > > > > string.
> > > > > > > > > > >
> > > > > > > > > > > Hope that helps.
> > > > > > > > > > >
> > > > > > > > > > > Thanks,
> > > > > > > > > > > John
> > > > > > > > > > >
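A small guard in the Python embedding script can catch the kind of 97x97
versus 118x118 mismatch described above before Point-Stat or plot_data_plane
ever sees it. The sketch below is a generic check, not part of
read_NRL_binary.py; the variable names and the flat 'nx'/'ny' attributes are
assumptions to adapt to the actual script.

    # Sketch: verify that the numpy array matches the declared grid dimensions.
    import numpy as np

    def check_grid(met_data, attrs):
        nx, ny = attrs["nx"], attrs["ny"]
        # numpy arrays are indexed (row, col) = (y, x)
        if met_data.shape != (ny, nx):
            raise ValueError("data shape %s does not match declared grid "
                             "(nx=%d, ny=%d)" % (met_data.shape, nx, ny))

    check_grid(np.zeros((97, 97)), {"nx": 97, "ny": 97})      # passes
    # check_grid(np.zeros((97, 97)), {"nx": 118, "ny": 118})  # raises ValueError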
> > > > > > > > > > > On Thu, Aug 15, 2019 at 7:00 PM Tsu, Mr. Justin via RT <met_help at ucar.edu>
> > > > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=91544 >
> > > > > > > > > > > >
> > > > > > > > > > > > Hey John,
> > > > > > > > > > > >
> > > > > > > > > > > > I've put my data in tsu_data_20190815/ under met_help.
> > > > > > > > > > > >
> > > > > > > > > > > > I am running met-8.0/met-8.0-with-grib2-support and have provided
> > > > > > > > > > > > everything on that list you've provided me. Let me know if you're able to
> > > > > > > > > > > > replicate it.
> > > > > > > > > > > >
> > > > > > > > > > > > Justin
> > > > > > > > > > > >
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Wed Nov 06 17:00:58 2019
Justin,
It looks like I may have dropped the ball on this. Are you still
experiencing segfaults from Point-Stat and waiting for me to take a
look at it?
Thanks,
John
------------------------------------------------
Subject: point_stat seg faulting
From: Tsu, Mr. Justin
Time: Wed Nov 06 20:02:19 2019
John,
No, I migrated everything over to another machine and somehow fixed it. But I
am up and running with point_stat and am producing .stat files. Looking
ahead, I am using NEXRAD Stage IV precipitation data to verify my COAMPS
total precipitation (which from my last email I told you buckets every 96
hours). My model fields only go to tau 48 so I should be okay here. I've also
downloaded 6 hourly NEXRAD precip data which I am going to use pcp_combine on
to achieve the same accumulated precip as my model fields. Once I run
pcp_combine, how do I actually compare this to my model field? Do I use
point_stat again?
Justin
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Thu Nov 07 09:43:18 2019
Justin,
Great, glad you were able to make progress. If switching to another
machine solved the problem, that suggests it was most likely an environment
issue, like some sort of library incompatibility.
As for comparing model precip to NEXRAD data you've summed up using
pcp_combine, it sounds like you're comparing a gridded forecast to a
gridded analysis. So the Grid-Stat tool would be the right choice for
that. And you'd also be able to do grid-to-grid comparisons using MODE,
Wavelet-Stat, and Series-Analysis.
Thanks,
John
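As a rough sketch of that workflow, the snippet below drives pcp_combine to
sum four 6-hourly NEXRAD files into a 24-hour total and then runs grid_stat
against a model accumulation. All file names and the GridStatConfig file are
placeholders, and the pcp_combine argument order should be double-checked
against the User's Guide for the MET version in use.

    # Sketch only: placeholder file names; verify the pcp_combine arguments
    # for your MET version before relying on this.
    import subprocess

    nexrad_6h = ["ST4.2015080106.06h", "ST4.2015080112.06h",
                 "ST4.2015080118.06h", "ST4.2015080200.06h"]

    # Build a 24-hour observed accumulation from the four 6-hour files.
    add_cmd = ["pcp_combine", "-add"]
    for f in nexrad_6h:
        add_cmd += [f, "6"]
    add_cmd += ["nexrad_24h.nc"]
    subprocess.run(add_cmd, check=True)

    # Compare the gridded forecast total against the summed analysis.
    subprocess.run(["grid_stat", "coamps_apcp_24h.grib", "nexrad_24h.nc",
                    "GridStatConfig", "-outdir", "out/grid_stat", "-v", "3"],
                   check=True)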
------------------------------------------------
Subject: point_stat seg faulting
From: John Halley Gotway
Time: Thu Nov 07 09:51:56 2019
Justin,
I'll go ahead and resolve this support ticket. If more questions come up
in your comparison to NEXRAD data, please just send a new email to
met_help at ucar.edu.
Thanks,
John
------------------------------------------------
More information about the Met_help mailing list