[Met_help] [rt.rap.ucar.edu #97518] History for Point Stat -- NDAS versus MADIS

John Halley Gotway via RT met_help at ucar.edu
Fri Nov 20 12:31:57 MST 2020


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

Hi all,

It's me again.  Today I have a question that may not be fair to ask, but I figured I would throw it out there in case I'm missing an obvious technical issue that can explain my troubles.

I'm currently running Point Stat for verification of wind metrics, mainly 10m sustained winds and surface gusts.  I run the verification separately using two different observation platforms: NDAS and MADIS obs data.  The goal at this point was to compare the results from the two observation data sets, with the "hope" of showing that using the MADIS data provided similar results to the more trustworthy NDAS data.  Given the MADIS data provides a lot more data, both spatially and temporally, at the risk of a decrease in observation quality, we'd prefer to use it, but the results of this have me a bit perplexed.

The issue is: the results are widely different between the two obs data sets.  The MADIS data results show much larger metrics, usually double or triple the results from the NDAS data.  I've attached a few timeseries to show examples of this....where the MBIAS and MAE are much larger via the MADIS dataset.

My technical question is this: is there a systematic issue that could explain these results? I have a hard time believing that the MADIS data quality is the sole explanation.  I know the MADIS dataset contains a lot more information, so multiple observations from a point location may fall inside the time window to be included in the analysis; could this impact the results? I'm using the same config file for the two analyses (on the ftp site)...should I not be doing this?  Is there a technical MET issue that could explain why the MADIS numbers are higher?  Or is this simply a case of 'it is what it is'?

I realize the answer to this question may fall more on the meteorology/observation side of the spectrum, and not the core verification side, and for that reason I hate to bother you with this.  But I figured it couldn't hurt to at least ask, in case I'm doing something technically wrong that may explain some of the difference.

I should also note that I've put a few sample output files, obs files and model grib files on the ftp site for reference.

Thanks for your help, and if this falls outside the area of your help, I apologize!

-Tom




Thomas Workoff
Sr Scientist
office: 330-436-1475 (850-1475)
tworkoff at firstenergycorp.com
341 White Pond Drive, Akron, OH 44320 | mailstop: A-WAC-C1 / AK-West Akron Campus



----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: Point Stat -- NDAS versus MADIS
From: Minna Win
Time: Thu Nov 19 13:12:33 2020

Hello Tom,

I've assigned this ticket to John Halley Gotway.  Please allow a few
business days for a full response.

Regards,
Minna
---------------
Minna Win
National Center for Atmospheric Research
Developmental Testbed Center
Phone: 303-497-8423
Fax:   303-497-8401
---------------
Pronouns: she/her


------------------------------------------------
Subject: Point Stat -- NDAS versus MADIS
From: John Halley Gotway
Time: Thu Nov 19 17:23:41 2020

Hello Tom,

I see you have some questions related to using Point-Stat to compare model performance against NDAS PrepBufr observations versus MADIS observations. Thanks for sending along the plot of the statistics to illustrate your questions. I read that you're surprised by the relatively large performance differences when verifying against these two different datasets.

I don't have any obvious solutions or explanations off the bat for you. It certainly seems plausible that there could be systematic differences between the NDAS and MADIS observations. However, there are several questions to consider before coming to that conclusion.

For some reason, I'm having trouble accessing the RAL ftp site. So unfortunately, these comments come prior to looking at the data you sent.

(1) Time differences: You already hinted at this... more MADIS obs in the time window. And that's a great thing to consider. Try setting "obs_summary = NEAREST" to only use the observation that has the valid time closest to the forecast valid time.

(2) Level differences: Make sure that your observations are for approximately the same vertical levels.

(3) When your plots state "10-20 MPH", I assume you're defining a wind_thresh to filter the U/V matched pairs. Is that threshold defined on the forecast winds, observed winds, or both? When comparing results for 2 different obs datasets, it may be a good idea to only apply the filter to the forecast winds. That would minimize the effect of systematic differences in the observations. (See the sketch below.)

(4) You may find it useful to run the "plot_mpr.R" script shown on this page:
http://dtcenter.org/community-code/model-evaluation-tools-met/sample-analysis-scripts

Here's a sample resulting plot:
http://dtcenter.org/sites/default/files/community-code/met/r-scripts/mpr_plots.pdf

The output includes scatter plots and Q-Q plots derived from the MPR output line type from Point-Stat. Generating that for both NDAS and MADIS output might help shed more light on these differences.
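For reference, (1) and (3) might look something like this in the Point-Stat config. This is a minimal sketch with illustrative values, not a complete file; the sample wind_thresh bin is just a placeholder, and "..." marks omitted entries:

obs_summary = NEAREST;   // use only the ob closest to the forecast valid time

fcst = {
   wind_thresh = [ >=4.47 ];   // illustrative placeholder: filter U/V pairs on the forecast wind only
   ...
}
obs = {
   wind_thresh = [ NA ];       // NA always evaluates to true, so the obs winds are not filtered
   ...
}

// The plot_mpr.R script in (4) reads MPR lines, so that output must be enabled:
output_flag = {
   ...
   mpr = BOTH;
}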

Please LMK if you have additional questions and if I should look more closely at the output you posted to the ftp site. I'll have to try again once it's accessible again.
Thanks,
John Halley Gotway

------------------------------------------------
Subject: RE: [EXTERNAL] Re: [rt.rap.ucar.edu #97518] Point Stat -- NDAS versus MADIS
From: Workoff, Thomas E
Time: Fri Nov 20 06:49:53 2020

Hi John,

As always, thank you for your time and thoughts!  You've given me a few good leads to go forward with as I try to figure this out.

First, your suggestion for setting "obs_summary = NEAREST" is likely relevant.  Looking at my config file, I currently have 'obs_summary = NONE'; I must have glossed over this part several months ago when setting up the file.  Not sure how I missed it....but even if that isn't the main culprit, it needs to be fixed.  So thank you!

Regarding the thresholds...I actually have the cnt_thresh variable set for my thresholds, NOT wind_thresh.  My notes suggest this was done because I wasn't sure how MET would handle the GUST parameter and whether it would naturally see that as 'wind'.  The config file I use applies to GUST, WIND, UGRD and VGRD all in one file, so I handled this by setting the cnt_thresh value so it would apply to all of them.  Is this not the best way to handle it? Perhaps this is another possible error. For example, that section reads:

cnt_thresh     = [ >=4.47&&<8.94,   >=8.94&&<13.41,  >=13.41&&<17.88,
                   >=17.88&&<22.35, >=22.35&&<26.82, >=26.82&&<31.29,
                   >31.29 ];
cnt_logic      = UNION;
wind_thresh    = [ NA ];

I also do not separate the threshold applications for forecast versus obs.  The thresholds are set in the global section, not specifically on the forecast fields.  This is another change that perhaps I should make.  After all, I want verification run on the forecast fields at those threshold bins, so I should have been more specific about that.

FYI--I placed the config files for both the point_stat run and the prep_bufr conversion on the ftp site.  If you can gain access to that, you should be able to see what I'm talking about.  I included the prep_bufr config file because I was originally considering that I may be making some sketchy decisions in how the data in the prepbufr file was being processed into the netCDF file, and perhaps the obs were also an issue.  I can't totally rule this part out...especially with the gusts, because there is a lot of variability in the observation data sets (peak winds versus gusts versus maximum wind speeds, etc.).  But investigating that may be a step or two down the line from here, given what you've suggested for my point_stat run itself.

Also, thanks for the link to the R script!  When I get some time, I'll
push some data through that to see what it may possibly show me.  But
I think you've identified some low hanging fruit that I will try
today.

Now, off to make changes and re-run 3 months of data all over
again.....

I'll report back on the results of these changes.

Thanks again!


Thomas Workoff
Senior Scientist
Office: 330-436-1475 (850-1475)
tworkoff at firstenergycorp.com
341 White Pond Drive, Akron, OH 44320 | mailstop: A-WAC-C1 / AK-West
Akron Campus



------------------------------------------------
Subject: Point Stat -- NDAS versus MADIS
From: John Halley Gotway
Time: Fri Nov 20 11:03:23 2020

Tom,

OK, I was able to retrieve the files from ftp. In the attached PointStatConfig_wind_continuous_bins_mph-NEW, I made the following recommended updates:

(1) obs_summary = NEAREST;
We should probably make this the new default value. We had not yet changed it, in order to keep the default behavior the same as it had been before adding that logic. But scientifically, I think that's a better default.

(2) cat_thresh = [];
Emptied out the top-level cat_thresh setting. Previously it was set to:
cat_thresh = [ NA ];
NA is a valid threshold type in MET, but it always evaluates to true. That's why you were getting CTC/CTS/FHO output lines where everything was an event. Emptied it out to get rid of those unuseful output lines.

(3) output_flag = {
   fho    = NONE;
   ctc    = NONE;
   cts    = NONE;
...
For the same reasons listed in (2).

(4) fcst = {
   cnt_thresh = [ >=4.47&&<8.94,   >=8.94&&<13.41,  >=13.41&&<17.88,
                  >=17.88&&<22.35, >=22.35&&<26.82, >=26.82&&<31.29,
                  >31.29 ];
...
obs = {
   cnt_thresh = [ NA, NA, NA, NA, NA, NA, NA ];
...
Define the cnt_thresh settings inside the fcst and obs dictionaries instead of at the top level. Here we're filtering the pairs based only on the forecast values, not the obs values.

(5) cnt_thresh = [];
    cnt_logic  = INTERSECTION;
Reset the top-level cnt_thresh setting back to its default. The ones in the fcst and obs dictionaries take precedence. The NA thresholds in obs always evaluate to true, so that's why we want the logic to be INTERSECTION. If it were UNION, we'd use every point because NA is always true.

(6) output_flag = {
...
   mpr    = BOTH;
...
That R script processes the MPR line type. If you'd like to run it, you'll need to enable the MPR output. This is a good idea when running a couple of cases, but may be way too much data for 3 months. So it's up to you to decide how to proceed.
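Putting those updates together, the relevant portion of the updated Point-Stat config would look roughly like this. This is a sketch of just the settings discussed above, not the complete file; "..." marks omitted entries:

obs_summary = NEAREST;

cat_thresh  = [];
cnt_thresh  = [];
cnt_logic   = INTERSECTION;

fcst = {
   cnt_thresh = [ >=4.47&&<8.94,   >=8.94&&<13.41,  >=13.41&&<17.88,
                  >=17.88&&<22.35, >=22.35&&<26.82, >=26.82&&<31.29,
                  >31.29 ];   // filter pairs on the forecast values only
   ...
}
obs = {
   cnt_thresh = [ NA, NA, NA, NA, NA, NA, NA ];   // NA is always true, so obs are not filtered
   ...
}

output_flag = {
   fho    = NONE;
   ctc    = NONE;
   cts    = NONE;
   mpr    = BOTH;   // only needed if you plan to run plot_mpr.R
   ...
}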

I didn't see any obvious issues in the PB2NC config file you sent.

Hope that helps.

Thanks,
John

------------------------------------------------
Subject: RE: [EXTERNAL] Re: [rt.rap.ucar.edu #97518] Point Stat -- NDAS versus MADIS
From: Workoff, Thomas E
Time: Fri Nov 20 11:55:31 2020

John,

This is incredibly helpful. I want to thank you for taking the time to look it over...as it will save me time down the road of troubleshooting some of these details one by one.

I've already re-run some of the analysis with a few of the changes listed below, and it does appear to make a difference.  I'll have to re-run everything, with the full slate of changes, across both MADIS and NDAS data to see if it impacts the 'wholesale' differences (e.g. the MADIS results being consistently 'larger') in the verification statistics.  I was prepared to see some inconsistencies regarding GUST verification, simply because that's such a fickle forecast and observation, but also because of the inconsistencies in the obs regarding that parameter.  However, the notable 'difference' in the simple WIND verification between the NDAS and MADIS runs jumped out at me. So we'll see what happens with the new runs...but now I'm far more comfortable that any differences I encounter will be due to the obs datasets, and not user error.

And I had A LOT of user error/inconsistency!

Just for further reference, if obs_summary is set to NONE, the normal default...how does point_stat handle multiple observations at a point location inside the given time window?  My mind automatically jumped to verification being done on each observation in the time window...but perhaps that's not true.

Also, thank you for the clarification on the 'NA' option versus simply leaving it blank. This is something I had overlooked when walking through the training/sample exercises.

In all, you've been incredibly helpful, as always. I can't thank you
enough for your time and input.

Stay safe!

p.s. you can close this ticket, as you've provided all the info I need
regarding the MET runs.


 

Thomas Workoff
Senior Scientist
Office: 330-436-1475 (850-1475)
tworkoff at firstenergycorp.com
341 White Pond Drive, Akron, OH 44320 | mailstop: A-WAC-C1 / AK-West
Akron Campus



------------------------------------------------
Subject: Point Stat -- NDAS versus MADIS
From: John Halley Gotway
Time: Fri Nov 20 12:31:12 2020

Tom,

Great, I'll go ahead and resolve the ticket. To answer your question
about obs_summary... when it's set to NONE, all point observations
falling within the time window at an observation location are used to
create matched pairs and are included in the resulting statistics.

If your forecast is valid at time t, and you have obs at t-5 and t+5
minutes, that'll result in 2 fcst/obs matched pairs. The forecast value
will be the same but the obs will be different. Let's say you have one
NDAS ob valid exactly at the forecast valid time... and you have MADIS
obs every 5 minutes in your +/- 15 minute obs_window. First, you'd have
many more MADIS than NDAS matched pairs (6 MADIS obs in a 30-minute
window compared to 1 NDAS ob). And all those extra pairs could be a
significant source of error.
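
For reference, here's a minimal sketch of the relevant config entries
(using the +/- 15 minute window from the example above):

obs_window = {
   beg = -900;   // seconds before the forecast valid time
   end =  900;   // seconds after the forecast valid time
}

obs_summary = NEAREST;   // NONE keeps every ob in the window (one pair
                         // per ob); NEAREST keeps only the ob closest
                         // in time at each location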

Listed below is the documentation about the obs_summary options:
https://dtcenter.github.io/MET/develop/Users_Guide/README.html
(FYI, the location of these docs will change prior to the next
release.)

The “obs_summary” entry specifies how to compute statistics on
observations that appear at a single location (lat,lon,level,elev) in
Point-Stat and Ensemble-Stat. Eight techniques are currently supported:

   - “NONE” to use all point observations (legacy behavior)
   - “NEAREST” use only the observation that has the valid time closest
     to the forecast valid time
   - “MIN” use only the observation that has the lowest value
   - “MAX” use only the observation that has the highest value
   - “UW_MEAN” compute an unweighted mean of the observations
   - “DW_MEAN” compute a weighted mean of the observations based on the
     time of the observation
   - “MEDIAN” use the median observation
   - “PERC” use the Nth percentile observation where N = obs_perc_value

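For example, to reduce the denser MADIS observations to a single value
per location, you could set something like this (the 50 here is an
arbitrary illustration, equivalent to MEDIAN):

obs_summary    = PERC;
obs_perc_value = 50;   // use the 50th percentile ob at each location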

On Fri, Nov 20, 2020 at 11:56 AM Workoff, Thomas E via RT
<met_help at ucar.edu> wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=97518 >
>
> John,
>
> This is incredibly helpful. I want to thank you for taking the time
> to look it over...as it will save me time down the road of
> troubleshooting some of these details, one-by-one.
>
> I've already re-run some of the analysis with a few of the changes
> listed below, and it does appear to make a difference.  I'll have to
> re-run everything, with the full slate of changes, across both MADIS
> and NDAS data to see if it impacts the 'wholescale' differences (e.g.
> the MADIS results being consistently 'larger') in the verification
> statistics.  I was prepared to see some inconsistencies regarding GUST
> verification...simply because that's such a fickle forecast and
> observation, but also because of the inconsistencies in the obs
> regarding that parameter.  However, the notable 'difference' in the
> simple WIND verification between the NDAS and MADIS runs jumped out at
> me. So, we'll see what happens with the new runs...but now I'm far
> more comfortable that any differences I encounter are due to
> the obs datasets, and not user error.
>
> And I had A LOT of user error/inconsistency!
>
> Just for further reference, if obs_summary is set to NONE, the normal
> default....how does point_stat handle multiple observations at a point
> location inside the given time window?  My mind automatically jumped
> to verification being done on each observation in the time window...
> but perhaps that's not true.
>
> Also, thank you for the clarification on an 'NA' option versus simply
> leaving it blank. This is something I had overlooked when walking
> through the training/sample exercises.
>
> In all, you've been incredibly helpful, as always. I can't thank you
> enough for your time and input.
>
> Stay safe!
>
> p.s. you can close this ticket, as you've provided all the info I need
> regarding the MET runs.
>
>
>
>
> Thomas Workoff
> Senior Scientist
> Office: 330-436-1475 (850-1475)
> tworkoff at firstenergycorp.com
> 341 White Pond Drive, Akron, OH 44320 | mailstop: A-WAC-C1 / AK-West
> Akron Campus
>
>
> -----Original Message-----
> From: John Halley Gotway via RT <met_help at ucar.edu>
> Sent: Friday, November 20, 2020 1:03 PM
> To: Workoff, Thomas E <tworkoff at firstenergycorp.com>
> Subject: Re: [EXTERNAL] Re: [rt.rap.ucar.edu #97518] Point Stat --
> NDAS versus MADIS
>
> Tom,
>
> OK, I was able to retrieve the files from ftp. In the attached
> PointStatConfig_wind_continuous_bins_mph-NEW, I made the following
> recommended updates:
>
> (1) obs_summary = NEAREST;
> We should probably make this the new default value. We had not yet
> changed that, to keep the default behavior the same as it had been
> before adding that logic. But scientifically, I think that's a better
> default.
>
> (2) cat_thresh = [ ];
> Emptied out the top-level cat_thresh setting. Previously it was set to:
> cat_thresh = [ NA ];
> NA is a valid threshold type in MET but it always evaluates to true.
> That's why you were getting CTC/CTS/FHO output lines where everything
> was an event. Emptied it out to get rid of those unhelpful output
> lines.
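>
> Schematically, the change is just (only the relevant line shown):
>
> cat_thresh = [ NA ];   // before: NA always true, every pair an "event"
> cat_thresh = [];       // after: no categorical output requested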
>
> (3) output_flag = {
>    fho    = NONE;
>    ctc    = NONE;
>    cts    = NONE;
> ...
> For the same reasons listed in (2).
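>
> (Each output_flag entry takes one of three values, e.g.:
>
>    fho = NONE;   // do not write the FHO line type
>    fho = STAT;   // write it to the .stat file only
>    fho = BOTH;   // write it to .stat and a separate .txt file
>
> so NONE here simply disables those line types.)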
>
> (4) fcst = {
>    cnt_thresh = [ >=4.47&&<8.94, >=8.94&&<13.41, >=13.41&&<17.88,
>                   >=17.88&&<22.35, >=22.35&&<26.82, >=26.82&&<31.29,
>                   >31.29 ];
> ...
> obs = {
>    cnt_thresh = [ NA, NA, NA, NA, NA, NA, NA ];
> ...
> Define the cnt_thresh settings inside the fcst and obs dictionaries
> instead of at the top-level. Here we're filtering the pairs based only
> on the forecast values, not the obs values.
>
> (5) cnt_thresh = [];
>     cnt_logic  = INTERSECTION;
> Reset the top-level cnt_thresh setting back to its default. The ones
> in the fcst and obs dictionaries take precedence. The NA thresh in obs
> always evaluates to true. So that's why we want the logic to be
> INTERSECTION. If it were UNION, we'd use every point because NA is
> always true.
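>
> Spelled out, the per-bin pair filtering logic works like this:
>
> // For each threshold bin i, a matched pair is kept when:
> //   INTERSECTION: fcst meets fcst.cnt_thresh[i] AND obs meets obs.cnt_thresh[i]
> //   UNION:        fcst meets fcst.cnt_thresh[i] OR  obs meets obs.cnt_thresh[i]
> //
> // With obs.cnt_thresh[i] = NA (always true), INTERSECTION reduces to
> // the forecast-only filter we want, while UNION would keep every pair.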
>
> (6) output_flag = {
> ...
>    mpr    = BOTH;
> ...
> The plot_mpr.R script mentioned below processes the MPR line type. If
> you'd like to run it, you'll need to enable the MPR output. This is a
> good idea when running a couple of cases, but may be way too much
> data for 3 months. So it's up to you to decide how to proceed.
>
> I didn't see any obvious issues in the PB2NC config file you sent.
>
> Hope that helps.
>
> Thanks,
> John
>
> On Fri, Nov 20, 2020 at 6:50 AM Workoff, Thomas E via RT
> <met_help at ucar.edu> wrote:
>
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=97518 >
> >
> > Hi John,
> >
> > As always, thank you for your time and thoughts!  You've given me a
> > few good leads to go forward with as I try to figure this out.
> >
> > First, your suggestion for setting "obs_summary = NEAREST" is likely
> > relevant.  Looking at my config file, I currently have 'obs_summary
> > = NONE'---I must have glossed over this part several months ago when
> > setting up the file.  Not sure how I missed it....but even if that
> > isn't the main culprit, it needs to be fixed.  So thank you!
> >
> > Regarding the thresholds...I actually have the cnt_thresh variable
> > set for my thresholds, NOT wind_thresh.  My notes suggest this was
> > done because I wasn't sure how MET would handle the GUST parameter
> > and if it would naturally see that as 'wind'.  The config file I use
> > applies to GUST, WIND, UGRD and VGRD all in one file, so I handled
> > this by setting the cnt_thresh value so it would apply to all of
> > them.  Is this not the best way to handle this?  Perhaps this is
> > another possible error---for example, that section reads:
> >
> > cnt_thresh     = [ >=4.47&&<8.94, >=8.94&&<13.41, >=13.41&&<17.88,
> >                    >=17.88&&<22.35, >=22.35&&<26.82, >=26.82&&<31.29,
> >                    >31.29 ];
> > cnt_logic      = UNION;
> > wind_thresh    = [ NA ];
> >
> > I also do not separate the threshold applications for forecast
> > versus obs.  The threshold is set in the global section, not
> > specifically for the forecast fields.  This is another change that
> > perhaps I should make.  After all, I want verification run on the
> > forecast fields at those threshold bins--so I should have been more
> > specific about that.
> >
> > FYI--I placed the config files for both the point_stat run and the
> > prep_bufr conversion on the ftp site.  If you can gain access to
> > that, you should be able to see what I'm talking about.  I included
> > the prep_bufr config file because I was originally considering that
> > I may be making some sketchy decisions in how the data in the
> > prepbufr file was being processed into the netCDF file, and perhaps
> > the obs were also an issue.  I can't totally rule this part
> > out...especially with the gusts, because there is a lot of
> > variability in the observation data sets (peak winds versus gusts
> > versus maximum wind speeds, etc).  But investigating that may be a
> > step or two down the line from here, given what you've suggested
> > for my point_stat run itself.
> >
> > Also, thanks for the link to the R script!  When I get some time,
> > I'll push some data through that to see what it may possibly show
> > me.  But I think you've identified some low hanging fruit that I
> > will try today.
> >
> > Now, off to make changes and re-run 3 months of data all over
> > again.....
> >
> > I'll report back on the results of these changes.
> >
> > Thanks again!
> >
> >
> > Thomas Workoff
> > Senior Scientist
> > Office: 330-436-1475 (850-1475)
> > tworkoff at firstenergycorp.com
> > 341 White Pond Drive, Akron, OH 44320 | mailstop: A-WAC-C1 / AK-West
> > Akron Campus
> >
> >
> > -----Original Message-----
> > From: John Halley Gotway via RT <met_help at ucar.edu>
> > Sent: Thursday, November 19, 2020 7:24 PM
> > To: Workoff, Thomas E <tworkoff at firstenergycorp.com>
> > Subject: [EXTERNAL] Re: [rt.rap.ucar.edu #97518] Point Stat -- NDAS
> > versus MADIS
> >
> > Hello Tom,
> >
> > I see you have some questions related to using Point-Stat to compare
> > model performance against NDAS PrepBufr observations versus MADIS
> > observations. Thanks for sending along the plot of the statistics to
> > illustrate your questions. I read that you're surprised by the
> > relatively large performance differences when verifying against
> > these two different datasets.
> >
> > I don't have any obvious solutions or explanations off the bat for
> > you. It certainly seems plausible that there could be systematic
> > differences between the NDAS and MADIS observations. However, there
> > are several questions to consider before coming to that conclusion.
> >
> > For some reason, I'm having trouble accessing the RAL ftp site. So
> > unfortunately, these comments come prior to looking at the data you
> > sent.
> >
> > (1) Time differences: You already hinted at this... more MADIS obs
> > in the time window. And that's a great thing to consider. Try
> > setting "obs_summary = NEAREST" to only use the observation that has
> > the valid time closest to the forecast valid time.
> >
> > (2) Level differences: Make sure that your observations are for
> > approximately the same vertical levels.
> >
> > (3) When your plots state "10-20 MPH", I assume you're defining a
> > wind_thresh to filter the U/V matched pairs. Is that threshold
> > defined on the forecast winds, observed winds, or both? When
> > comparing results for 2 different obs datasets, it may be a good
> > idea to only apply the filter to the forecast winds. That would
> > minimize the effect of systematic differences in the observations.
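> >
> > For example, a forecast-only filter would look something like this
> > sketch (NA always evaluates to true, so with INTERSECTION the pair
> > is kept only when the forecast check passes):
> >
> > fcst = {
> >    wind_thresh = [ >=4.47 ];   // e.g. winds >= 10 MPH (4.47 m/s)
> >    ...
> > }
> > obs = {
> >    wind_thresh = [ NA ];
> >    ...
> > }
> > wind_logic = INTERSECTION;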
> >
> > (4) You may find it useful to run the "plot_mpr.R" script shown on
> > this page:
> >
> > http://dtcenter.org/community-code/model-evaluation-tools-met/sample-analysis-scripts
> >
> > Here's a sample resulting plot:
> >
> > http://dtcenter.org/sites/default/files/community-code/met/r-scripts/mpr_plots.pdf
> >
> > The output includes scatter plots and Q-Q plots derived from the MPR
> > output line type from Point-Stat. Generating that for both NDAS and
> > MADIS output might help shed more light on these differences.
> >
> > Please LMK if you have additional questions and if I should look
> > more closely at the output you posted to the ftp site. I'll try
> > again once it's accessible.
> >
> > Thanks,
> > John Halley Gotway
> >
> > On Thu, Nov 19, 2020 at 1:13 PM Minna Win via RT
> > <met_help at ucar.edu> wrote:
> >
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=97518 >
> > >
> > > Hello Tom,
> > >
> > > I've assigned this ticket to John Halley Gotway.  Please allow a
> > > few business days for a full response.
> > >
> > > Regards,
> > > Minna
> > > ---------------
> > > Minna Win
> > > National Center for Atmospheric Research
> > > Developmental Testbed Center
> > > Phone: 303-497-8423
> > > Fax:   303-497-8401
> > > ---------------
> > > Pronouns: she/her
> > >
> > >

------------------------------------------------


More information about the Met_help mailing list