[Met_help] [rt.rap.ucar.edu #92884] History for Hypothesis Testing
John Halley Gotway via RT
met_help at ucar.edu
Thu Oct 31 11:55:31 MDT 2019
----------------------------------------------------------------
Initial Request
----------------------------------------------------------------
John, can you or someone else explain how you come up with "practically significant" and "significant" results in the data studies you have done for us and others? How are you using MET to get those values? We seem to be always comparing models and need to know if the differences are significant or not.
Thanks
Bob
----------------------------------------------------------------
Complete Ticket History
----------------------------------------------------------------
Subject: Hypothesis Testing
From: John Halley Gotway
Time: Thu Oct 31 09:40:19 2019
Bob,

We have been using METviewer to determine statistical significance. Most recently, we've been using METviewer to create scorecards which compute pairwise differences for many variables, levels, lead times, and masking regions. It computes p-values which indicate a confidence level for whether the value 0 falls outside the confidence intervals for those differences. And it's those p-values which are shown in each cell of the scorecard.
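[Editor's note: the check described above — testing whether 0 falls outside a confidence interval for the pairwise differences — can be sketched in Python. This is an illustrative bootstrap approach, not METviewer's actual implementation; the function name, the sample scores, and the resampling scheme are all invented for the example.]

```python
import numpy as np

rng = np.random.default_rng(42)

def pairwise_diff_ci(score_a, score_b, n_boot=10000, alpha=0.05):
    """Bootstrap a confidence interval for the mean pairwise difference
    between two models' matched scores (e.g., RMSE per valid time)."""
    diffs = np.asarray(score_a) - np.asarray(score_b)
    means = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(diffs, size=diffs.size, replace=True)
        means[i] = sample.mean()
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    # Two-sided bootstrap "p-value": twice the fraction of resampled
    # means that fall on the far side of zero.
    p = 2 * min((means <= 0).mean(), (means >= 0).mean())
    return diffs.mean(), (lo, hi), min(p, 1.0)

# Hypothetical matched scores for two models over the same eight cases:
model_a = np.array([2.1, 1.9, 2.3, 2.0, 2.2, 1.8, 2.4, 2.1])
model_b = np.array([1.8, 1.7, 2.0, 1.9, 2.0, 1.6, 2.1, 1.9])
mean_diff, (lo, hi), p = pairwise_diff_ci(model_a, model_b)
significant = not (lo <= 0.0 <= hi)  # 0 outside the CI => significant
```

The key point, as noted in the message above, is that the test is applied to the paired differences, not to the two sets of scores independently.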
The determination of practical significance is a bit more manual. The scientists determine some reasonable values for practical significance, often based on the precision of the observation values used for verification. In the past, we've written scripts to post-process data files that METviewer creates when generating plots/scorecards and applied the practical significance logic that the scientists have specified.
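[Editor's note: the post-processing step described above amounts to a threshold comparison on top of the statistical test. A minimal sketch follows; the function and the 0.5-unit threshold are invented for illustration, not actual AF or DTC values.]

```python
def practical_significance(mean_diff, ci, threshold):
    """Classify a pairwise score difference.

    A difference is statistically significant when 0 lies outside the
    confidence interval, and practically significant when its magnitude
    also exceeds a scientist-chosen threshold (often tied to the
    precision of the verifying observations).
    """
    lo, hi = ci
    stat_sig = not (lo <= 0.0 <= hi)
    prac_sig = stat_sig and abs(mean_diff) >= threshold
    if prac_sig:
        return "practically significant"
    if stat_sig:
        return "statistically significant"
    return "not significant"

# E.g., a mean difference of 0.3 with CI (0.1, 0.5), against a
# hypothetical observation-precision threshold of 0.5:
print(practical_significance(0.3, (0.1, 0.5), 0.5))
```

A difference can thus clear the statistical bar without clearing the practical one, which is exactly why the two labels are reported separately on the scorecards.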
In the long run, I think it'd make more sense to incorporate the handling of practical significance directly into the scorecard generation process. But that logic does not yet exist.

I believe we have funding this year to assist the AF in spinning up use of METviewer. I'm hoping you'll find it as useful in your evaluations as we have in ours.

I've cc'ed Michelle Harrold on this ticket in case she has anything else to add about statistical/practical significance in our evaluations for the AF.

Thanks,
John
------------------------------------------------
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #92884] Hypothesis Testing
From: robert.craig.2 at us.af.mil
Time: Thu Oct 31 09:49:54 2019
John, would it be possible to get the algorithm which generates the p-values? It is going to be a while before we can run METviewer - it is a political thing on who loads new software on our systems, and METviewer became the test case, unfortunately. So until we have METviewer, we have to use Stat-Analysis and Python code to generate the significance beyond what the error bars provide.

Thanks
Bob
------------------------------------------------
Subject: Hypothesis Testing
From: John Halley Gotway
Time: Thu Oct 31 11:18:48 2019
Bob,

The good news is that the METviewer source code is publicly available via GitHub, and everything you need is in there:
https://github.com/NCAR/METviewer
The bad news is that it's pretty complex and would likely be very difficult to extract the specific functionality you need.

I'm sorry to hear that the installation of METviewer has become a political hot potato. In the long run, I think the DTC collaborating with the AF on using/enhancing METviewer would be much more efficient than reimplementing its methodology in house. Please let us know if there's anything we can do to help make that happen. For example, we could host some AF data in the NCAR instance, which you could access remotely to demonstrate its utility. Or if you have a machine on which Docker could be installed, we could show you how to use that type of METviewer install.

The current major development task for Tatiana (and others) is transitioning METviewer over from using R to Python. And in that process, when possible, they are factoring out functionality from METviewer into GitHub repositories named METplotpy and METcalcpy. The goal is to make the statistics calculations and plotting routines useful to both METviewer and other applications (like METexpress and user scripts). When that work is more mature, calling the statistics calculation routines from your own scripts should be much easier.

Thanks,
John
------------------------------------------------
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #92884] Hypothesis Testing
From: robert.craig.2 at us.af.mil
Time: Thu Oct 31 11:25:08 2019
John, we will have METviewer eventually, probably within a couple of months - I was just looking for something easy I could code up for hypothesis testing until then. Oh well.

Thanks
Bob
------------------------------------------------
Subject: Hypothesis Testing
From: John Halley Gotway
Time: Thu Oct 31 11:49:52 2019
Bob,

Sorry, no, I don't have a quick and easy answer for you. In my opinion, the real crux of this is being able to compute pairwise differences, and STAT-Analysis is not set up to compute them. All of that logic lives in METviewer.

Michelle mentioned that she's talked to Evan about the AF using AWS cloud solutions for modelling. Tatiana already has METviewer up and running on AWS, where it is actively being used by NOAA/EMC staff. If it makes life easier, one option would be setting up METviewer in the AF corner of AWS, so you could use that cloud solution without installing METviewer on any local machines.

John
------------------------------------------------
Subject: RE: [Non-DoD Source] Re: [rt.rap.ucar.edu #92884] Hypothesis Testing
From: robert.craig.2 at us.af.mil
Time: Thu Oct 31 11:54:31 2019
We will eventually be going there. The difficulty currently is getting our MET data from our production system to the cloud. For the near term, we will live on Prod 10.

Thanks for your help.
Bob
------------------------------------------------
More information about the Met_help mailing list