[Met_help] [rt.rap.ucar.edu #76782] History for Incorrect Results From MET

John Halley Gotway via RT met_help at ucar.edu
Thu Jun 23 14:43:00 MDT 2016


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

John, now that I have MET generating BSS, I thought it prudent to verify if
it is calculating the correct values for brier score and brier skill score.
I cut down my MPR files to two files of two lines to simplify.  Below is a
listing of the -dump_row which shows the lines it used.  Attached are the
actual files.

VERSION MODEL FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_LEV OBS_VAR OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL INDEX OBS_SID OBS_LAT OBS_LON OBS_LVL OBS_ELV FCST OBS CLIMO OBS_QC
V5.1 GALWEM 240000 20160503_000000 20160503_000000 000000 20160503_000000 20160503_000000 APCP L0 APCP L0 ADPSFC FULL NEAREST 9 >=1 >=1 NA NA MPR 14612 1 10014 59.7900 5.34000 NA NA 1.00000 0.00000 1.00000 NA
V5.1 GALWEM 240000 20160503_000000 20160503_000000 000000 20160503_000000 20160503_000000 APCP L0 APCP L0 ADPSFC FULL NEAREST 9 >=1 >=1 NA NA MPR 14612 2 10060 78.2500 22.8200 NA NA 0.00000 0.00000 0.00000 NA
V5.1 GALWEM 240000 20160502_000000 20160502_000000 000000 20160502_000000 20160502_000000 APCP L0 APCP L0 ADPSFC FULL NEAREST 9 >=1 >=1 NA NA MPR 14655 1 10014 59.7900 5.34000 NA NA 0.777778 1.00000 0.444444 NA
V5.1 GALWEM 240000 20160502_000000 20160502_000000 000000 20160502_000000 20160502_000000 APCP L0 APCP L0 ADPSFC FULL NEAREST 9 >=1 >=1 NA NA MPR 14655 2 10060 78.2500 22.8200 NA NA 0.00000 0.00000 0.00000 NA

The Brier Score for the model should be the following:

Model     Ob
1         0
0         0
.7777778  1
0         0

BS=.2623 (model)


Climo     Ob
1         0
0         0
.444444   1
0         0

BSr=.327  (reference)

BSS=1-BS/BSr=.198
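For reference, the hand calculation above can be reproduced with a short script (the lists below are the FCST, CLIMO, and OBS columns of the four MPR lines; exact arithmetic gives BSS ≈ 0.198):

```python
# Brier score and Brier skill score computed directly from the four pairs.
fcst  = [1.0, 0.0, 0.7777778, 0.0]   # FCST column: forecast probabilities
climo = [1.0, 0.0, 0.444444,  0.0]   # CLIMO column: reference probabilities
obs   = [0.0, 0.0, 1.0,       0.0]   # OBS column: 1 if the event occurred

def brier(p, o):
    """Mean squared difference between probabilities and outcomes."""
    return sum((pi - oi) ** 2 for pi, oi in zip(p, o)) / len(p)

bs    = brier(fcst, obs)   # ~0.2623 (model)
bs_cl = brier(climo, obs)  # ~0.3272 (reference)
bss   = 1.0 - bs / bs_cl   # ~0.198
print(bs, bs_cl, bss)
```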

However, below is MET's output:

JOB_LIST:      -job aggregate_stat -fcst_lead 240000 -line_type MPR -by FCST_VAR -by FCST_THRESH -dump_row /h/data/global/WXQC/data/met/filter_job.stat -out_line_type PSTD -out_fcst_thresh >=0 -out_fcst_thresh >=0.1 -out_fcst_thresh >=0.2 -out_fcst_thresh >=0.3 -out_fcst_thresh >=0.4 -out_fcst_thresh >=0.5 -out_fcst_thresh >=0.6 -out_fcst_thresh >=0.7 -out_fcst_thresh >=0.8 -out_fcst_thresh >=0.9 -out_fcst_thresh >=1.0 -out_obs_thresh >=1 -out_alpha 0.05000
COL_NAME: FCST_VAR FCST_THRESH TOTAL N_THRESH BASER BASER_NCL BASER_NCU RELIABILITY RESOLUTION UNCERTAINTY ROC_AUC BRIER BRIER_NCL BRIER_NCU BRIERCL BRIERCL_NCL BRIERCL_NCU BSS THRESH_1 THRESH_2 THRESH_3 THRESH_4 THRESH_5 THRESH_6 THRESH_7 THRESH_8 THRESH_9 THRESH_10 THRESH_11
    PSTD: APCP >=1 4 11 0.25 0.045587 0.69936 0.2425 0.1875 0.1875 0.66667 0.2425 -0.69807 1.18307 0.3025 -0.59297 1.19797 -0.19835 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1

BSS=-.198
BS model = .2425

(values from attached file GALWEM*)

The command line used to generate is:

/h/WXQC/met-5.1/bin/stat_analysis -lookin
/h/data/global/WXQC/data/met/mdlob_pairs/TP -out
/h/data/global/WXQC/data/met/summary/GALWEM_APCP_24hr_9_PSTD_0Z -job
aggregate_stat -line_type MPR -out_line_type PSTD -fcst_lead 240000
-out_fcst_thresh
ge0,ge0.1,ge0.2,ge0.3,ge0.4,ge0.5,ge0.6,ge0.7,ge0.8,ge0.9,ge1.0
-out_obs_thresh ge1 -by FCST_VAR -by FCST_THRESH -v 6 -dump_row
/h/data/global/WXQC/data/met/filter_job.stat

Did I do something wrong?

Bob



----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: Incorrect Results From MET
From: John Halley Gotway
Time: Thu Jun 16 15:09:59 2016

Bob,

I'm glad you worked through an example.  I understand what's going on
but I
doubt you'll like the answer.

All of the probabilistic verification done in MET is based on an Nx2
probabilistic contingency table, where N is determined by the number of
probability thresholds the user defines.  In your example, you're
defining probability bins of width 0.1.  The value of 0.77 falls in the
0.7 to 0.8 bin, and the value of 0.44 falls in the 0.4 to 0.5 bin.

The mid-point of each bin is used as the probability value for all
points
falling within that bin.  So in your example, 0.75 is used instead of
0.77
and 0.45 is used instead of 0.44.  If you were to increase the number
of
forecast probability thresholds from 10 to 10,000, the computed brier
scores and BSS value would get very, very close to the values you
computed.  But of course that wouldn't be very practical.
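That binning can be sketched in a few lines of Python (the 0.1-wide bins and their midpoints are assumptions matching the >=0 through >=1.0 thresholds in the job above); it reproduces MET's BRIER and BRIERCL values of 0.2425 and 0.3025:

```python
# Snap each probability to the midpoint of its 0.1-wide bin, then score.
def bin_midpoint(p, width=0.1):
    """Midpoint of the probability bin containing p (p == 1.0 goes in the top bin)."""
    n_bins = int(round(1.0 / width))
    i = min(int(p / width), n_bins - 1)
    return (i + 0.5) * width

def brier(p, o):
    """Mean squared difference between probabilities and outcomes."""
    return sum((pi - oi) ** 2 for pi, oi in zip(p, o)) / len(p)

fcst  = [bin_midpoint(p) for p in (1.0, 0.0, 0.7777778, 0.0)]  # ~0.95, 0.05, 0.75, 0.05
climo = [bin_midpoint(p) for p in (1.0, 0.0, 0.444444,  0.0)]  # ~0.95, 0.05, 0.45, 0.05
obs   = [0.0, 0.0, 1.0, 0.0]

print(brier(fcst, obs), brier(climo, obs))  # ~0.2425 and ~0.3025
```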

I suspect this difference explains the numerical differences you're
seeing.

Once we understand "what" the code is doing, the obvious next question
is
what "should" it be doing.

I added Tressa Fowler, our resident statistician, to this ticket.
Hopefully, she can explain why we're doing probabilistic vx using an
Nx2
contingency table rather than operating on the raw probabilistic
values
directly.

Thanks,
John


On Thu, Jun 16, 2016 at 2:14 PM, robert.craig.2 at us.af.mil via RT <
met_help at ucar.edu> wrote:

>
> Thu Jun 16 14:14:13 2016: Request 76782 was acted upon.
> Transaction: Ticket created by robert.craig.2 at us.af.mil
>        Queue: met_help
>      Subject: Incorrect Results From MET
>        Owner: Nobody
>   Requestors: robert.craig.2 at us.af.mil
>       Status: new
>  Ticket <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=76782 >
>

------------------------------------------------
Subject: RE: [rt.rap.ucar.edu #76782] Incorrect Results From MET
From: robert.craig.2 at us.af.mil
Time: Thu Jun 16 15:20:54 2016

That makes sense for the difference in the Brier Score, but not for the
difference in the Brier Skill Score.  Calculating a brier score from the
climo column in the MPR files should give a brier score near .32, and
this combined with the brier score for the model of .24 should give a
positive brier skill score.  Please check this out as well.  I can live
with the difference caused by the 10 bins provided I get a reasonable
brier skill score.

Thanks
Bob




------------------------------------------------
Subject: Incorrect Results From MET
From: John Halley Gotway
Time: Thu Jun 16 16:50:27 2016

Bob,

Indeed, it looks like you're correct.  Here's the equation from line 2216
of the file "met_stats.cc":
   bss = (brier.v - briercl.v)/briercl.v;

This is computing BSS as (BS - BSref)/BSref, but that's not correct.
We're missing the negative sign in the denominator as listed here:
   http://www.cawcr.gov.au/projects/verification/
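
The sign error and its fix can be illustrated with the BRIER and BRIERCL
values from the PSTD line above (a sketch in Python, not the actual C++):

```python
# BRIER and BRIERCL from the PSTD output line.
bs, bs_cl = 0.2425, 0.3025

bss_buggy = (bs - bs_cl) / bs_cl    # what line 2216 computed: ~ -0.198
bss_fixed = (bs - bs_cl) / -bs_cl   # same as 1 - bs/bs_cl:    ~ +0.198
print(bss_buggy, bss_fixed)
```

Only the sign of the denominator changes, and the fixed form recovers the
positive skill score Bob expected.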

Wow, this is really embarrassing how bad these bugs are!  The MODE issue
that Matt found was present all the way back to version 3.1.  Users must
not be generating the PostScript or NetCDF output files very regularly.
This BSS issue is very new.  We added BSS in version 5.1 with the
addition of climatologies... but we obviously did not do sufficient
testing.

I apologize for these issues!

I just updated the MET website with patches:
   http://www.dtcenter.org/met/users/support/known_issues/METv5.1/index.php

I also pushed the patch file (met-5.1_patches_20160616.tar.gz
<ftp://ftp.rap.ucar.edu/incoming/irap/met_help/met-5.1_patches/met-
5.1_patches_20160616.tar.gz>)
and the full release plus patches (met-5.1_bugfix.20160616.tar.gz
<ftp://ftp.rap.ucar.edu/incoming/irap/met_help/met-5.1_patches/met-
5.1_bugfix.20160616.tar.gz>)
and a screenshot of that webpage (met-5.1_known_issues.png
<ftp://ftp.rap.ucar.edu/incoming/irap/met_help/met-5.1_patches/met-
5.1_known_issues.png>)
to our anonymous ftp site:

   ftp://ftp.rap.ucar.edu/incoming/irap/met_help/met-5.1_patches

Thanks,
John



------------------------------------------------


More information about the Met_help mailing list