[Met_help] [rt.rap.ucar.edu #59926] History for Verification with a persistence reference forecast
John Halley Gotway via RT
met_help at ucar.edu
Tue Mar 26 09:58:17 MDT 2013
----------------------------------------------------------------
Initial Request
----------------------------------------------------------------
Hi John
I was looking to use a persistence forecast as a reference for comparison alongside my forecast verification. From what I can determine, there is no in-built feature in METv4.0 for this? I just wanted to check that I haven't missed anything before implementing a work-around. I know there is the option to specify a climatology file in Point-Stat but that's not quite what I'm after.
If there is no feature in METv4.0 for it, I am thinking the best way to do it would be to specify a forecast from the previous day for a given set of observations. However, for MET to match up the times, that would require altering the init_time and valid_time attributes so that MET believes it was the correct day. This is certainly doable but a little clunky. Do you have any other suggestions for producing verification scores for a persistence forecast?
Cheers
Malcolm Nunn
----------------------------------------------------------------
Complete Ticket History
----------------------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #59926] Verification with a persistence reference forecast
From: John Halley Gotway
Time: Wed Jan 23 09:01:29 2013
Malcolm,
I apologize for the delay in getting back to you. I overlooked your
email and just ran across it today.
The answer to your question is no, there is not anything in particular
in the MET that's set up for verifying persistence forecasts. There
are any number of ways a "persistence" forecast could be
defined, so it's not clear what enhancements would be generally useful
for them. However, you do not need to actually modify the date/time
stamps within the input files themselves. When you run the
MET stat tools, you typically pass them a forecast file, an
observation file, and a configuration file to tell the tool which
fields/methods you'd like to verify.
The Grid-Stat and MODE tools will retrieve the fields from the input
files and then print a warning if their valid times do not match.
That will be the case for a persistence forecast. So you'll see
a warning message about the times not matching, but you should still
get meaningful output.
The Point-Stat tool retrieves the gridded fields from the input
forecast file. Then it defines a matching observation time window
around that forecast valid time. It uses the "beg" and "end" values
from the "obs_window" portion of the config file to define the time
window around the forecast valid time. For a persistence forecast,
you'd want that time window defined a little differently. So
you could modify the beg and end values to set the window how you'd
like. Alternatively, you could use the "-obs_valid_beg" and "-
obs_valid_end" command line options to explicitly set that window in
YYYYMMDD[_HH[MMSS]] format. The command line options override the
config file values. I'd probably do it the latter way, but either
would work.
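For example, something along these lines would do it (the file names
and the 6-hour observation window here are just illustrative):

  point_stat fcst_20120408_24h.grb obs_20120409.nc PointStatConfig \
     -obs_valid_beg 20120409_000000 -obs_valid_end 20120409_060000 \
     -outdir out/persist -v 2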
So the fact that the valid times don't match up for a persistence
forecast really is fine. There are ways of getting around it without
actually modifying the times in the input files.
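For Grid-Stat, for instance, you could simply pass the previous day's
field as the "forecast" and today's analysis as the "observation"
(again, illustrative file names):

  grid_stat fcst_20120408_24h.grb analysis_20120409.grb \
     GridStatConfig_persist -outdir out/persist -v 2

You'll see the valid time warning mentioned above, but the statistics
are still computed from that pairing.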
One last word of warning - when verifying a persistence forecast, be
sure to set the "model" value in the config file to something
descriptive. Otherwise, you won't be able to distinguish the regular
forecast output from the persistence forecast output.
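For example, in the persistence config you might use:

  model = "PERSIST";

while your regular runs keep whatever they use now (e.g. model = "WRF";).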
Hope that helps, and my apologies again for the delay.
John Halley Gotway
met_help at ucar.edu
On 01/16/2013 06:32 PM, Malcolm.Nunn at csiro.au via RT wrote:
>
> Wed Jan 16 18:32:57 2013: Request 59926 was acted upon.
> Transaction: Ticket created by Malcolm.Nunn at csiro.au
> Queue: met_help
> Subject: Verification with a persistence reference forecast
> Owner: Nobody
> Requestors: Malcolm.Nunn at csiro.au
> Status: new
> Ticket <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=59926 >
------------------------------------------------
Subject: RE: [rt.rap.ucar.edu #59926] Verification with a persistence reference forecast
From: Malcolm.Nunn at csiro.au
Time: Wed Feb 13 16:23:57 2013
Thanks John.
One issue I can see with simply passing Grid-Stat the correct files
without modifying the timestamps, though, is that I imagine the .stat
output would still carry the original timestamps. For example, to
verify a 24-hour precipitation persistence forecast for t=0, ideally I
want to use the previous day's 24-h APCP observations (t-1) as the
forecast and verify them against the observations for t=0. If I do
that, won't the timestamp in the Grid-Stat output be the forecast
valid_time (t-1)?
Cheers
------------------
Malcolm Nunn
New contact address after the 15th February 2012:
malcolm.nunn at uqconnect.edu.au
------------------------------------------------
Subject: Verification with a persistence reference forecast
From: John Halley Gotway
Time: Thu Feb 14 08:58:59 2013
Malcolm,
Yes, good question. I ran a little test case to confirm the behavior
I expected. Here's what happens:
- If you have a single GRIB file with multiple output times in it,
you can configure Grid-Stat to verify multiple times with a single
call. In the example I ran, I put output for multiple lead
times into a single file. Then I verified 2-meter temperature for
lead times 0, 3, 6, 9, and 12 with a single call to Grid-Stat. I've
attached my sample config file and output cnt file for reference.
- The timing information that's used to create the output file names
for Grid-Stat (lead time and valid time) is taken from the first
field that's verified.
- The timing information that's actually listed in the output .stat
file is correct for each field. The lead and valid times change.
If you decide to verify multiple times in a single call to Grid-Stat,
you need to be careful how you do it. You may also want to consider
using the "output_prefix" option in the config file which
gives you a way to customize the output file names somewhat.
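For example, setting something like

  output_prefix = "persist_tmp_z2";

in the persistence config keeps those output file names clearly
separate from the ones produced by your regular runs (the prefix
string itself is up to you).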
Thanks,
John
------------------------------------------------
Subject: Verification with a persistence reference forecast
From: John Halley Gotway
Time: Thu Feb 14 08:58:59 2013
////////////////////////////////////////////////////////////////////////////////
//
// Grid-Stat configuration file.
//
// For additional information, see the MET_BASE/data/config/README file.
//
////////////////////////////////////////////////////////////////////////////////
//
// Output model name to be written
//
model = "WRF";
////////////////////////////////////////////////////////////////////////////////
//
// Forecast and observation fields to be verified
//
fcst = {
wind_thresh = [ NA ];
field = [
{ name = "TMP"; level = [ "Z2" ]; lead_time = "00"; },
{ name = "TMP"; level = [ "Z2" ]; lead_time = "03"; },
{ name = "TMP"; level = [ "Z2" ]; lead_time = "06"; },
{ name = "TMP"; level = [ "Z2" ]; lead_time = "09"; },
{ name = "TMP"; level = [ "Z2" ]; lead_time = "12"; }
];
};
obs = fcst;
////////////////////////////////////////////////////////////////////////////////
//
// Verification masking regions
//
mask = {
grid = [ "FULL" ];
poly = [];
};
////////////////////////////////////////////////////////////////////////////////
//
// Confidence interval settings
//
ci_alpha = [ 0.05 ];
boot = {
interval = PCTILE;
rep_prop = 1.0;
n_rep = 0;
rng = "mt19937";
seed = "";
};
////////////////////////////////////////////////////////////////////////////////
//
// Interpolation methods
//
interp = {
field = BOTH;
vld_thresh = 1.0;
type = [
{
method = UW_MEAN;
width = 1;
}
];
};
////////////////////////////////////////////////////////////////////////////////
//
// Neighborhood methods
//
nbrhd = {
vld_thresh = 1.0;
width = [ 1 ];
cov_thresh = [ >=0.5 ];
};
////////////////////////////////////////////////////////////////////////////////
//
// Statistical output types
//
output_flag = {
fho = NONE;
ctc = NONE;
cts = NONE;
mctc = NONE;
mcts = NONE;
cnt = BOTH;
sl1l2 = NONE;
vl1l2 = NONE;
pct = NONE;
pstd = NONE;
pjc = NONE;
prc = NONE;
nbrctc = NONE;
nbrcts = NONE;
nbrcnt = NONE;
};
//
// NetCDF matched pairs output file
//
nc_pairs_flag = FALSE;
////////////////////////////////////////////////////////////////////////////////
rank_corr_flag = FALSE;
tmp_dir = "/tmp";
output_prefix = "";
version = "V4.0";
////////////////////////////////////////////////////////////////////////////////
------------------------------------------------
Subject: Verification with a persistence reference forecast
From: John Halley Gotway
Time: Thu Feb 14 08:58:59 2013
VERSION MODEL FCST_LEAD FCST_VALID_BEG FCST_VALID_END OBS_LEAD OBS_VALID_BEG OBS_VALID_END FCST_VAR FCST_LEV OBS_VAR OBS_LEV OBTYPE VX_MASK INTERP_MTHD INTERP_PNTS FCST_THRESH OBS_THRESH COV_THRESH ALPHA LINE_TYPE TOTAL FBAR FBAR_NCL FBAR_NCU FBAR_BCL FBAR_BCU FSTDEV FSTDEV_NCL FSTDEV_NCU FSTDEV_BCL FSTDEV_BCU OBAR OBAR_NCL OBAR_NCU OBAR_BCL OBAR_BCU OSTDEV OSTDEV_NCL OSTDEV_NCU OSTDEV_BCL OSTDEV_BCU PR_CORR PR_CORR_NCL PR_CORR_NCU PR_CORR_BCL PR_CORR_BCU SP_CORR KT_CORR RANKS FRANK_TIES ORANK_TIES ME ME_NCL ME_NCU ME_BCL ME_BCU ESTDEV ESTDEV_NCL ESTDEV_NCU ESTDEV_BCL ESTDEV_BCU MBIAS MBIAS_BCL MBIAS_BCU MAE MAE_BCL MAE_BCU MSE MSE_BCL MSE_BCU BCMSE BCMSE_BCL BCMSE_BCU RMSE RMSE_BCL RMSE_BCU E10 E10_BCL E10_BCU E25 E25_BCL E25_BCU E50 E50_BCL E50_BCU E75 E75_BCL E75_BCU E90 E90_BCL E90_BCU
V4.0 WRF 000000 20120409_000000 20120409_000000 000000 20120409_000000 20120409_000000 TMP Z2 TMP Z2 ANALYS FULL UW_MEAN 1 NA NA NA 0.05000 CNT 26026 288.52862 288.46692 288.59032 NA NA 5.07862 5.03537 5.12264 NA NA 288.54626 288.49142 288.60110 NA NA 4.51389 4.47544 4.55301 NA NA 0.92618 0.92444 0.92789 NA NA NA NA 0 0 0 -0.01764 -0.04102 0.00574 NA NA 1.92439 1.90800 1.94106 NA NA 0.99994 NA NA 1.18239 NA NA 3.70343 NA NA 3.70312 NA NA 1.92443 NA NA -2.07737 NA NA -0.52687 NA NA 0.12313 NA NA 0.62113 NA NA 1.83213 NA NA
V4.0 WRF 030000 20120409_030000 20120409_030000 030000 20120409_030000 20120409_030000 TMP Z2 TMP Z2 ANALYS FULL UW_MEAN 1 NA NA NA 0.05000 CNT 26026 284.96308 284.91762 285.00853 NA NA 3.74154 3.70968 3.77397 NA NA 285.81708 285.77243 285.86173 NA NA 3.67543 3.64413 3.70728 NA NA 0.92276 0.92093 0.92454 NA NA NA NA 0 0 0 -0.85400 -0.87173 -0.83628 NA NA 1.45905 1.44662 1.47169 NA NA 0.99701 NA NA 1.08525 NA NA 2.85806 NA NA 2.12873 NA NA 1.69058 NA NA -2.72519 NA NA -1.36319 NA NA -0.49069 NA NA 0.09781 NA NA 0.38181 NA NA
V4.0 WRF 060000 20120409_060000 20120409_060000 060000 20120409_060000 20120409_060000 TMP Z2 TMP Z2 ANALYS FULL UW_MEAN 1 NA NA NA 0.05000 CNT 26026 283.75134 283.70562 283.79706 NA NA 3.76302 3.73097 3.79563 NA NA 284.22299 284.17720 284.26877 NA NA 3.76833 3.73623 3.80098 NA NA 0.95251 0.95137 0.95363 NA NA NA NA 0 0 0 -0.47165 -0.48575 -0.45755 NA NA 1.16050 1.15061 1.17055 NA NA 0.99834 NA NA 0.82889 NA NA 1.56915 NA NA 1.34670 NA NA 1.25266 NA NA -1.93781 NA NA -1.00181 NA NA -0.17081 NA NA 0.24919 NA NA 0.52819 NA NA
V4.0 WRF 090000 20120409_090000 20120409_090000 090000 20120409_090000 20120409_090000 TMP Z2 TMP Z2 ANALYS FULL UW_MEAN 1 NA NA NA 0.05000 CNT 26026 282.93623 282.88660 282.98585 NA NA 4.08481 4.05002 4.12021 NA NA 283.22952 283.17941 283.27962 NA NA 4.12418 4.08905 4.15992 NA NA 0.96478 0.96393 0.96562 NA NA NA NA 0 0 0 -0.29329 -0.30653 -0.28005 NA NA 1.08998 1.08070 1.09943 NA NA 0.99896 NA NA 0.80120 NA NA 1.27403 NA NA 1.18801 NA NA 1.12873 NA NA -1.73094 NA NA -0.88194 NA NA -0.07794 NA NA 0.34506 NA NA 0.77206 NA NA
V4.0 WRF 120000 20120409_120000 20120409_120000 120000 20120409_120000 20120409_120000 TMP Z2 TMP Z2 ANALYS FULL UW_MEAN 1 NA NA NA 0.05000 CNT 26026 282.40748 282.35443 282.46053 NA NA 4.36674 4.32955 4.40458 NA NA 282.52950 282.47459 282.58441 NA NA 4.51984 4.48134 4.55901 NA NA 0.96695 0.96615 0.96773 NA NA NA NA 0 0 0 -0.12202 -0.13602 -0.10802 NA NA 1.15237 1.14256 1.16236 NA NA 0.99957 NA NA 0.84018 NA NA 1.34280 NA NA 1.32792 NA NA 1.15879 NA NA -1.64781 NA NA -0.75831 NA NA 0.03369 NA NA 0.50869 NA NA 1.10069 NA NA
------------------------------------------------
More information about the Met_help mailing list