[Met_help] [rt.rap.ucar.edu #41951] History for MBIAS and Wind Verification (?)

RAL HelpDesk {for John Halley Gotway} met_help at ucar.edu
Wed Feb 23 14:50:39 MST 2011


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

I'm helping a colleague with WRF verification of 5-minute winds using point obs from a tower, and have been asked to use BIAS as one measure of performance -- MBIAS in this case.  I'm looking at U and V over a single month (28 usable forecasts) and am calculating the verification at the true 5-minute intervals for starters.

The MBIAS values for U are generally consistent from 1 to 3, but for some isolated times they become large negative (and sometimes positive) values like -10^2 or even -10^3.  I've used Point-Stat to create MPR data so I can easily look at the 28 matched pairs that went into any large MBIAS values, and I don't see anything unusual.  Clearly the MPR data show the forecast is not perfect, but I'm surprised by these large values of MBIAS.  In contrast, the MBIAS values for V are very consistent and don't exhibit anything like what I see for U.  Is this result reasonable?  Perhaps MBIAS is not an appropriate statistic to use for wind components, although I am struck by the disparity in the results for U and V.  Any guidance you can offer would be greatly appreciated!

Thanks,
Scott



----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: Re: [rt.rap.ucar.edu #41951] MBIAS and Wind Verification (?)
From: John Halley Gotway
Time: Fri Nov 05 12:38:39 2010

Scott,

So you're running Point-Stat for one month, looking at the
verification of winds at a single tower.  And you're looking at the
MBIAS values coming out of Point-Stat for each of those 28 times, and
some of the values look surprisingly large (negative and positive).
Is that all correct?

Assuming my understanding is correct...

Each time you run Point-Stat, it sounds like you're computing the
continuous statistics using a single matched pair.  Multiplicative
bias is computed as the mean fcst value divided by the mean obs
value.  And with only one matched pair, you're just dividing the
forecast value at the site by the obs value at the site.  If the obs
value is near zero that'd lead to very large values of MBIAS.
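
To make that concrete, here's a minimal sketch of the MBIAS arithmetic
(plain Python, not part of MET, with made-up numbers):

    # MBIAS = mean(forecast) / mean(observation)
    fcst = [1.8]     # forecast U at the tower (m/s) -- invented value
    obs  = [0.02]    # observed U happens to be near zero (m/s) -- invented value

    mbias = (sum(fcst) / len(fcst)) / (sum(obs) / len(obs))
    print(mbias)     # ~90: a single near-zero ob inflates MBIAS dramatically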

You really should not be computing continuous statistics for each
individual run.  The statistics computed are not meaningful unless you
have a large enough sample size.  So for each run of
Point-Stat, I'd suggest disabling all line types except for the MPR
line type.  And then run STAT-Analysis jobs to aggregate together
those MPR lines and recompute statistics through time.
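
For example, the aggregation step might look something like this (a sketch
only -- the exact job options and the config syntax for turning line types
on and off depend on your MET version, and UGRD and the paths here are just
placeholders):

    stat_analysis \
       -lookin /path/to/point_stat_output \
       -job aggregate_stat -line_type MPR -out_line_type CNT \
       -fcst_var UGRD \
       -out ugrd_cnt.txt

Filters such as -fcst_lead or -fcst_valid_beg/-fcst_valid_end can narrow a
job like this down further, for instance to one lead time at a time.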

For a single location, I would think looking at a time series of the
forecast and observation values might be helpful.  For more
statistical advice, I'd refer you to our MET project manager and local
statistician, Tressa Fowler (tressa at ucar.edu).

Hope this helps.

John

On 11/05/2010 10:20 AM, RAL HelpDesk {for Dembek, Scott
Robert[Universities Space Research Association]} wrote:
> I'm helping a colleague with WRF verification of 5-minute winds
> using point obs from a tower, and have been asked to use BIAS as one
> measure of performance -- MBIAS in this case.  I'm looking at U and V
> over a single month (28 usable forecasts) and am calculating the
> verification at the true 5-minute intervals for starters.

------------------------------------------------
Subject: MBIAS and Wind Verification (?)
From: Dembek, Scott Robert[Universities Space Research Association]
Time: Fri Nov 05 13:02:21 2010

Hi John,

The stats are being computed at each 5-minute interval, aggregating all
available forecasts (28 forecasts in this case).  So I have a value
for MBIAS for each 5-minute interval of a 24-hour forecast that is based
on 28 matched pairs, i.e., 28 pairs for 0-5 minutes, 28 pairs for 5-10
minutes, ..., 28 pairs for 23 hours 55 minutes - 24 hours.  Perhaps, as
you suggest, this is not the best way to aggregate the forecasts, but it
is what was suggested to me for starters.  I was just surprised to see
the values jump around so much for U but remain consistent for V.
Wind speed looked fine as well.
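
Although, given that MBIAS is the mean forecast over the mean observation, I
suppose the same near-zero problem could show up even with 28 pairs if the
observed U values at a particular interval straddle zero and nearly cancel in
the mean.  A small illustration (made-up numbers, plain Python):

    # 28 matched pairs at one 5-minute lead time; obs U alternates sign,
    # so the 28 values nearly cancel in the mean.
    obs_u  = [1.5 if i % 2 == 0 else -1.4 for i in range(28)]  # mean ~0.05 m/s
    fcst_u = [o + 0.5 for o in obs_u]                          # modest 0.5 m/s error

    mean_obs  = sum(obs_u) / len(obs_u)
    mean_fcst = sum(fcst_u) / len(fcst_u)
    print(mean_obs)              # ~0.05: no single pair looks odd, but the mean is tiny
    print(mean_fcst / mean_obs)  # ~11: MBIAS inflated by the near-zero denominator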

I will definitely contact Tressa for some advice.  Thanks for the
information.

Scott



------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #41951] MBIAS and Wind Verification (?)
From: John Halley Gotway
Time: Fri Nov 05 15:17:19 2010

Scott,

OK, I understand better now.

As a software developer, I am of course mostly concerned that the
software is doing the correct computations.  So my question for you is
this... If you compute the continuous statistics from a single
run of Point-Stat (for example, the 5-10 minute forecast) and then use
the MPR output from Point-Stat and run it through the STAT-Analysis
tool to compute continuous statistics, do you get the same
numbers out?
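
One way to run that check (a sketch -- the file name is a placeholder and the
option details may vary with your MET version) is to point STAT-Analysis at
the .stat output from that single run and recompute the CNT statistics from
its MPR lines:

    stat_analysis \
       -lookin point_stat_output.stat \
       -job aggregate_stat -line_type MPR -out_line_type CNT \
       -fcst_var UGRD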

You should of course get the same results.  And when I check this
myself, I do.  But I just want to make sure you're not seeing some odd
behavior.

Thanks,
John


------------------------------------------------

