[Met_help] [Fwd: Re: point_stat]
Mark Seefeldt
mark.seefeldt at Colorado.EDU
Tue May 5 18:59:26 MDT 2009
Tressa,
Thank you for the explanation of the way MET handles correlation.
Mark
Tressa Fowler wrote:
> Mark,
>
> John forwarded your message in the hope that I might shed a little light
> on your question regarding correlation calculations in MET.
>
> The correlations in MET are not calculated over space or time. They are
> not within-field correlation values, but correlations between two fields. They are
> calculated at a single time, between each forecast and observation pair
> in the domain, treating each pair as if it were independent of all other
> pairs (obviously false). The information you get from this is how well
> the forecasts and observations "match up" in a linear way. It gives you
> no information about how forecasts are related to the other surrounding
> forecasts in either space or time.
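>
> In other words, if (f_i, o_i) for i = 1, ..., N are the matched pairs
> over the domain at one valid time, the correlation reported in the CNT
> output is essentially the usual Pearson coefficient over those pairs:
>
>   r = \frac{\sum_i (f_i - \bar{f})(o_i - \bar{o})}
>            {\sqrt{\sum_i (f_i - \bar{f})^2 \sum_i (o_i - \bar{o})^2}}
>
> where \bar{f} and \bar{o} are the means over the N pairs. Nothing in
> that sum uses the spatial or temporal ordering of the pairs.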
>
> Hope that helps. Please let me know if you have further questions.
>
> Tressa
>
>
> On Apr 30, 2009, at 12:53 PM, John Halley Gotway wrote:
>
>> Tressa and Barb,
>>
>> FYI - Here's a user who is trying to use MET to verify a single
>> station over the course of a month. Please see the message traffic
>> below.
>>
>> I explained that MET isn't set up well to handle that case, but
>> outlined what he'd have to do to accomplish it. He's decided not to
>> use MET because the steps are too cumbersome to accomplish what he
>> wants to do.
>>
>> Tressa, there's a question in there about the CIs being computed in
>> space rather than in time.
>>
>> Feel free to send any advice or suggestions you might have.
>>
>> Thanks,
>> John
>>
>> -------- Original Message --------
>> Subject: Re: [Met_help] point_stat
>> Date: Thu, 30 Apr 2009 12:48:37 -0600
>> From: Mark Seefeldt <mark.seefeldt at colorado.edu>
>> To: John Halley Gotway <johnhg at rap.ucar.edu>
>>
>> John,
>>
>> Thank you for the thorough description of MET in relation to my current
>> application. Naturally, I am quite disappointed that it cannot be used
>> for my current evaluation of the performance of WRF over time. I
>> appreciate the steps which you outlined. Unfortunately, they are just
>> too cumbersome for this evaluation. I have run 120 different WRF
>> simulations with variations in the physics parameterizations. Each
>> simulation has a 50km and a 10km domain. The simulations are one month
>> in length, or 720/744 3-hourly values. To run point_stat over all of
>> those values (120 * 720/744) for the two different observation locations
>> would quickly make things unbearable. I will now return to my own
>> methods of calculating model evaluation statistics, which unfortunately
>> do not include confidence intervals.
>>
>> I am a little confused as to how you can get a correlation value if the
>> verification is only done at a single point in time. That would
>> seem to indicate to me that the correlation is a spatial value, which
>> has limited meaning as it depends on how one progresses through the
>> observations spatially. I'll take a closer look at the documentation to
>> answer that question.
>>
>> Thanks again for your assistance and providing clear answers.
>> Unfortunately, I am going to have to shelve MET as a post-processing
>> tool for WRF.
>>
>> Mark
>>
>> John Halley Gotway wrote:
>>> Mark,
>>>
>>> Glad it's working now. I did notice how your observations were laid
>>> out and was wondering what type of verification you were trying to
>>> do.
>>>
>>> Basically, you'd like to collect matched forecast/observation pairs
>>> at a single location through time, and then compute statistics on
>>> that set of matched pairs. Unfortunately, MET isn't set up to handle
>>> that type of task well. You can use MET to do it, but at this point,
>>> it's a bit more cumbersome than I'd like.
>>>
>>> Point-Stat is designed to compare a forecast field to a set of
>>> observations at a SINGLE point in time. Point-Stat is able to aggregate
>>> matched pairs in space, but not in time as you'd like. Unfortunately,
>>> 'cat'ing together all of your forecast files does not have the
>>> desired effect. Since the output of WPP is one file for each valid
>>> time, that's the type of data that MET expects. When you specify the
>>> forecast field as "PRES/Z0" in the configuration file, Point-Stat
>>> looks in the input forecast file for a matching record. It uses the
>>> first one it finds, so it'd only use the data for the first valid
>>> time in your file and ignore the rest of the records.
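>>>
>>> If you want to see this for yourself, a quick check (using wgrib, for
>>> example) is something like:
>>>
>>>    wgrib wppout_d01_1998-04-30_00.grb | grep ":PRES:"
>>>
>>> That will list one PRES record per valid time in the cat'ed file, but
>>> Point-Stat only reads the first match.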
>>>
>>> Here's how you'd need to do this:
>>>
>>> (1) Do NOT cat together your forecast GRIB files - keep them separate.
>>>     But all your observation points can be in the same file.
>>>
>>> (2) Create a masking station id file that lists the stations you'd
>>>     like to verify (just "Barro", I suppose).
>>>
>>> (3) In the Point-Stat config file, set the following:
>>>
>>>     - Set the "mask_sid" variable to point to that station id file.
>>>     - Set "beg_ds" and "end_ds" to define a matching time window
>>>       around each forecast valid time. This should be set carefully
>>>       so that you get exactly one matched pair for Barro for each
>>>       run. You don't want to accidentally include the one from the
>>>       day before or the day after.
>>>     - Set the output_flag as follows to dump out only the matched
>>>       pair data:
>>>       output_flag[] = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ];
>>>     - Set interp_method[] = [ "UW_MEAN" ]; for the unweighted mean.
>>>     - Set interp_width[] = [ 1 ]; to only use the nearest neighbor.
>>>     - You may want to consider using additional interpolation
>>>       methods and widths. That way you could see how the results
>>>       change by smoothing over larger areas.
>>>
>>> (4) For each forecast valid time, run Point-Stat once. You need to
>>>     pass it the forecast file for that time and the point
>>>     observation file.
>>>
>>> (5) Now you've run Point-Stat about 30 times and have generated
>>>     about 30 STAT files - each containing only one MPR line (or
>>>     multiple if you use multiple interpolation methods). To
>>>     aggregate through time, you can run the STAT-Analysis tool,
>>>     passing it the directory containing those 30 STAT files with
>>>     "-lookin stat_dir". You'll want to run the "-job aggregate_stat"
>>>     job with "-line_type MPR". And you can select the type of stats
>>>     you want computed by setting the "-out_line_type" argument. If
>>>     you're using multiple interpolation methods, you can use the
>>>     "-interp_mthd" and "-interp_pnts" arguments to specify which
>>>     matched pairs should go into the calculations.
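>>>
>>> To make that concrete, here's a rough sketch of those settings and
>>> commands. It's just an illustration - double-check the option names
>>> against the METv2.0 User's Guide, and note that the station id file,
>>> output directory, and per-time forecast file names are placeholders:
>>>
>>>    // In the Point-Stat config file (only the relevant settings shown)
>>>    mask_sid        = "barrow_sid.txt";
>>>    beg_ds          = -1800;   // +/- 30 minutes, sized so that exactly
>>>    end_ds          =  1800;   // one Barrow ob falls in each window
>>>    interp_method[] = [ "UW_MEAN" ];
>>>    interp_width[]  = [ 1 ];
>>>    output_flag[]   = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ];
>>>
>>>    # Run Point-Stat once per forecast valid time
>>>    for FCST in wppout_d01_1998-05-*.grb; do
>>>       point_stat ${FCST} phy_sheba-barrow-d01-199805.nc \
>>>          PointStatConfig-phy_sheba -outdir stat_dir -v 2
>>>    done
>>>
>>>    # Aggregate the MPR lines through time into continuous statistics
>>>    stat_analysis -lookin stat_dir \
>>>       -job aggregate_stat -line_type MPR -out_line_type CNT \
>>>       -interp_mthd UW_MEAN -interp_pnts 1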
>>>
>>> So that'd be the way to do it. Sorry it's so cumbersome. We do
>>> realize that it'd be nice to perform this type of verification more
>>> directly in a single step. We're considering how best to support
>>> this type of verification through time.
>>>
>>> Thanks, John
>>>
>>> Mark Seefeldt wrote:
>>>> John,
>>>>
>>>> Thank you for posting the fix so quickly. I have retrieved the set
>>>> of patches and have recompiled MET. I am now getting matched
>>>> pairs.
>>>>
>>>> I still appear to be struggling to produce the output which I
>>>> desire. I am guessing this is more of a user problem. I'd like to
>>>> step you through what I am trying to do and see if you have any
>>>> tips to improve what I am doing.
>>>>
>>>> I have a month-long simulation from WRF. The simulation runs for
>>>> one month plus one day, starting on the last day of the previous
>>>> month. I have created GRIB files using WPP from the wrfout files.
>>>> The GRIB files were created at the three-hour intervals of the
>>>> history file. I have cat'ed the GRIB files together, making a
>>>> single file which represents the entire month-long simulation.
>>>> That GRIB file is: wppout_d01_1998-04-30_00.grb
>>>>
>>>> I have one month of observations for a given observation location.
>>>> In this case the Barrow Baseline Surface Radiation Network (BSRN)
>>>> observations for May 1998. From the original observation file I
>>>> have a program which creates a text file of the observations. That
>>>> text file is then processed by ascii2nc to create the netcdf input
>>>> file for point_stat. The netcdf file is:
>>>> phy_sheba-barrow-d01-199805.nc
>>>>
>>>> The desired end result is a list of the continuous statistics for
>>>> the entire month, comparing the WRF simulation at the nearest grid
>>>> point to Barrow against the actual Barrow observations. There would
>>>> be a maximum of 744 observation/forecast pairs (31 days x 24 hours).
>>>> This represents a maximum, and not the expected count, because there
>>>> are some missing values.
>>>>
>>>> When I initially ran point_stat I only got 2 matching pairs. I
>>>> added the command-line options -valid_beg 19980501_00 and
>>>> -valid_end 19980531_23. I then got 743, 741, 703, 731, and 736
>>>> matching pairs, depending on the variable of interest (e.g., T_2m).
>>>> That is what I would expect.
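>>>>
>>>> For reference, the MET-related commands I am running look roughly
>>>> like the following (the cat line is schematic - I actually list the
>>>> individual 3-hourly GRIB files there):
>>>>
>>>>    # combine the 3-hourly WPP GRIB files into one month-long file
>>>>    cat <individual 3-hourly GRIB files> > wppout_d01_1998-04-30_00.grb
>>>>
>>>>    # convert the Barrow observation text file to netCDF
>>>>    ascii2nc phy_sheba-barrow-d01-199805.txt phy_sheba-barrow-d01-199805.nc
>>>>
>>>>    # run point_stat over the month
>>>>    point_stat wppout_d01_1998-04-30_00.grb phy_sheba-barrow-d01-199805.nc \
>>>>       PointStatConfig-phy_sheba -valid_beg 19980501_00 -valid_end 19980531_23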
>>>>
>>>> When looking at the CNT file, things became a little more suspect.
>>>>
>>>> -I initially noticed the following:
>>>>      FCST_VALID_BEG : 19980501_000000
>>>>      FCST_VALID_END : 19980501_000000
>>>>      OBS_VALID_BEG  : 19980501_000000
>>>>      OBS_VALID_END  : 19980531_230000
>>>>  The OBS fields are what I would expect, but I would expect
>>>>  FCST_VALID_END to be 19980531_2300.
>>>>
>>>> -I also noticed that I do not have FSTDEV, FSTDEV_NCL, FSTDEV_NCU,
>>>> FSTDEV_BCL, and FSTDEV_BCU values (all are listed as NA). I am
>>>> wondering if it is only using the forecast value for 19980501_0000,
>>>> and therefore it cannot calculate an FSTDEV.
>>>>
>>>> My questions are:
>>>> -Is what I am trying to do reasonable?
>>>> -Is the methodology I am using correct?
>>>> -Why does the FCST_VALID_END not go to the end of the month?
>>>> -Why do I not have FSTDEV values?
>>>>
>>>> Thanks
>>>>
>>>> Mark
>>>>
>>>> John Halley Gotway wrote:
>>>>> Mark,
>>>>>
>>>>> I posted a fix for this issue. Please retrieve the fix from the
>>>>> MET Known Issues page:
>>>>> http://www.dtcenter.org/met/users/support/known_issues/METv2.0/index.php
>>>>>
>>>>>
>>>>>
>>>>> I'd suggest following the instructions in the "All Recommended
>>>>> Updates" section. There are now two bug fixes available and some
>>>>> minor updates to the user's guide, and doing it this way, you'll
>>>>> grab all of the updates.
>>>>>
>>>>> Feel free to write with any more questions or problems.
>>>>>
>>>>> Thanks, John
>>>>>
>>>>> Mark Seefeldt wrote:
>>>>>> John,
>>>>>>
>>>>>> Thanks for the update. Please pass around the fix when you
>>>>>> have it completed. The information you provided is valuable as
>>>>>> it means that I can start processing the GRIB files for the
>>>>>> complete evaluation.
>>>>>>
>>>>>> Mark
>>>>>>
>>>>>> John Halley Gotway wrote:
>>>>>>> Mark,
>>>>>>>
>>>>>>> Thanks for sending the data. I see what the problem is -
>>>>>>> there's a bug in the library code that reads the valid time
>>>>>>> of the GRIB forecast file. It thinks it's 2098 as opposed to
>>>>>>> 1998. So Point-Stat is looking for observation values that
>>>>>>> are in the time window 20980430 +/- 5400 seconds. And of
>>>>>>> course, it doesn't find any!
>>>>>>>
>>>>>>> I'm headed out for the day, but I'll put together a fix and
>>>>>>> send it to you tomorrow.
>>>>>>>
>>>>>>> In the meantime, try using the "-valid_beg" and "-valid_end"
>>>>>>> command line options to manually set the matching time
>>>>>>> window. That should get you non-zero matched pairs.
>>>>>>>
>>>>>>> Thanks for finding this issue!
>>>>>>>
>>>>>>> John
>>>>>>>
>>>>>>> Mark Seefeldt wrote:
>>>>>>>> John,
>>>>>>>>
>>>>>>>> Thank you for all of the tips and suggestions which you
>>>>>>>> have provided. I have worked through the different items
>>>>>>>> and I am still not getting matched pairs when I should be.
>>>>>>>>
>>>>>>>> I have uploaded the following files to the anonymous ftp:
>>>>>>>> phy_sheba-barrow-d01-199805.nc - nc observation file
>>>>>>>> phy_sheba-barrow-d01-199805.txt - text observation file
>>>>>>>> wppout_d01_1998-04-30_00.grb - GRIB output from using WPPv3.1
>>>>>>>> PointStatConfig-phy_sheba - point_stat configuration file
>>>>>>>>
>>>>>>>> The WRF simulation is for an entire month, centered over
>>>>>>>> Alaska. The observations are surface pressure, temperature,
>>>>>>>> relative humidity, downwelling shortwave, and downwelling
>>>>>>>> longwave radiation for a single site, Barrow, Alaska.
>>>>>>>>
>>>>>>>> Let me know if you have any additional questions.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>> Mark
>>>>>>>>
>>>>>>>> John Halley Gotway wrote:
>>>>>>>>> Mark,
>>>>>>>>>
>>>>>>>>> Let me make a few comments about this.
>>>>>>>>>
>>>>>>>>> First, depending on how you configure Point-Stat, getting
>>>>>>>>> 0 matched pairs for certain combinations of
>>>>>>>>> variables/message type may be fine. For example, if you
>>>>>>>>> configure Point-Stat to verify Temperature at 2-meters
>>>>>>>>> above the surface (TMP/Z2) and at 500mb (TMP/P500) using
>>>>>>>>> message types of ADPSFC (surface obs) and ADPUPA (upper
>>>>>>>>> air obs), you would actually expect to get 0 matched
>>>>>>>>> pairs for TMP/Z2 vs ADPUPA and 0 matched pairs for
>>>>>>>>> TMP/P500 vs ADPSFC. So sometimes having 0 matched pairs
>>>>>>>>> is fine.
>>>>>>>>>
>>>>>>>>> However, if you're getting 0 matched pairs when you
>>>>>>>>> expect that you should actually be finding some, here's
>>>>>>>>> what I'd ask myself:
>>>>>>>>>
>>>>>>>>> - Am I applying some masking region (a grid or a
>>>>>>>>> polyline) that is perhaps not working like I expect? Try
>>>>>>>>> rerunning with the masking grid set to FULL to verify
>>>>>>>>> over the whole domain.
>>>>>>>>>
>>>>>>>>> - Does my forecast field contain valid data? Clearly
>>>>>>>>> Point-Stat is finding the fields you'd like to verify,
>>>>>>>>> otherwise it'd error out. But if what it's finding
>>>>>>>>> contains only bad data, it won't find any matched pairs.
>>>>>>>>> Can you view the forecast field with some other tool to
>>>>>>>>> check that the field contains valid data? For NetCDF
>>>>>>>>> format, use ncview. For GRIB, "wgrib -V" will tell you
>>>>>>>>> the min/max data values. Or you could view the GRIB file
>>>>>>>>> using NCL or IDV. Or you could run it through the
>>>>>>>>> MET-MODE tool and look at the output plot.
>>>>>>>>>
>>>>>>>>> - Do I have my valid times correct? Am I using
>>>>>>>>> observations that are valid around the same time that my
>>>>>>>>> forecast file is valid? In the Point-Stat config file,
>>>>>>>>> you could set the "beg_ds" and "end_ds" values to define
>>>>>>>>> a VERY large time window to see if you can get some
>>>>>>>>> matched pairs.
>>>>>>>>>
>>>>>>>>> - Lastly, are the observations I'm using failing to match
>>>>>>>>> my forecast for some other reason? For example, are the
>>>>>>>>> message types for the observations correct? You could
>>>>>>>>> try doing an ncdump to see what message types are in your
>>>>>>>>> point observation file (ncdump -v hdr_typ file_name.nc |
>>>>>>>>> sort -u). Anything else that keeps the observations from
>>>>>>>>> matching would be the most difficult to determine!
>>>>>>>>>
>>>>>>>>> Hopefully that'll help you figure out what's going on
>>>>>>>>> with your data. I'd suggest "opening" things up as much
>>>>>>>>> as possible (mask grid = FULL and set beg_ds/end_ds very
>>>>>>>>> large) to try to get non-zero matched pairs, and go from
>>>>>>>>> there.
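>>>>>>>>>
>>>>>>>>> For example (substituting your own file names), those checks
>>>>>>>>> boil down to something like:
>>>>>>>>>
>>>>>>>>>    # min/max of each record in the GRIB forecast file
>>>>>>>>>    wgrib -V wrf_fcst.grb | less
>>>>>>>>>
>>>>>>>>>    # message types present in the point observation file
>>>>>>>>>    ncdump -v hdr_typ point_obs.nc | sort -u
>>>>>>>>>
>>>>>>>>> and, in the Point-Stat config file, something like:
>>>>>>>>>
>>>>>>>>>    mask_grid[] = [ "FULL" ];
>>>>>>>>>    beg_ds      = -86400;   // a very wide +/- 1 day window
>>>>>>>>>    end_ds      =  86400;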
>>>>>>>>>
>>>>>>>>> If you're still having problems after trying these
>>>>>>>>> things, feel free to send me some sample files, and I
>>>>>>>>> could take a look to see what's going on. You'd need to
>>>>>>>>> send me:
>>>>>>>>> (1) Forecast file input for Point-Stat.
>>>>>>>>> (2) Observation file input for Point-Stat.
>>>>>>>>> (3) Configuration file input for Point-Stat.
>>>>>>>>> And you could post those files to RAL's anonymous ftp site:
>>>>>>>>>    ftp ftp.rap.ucar.edu
>>>>>>>>>    username = anonymous
>>>>>>>>>    password = "your email address"
>>>>>>>>>    cd incoming/irap/johnhg
>>>>>>>>>    put "those 3 files"
>>>>>>>>>    bye (to exit ftp)
>>>>>>>>>
>>>>>>>>> Thanks and good luck, John
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Mark Seefeldt wrote:
>>>>>>>>>> I am working on a model evaluation using point_stat in
>>>>>>>>>> MET. As it processes, I am getting 0 matched pairs and
>>>>>>>>>> therefore no statistics. Is there a preferred method to
>>>>>>>>>> identify whether it is the observation file, the forecast
>>>>>>>>>> file, or the configuration file where the error resides,
>>>>>>>>>> resulting in the lack of matched obs/fcst values? I am
>>>>>>>>>> at a loss as to what is wrong in my setup that is
>>>>>>>>>> preventing the obs/fcst pairs from being matched and the
>>>>>>>>>> output from being created.
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>> Mark
>>>
>