[Met_help] [rt.rap.ucar.edu #66543] History for Question on online tutorial

John Halley Gotway via RT met_help at ucar.edu
Mon Jul 7 14:45:17 MDT 2014


----------------------------------------------------------------
  Initial Request
----------------------------------------------------------------

To whom it may concern,

I have a question about the MET online tutorial.

I downloaded the MET source code and compiled it successfully.

I tried to run the PB2NC tool following the online tutorial, but I got the
error message shown below.

-----------------------------------------------------------------------------------------------------------------
DEBUG 1: Default Config File:
/home/dklee/ANALYSIS/MET_v41/METv4.1/data/config/PB2NCConfig_default
DEBUG 1: User Config File:
/home/dklee/ANALYSIS/MET_v41/METv4.1/tutorial/config/PB2NCConfig_tutorial
DEBUG 1: Creating NetCDF File:
 /home/dklee/ANALYSIS/MET_v41/METv4.1/tutorial/out/pb2nc/tutorial_pb.nc
DEBUG 1: Processing PrepBufr File:
 /home/dklee/ANALYSIS/MET_v41/METv4.1/data/sample_obs/prepbufr/ndas.t00z.prepbufr.tm12.20070401.nr
DEBUG 1: Blocking PrepBufr file to:     /tmp/tmp_pb2nc_blk_8994_0
Segmentation fault
-----------------------------------------------------------------------------------------------------------------

Could you give me some advice on this problem?

Could you also send me the output of PB2NC (i.e., tutorial_pb.nc)? I
need this file to run the Point-Stat Tool tutorial.

Thank you for your kindness.

Best regards
Yonghan Choi


----------------------------------------------------------------
  Complete Ticket History
----------------------------------------------------------------

Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Wed Apr 30 13:22:48 2014

Yonghan,

Sorry to hear that you're having trouble running pb2nc in the online
tutorial.  Can you tell me, are you able to run it fine using the
script included in the tarball?

After you compiled MET, did you go into the scripts directory and run
the test scripts?

   cd METv4.1/scripts
   ./test_all.sh >& test_all.log

Does pb2nc run OK in the test scripts, or do you see a segmentation
fault there as well?

Thanks,
John Halley Gotway
met_help at ucar.edu

On 04/30/2014 02:54 AM, Yonghan Choi via RT wrote:
>
> Wed Apr 30 02:54:33 2014: Request 66543 was acted upon.
> Transaction: Ticket created by cyh082 at gmail.com
>         Queue: met_help
>       Subject: Question on online tutorial
>         Owner: Nobody
>    Requestors: cyh082 at gmail.com
>        Status: new
>   Ticket <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >

------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Thu May 01 03:28:19 2014

Dear John Halley Gotway,

Yes, I ran the test script. I checked the log file, and running pb2nc
resulted in the same error (segmentation fault).

And I have another question.

Actually, I would like to run the Point-Stat tool with AWS observations
and WRF model outputs as its inputs.

Then, should I run ascii2nc to make an input observation file for the
Point-Stat tool from my own AWS observations?

And should I run the Unified Post Processor or pinterp to make an input
gridded file for the Point-Stat tool from my WRF forecasts? Is it
necessary to run pcp_combine after running UPP or pinterp?

Thank you for your kindness.

Best regards
Yonghan Choi


------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Thu May 01 09:34:45 2014

Yonghan,

If you will not be using PREPBUFR point observations, you won't need
the PB2NC utility.  I'm happy to help you debug the issue to try to
figure out what's going on, but it's up to you.

To answer your questions, yes, if your AWS observations are in ASCII,
I'd suggest reformatting them into the 11-column format that ASCII2NC
expects.  After you run them through ASCII2NC, you'll be
able to use them in point_stat.
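For reference, a minimal sketch of the 11-column point-observation format that ASCII2NC reads (column meanings per the MET User's Guide; the station ID, coordinates, and observed value below are made-up, and GRIB code 11 is 2-m temperature):

```shell
# The 11 columns ascii2nc expects are:
#   Message_Type Station_ID Valid_Time(YYYYMMDD_HHMMSS) Lat Lon Elevation
#   Grib_Code Level Height QC_String Observation_Value
# Write one hypothetical AWS 2-m temperature observation (Kelvin):
cat > aws_obs.txt << 'EOF'
ADPSFC AWS_0001 20130704_120000 37.57 126.97 85.0 11 2 2 NA 295.4
EOF
awk '{ print NF }' aws_obs.txt   # each row must have exactly 11 fields
# Real usage, run from the top of the METv4.1 build:
#   bin/ascii2nc aws_obs.txt aws_obs.nc
```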

I'd suggest using the Unified Post Processor (UPP), whose output format
is GRIB.  MET handles GRIB files very well.  It can read pinterp
output as well, but not variables on staggered dimensions,
such as the winds.  For that reason, using UPP is better.

The pcp_combine tool is run to modify precipitation accumulation
intervals.  This is all driven by your observations.  For example,
suppose you have 24-hour, daily observations of accumulated
precipitation.  You'd want to compare a 24-hour forecast accumulation
to that 24-hour observed accumulation.  So you may need to run
pcp_combine to add or subtract accumulated precipitation across
your WRF output files.  If you're only verifying instantaneous
variables, such as temperature or winds, you wouldn't need to run
pcp_combine.

Hope that helps.

Thanks,
John


------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Thu May 08 02:38:58 2014

Dear Dr. John Halley Gotway,

I decided to use point observations (AWS observations) in ASCII
format.

I have some questions about how to create the 11-column format.

1. If I use AWS observations, is the message type (column #1) "ADPSFC"?

2. If I use 12-h accumulated precipitation from 00 UTC 4 July 2013 to
12 UTC 4 July 2013, is the valid time (column #3) "20130704_120000"?

3. Is the GRIB code (column #7) for accumulated precipitation "61"?

4. Is the level (column #8) "12"?

5. What is an appropriate value for the QC string (column #10)?

And... I will use UPP as suggested.

6. Should I modify wrf_cntrl.parm (included in the code) to output the
accumulated total rainfall amount?

Finally, as you know, WRF output includes rainfall accumulated up to
the forecast time.
For example, if the initial time of the WRF forecast is 00 UTC 4 July
2013, the output file for 12 UTC 4 July 2013 includes the 12-h
accumulated rainfall.

7. Then, should I use the pcp_combine tool to make a 12-h accumulated
WRF forecast?
If yes, how can I do this?

Thank you for your kindness.

Best regards
Yonghan Choi



------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Thu May 08 12:15:08 2014

Yonghan,

1. For precipitation, I often see people using the message type of
"MC_PCP", but I don't think it really matters.

2. Yes, the valid time is the end of the accumulation interval.
20130704_120000 is correct.

3. Yes, the GRIB code for accumulated precip is 61.

4. Yes, the level would be "12" for 12 hours of accumulation.  Using
"120000" would work too.

5. The QC column can be filled with any string.  If you don't have any
quality control values for this data, I'd suggest just putting "NA" in
the column.

6. I would guess that accumulated total rainfall amount is already
turned on in the default wrf_cntrl.parm file.  I'd suggest running UPP
once and looking at the output GRIB file.  Run the GRIB file
through the "wgrib" utility to dump its contents and look for "APCP"
in the output.  APCP is the GRIB code abbreviation for accumulated
precipitation.
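To illustrate, wgrib prints one inventory line per GRIB record, so filtering for APCP confirms accumulated precipitation is present. The inventory line below is fabricated for illustration, and the UPP output file name in the comment is an assumption:

```shell
# One fabricated wgrib-style inventory line (record number, byte offset,
# date, variable abbreviation, GRIB code, level, accumulation interval):
inv='5:178:d=13070400:APCP:kpds5=61:sfc:0-12hr acc:'
echo "$inv" | grep -c ':APCP:'   # prints 1 when an APCP record is present
# Real usage against UPP output (file name is an assumption):
#   wgrib WRFPRS_d01.12 | grep ':APCP:'
```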

7. As you've described, by default, WRF-ARW computes a runtime
accumulation of precipitation.  So your 48-hour forecast contains 48
hours of accumulated precipitation.  To get 12-hour accumulations,
you have two choices:
   - You could modify the TPREC setting when running WRF to "dump the
accumulation bucket" every 12 hours.  That'd give you 12-hour
accumulations in your GRIB files.
   - Or you could keep it as a runtime accumulation and run
pcp_combine to compute the 12-hour accumulations.  For example, you
subtract the 48-hour accumulation from the 60-hour accumulation to get
the 12 hours in between.
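As a sketch of the second choice, the derived amount is simply (later accumulation) minus (earlier accumulation); the pcp_combine command in the comment follows the subtract-mode syntax as I understand it for METv4.1, with hypothetical file names and totals:

```shell
# Illustrative runtime accumulation totals (mm) at forecast hours 60 and 48:
accum_60=83.4
accum_48=70.2
# 12-h accumulation covering forecast hours 48-60:
awk -v a="$accum_60" -v b="$accum_48" 'BEGIN { printf "%.1f\n", a - b }'
# Real usage (file names are assumptions):
#   bin/pcp_combine -subtract wrfprs_d01.60 60 wrfprs_d01.48 48 apcp_12h_f60.nc
```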

Hope that helps get you going.

Thanks,
John


------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Fri May 09 06:22:00 2014

Dear Dr. John Halley Gotway,

Thank you for your kind tips on MET.

I successfully ran the ASCII2NC, UPP, and Point-Stat tools.

I have additional questions.

1. When I make the 11-column format for the ASCII2NC tool, how can I
deal with missing values?
Can I use "-9999.00" to indicate a missing value in the observation
value (11th) column?

2. How can I use the pcp_combine tool in subtract mode?
Only the usage for sum mode is given in the users' guide.

3. If I would like to run the MODE tool or the Wavelet-Stat tool,
I need gridded model forecasts and gridded observations.

Could you recommend some methods for making gridded observations from
point (ASCII-format) observations?

Thank you.

Best regards
Yonghan Choi


On Fri, May 9, 2014 at 3:15 AM, John Halley Gotway via RT
<met_help at ucar.edu
> wrote:

> Yonghan,
>
> 1. For precipitation, I often see people using the message type of
> "MC_PCP", but I don't think it really matters.
>
> 2. Yes, the valid time is the end of the accumulation interval.
>  20130704_120000 is correct.
>
> 3. Yes, the GRIB code for accumulated precip is 61.
>
> 4. Yes, the level would be "12" for 12 hours of accumulation.  Using
> "120000" would work too.
>
> 5. The QC column can be filled with any string.  If you don't have
any
> quality control values for this data, I'd suggest just putting "NA"
in the
> column.
>
> 6. I would guess that accumulated total rainfall amount is already
turned
> on in the default wrf_cntrl.parm file.  I'd suggest running UPP once
and
> looking at the output GRIB file.  Run the GRIB file
> through the "wgrib" utility to dump it's contents and look for
"APCP" in
> the output.  APCP is the GRIB code abbreviation for accumulated
> precipitation.
>
> 7. As you've described, by default, WRF-ARW computes a runtime
> accumulation of precipitation.  So your 48-hour forecast contains 48
hours
> of accumulated precipitation.  To get 12-hour accumulations,
> you have 2 choices:
>    - You could modify the TPREC setting when running WRF to "dump
the
> accumulation bucket" every 12 hours.  That'd give you 12-hour
accumulations
> in your GRIB files.
>    - Or you could keep it as a runtime accumulation and run
pcp_combine to
> compute the 12-hour accumulations.  For example, you subtract 60
hours of
> accumulation minus 48 hours of accumulation to get
> the 12 hours in between.
>
> Hope that helps get you going.
>
> Thanks,
> John
>
> On 05/08/2014 02:38 AM, Yonghan Choi via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >
> >
> > Dear Dr. John Halley Gotway,
> >
> > I decided to use point observations (AWS observations) in ASCII
format.
> >
> > I have some questions on how to make 11-column format.
> >
> > 1. If I use AWS observations, is message type (column #1)
"ADPSFC"?
> >
> > 2. If I use 12-h accumulated precipitation from 00 UTC 4 July 2013
to 12
> > UTC 4 July 2013, is valid time (column #3) "20130704_120000"?
> >
> > 3. Is grib code (column #7) for accumulated precipitation "61"?
> >
> > 4. Is level (column #8) "12"?
> >
> > 5. What is appropriate value for QC string (column #10)?
> >
> > And... I will use UPP as suggested.
> >
> > 6. Should I modify wrf_cntrl.parm (included in the code) to output
> > accumulated total rainfall amount?
> >
> > Finally, as you know, WRF output includes accumulated rainfall up
to
> > forecast time.
> > For example, if initial time for WRF forecast is 00 UTC 4 July
2013,
> output
> > file for 12 UTC 4 July 2013 includes 12-h accumulated rainfall.
> >
> > 7. Then, should I use pcp-combine tool to make 12-h accumulated
WRF
> > forecast?
> > If yes, how can I do this?
> >
> > Thank you for your kindness.
> >
> > Best regards
> > Yonghan Choi
> >
> >
> > On Fri, May 2, 2014 at 12:34 AM, John Halley Gotway via RT <
> > met_help at ucar.edu> wrote:
> >
> >> Yonghan,
> >>
> >> If you will not be using PREPBUFR point observations, you won't
need the
> >> PB2NC utility.  I'm happy to help you debug the issue to try to
figure
> out
> >> what's going on, but it's up to you.
> >>
> >> To answer your questions, yes, if your AWS observations are in
ASCII,
> I'd
> >> suggest reformatting them into the 11-column format that ASCII2NC
> expects.
> >>   After you run them through ASCII2NC, you'll be
> >> able to use them in point_stat.
> >>
> >> I'd suggest using the Unified Post Processor (UPP) whose output
format
> is
> >> GRIB.  MET handles GRIB files very well.  It can read the pinterp
> output as
> >> well, but not variables on staggered dimensions,
> >> such as the winds.  For that reason, using UPP is better.

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Fri May 09 10:10:37 2014

Yonghan,

1. Yes, -9999 is the value to use for missing data.  However, I don't
understand why you'd use -9999 in the 11th column for the "observed
value".  When point-stat encounters a bad observation value,
it'll just skip that record.  So you should just skip over any
observations with a bad data value.
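
As a minimal sketch of that filtering step (the observation values here are made up for illustration), you could simply drop flagged records before writing the ASCII2NC input, since point_stat would skip them anyway:

```python
# Drop any observation carrying the -9999 missing-data flag,
# rather than writing -9999 into the 11th column.
MISSING = -9999.0

def is_valid(obs_value):
    """True when the observation holds a real value, not the missing flag."""
    return obs_value > MISSING + 0.5

observations = [3.2, -9999.0, 0.0, 12.7]  # hypothetical values
kept = [v for v in observations if is_valid(v)]
print(kept)
```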

2. I was surprised to see that we don't have any examples of running
pcp_combine in the subtraction mode on our website.  Here's an example
using the sample data that's included in the MET tarball:
    METv4.1/bin/pcp_combine -subtract \
    METv4.1/data/sample_fcst/2005080700/wrfprs_ruc13_12.tm00_G212 12 \
    METv4.1/data/sample_fcst/2005080700/wrfprs_ruc13_06.tm00_G212 06 \
    wrfprs_ruc13_APCP_06_to_12.nc

I've passed one file name followed by an accumulation interval, then a
second file name followed by an accumulation interval, and lastly, the
output file name.  It grabs the 12-hour accumulation from
the first file and subtracts off the 6-hour accumulation from the
second file.  The result is the 6 hours of accumulation in between.

3. There is no general purpose way of converting point data to gridded
data.  It's a pretty difficult task and would be very specific to the
data you're using.  Generally, I wouldn't recommend trying
to do that.  Instead, I'd suggest looking for other available gridded
datasets.  Are you looking for gridded observations of precipitation?
What is your region of interest?  You could send me a
sample GRIB file if you'd like, and I could look at the grid you're
using.  There may be some satellite observations of precipitation you
could use.

Thanks,
John



------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Sat May 10 01:27:13 2014

Dear Dr. John Halley Gotway,

First of all, thank you for your advice.

Actually, I would like to verify WRF-model forecasts (using AWS point
observations; these are the data currently available to me), especially
precipitation forecasts.

My region of interest is East Asia, focusing on South Korea.

I used a Lambert conformal map projection when running the WRF model.

Thank you for your kindness.

Best regards
Yonghan Choi



------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Tue May 13 17:16:20 2014

Yonghan,

If you're using AWS point observations, reformatting them into the
format expected by ascii2nc is the right way to go.
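
To illustrate, here is a sketch of building one 11-column record using the column meanings discussed in this thread (message type, valid time at the end of the accumulation interval, GRIB code 61 for accumulated precipitation, level 12 for a 12-hour accumulation, "NA" for the QC string). The station ID, latitude, longitude, elevation, height, and observed value are hypothetical placeholders, not values from your data:

```python
# One 11-column ASCII2NC record, columns per the thread's discussion.
fields = [
    "ADPSFC",           # 1: message type
    "AWS_001",          # 2: station ID (hypothetical)
    "20130704_120000",  # 3: valid time, end of accumulation interval
    "37.57",            # 4: latitude (hypothetical)
    "126.98",           # 5: longitude (hypothetical)
    "86.0",             # 6: elevation (hypothetical)
    "61",               # 7: GRIB code for accumulated precipitation
    "12",               # 8: level, hours of accumulation
    "0",                # 9: height (hypothetical)
    "NA",               # 10: QC string
    "34.5",             # 11: observed value (hypothetical)
]
record = " ".join(fields)
print(record)
```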

If you'd like to use gridded satellite data for verification, you
could consider the TRMM data products described on this page:
    http://www.dtcenter.org/met/users/downloads/observation_data.php

We also provide an Rscript on that page that will help you reformat
the data into a version that MET expects.

Here's a link directly to the NASA data:
http://gdata1.sci.gsfc.nasa.gov/daac-bin/G3/gui.cgi?instance_id=TRMM_3-Hourly

Hope that helps.

Thanks,
John Halley Gotway
met_help at ucar.edu

>>>> variables,
>>>>>> such as temperature or winds, you wouldn't need to run
pcp_combine.
>>>>>>
>>>>>> Hope that helps.
>>>>>>
>>>>>> Thanks,
>>>>>> John
>>>>>>
>>>>>> On 05/01/2014 03:28 AM, Yonghan Choi via RT wrote:
>>>>>>>
>>>>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543
>
>>>>>>>
>>>>>>> Dear John Halley Gotway,
>>>>>>>
>>>>>>> Yes, I ran the test script. I checked the log file, and
running pb2nc
>>>>>>> resulted in the same error (segmentation fault).
>>>>>>>
>>>>>>> And I have another question.
>>>>>>>
>>>>>>> Actually, I would like to run Point-Stat Tool with AWS
observations
>> and
>>>>>> WRF
>>>>>>> model outputs as inputs.
>>>>>>>
>>>>>>> Then, should I run ascii2nc to make input observation file for
>>>> Point-Stat
>>>>>>> Tool using my own AWS observations?
>>>>>>>
>>>>>>> And, should I run Unified Post Processor or pinterp to make
input
>>>> gridded
>>>>>>> file for Point-Stat Tool using my WRF forecasts? Is it
necessary to
>> run
>>>>>>> pcp_combine after running UPP or pinterp?
>>>>>>>
>>>>>>> Thank you for your kindness.
>>>>>>>
>>>>>>> Best regards
>>>>>>> Yonghan Choi
>>>>>>>
>>>>>>>
>>>>>>> On Thu, May 1, 2014 at 4:22 AM, John Halley Gotway via RT <
>>>>>> met_help at ucar.edu
>>>>>>>> wrote:
>>>>>>>
>>>>>>>> Yonghan,
>>>>>>>>
>>>>>>>> Sorry to hear that you're having trouble running pb2nc in the
online
>>>>>>>> tutorial.  Can you tell me, are you able to run it fine using
the
>>>> script
>>>>>>>> included in the tarball?
>>>>>>>>
>>>>>>>> After you compiled MET, did you go into the scripts directory
and
>> run
>>>>>> the
>>>>>>>> test scripts?
>>>>>>>>
>>>>>>>>        cd METv4.1/scripts
>>>>>>>>        ./test_all.sh >& test_all.log
>>>>>>>>
>>>>>>>> Does pb2nc run OK in the test scripts, or do you see a
segmentation
>>>>>> fault
>>>>>>>> there as well?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> John Halley Gotway
>>>>>>>> met_help at ucar.edu
>>>>>>>>
>>>>>>>> On 04/30/2014 02:54 AM, Yonghan Choi via RT wrote:
>>>>>>>>>
>>>>>>>>> Wed Apr 30 02:54:33 2014: Request 66543 was acted upon.
>>>>>>>>> Transaction: Ticket created by cyh082 at gmail.com
>>>>>>>>>             Queue: met_help
>>>>>>>>>           Subject: Question on online tutorial
>>>>>>>>>             Owner: Nobody
>>>>>>>>>        Requestors: cyh082 at gmail.com
>>>>>>>>>            Status: new
>>>>>>>>>       Ticket <URL:
>>>>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Dear whom it may concern,
>>>>>>>>>
>>>>>>>>> I have one question on MET online tutorial.
>>>>>>>>>
>>>>>>>>> I downloaded MET source code, and compiled it successfully.
>>>>>>>>>
>>>>>>>>> I tried to run PB2NC tool following online tutorial, but I
got
>> error
>>>> an
>>>>>>>>> message as belows.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
-----------------------------------------------------------------------------------------------------------------
>>>>>>>>> DEBUG 1: Default Config File:
>>>>>>>>>
>>
/home/dklee/ANALYSIS/MET_v41/METv4.1/data/config/PB2NCConfig_default
>>>>>>>>> DEBUG 1: User Config File:
>>>>>>>>>
>>>>>>
>>>>
>>
/home/dklee/ANALYSIS/MET_v41/METv4.1/tutorial/config/PB2NCConfig_tutorial
>>>>>>>>> DEBUG 1: Creating NetCDF File:
>>>>>>>>>
/home/dklee/ANALYSIS/MET_v41/METv4.1/tutorial/out/pb2nc/
>>>>>> tutorial_pb.nc
>>>>>>>>> DEBUG 1: Processing PrepBufr File:
>>>>>>>>>
/home/dklee/ANALYSIS/MET_v41/METv4.1/data/sample_obs/prepbufr/
>>>>>>>>> ndas.t00z.prepbufr.tm12.20070401.nr
>>>>>>>>> DEBUG 1: Blocking PrepBufr file to:
/tmp/tmp_pb2nc_blk_8994_0
>>>>>>>>> Segmentation fault
>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
-----------------------------------------------------------------------------------------------------------------
>>>>>>>>>
>>>>>>>>> Could you give me some advices on this problem?
>>>>>>>>>
>>>>>>>>> And could you give me output of PB2NC (i.e.,
tutorial_pb.nc)?
>>>> Because
>>>>>> I
>>>>>>>>> need this file to run Point-Stat Tool tutorial.
>>>>>>>>>
>>>>>>>>> Thank you for your kindness.
>>>>>>>>>
>>>>>>>>> Best regards
>>>>>>>>> Yonghan Choi
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>
>>

------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Wed May 14 08:57:00 2014

Dear Dr. John Halley Gotway,

Yes, I would like to use gridded observation data such as TRMM data.

As I understand, I can get a NetCDF-format file that MET expects using the R
script.

According to the users' guide, the model grid should be the same as the
observation grid.

However, the model grid (in my case, defined by the WRF model) is different
from the observation grid (in my case, of TRMM).

Could you give me some suggestions on this problem?

Thank you.

Best regards
Yonghan Choi


On Wed, May 14, 2014 at 8:16 AM, John Halley Gotway via RT <met_help at ucar.edu> wrote:

> Yonghan,
>
> If you're using AWS point observations, reformatting them into the format
> expected by ascii2nc is the right way to go.
>
> If you'd like to use gridded satellite data for verification, you could
> consider the TRMM data products described on this page:
>     http://www.dtcenter.org/met/users/downloads/observation_data.php
>
> We also provide an Rscript on that page that will help you reformat the
> data into a version that MET expects.
>
> Here's a link directly to the NASA data:
>
> http://gdata1.sci.gsfc.nasa.gov/daac-bin/G3/gui.cgi?instance_id=TRMM_3-Hourly
>
> Hope that helps.
>
> Thanks,
> John Halley Gotway
> met_help at ucar.edu
>
> On 05/10/2014 01:27 AM, Yonghan Choi via RT wrote:
> >
> > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >
> >
> > Dear Dr. John Halley Gotway,
> >
> > First of all, thank you for your advice.
> >
> > Actually, I would like to verify WRF-model forecasts (using AWS point
> > observations; these are the data currently available to me), especially
> > the precipitation forecast.
> >
> > My region of interest is East Asia, focusing on South Korea.
> >
> > I used a Lambert-Conformal map projection when running the WRF model.
> >
> > Thank you for your kindness.
> >
> > Best regards
> > Yonghan Choi
> >
> >
> > On Sat, May 10, 2014 at 1:10 AM, John Halley Gotway via RT <met_help at ucar.edu> wrote:
> >
> >> Yonghan,
> >>
> >> 1. Yes, -9999 is the value to use for missing data.  However, I don't
> >> understand why you'd use -9999 in the 11th column for the "observed
> >> value".  When point-stat encounters a bad observation value, it'll just
> >> skip that record.  So you should just skip over any observations with a
> >> bad data value.
> >>
> >> 2. I was surprised to see that we don't have any examples of running
> >> pcp_combine in the subtraction mode on our website.  Here's an example
> >> using the sample data that's included in the MET tarball:
> >>      METv4.1/bin/pcp_combine -subtract \
> >>      METv4.1/data/sample_fcst/2005080700/wrfprs_ruc13_12.tm00_G212 12 \
> >>      METv4.1/data/sample_fcst/2005080700/wrfprs_ruc13_06.tm00_G212 06 \
> >>      wrfprs_ruc13_APCP_06_to_12.nc
> >>
> >> I've passed one file name followed by an accumulation interval, then a
> >> second file name followed by an accumulation interval, and lastly, the
> >> output file name.  It grabs the 12-hour accumulation from the first file
> >> and subtracts off the 6-hour accumulation from the second file.  The
> >> result is the 6 hours of accumulation in between.
> >>
> >> 3. There is no general purpose way of converting point data to gridded
> >> data.  It's a pretty difficult task and would be very specific to the
> >> data you're using.  Generally, I wouldn't recommend trying to do that.
> >> Instead, I'd suggest looking for other available gridded datasets.  Are
> >> you looking for gridded observations of precipitation?  What is your
> >> region of interest?  You could send me a sample GRIB file if you'd like,
> >> and I could look at the grid you're using.  There may be some satellite
> >> observations of precipitation you could use.
> >>
> >> Thanks,
> >> John
> >>
> >>
> >> On 05/09/2014 06:22 AM, Yonghan Choi via RT wrote:
> >>>
> >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >
> >>>
> >>> Dear Dr. John Halley Gotway,
> >>>
> >>> Thank you for your kind tips on MET.
> >>>
> >>> I successfully ran the ASCII2NC, UPP, and Point-Stat tools.
> >>>
> >>> I have additional questions.
> >>>
> >>> 1. When I make the 11-column format for the ASCII2NC tool, how can I
> >>> deal with missing values?
> >>> Can I use "-9999.00" to indicate a missing value for the observed value
> >>> (11th column)?
> >>>
> >>> 2. How can I use the pcp_combine tool in subtract mode?
> >>> Usage only for sum mode is provided in the users' guide.
> >>>
> >>> 3. If I would like to run the MODE tool or the Wavelet-Stat tool,
> >>> I need gridded model forecasts and gridded observations.
> >>>
> >>> Could you recommend some methods to make gridded observations using
> >>> point (ASCII format) observations?
> >>>
> >>> Thank you.
> >>>
> >>> Best regards
> >>> Yonghan Choi
> >>>
> >>>
> >>> On Fri, May 9, 2014 at 3:15 AM, John Halley Gotway via RT <met_help at ucar.edu> wrote:
> >>>
> >>>> Yonghan,
> >>>>
> >>>> 1. For precipitation, I often see people using the message type of
> >>>> "MC_PCP", but I don't think it really matters.
> >>>>
> >>>> 2. Yes, the valid time is the end of the accumulation interval.
> >>>> 20130704_120000 is correct.
> >>>>
> >>>> 3. Yes, the GRIB code for accumulated precip is 61.
> >>>>
> >>>> 4. Yes, the level would be "12" for 12 hours of accumulation.  Using
> >>>> "120000" would work too.
> >>>>
> >>>> 5. The QC column can be filled with any string.  If you don't have any
> >>>> quality control values for this data, I'd suggest just putting "NA" in
> >>>> the column.
> >>>>
> >>>> 6. I would guess that accumulated total rainfall amount is already
> >>>> turned on in the default wrf_cntrl.parm file.  I'd suggest running UPP
> >>>> once and looking at the output GRIB file.  Run the GRIB file through
> >>>> the "wgrib" utility to dump its contents and look for "APCP" in the
> >>>> output.  APCP is the GRIB code abbreviation for accumulated
> >>>> precipitation.
> >>>>
> >>>> 7. As you've described, by default, WRF-ARW computes a runtime
> >>>> accumulation of precipitation.  So your 48-hour forecast contains 48
> >>>> hours of accumulated precipitation.  To get 12-hour accumulations,
> >>>> you have 2 choices:
> >>>>      - You could modify the TPREC setting when running WRF to "dump
> >>>> the accumulation bucket" every 12 hours.  That'd give you 12-hour
> >>>> accumulations in your GRIB files.
> >>>>      - Or you could keep it as a runtime accumulation and run
> >>>> pcp_combine to compute the 12-hour accumulations.  For example, you
> >>>> subtract 60 hours of accumulation minus 48 hours of accumulation to
> >>>> get the 12 hours in between.
> >>>>
> >>>> Hope that helps get you going.
> >>>>
> >>>> Thanks,
> >>>> John
> >>>>
> >>>> On 05/08/2014 02:38 AM, Yonghan Choi via RT wrote:
> >>>>>
> >>>>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543
>
> >>>>>
> >>>>> Dear Dr. John Halley Gotway,
> >>>>>
> >>>>> I decided to use point observations (AWS observations) in
ASCII
> format.
> >>>>>
> >>>>> I have some questions on how to make 11-column format.
> >>>>>
> >>>>> 1. If I use AWS observations, is message type (column #1)
"ADPSFC"?
> >>>>>
> >>>>> 2. If I use 12-h accumulated precipitation from 00 UTC 4 July
2013 to
> >> 12
> >>>>> UTC 4 July 2013, is valid time (column #3) "20130704_120000"?
> >>>>>
> >>>>> 3. Is grib code (column #7) for accumulated precipitation
"61"?
> >>>>>
> >>>>> 4. Is level (column #8) "12"?
> >>>>>
> >>>>> 5. What is appropriate value for QC string (column #10)?
> >>>>>
> >>>>> And... I will use UPP as suggested.
> >>>>>
> >>>>> 6. Should I modify wrf_cntrl.parm (included in the code) to
output
> >>>>> accumulated total rainfall amount?
> >>>>>
> >>>>> Finally, as you know, WRF output includes accumulated rainfall
up to
> >>>>> forecast time.
> >>>>> For example, if initial time for WRF forecast is 00 UTC 4 July
2013,
> >>>> output
> >>>>> file for 12 UTC 4 July 2013 includes 12-h accumulated
rainfall.
> >>>>>
> >>>>> 7. Then, should I use pcp-combine tool to make 12-h
accumulated WRF
> >>>>> forecast?
> >>>>> If yes, how can I do this?
> >>>>>
> >>>>> Thank you for your kindness.
> >>>>>
> >>>>> Best regards
> >>>>> Yonghan Choi
> >>>>>
> >>>>>
> >>>>> On Fri, May 2, 2014 at 12:34 AM, John Halley Gotway via RT <met_help at ucar.edu> wrote:
> >>>>>
> >>>>>> Yonghan,
> >>>>>>
> >>>>>> If you will not be using PREPBUFR point observations, you won't need
> >>>>>> the PB2NC utility.  I'm happy to help you debug the issue to try to
> >>>>>> figure out what's going on, but it's up to you.
> >>>>>>
> >>>>>> To answer your questions, yes, if your AWS observations are in
> >>>>>> ASCII, I'd suggest reformatting them into the 11-column format that
> >>>>>> ASCII2NC expects.  After you run them through ASCII2NC, you'll be
> >>>>>> able to use them in point_stat.
> >>>>>>
> >>>>>> I'd suggest using the Unified Post Processor (UPP), whose output
> >>>>>> format is GRIB.  MET handles GRIB files very well.  It can read the
> >>>>>> pinterp output as well, but not variables on staggered dimensions,
> >>>>>> such as the winds.  For that reason, using UPP is better.
> >>>>>>
> >>>>>> The pcp_combine tool is run to modify precipitation accumulation
> >>>>>> intervals.  This is all driven by your observations.  For example,
> >>>>>> suppose you have 24-hour, daily observations of accumulated
> >>>>>> precipitation.  You'd want to compare a 24-hour forecast
> >>>>>> accumulation to that 24-hour observed accumulation.  So you may need
> >>>>>> to run pcp_combine to add or subtract accumulated precipitation
> >>>>>> across your WRF output files.  If you're only verifying
> >>>>>> instantaneous variables, such as temperature or winds, you wouldn't
> >>>>>> need to run pcp_combine.
> >>>>>>
> >>>>>> Hope that helps.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> John
> >>>>>>
> >>>>>> On 05/01/2014 03:28 AM, Yonghan Choi via RT wrote:
> >>>>>>>
> >>>>>>> <URL:
https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >
> >>>>>>>
> >>>>>>> Dear John Halley Gotway,
> >>>>>>>
> >>>>>>> Yes, I ran the test script. I checked the log file, and
running
> pb2nc
> >>>>>>> resulted in the same error (segmentation fault).
> >>>>>>>
> >>>>>>> And I have another question.
> >>>>>>>
> >>>>>>> Actually, I would like to run Point-Stat Tool with AWS
observations
> >> and
> >>>>>> WRF
> >>>>>>> model outputs as inputs.
> >>>>>>>
> >>>>>>> Then, should I run ascii2nc to make input observation file
for
> >>>> Point-Stat
> >>>>>>> Tool using my own AWS observations?
> >>>>>>>
> >>>>>>> And, should I run Unified Post Processor or pinterp to make
input
> >>>> gridded
> >>>>>>> file for Point-Stat Tool using my WRF forecasts? Is it
necessary to
> >> run
> >>>>>>> pcp_combine after running UPP or pinterp?
> >>>>>>>
> >>>>>>> Thank you for your kindness.
> >>>>>>>
> >>>>>>> Best regards
> >>>>>>> Yonghan Choi
> >>>>>>>
> >>>>>>>
> >>>>>>> On Thu, May 1, 2014 at 4:22 AM, John Halley Gotway via RT <met_help at ucar.edu> wrote:
> >>>>>>>
> >>>>>>>> Yonghan,
> >>>>>>>>
> >>>>>>>> Sorry to hear that you're having trouble running pb2nc in the
> >>>>>>>> online tutorial.  Can you tell me, are you able to run it fine
> >>>>>>>> using the script included in the tarball?
> >>>>>>>>
> >>>>>>>> After you compiled MET, did you go into the scripts directory and
> >>>>>>>> run the test scripts?
> >>>>>>>>
> >>>>>>>>        cd METv4.1/scripts
> >>>>>>>>        ./test_all.sh >& test_all.log
> >>>>>>>>
> >>>>>>>> Does pb2nc run OK in the test scripts, or do you see a
> >>>>>>>> segmentation fault there as well?
> >>>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>> John Halley Gotway
> >>>>>>>> met_help at ucar.edu
> >>>>>>>>
> >>>>>>>> On 04/30/2014 02:54 AM, Yonghan Choi via RT wrote:
> >>>>>>>>>
> >>>>>>>>> Wed Apr 30 02:54:33 2014: Request 66543 was acted upon.
> >>>>>>>>> Transaction: Ticket created by cyh082 at gmail.com
> >>>>>>>>>             Queue: met_help
> >>>>>>>>>           Subject: Question on online tutorial
> >>>>>>>>>             Owner: Nobody
> >>>>>>>>>        Requestors: cyh082 at gmail.com
> >>>>>>>>>            Status: new
> >>>>>>>>>       Ticket <URL:
> >>>>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Dear whom it may concern,
> >>>>>>>>>
> >>>>>>>>> I have one question on MET online tutorial.
> >>>>>>>>>
> >>>>>>>>> I downloaded MET source code, and compiled it
successfully.
> >>>>>>>>>
> >>>>>>>>> I tried to run PB2NC tool following online tutorial, but I
got
> >> error
> >>>> an
> >>>>>>>>> message as belows.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>
> >>>>>>
> >>>>
> >>
>
-----------------------------------------------------------------------------------------------------------------
> >>>>>>>>> DEBUG 1: Default Config File:
> >>>>>>>>>
> >>
/home/dklee/ANALYSIS/MET_v41/METv4.1/data/config/PB2NCConfig_default
> >>>>>>>>> DEBUG 1: User Config File:
> >>>>>>>>>
> >>>>>>
> >>>>
> >>
>
/home/dklee/ANALYSIS/MET_v41/METv4.1/tutorial/config/PB2NCConfig_tutorial
> >>>>>>>>> DEBUG 1: Creating NetCDF File:
> >>>>>>>>>
/home/dklee/ANALYSIS/MET_v41/METv4.1/tutorial/out/pb2nc/
> >>>>>> tutorial_pb.nc
> >>>>>>>>> DEBUG 1: Processing PrepBufr File:
> >>>>>>>>>
> /home/dklee/ANALYSIS/MET_v41/METv4.1/data/sample_obs/prepbufr/
> >>>>>>>>> ndas.t00z.prepbufr.tm12.20070401.nr
> >>>>>>>>> DEBUG 1: Blocking PrepBufr file to:
/tmp/tmp_pb2nc_blk_8994_0
> >>>>>>>>> Segmentation fault
> >>>>>>>>>
> >>>>>>>>
> >>>>>>
> >>>>
> >>
>
-----------------------------------------------------------------------------------------------------------------
> >>>>>>>>>
> >>>>>>>>> Could you give me some advices on this problem?
> >>>>>>>>>
> >>>>>>>>> And could you give me output of PB2NC (i.e.,
tutorial_pb.nc)?
> >>>> Because
> >>>>>> I
> >>>>>>>>> need this file to run Point-Stat Tool tutorial.
> >>>>>>>>>
> >>>>>>>>> Thank you for your kindness.
> >>>>>>>>>
> >>>>>>>>> Best regards
> >>>>>>>>> Yonghan Choi
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>
> >>>>>>
> >>>>
> >>>>
> >>
> >>
>
>
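[Editor's note] Pulling the question-and-answer pairs in the thread above together, a single ascii2nc input line for an AWS rain gauge might be built as sketched below. The station ID, coordinates, and elevation are invented for illustration, and the exact meaning of each column should be checked against the MET users' guide; the message type, valid time, GRIB code 61, 12-h level, and "NA" QC string follow the answers above.

```python
# Sketch: build one 11-column ascii2nc record for a hypothetical AWS
# rain gauge, per the answers in this thread: message type "MC_PCP",
# valid time at the END of the accumulation, GRIB code 61 (APCP),
# level = accumulation interval in hours, QC string "NA".
def make_ascii2nc_record(sid, valid, lat, lon, elev, accum_hrs, value,
                         msg_typ="MC_PCP", grib_code=61, qc="NA"):
    cols = [msg_typ, sid, valid,
            f"{lat:.4f}", f"{lon:.4f}", f"{elev:.1f}",
            str(grib_code), str(accum_hrs),
            "NA",          # height column: not used here for accumulated precip
            qc, f"{value:.2f}"]
    return " ".join(cols)

# Made-up station and 12-h rainfall total ending 12 UTC 4 July 2013
rec = make_ascii2nc_record("AWS_47108", "20130704_120000",
                           37.5714, 126.9658, 85.5, 12, 23.40)
print(rec)
```

One such line per observation, whitespace-separated, is what ascii2nc reads before converting to the NetCDF format that point_stat expects.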

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Thu May 15 12:55:40 2014

Yonghan,

Yes, putting the forecast and observation data on a common grid is a
necessary first step.  The TRMM data is available at 1/4 degree resolution
on a lat/lon grid.

I'd suggest the following steps:

(1) Retrieve 1/4 degree lat/lon TRMM data that covers the region over which
you're running WRF.  TRMM is available every 3 hours or in daily
accumulations, so you need to decide which you'd like to use for your
evaluation.  As for data formats, you have 2 options here...
    - You could use NASA's TOVAS website to get an ASCII version of the
data over your domain of interest, and then run the trmm2nc.R script to
reformat it into a NetCDF file for use in MET.
    - You could pull a binary version of the TRMM data and run the
trmmbin2nc.R script to reformat it into a NetCDF file for use in MET.
See the details on this page:
    http://www.dtcenter.org/met/users/downloads/observation_data.php

(2) Run copygb to regrid your WRF model output in GRIB format to the 1/4
degree lat/lon grid of your TRMM data.  Examples of using copygb to regrid
to a lat/lon grid can be found here:
    http://www.dtcenter.org/met/users/support/online_tutorial/METv4.1/copygb/run2.php

And then you should be able to compare the two files with MET.

Hope that helps get you going.

Thanks,
John
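[Editor's note] The fiddly part of step (2) above is composing the copygb grid specification. The helper below is a minimal sketch that assembles a "-g" string for a regular lat/lon grid; the "255 0 ..." layout follows the GRIB1 lat/lon GDS convention with values in millidegrees, but the exact octet order should be verified against the copygb documentation and the tutorial page linked above. The domain corners are made-up example values roughly covering South Korea.

```python
# Sketch: build a copygb grid specification for a regular 1/4-degree
# lat/lon grid matching TRMM data over a chosen lat/lon box.
# Octet layout ("255 0 nx ny lat1 lon1 128 lat2 lon2 dlat dlon 64")
# is an assumption based on the GRIB1 lat/lon GDS; verify before use.
def latlon_grid_spec(lat_min, lon_min, lat_max, lon_max, d=0.25):
    nx = int(round((lon_max - lon_min) / d)) + 1   # grid points west-east
    ny = int(round((lat_max - lat_min) / d)) + 1   # grid points south-north
    m = lambda deg: int(round(deg * 1000))         # degrees -> millidegrees
    return ("255 0 {nx} {ny} {la1} {lo1} 128 {la2} {lo2} {dla} {dlo} 64"
            .format(nx=nx, ny=ny, la1=m(lat_min), lo1=m(lon_min),
                    la2=m(lat_max), lo2=m(lon_max), dla=m(d), dlo=m(d)))

# Example box (corners are illustrative, not from the thread)
spec = latlon_grid_spec(32.0, 123.0, 40.0, 132.0)
cmd = 'copygb -xg"{}" wrfprs.grb wrfprs_latlon.grb'.format(spec)
print(cmd)
```

The resulting command string would then be run on each WRF GRIB file so its grid matches the TRMM NetCDF grid.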

>>>>>>>>>>>              Queue: met_help
>>>>>>>>>>>            Subject: Question on online tutorial
>>>>>>>>>>>              Owner: Nobody
>>>>>>>>>>>         Requestors: cyh082 at gmail.com
>>>>>>>>>>>             Status: new
>>>>>>>>>>>        Ticket <URL:
>>>>>>>> https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Dear whom it may concern,
>>>>>>>>>>>
>>>>>>>>>>> I have one question on MET online tutorial.
>>>>>>>>>>>
>>>>>>>>>>> I downloaded MET source code, and compiled it
successfully.
>>>>>>>>>>>
>>>>>>>>>>> I tried to run PB2NC tool following online tutorial, but I
got
>>>> error
>>>>>> an
>>>>>>>>>>> message as belows.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
-----------------------------------------------------------------------------------------------------------------
>>>>>>>>>>> DEBUG 1: Default Config File:
>>>>>>>>>>>
>>>>
/home/dklee/ANALYSIS/MET_v41/METv4.1/data/config/PB2NCConfig_default
>>>>>>>>>>> DEBUG 1: User Config File:
>>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
/home/dklee/ANALYSIS/MET_v41/METv4.1/tutorial/config/PB2NCConfig_tutorial
>>>>>>>>>>> DEBUG 1: Creating NetCDF File:
>>>>>>>>>>>
/home/dklee/ANALYSIS/MET_v41/METv4.1/tutorial/out/pb2nc/
>>>>>>>> tutorial_pb.nc
>>>>>>>>>>> DEBUG 1: Processing PrepBufr File:
>>>>>>>>>>>
>> /home/dklee/ANALYSIS/MET_v41/METv4.1/data/sample_obs/prepbufr/
>>>>>>>>>>> ndas.t00z.prepbufr.tm12.20070401.nr
>>>>>>>>>>> DEBUG 1: Blocking PrepBufr file to:
/tmp/tmp_pb2nc_blk_8994_0
>>>>>>>>>>> Segmentation fault
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>
-----------------------------------------------------------------------------------------------------------------
>>>>>>>>>>>
>>>>>>>>>>> Could you give me some advices on this problem?
>>>>>>>>>>>
>>>>>>>>>>> And could you give me output of PB2NC (i.e.,
tutorial_pb.nc)?
>>>>>> Because
>>>>>>>> I
>>>>>>>>>>> need this file to run Point-Stat Tool tutorial.
>>>>>>>>>>>
>>>>>>>>>>> Thank you for your kindness.
>>>>>>>>>>>
>>>>>>>>>>> Best regards
>>>>>>>>>>> Yonghan Choi
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>
>>

------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Tue Jun 03 08:52:29 2014

Dear Dr. John Halley Gotway,

Actually, I have some additional questions on MET.

1. I used the "copygb" program to regrid WRF forecasts to a lat/lon
grid. However, I got a "floating exception" message when the program
ended, though I still obtained a regridded WRF forecast in GRIB format.
Is that all right?

2. I would like to use the "pcp_combine" program to make 12-h
accumulated precipitation from four 3-h accumulated TRMM precipitation
files as inputs.
Could you give me advice on how to do that?

3. I ran the MODE tool with WRF forecasts (the output of UPP and
copygb) and AWS observations (in the appropriate NetCDF format) as
inputs. The attached ps file is one of the outputs of the MODE tool.
I think that, both in the observations and in the WRF forecasts, the
selected objects are too broad (i.e., no specific precipitation
features are captured).
Can I obtain different results by modifying the configuration file?
Could you give me some advice?

I really think that MET is a very useful and amazing tool.

Thank you for your kindness.

Best regards
Yonghan Choi


On Tue, Jun 3, 2014 at 5:13 AM, John Halley Gotway via RT
<met_help at ucar.edu
> wrote:

> According to our records, your request has been resolved. If you
have any
> further questions or concerns, please respond to this message.
>

------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Resolved: Question on online tutorial
From: John Halley Gotway
Time: Tue Jun 03 09:59:09 2014

Yonghan,

Answers are inline below.

On 06/03/2014 08:52 AM, Yonghan Choi via RT wrote:
>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >
>
> Dear Dr. John Halley Gotway,
>
> Actually, I have some additional questions on MET.
>
> 1. I used "copygb" program to regrid WRF forecasts to lat/lon grid.
> However, I got a message of "floating exception" when the program
was
> ended. I also obtained regridded WRF forecast in GRIB format.
> Is it alright?
>

I've experienced this behavior as well sometimes.  copygb goes through
the GRIB file record by record.  Every once in a while, it crashes
with a "floating exception" when processing one of the
records.  You'll find that the output GRIB file contains all the
records up to the one that caused the crash.  For example, if your
input GRIB file contains 500 records and your output contains 350
records, it's likely that the problem occurred when processing the
351st record.

Obviously, copygb shouldn't crash!  I'd suggest sending a message to
wrfhelp at ucar.edu, along with the GRIB file and the command line that
caused the exception.  Ideally, they'd be able to fix that
exception.

But is it all right?  It depends on what made it into your output file.
If the output file contains all the records you want to verify, then
it's fine.  If not, I'd suggest using a more complex copygb
command that would skip over the records that are causing the crash.
If you need help figuring that out, you could send me the problematic
GRIB file.

> 2. I would like to use "pcp_combine" program to make 12-h
accumulated
> precipitation with four 3-h accumulated TRMM precipitation files as
inputs.
> Could you give me an advice on how to do that?

You'd do something like this:

pcp_combine -add \
    trmm_file1.nc 'name="APCP_03"; level="(*,*)";' \
    trmm_file2.nc 'name="APCP_03"; level="(*,*)";' \
    trmm_file3.nc 'name="APCP_03"; level="(*,*)";' \
    trmm_file4.nc 'name="APCP_03"; level="(*,*)";' \
    trmm_output_file.nc
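Conceptually, -add just sums the accumulation fields grid point by grid
point. A minimal Python sketch of that arithmetic (the 2x2 grids and
values below are made up for illustration; this is not MET code):

```python
# Four hypothetical 3-h accumulation grids (2x2 grid points, in mm).
grids = [
    [[1.0, 0.0], [2.5, 3.0]],
    [[0.5, 0.0], [1.5, 2.0]],
    [[0.0, 4.0], [0.0, 1.0]],
    [[2.0, 1.0], [0.5, 0.0]],
]

# Sum the four 3-h fields point by point to get the 12-h accumulation,
# which is what pcp_combine -add does at each grid point.
twelve_hr = [[sum(g[i][j] for g in grids) for j in range(2)]
             for i in range(2)]
print(twelve_hr)  # [[3.5, 5.0], [4.5, 6.0]]
```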

>
> 3. I ran MODE tool with WRF forecasts (results of UPP and copygb)
and AWS
> observations (in appropriate NetCDF format) as inputs. The attached
ps file
> is one of outputs of MODE tool.
> I think both in observations and WRF forecasts, selected objects are
too
> broad (i.e., no specific features of precipitation is included).
> Can I obtain different results by modifying configuration file?
Could you
> give me advices?
>

Yes, you can definitely change how the objects are defined.  But I
didn't see an attachment.  Can you please post the data to our
anonymous ftp site following these instructions:
    http://www.dtcenter.org/met/users/support/met_help.php#ftp

Please send me the output PostScript file, your input forecast and
observation files, and the MODE configuration file.  I'll run it here,
play around with the config file, and send you some suggestions.

Thanks,
John

> I really think that MET tool is very useful and amazing.
>
> Thank you for your kindness.
>
> Best regards
> Yonghan Choi
>
>
> On Tue, Jun 3, 2014 at 5:13 AM, John Halley Gotway via RT
<met_help at ucar.edu
>> wrote:
>
>> According to our records, your request has been resolved. If you
have any
>> further questions or concerns, please respond to this message.
>>

------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Wed Jun 04 00:57:19 2014

Dear Dr. John Halley Gotway,

I uploaded the output ps file, input forecast file, observation file,
and configuration file, following your instructions (to the Choi_data
directory).

Thank you for your kindness.

Best regards
Yonghan Choi



------------------------------------------------
Subject: Question on online tutorial
From: John Halley Gotway
Time: Wed Jun 04 15:32:13 2014

Yonghan,

Thanks for sending the data.  I grabbed it and was able to run MODE to
reproduce the output you're seeing.

One immediate problem I see is the missing value in the observation
field.  Looking on page 3, I see the color scale for the data goes
down to -999.  Unfortunately, right now files in the MET NetCDF
format only support a missing value of -9999.  We do plan to make that
more general in future releases, but in METv4.1 you're stuck with
-9999.

I ran the following commands to switch from -999 to -9999:
    ncdump gridded_aws_test.nc | sed 's/-999.f/-9999.f/g' > gridded_aws_test.ncdump
    ncgen -o gridded_aws_test_new.nc gridded_aws_test.ncdump
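If sed is unavailable, the same substitution can be applied to the
ncdump (CDL) text in Python. A minimal sketch; the sample line below is
made up, but the replacement logic is the same (note that "-9999.f"
does not itself contain the substring "-999.f", so a global replace is
safe):

```python
# One made-up line of CDL text, mimicking what ncdump emits for the
# missing value; in practice you would read the whole .ncdump file.
cdl = 'APCP_12:_FillValue = -999.f ;\n  -999.f, 12.5f, -999.f ;'

# Replace every -999.f missing value with -9999.f before running ncgen.
fixed = cdl.replace('-999.f', '-9999.f')
print(fixed)
```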

Next, I see that the area of missing data in the two fields isn't the
same.  And that's not really fair.  You probably only want to define
objects using points that contain valid data in both fields.
  So I'd suggest masking the missing data across both fields:
    mask_missing_flag = BOTH;

Next, I tried playing with the convolution radius (amount of
smoothing) and threshold.  Really, you should try playing around with
these to get the type of objects you're trying to study.

I tried setting the convolution radius to 0 (meaning, don't smooth the
data) and the threshold to 25.4mm (which is 1" of precip).  The result
is attached.  Is that what you're looking for?
    conv_radius       = 0;
    conv_thresh       = >=25.4;

If you'd like the objects more smooth, try increasing the smoothing
radius, to maybe 2.  I've attached this result as well.

Please try tweaking the convolution radius and threshold until you
create objects that match the feature you're trying to study.

However, I'd like to give you a word of caution.  As is often the
case, MODE suffers from edge effects.  Looking at the sample data you
sent, we're not capturing the full extent of these objects.
Instead, they're being cut off by the edge of the valid data.  That
makes interpreting the object attributes difficult.  Interpreting MODE
output works best when the size of the individual objects is
small relative to the size of the domain.  That is not the case in the
data you sent me.  So MODE may be of limited use for you.

You might also consider using the Fractions Skill Score (FSS) which is
contained in the NBRCNT line type produced by the Grid-Stat tool.  FSS
is a neighborhood verification method that is a nice,
simple way of assessing model performance.
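For reference, a sketch of the Grid-Stat config entries involved, based
on my reading of the METv4.1 default config file; treat the exact names
and values as assumptions to check against GridStatConfig_default:

```
// Neighborhood sizes (in grid squares) over which FSS is computed.
nbrhd = {
   vld_thresh = 1.0;
   width      = [ 3, 5 ];
   cov_thresh = [ >=0.5 ];
}

// Request the NBRCNT line type, which contains FSS.
output_flag = {
   nbrcnt = BOTH;
}
```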

Hope that helps.

Thanks,
John Halley Gotway




------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Mon Jun 09 20:34:15 2014

Dear Dr. John Halley Gotway,

I am sorry for the late reply; I was out of the office last week.

Thank you for your kind advice on using the MET tools.

I have one additional question on the output of the MODE tool.

Regardless of the convolution threshold or radius, the areas of
precipitation over the southern Korean Peninsula are identified as one
object.

I think that the areas of precipitation over the southern peninsula
should be divided into smaller objects.

Could you give me your opinion on this issue?

Thank you.

Best regards
Yonghan Choi



------------------------------------------------
Subject: Question on online tutorial
From: John Halley Gotway
Time: Tue Jun 24 13:27:00 2014

Yonghan,

Sorry also for the delay in getting back to you.  I was out of the
office
last week.  I see that you'd like to see separate objects over the
South
Korean peninsula in the example you sent.  The simple change that
would do
this is increasing the convolution threshold.  Looking on page two of
the
PostScript output of MODE, I see that areas of dark blue correspond to
about 65 mm of precipitation.  Running MODE with a convolution radius
of 0
and a threshold of 65 mm yields 5 forecast objects and 2 observation
objects.  Using a convolution radius of 2 and a threshold of 55 mm
yields 5
forecast objects and 1 observation object.
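In the MODE config file, the first of those two combinations would look
like this (same settings as in my earlier message; values taken from
the example above):

```
conv_radius = 0;
conv_thresh = >=65.0;
```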

When looking at a single case, you could spend a lot of time playing
around
with the radius and threshold to get the objects to look exactly how
you'd
like them.  But you can't do that for each case - it's impractical and
would take too long.

Instead, I'd suggest that you look at a handful of meaningful cases
and try
to set up the config file to create objects that capture the features
you're trying to study.  Hopefully there's a threshold that's
meaningful to
you in some way.  Do you want to see how well the model is doing
capturing
large-scale precipitation events, or are you only interested in small
areas
of intense precipitation?  The phenomenon you're trying to study
should
guide how you set up the config file.

Once you've chosen the radius and threshold that do a good job
capturing
events in your small set of cases, run them over your full dataset.
No
matter how you choose the radius and threshold, I suspect there will
always
be one or two cases where you don't like how the objects were defined.
That will always be an issue.  But you really should keep the radius
and
threshold fixed over the full dataset so that you can fairly compare
objects from one run to another.

Hope that helps.

Thanks,
John


On Mon, Jun 9, 2014 at 8:34 PM, Yonghan Choi via RT
<met_help at ucar.edu>
wrote:

>
> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >
>
> Dear Dr. John Halley Gotway,
>
> I am sorry for late reply. Actually, I was out of office last week.
>
> Thank you for your kind advices on using MET tools.
>
> I have one additional question on output of MODE tools.
>
> Regardless of convolution threshold or radius, areas of
precipitation over
> the Southern Korean Peninsula are identified as one object.
>
> I think that areas of precipitation over the southern peninsula
should be
> divided into smaller objects.
>
> Could you give me your opinions on this issue?
>
> Thank you.
>
> Best regards
> Yonghan Choi
>
>
> On Thu, Jun 5, 2014 at 6:32 AM, John Halley Gotway via RT <
> met_help at ucar.edu
> > wrote:
>
> > Yonghan,
> >
> > Thanks for sending the data.  I grabbed it and was able to run
MODE to
> > reproduce the output you're seeing.
> >
> > One immediate problem I see is the missing value in the
observation
> field.
> >  Looking on page 3, I see the color scale for the data goes down
to -999.
> >  Unfortunately, right now files in the MET NetCDF
> > format only support a missing value of -9999.  We do plan to make
that
> > more general in future releases.  But in METv4.1, you're stuck
with -999.
> >
> > I ran the following commands to switch from -999 to -9999:
> >     ncdump gridded_aws_test.nc | sed 's/-999.f/-9999.f/g' >
> > gridded_aws_test.ncdump
> >     ncgen -o gridded_aws_test_new.nc gridded_aws_test.ncdump
> >
> > Next, I see that the area of missing data in the two fields isn't
the
> > same.  And that's not really fair.  You probably only want to
define
> > objects using points that contain valid data in both fields.
> >   So I'd suggest masking the missing data across both fields:
> >     mask_missing_flag = BOTH;
> >
> > Next, I tried playing with the convolution radius (amount of
smoothing)
> > and threshold.  Really, you should try playing around with these
to get
> the
> > type of objects you're trying to study.
> >
> > I tried setting the convolution radius to 0 (meaning, don't smooth
the
> > data) and the threshold to 25.4mm (which is 1" of precip).  The
result is
> > attached.  Is that what you're looking for?
> >     conv_radius       = 0;
> >     conv_thresh       = >=25.4;
> >
> > If you'd like the objects more smooth, try increasing the
smoothing
> > radius, to maybe 2.  I've attached this result as well.
> >
> > Please try tweaking the convolution radius and threshold until you
create
> > objects that match the feature you're trying to study.
> >
> > However, I'd like to give you a word of caution.  As is often the case,
> > MODE suffers from edge effects.  Looking at the sample data you sent,
> > we're not capturing the full extent of these objects.  Instead, they're
> > being cut off by the edge of the valid data.  That makes interpreting
> > the object attributes difficult.  Interpreting MODE output works best
> > when the size of the individual objects is small relative to the size
> > of the domain.  That is not the case in the data you sent me.  So MODE
> > may be of limited use for you.
> >
> > You might also consider using the Fractions Skill Score (FSS), which is
> > contained in the NBRCNT line type produced by the Grid-Stat tool.  FSS
> > is a neighborhood verification method that is a nice, simple way of
> > assessing model performance.
> >
> > Hope that helps.
> >
> > Thanks,
> > John Halley Gotway
> >
> >
> >
> > On 06/04/2014 12:57 AM, Yonghan Choi via RT wrote:
> > >
> > > <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543 >
> > >
> > > Dear Dr. John Halley Gotway,
> > >
> > > I uploaded output ps file, input forecast file, observation
file, and
> > > configuration file according to your instruction (to Choi_data
> > directory).
> > >
> > > Thank you for your kindness.
> > >
> > > Best regards
> > > Yonghan Choi
> > >
> > >
> > > On Wed, Jun 4, 2014 at 12:59 AM, John Halley Gotway via RT <
> > > met_help at ucar.edu> wrote:
> > >
> > >> Yonghan,
> > >>
> > >> Answers are inline below.
> > >>
> > >> On 06/03/2014 08:52 AM, Yonghan Choi via RT wrote:
> > >>>
> > >>> <URL: https://rt.rap.ucar.edu/rt/Ticket/Display.html?id=66543
>
> > >>>
> > >>> Dear Dr. John Halley Gotway,
> > >>>
> > >>> Actually, I have some additional questions on MET.
> > >>>
> > >>> 1. I used the "copygb" program to regrid WRF forecasts to a lat/lon
> > >>> grid.  However, I got a "floating exception" message when the program
> > >>> ended.  I also obtained the regridded WRF forecast in GRIB format.
> > >>> Is that alright?
> > >>>
> > >>
> > >> I've experienced this behavior sometimes as well.  copygb goes through
> > >> the GRIB file record by record.  Every once in a while, it crashes with
> > >> a "floating exception" when processing one of the records.  You'll find
> > >> that the output GRIB file contains all the records up to the one that
> > >> caused the crash.  For example, if your input GRIB file contains 500
> > >> records and your output contains 350 records, it's likely that the
> > >> problem occurred when processing the 351st record.
> > >>
> > >> Obviously, copygb shouldn't crash!  I'd suggest sending a message to
> > >> wrfhelp at ucar.edu, along with the GRIB file and the command line that
> > >> caused the exception.  Ideally, they'd be able to fix that exception.
> > >>
> > >> But is it alright?  It depends on what made it into your output file.
> > >> If the output file contains all the records you want to verify, then
> > >> it's fine.  If not, I'd suggest using a more complex copygb command
> > >> that would skip over the records that are causing the crash.  If you
> > >> need help figuring that out, you could send me the problematic GRIB
> > >> file.
> > >>
> > >>> 2. I would like to use the "pcp_combine" program to make 12-h
> > >>> accumulated precipitation with four 3-h accumulated TRMM
> > >>> precipitation files as inputs.
> > >>> Could you give me some advice on how to do that?
> > >>
> > >> You'd do something like this:
> > >>
> > >> pcp_combine -add \
> > >>      trmm_file1.nc 'name="APCP_03"; level="(*,*)";' \
> > >>      trmm_file2.nc 'name="APCP_03"; level="(*,*)";' \
> > >>      trmm_file3.nc 'name="APCP_03"; level="(*,*)";' \
> > >>      trmm_file4.nc 'name="APCP_03"; level="(*,*)";' \
> > >>      trmm_output_file.nc
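[Editor's note: if the number of input files varies, the same command can be assembled in a small shell loop. The sketch below only prints the command rather than running it, and the file names are hypothetical placeholders, as in the example above.]

```shell
# Build (but don't run) a pcp_combine "-add" command for a list of
# 3-h accumulation files; the file names are placeholders.
files="trmm_file1.nc trmm_file2.nc trmm_file3.nc trmm_file4.nc"
cmd="pcp_combine -add"
for f in $files; do
  cmd="$cmd $f 'name=\"APCP_03\"; level=\"(*,*)\";'"
done
cmd="$cmd trmm_output_file.nc"
echo "$cmd"
```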
> > >>
> > >>>
> > >>> 3. I ran the MODE tool with WRF forecasts (results of UPP and copygb)
> > >>> and AWS observations (in the appropriate NetCDF format) as inputs.
> > >>> The attached ps file is one of the outputs of the MODE tool.
> > >>> I think that in both the observations and the WRF forecasts, the
> > >>> selected objects are too broad (i.e., no specific features of the
> > >>> precipitation are included).  Can I obtain different results by
> > >>> modifying the configuration file?  Could you give me some advice?
> > >>>
> > >>
> > >> Yes, you can definitely change how the objects are defined.  But I
> > >> didn't see an attachment.  Can you please post the data to our
> > >> anonymous ftp site following these instructions:
> > >>      http://www.dtcenter.org/met/users/support/met_help.php#ftp
> > >>
> > >> Please send me the output PostScript file, your input forecast and
> > >> observation files, and the MODE configuration file.  I'll run it here,
> > >> play around with the config file, and send you some suggestions.
> > >>
> > >> Thanks,
> > >> John
> > >>
> > >>> I really think that the MET tool is very useful and amazing.
> > >>>
> > >>> Thank you for your kindness.
> > >>>
> > >>> Best regards
> > >>> Yonghan Choi
> > >>>
> > >>>
> > >>> On Tue, Jun 3, 2014 at 5:13 AM, John Halley Gotway via RT <
> > >> met_help at ucar.edu
> > >>>> wrote:
> > >>>
> > >>>> According to our records, your request has been resolved. If
you
> have
> > >> any
> > >>>> further questions or concerns, please respond to this
message.
> > >>>>
> > >>
> > >>
> >
> >
>
>

------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Wed Jun 25 06:53:52 2014

Dear Dr. John Halley Gotway,

Thank you for your kind and detailed response.

I have additional questions on FSS.

1. Can I use the Fractions Skill Score (one of the outputs of the Grid-Stat
tool) to compare several model forecasts (e.g., a control experiment and a
data assimilation experiment)?

2. As far as I know, FSS is generally represented as a function of length
scale.  The NBRCNT file (one of the outputs of the Grid-Stat tool) contains
an INTERP_PNTS column, which indicates the number of points used in
interpolation.  If INTERP_PNTS is 25 and the grid resolution is 6 km, will
the length scale be 30 km?

Thank you.

Best regards
Yonghan Choi


On Wed, Jun 25, 2014 at 4:27 AM, John Halley Gotway via RT <
met_help at ucar.edu> wrote:

> Yonghan,
>
> Sorry also for the delay in getting back to you.  I was out of the office
> last week.  I see that you'd like to see separate objects over the South
> Korean peninsula in the example you sent.  The simple change that would do
> this is increasing the convolution threshold.  Looking on page two of the
> PostScript output of MODE, I see that areas of dark blue correspond to
> about 65 mm of precipitation.  Running MODE with a convolution radius of 0
> and a threshold of 65 mm yields 5 forecast objects and 2 observation
> objects.  Using a convolution radius of 2 and a threshold of 55 mm yields
> 5 forecast objects and 1 observation object.
>
> When looking at a single case, you could spend a lot of time playing
> around with the radius and threshold to get the objects to look exactly
> how you'd like them.  But you can't do that for each case - it's
> impractical and would take too long.
>
> Instead, I'd suggest that you look at a handful of meaningful cases and
> try to set up the config file to create objects that capture the features
> you're trying to study.  Hopefully there's a threshold that's meaningful
> to you in some way.  Do you want to see how well the model is doing
> capturing large-scale precipitation events, or are you only interested in
> small areas of intense precipitation?  The phenomenon you're trying to
> study should guide how you set up the config file.
>
> Once you've chosen the radius and threshold that do a good job capturing
> events in your small set of cases, run them over your full dataset.  No
> matter how you choose the radius and threshold, I suspect there will
> always be one or two cases where you don't like how the objects were
> defined.  That will always be an issue.  But you really should keep the
> radius and threshold fixed over the full dataset so that you can fairly
> compare objects from one run to another.
>
> Hope that helps.
>
> Thanks,
> John
>

------------------------------------------------
Subject: Question on online tutorial
From: John Halley Gotway
Time: Fri Jun 27 09:32:51 2014

Yonghan,

Sure, you could use FSS to inter-compare the performance of two different
models.

And yes, INTERP_PNTS = 25 means a 5x5 box around each grid point.  If the
grid points are roughly 6 km apart, that'd be a 30 km by 30 km box around
each point.
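[Editor's note: that arithmetic can be sanity-checked directly. The neighborhood is square, so its side is the square root of INTERP_PNTS grid points; the values below are the ones from this example.]

```shell
# Neighborhood length scale implied by INTERP_PNTS and the grid spacing:
# side = sqrt(INTERP_PNTS) points, length = side * dx.
awk 'BEGIN { pnts = 25; dx_km = 6; printf "%g km\n", sqrt(pnts) * dx_km }'
# prints: 30 km
```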

Thanks,
John Halley Gotway



------------------------------------------------


More information about the Met_help mailing list