[Met_help] [rt.rap.ucar.edu #66543] History for Question on online tutorial
John Halley Gotway via RT
met_help at ucar.edu
Mon Jun 2 14:13:13 MDT 2014
----------------------------------------------------------------
Initial Request
----------------------------------------------------------------
To whom it may concern,
I have one question on MET online tutorial.
I downloaded MET source code, and compiled it successfully.
I tried to run the PB2NC tool following the online tutorial, but I got an
error message as shown below.
-----------------------------------------------------------------------------------------------------------------
DEBUG 1: Default Config File:
/home/dklee/ANALYSIS/MET_v41/METv4.1/data/config/PB2NCConfig_default
DEBUG 1: User Config File:
/home/dklee/ANALYSIS/MET_v41/METv4.1/tutorial/config/PB2NCConfig_tutorial
DEBUG 1: Creating NetCDF File:
/home/dklee/ANALYSIS/MET_v41/METv4.1/tutorial/out/pb2nc/tutorial_pb.nc
DEBUG 1: Processing PrepBufr File:
/home/dklee/ANALYSIS/MET_v41/METv4.1/data/sample_obs/prepbufr/
ndas.t00z.prepbufr.tm12.20070401.nr
DEBUG 1: Blocking PrepBufr file to: /tmp/tmp_pb2nc_blk_8994_0
Segmentation fault
-----------------------------------------------------------------------------------------------------------------
Could you give me some advice on this problem?
And could you give me the output of PB2NC (i.e., tutorial_pb.nc)? I
need this file to run the Point-Stat Tool tutorial.
Thank you for your kindness.
Best regards
Yonghan Choi
----------------------------------------------------------------
Complete Ticket History
----------------------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Wed Apr 30 13:22:48 2014
Yonghan,
Sorry to hear that you're having trouble running pb2nc in the online
tutorial. Can you tell me, are you able to run it fine using the
script included in the tarball?
After you compiled MET, did you go into the scripts directory and run
the test scripts?
cd METv4.1/scripts
./test_all.sh >& test_all.log
Does pb2nc run OK in the test scripts, or do you see a segmentation
fault there as well?
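If it does fail there too, one quick way to find the failing command
and the error in the log (just a suggestion; adjust the log file name
if yours differs) is:
grep -n -e pb2nc -e Segmentation test_all.log
That should pull out the pb2nc command lines and the segmentation
fault message so you can see which run failed.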
Thanks,
John Halley Gotway
met_help at ucar.edu
------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Thu May 01 03:28:19 2014
Dear John Halley Gotway,
Yes, I ran the test script. I checked the log file, and running pb2nc
resulted in the same error (segmentation fault).
And I have another question.
Actually, I would like to run the Point-Stat Tool with AWS observations
and WRF model output as inputs.
Then, should I run ascii2nc to make the input observation file for the
Point-Stat Tool from my own AWS observations?
And should I run the Unified Post Processor or pinterp to make the
input gridded file for the Point-Stat Tool from my WRF forecasts? Is
it necessary to run pcp_combine after running UPP or pinterp?
Thank you for your kindness.
Best regards
Yonghan Choi
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Thu May 01 09:34:45 2014
Yonghan,
If you will not be using PREPBUFR point observations, you won't need
the PB2NC utility. I'm happy to help you debug the issue to try to
figure out what's going on, but it's up to you.
To answer your questions, yes, if your AWS observations are in ASCII,
I'd suggest reformatting them into the 11-column format that ASCII2NC
expects. After you run them through ASCII2NC, you'll be
able to use them in point_stat.
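For example, the basic command lines would look something like this
(the file names here are just placeholders for your own data):
METv4.1/bin/ascii2nc aws_obs_20130704.txt aws_obs_20130704.nc
METv4.1/bin/point_stat wrf_upp_output.grb aws_obs_20130704.nc \
   PointStatConfig -outdir out -v 2
The fields, levels, and interpolation options are all set in the
Point-Stat config file.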
I'd suggest using the Unified Post Processor (UPP) whose output format
is GRIB. MET handles GRIB files very well. It can read the pinterp
output as well, but not variables on staggered dimensions,
such as the winds. For that reason, using UPP is better.
The pcp_combine tool is run to modify precipitation accumulation
intervals. This is all driven by your observations. For example,
suppose you have 24-hour, daily observations of accumulated
precipitation. You'd want to compare a 24-hour forecast accumulation
to that 24-hour observed accumulation. So you may need to run
pcp_combine to add or subtract accumulated precipitation across
your WRF output files. If you're only verifying instantaneous
variables, such as temperature or winds, you wouldn't need to run
pcp_combine.
Hope that helps.
Thanks,
John
------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Thu May 08 02:38:58 2014
Dear Dr. John Halley Gotway,
I decided to use point observations (AWS observations) in ASCII
format.
I have some questions on how to make 11-column format.
1. If I use AWS observations, is message type (column #1) "ADPSFC"?
2. If I use 12-h accumulated precipitation from 00 UTC 4 July 2013 to
12 UTC 4 July 2013, is the valid time (column #3) "20130704_120000"?
3. Is the GRIB code (column #7) for accumulated precipitation "61"?
4. Is the level (column #8) "12"?
5. What is an appropriate value for the QC string (column #10)?
And... I will use UPP as suggested.
6. Should I modify wrf_cntrl.parm (included in the code) to output
accumulated total rainfall amount?
Finally, as you know, WRF output includes rainfall accumulated up to
the forecast time.
For example, if the initial time of the WRF forecast is 00 UTC 4 July
2013, the output file for 12 UTC 4 July 2013 includes the 12-h
accumulated rainfall.
7. Then, should I use the pcp_combine tool to make a 12-h accumulated
WRF forecast? If yes, how can I do this?
Thank you for your kindness.
Best regards
Yonghan Choi
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Thu May 08 12:15:08 2014
Yonghan,
1. For precipitation, I often see people using the message type of
"MC_PCP", but I don't think it really matters.
2. Yes, the valid time is the end of the accumulation interval.
20130704_120000 is correct.
3. Yes, the GRIB code for accumulated precip is 61.
4. Yes, the level would be "12" for 12 hours of accumulation. Using
"120000" would work too.
5. The QC column can be filled with any string. If you don't have any
quality control values for this data, I'd suggest just putting "NA" in
the column. (An example record is shown after this list of answers.)
6. I would guess that accumulated total rainfall amount is already
turned on in the default wrf_cntrl.parm file. I'd suggest running UPP
once and looking at the output GRIB file. Run the GRIB file
through the "wgrib" utility to dump its contents and look for "APCP"
in the output (see the example command below). APCP is the GRIB code
abbreviation for accumulated precipitation.
7. As you've described, by default, WRF-ARW computes a runtime
accumulation of precipitation. So your 48-hour forecast contains 48
hours of accumulated precipitation. To get 12-hour accumulations,
you have 2 choices:
- You could modify the TPREC setting when running WRF to "dump the
accumulation bucket" every 12 hours. That'd give you 12-hour
accumulations in your GRIB files.
- Or you could keep it as a runtime accumulation and run
pcp_combine to compute the 12-hour accumulations. For example, you
subtract the 48-hour accumulation from the 60-hour accumulation to get
the 12 hours in between (see the sketch below).
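To make those answers more concrete, here are a few rough examples.
The station ID, coordinates, observed value, and file names below are
made up, so substitute your own.
An 11-column record for a 12-hour precipitation total ending at
20130704_120000 could look like:
MC_PCP AWS_47108 20130704_120000 37.57 126.97 86 61 12 0 NA 15.2
(message type, station ID, valid time, lat, lon, elevation, GRIB code,
level, height, QC string, observation value)
To check the UPP output for accumulated precipitation:
wgrib WRFPRS_d01.12 | grep APCP
And to compute the 12-hour accumulation ending at forecast hour 24
from the runtime accumulations:
METv4.1/bin/pcp_combine -subtract WRFPRS_d01.24 24 \
   WRFPRS_d01.12 12 apcp_12_to_24.nc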
Hope that helps get you going.
Thanks,
John
------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Fri May 09 06:22:00 2014
Dear Dr. John Halley Gotway,
Thank you for your kind tips on MET.
I successfully ran ASCII2NC, UPP, and Point-Stat tools.
I have additional questions.
1. When I make the 11-column format for the ASCII2NC tool, how can I
deal with missing values?
Can I use "-9999.00" to indicate a missing observed value (11th column)?
2. How can I use the pcp_combine tool in subtract mode?
Only the usage for sum mode is provided in the User's Guide.
3. If I would like to run the MODE tool or the Wavelet-Stat tool,
I need a gridded model forecast and gridded observations.
Could you recommend some methods to make gridded observations from
point (ASCII-format) observations?
Thank you.
Best regards
Yonghan Choi
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Fri May 09 10:10:37 2014
Yonghan,
1. Yes, -9999 is the value to use for missing data. However, I don't
understand why you'd use -9999 in the 11th column for the "observed
value". When point-stat encounters a bad observation value,
it'll just skip that record. So you should just skip over any
observations with a bad data value.
2. I was surprised to see that we don't have any examples of running
pcp_combine in the subtraction mode on our website. Here's an example
using the sample data that's included in the MET tarball:
METv4.1/bin/pcp_combine -subtract \
METv4.1/data/sample_fcst/2005080700/wrfprs_ruc13_12.tm00_G212 12 \
METv4.1/data/sample_fcst/2005080700/wrfprs_ruc13_06.tm00_G212 06 \
wrfprs_ruc13_APCP_06_to_12.nc
I've passed one file name followed by an accumulation interval, then a
second file name followed by an accumulation interval, and lastly, the
output file name. It grabs the 12-hour accumulation from
the first file and subtracts off the 6-hour accumulation from the
second file.  The result is the 6 hours of accumulation in between
(a quick way to check the output file is shown below).
3. There is no general purpose way of converting point data to gridded
data. It's a pretty difficult task and would be very specific to the
data you're using. Generally, I wouldn't recommend trying
to do that. Instead, I'd suggest looking for other available gridded
datasets. Are you looking for gridded observations of precipitation?
What is your region of interest? You could send me a
sample GRIB file if you'd like, and I could look at the grid you're
using. There may be some satellite observations of precipitation you
could use.
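Also, as a quick sanity check on the pcp_combine example above, you
can dump the header of the output file with the standard netCDF
ncdump utility and confirm that the accumulation variable and timing
attributes look right:
ncdump -h wrfprs_ruc13_APCP_06_to_12.nc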
Thanks,
John
------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Sat May 10 01:27:13 2014
Dear Dr. John Halley Gotway,
First of all, thank you for your advice.
Actually, I would like to verify WRF model forecasts (using AWS point
observations; these are the data currently available to me), especially
the precipitation forecast.
My region of interest is East Asia, focusing on South Korea.
I used a Lambert conformal map projection when running the WRF model.
Thank you for your kindness.
Best regards
Yonghan Choi
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Tue May 13 17:16:20 2014
Yonghan,
If you're using AWS point observations, reformatting them into the
format expected by ascii2nc is the right way to go.
If you'd like to use gridded satellite data for verification, you
could consider the TRMM data products described on this page:
http://www.dtcenter.org/met/users/downloads/observation_data.php
We also provide an Rscript on that page that will help you reformat
the data into a version that MET expects.
Here's a link directly to the NASA data:
http://gdata1.sci.gsfc.nasa.gov/daac-bin/G3/gui.cgi?instance_id=TRMM_3-Hourly
Hope that helps.
Thanks,
John Halley Gotway
met_help at ucar.edu
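As a rough sketch of both reformatting paths mentioned above: the station
identifier, coordinates, file names, and the R script's argument order
below are hypothetical placeholders, not taken from this ticket; check the
usage notes on the observation_data.php page before running anything.
A single AWS 12-hour precipitation report in the 11-column format that
ascii2nc expects (message type, station ID, valid time, lat, lon,
elevation, GRIB code, level, height, QC string, observation value) might
look like:
   MC_PCP AWS_0001 20130704_120000 37.57 126.97 86.0 61 12 0 NA 12.5
and could be converted to NetCDF with:
   METv4.1/bin/ascii2nc aws_precip_20130704_12.txt aws_precip_20130704_12.nc
For the gridded TRMM route, the reformatting step might look like the
following (ncdump -h just prints the header of the resulting file as a
quick check):
   Rscript trmm2nc.R trmm_3hourly_20130704_12z.ascii trmm_3hourly_20130704_12z.nc
   ncdump -h trmm_3hourly_20130704_12z.nc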
------------------------------------------------
Subject: Question on online tutorial
From: Yonghan Choi
Time: Wed May 14 08:57:00 2014
Dear Dr. John Halley Gotway,
Yes, I would like to use gridded observation data such as TRMM data.
As I understand it, I can get the NetCDF-format file that MET expects using the R script.
According to the users' guide, the model grid should be the same as the observation grid.
However, the model grid (in my case, defined by the WRF model) is different from the observation grid (in my case, that of TRMM).
Could you give me some suggestions on this problem?
Thank you.
Best regards
Yonghan Choi
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #66543] Question on online tutorial
From: John Halley Gotway
Time: Thu May 15 12:55:40 2014
Yonghan,
Yes, putting the forecast and observation data on a common grid is a
necessary first step. The TRMM data is available at 1/4 degree
resolution on a lat/lon grid.
I'd suggest the following steps:
(1) Retrieve 1/4 degree lat/lon TRMM data that covers the region over
which you're running WRF. TRMM is available every 3 hours or in daily
accumulations, so you need to decide which you'd like to use for your
evaluation. As for data formats, you have 2 options here:
    - You could use NASA's TOVAS website to get an ASCII version of the
      data over your domain of interest, and then run the trmm2nc.R
      script to reformat it into a NetCDF file for use in MET.
    - You could pull a binary version of the TRMM data and run the
      trmmbin2nc.R script to reformat it into a NetCDF file for use in
      MET.
    See the details on this page:
    http://www.dtcenter.org/met/users/downloads/observation_data.php
(2) Run copygb to regrid your WRF model output in GRIB format to the
1/4 degree lat/lon grid of your TRMM data. Examples of using copygb to
regrid to a lat/lon grid can be found here:
http://www.dtcenter.org/met/users/support/online_tutorial/METv4.1/copygb/run2.php
And then you should be able to compare the two files with MET (a rough
sketch of these commands is given below).
Hope that helps get you going.
Thanks,
John
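As a rough sketch of step (2): all file names and grid navigation values
below are hypothetical placeholders; take the exact copygb navigation
string for your 1/4 degree domain from the copygb tutorial page linked
above:
   copygb -xg"255 0 61 61 30000 120000 128 45000 135000 250 250 64" \
          wrfprs_d01.12 wrfprs_d01.12.latlon
Once the regridded forecast and the TRMM NetCDF file are on the same 1/4
degree lat/lon grid, they can be compared, for example with Grid-Stat
(GridStatConfig_APCP is a placeholder name for your own config file):
   METv4.1/bin/grid_stat wrfprs_d01.12.latlon trmm_3hourly_20130704_12z.nc \
          GridStatConfig_APCP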
------------------------------------------------
More information about the Met_help mailing list