[Met_help] [rt.rap.ucar.edu #79806] History for Conceptual doubts
John Halley Gotway via RT
met_help at ucar.edu
Wed Mar 15 14:09:09 MDT 2017
----------------------------------------------------------------
Initial Request
----------------------------------------------------------------
Dear MET Help staff,
I need some conceptual clarifications. My task is to compare the
24-hour accumulated rainfall performance of two weather forecasting
models, let's say Mod5 vs. Mod15, where 5 and 15 are the grid
resolutions (in km). Both models' data are in GRIB1 format, and the
observations are in a binary gridded format. We are going to evaluate
them over the last 90 summer days.
1) What am I supposed to do? Should I compare each model against the
observations, keep the results separately, and then plot the two sets
of results? Or is there a way to evaluate them against each other
directly?
2) We've decided to work at the Mod15 grid resolution, and I thought
about using the Grid-Stat tool at first. The question is, if I have to
evaluate each model separately, how do I change to the Mod15
resolution when I compare Mod5 vs. the observations?
3) The model and observation data are in 1-hour accumulated values,
and I need to compare 24-hour accumulated values. As I'm not sure
whether the MET tools can accumulate on the fly, I've decided to
convert all of them to NetCDF. I already know how to accumulate both
models and the observations to 24 hours using pcp_combine or NCL, but
each accumulation process generates one NetCDF file per day. I was
thinking about joining these 90 NetCDF files (for each model and the
observations) into a single file, just extending the time dimension.
That way I'd have just three NetCDF files; does MET handle that? Is
this the best way to do it?
Well, that's enough for now. Thanks a lot.
Best regards,
JR Garcia
--
==================================================
Jose Roberto Motta Garcia, PhD
Divisão de Modelagem e Desenvolvimento (DMD)
www.cptec.inpe.br
www.inpe.br
Tel.: +55 (12) 3208-7966
--------------------------------------------------
***** Save natural resources *****
==================================================
----------------------------------------------------------------
Complete Ticket History
----------------------------------------------------------------
Subject: Conceptual doubts
From: John Halley Gotway
Time: Mon Mar 13 09:55:31 2017
JR Garcia,
I see you have some questions about designing your verification. Here
are some thoughts you may find relevant.
I work in the Developmental Testbed Center at NCAR, and we do a lot of
evaluations like this. It's a great idea to think carefully about the
design of the experiment prior to jumping in.
When comparing the performance of two models, we usually compare both
models to the same set of observations and compute statistics. Then we
compute "pairwise differences" of those statistics. Since you're
talking about precip, let's say you've computed ETS for Mod5 and
Mod15. For each run, compute diff = ETS of Mod5 - ETS of Mod15. So if
you have 1000 ETS values for each model, you'll get 1000 differences
of ETS. Then look at the mean of those differences and compute a 95%
or 99% confidence interval around the mean. If that confidence
interval includes 0, the differences are not statistically
significant. If 0 falls outside of the CI, then the difference is
statistically significant. Rather than computing the mean and a normal
confidence interval, sometimes we compute the CIs using a
bootstrapping aggregation method.
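The pairwise-difference test described above can be sketched in a few lines. The ETS differences below are made-up numbers, and the percentile bootstrap is just one simple variant of the bootstrapping mentioned:

```python
import random
import statistics

def bootstrap_mean_ci(diffs, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean of paired score differences."""
    rng = random.Random(seed)
    boot_means = sorted(
        statistics.fmean(rng.choices(diffs, k=len(diffs)))
        for _ in range(n_boot)
    )
    lo = boot_means[int(n_boot * alpha / 2)]
    hi = boot_means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Made-up ETS differences (Mod5 - Mod15), one per evaluation time.
diffs = [0.03, 0.05, -0.01, 0.04, 0.02, 0.06, 0.01, 0.03, 0.05, 0.02]
lo, hi = bootstrap_mean_ci(diffs)
# If the 95% CI excludes 0, the difference is statistically significant.
significant = not (lo <= 0.0 <= hi)
```

With real data you would have one difference per model run (e.g. 90 values for a 90-day evaluation) and could read the paired statistics out of the Grid-Stat output files.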
So that's the approach we typically apply. But there are a few other
questions you should answer.
What grid should you use for your evaluation? Usually, we place the
models on a common grid prior to comparing them. Should you use the
5-km grid, the 15-km grid, the observation grid, or something in
between? I don't have an easy answer for you. I'd suggest considering
your overall project goals before deciding on the evaluation grid.
Technically, you could evaluate Mod5 on the 5-km grid and Mod15 on the
15-km grid, but then you won't know how much of the difference in
statistics is caused by the differing evaluation grids.
Regarding the accumulation interval, the pcp_combine tool is meant to
add and subtract precip for you. So MET doesn't accumulate precip
"on-the-fly", but that tool can be used to change accumulation
intervals.
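For example, a small script can generate one pcp_combine command per day. The init time, directories, and file names below are hypothetical placeholders; the "-sum init_time in_accum valid_time out_accum out_file" usage follows the MET User's Guide:

```python
from datetime import datetime, timedelta

# Hypothetical paths and dates; adjust to your own data layout.
start = datetime(2017, 1, 1, 12)   # first 24-h valid time
pcp_dir = "/data/mod15/hourly"     # directory of 1-h accumulation files

commands = []
for day in range(90):
    valid = start + timedelta(days=day)
    init = valid - timedelta(hours=24)  # assumes one run per day, 24 h earlier
    out_file = f"mod15_APCP_24_{valid:%Y%m%d_%H%M%S}.nc"
    # "-sum init_time in_accum valid_time out_accum" sums 1-h files into 24 h.
    commands.append(
        f"pcp_combine -sum {init:%Y%m%d_%H%M%S} 1 "
        f"{valid:%Y%m%d_%H%M%S} 24 {out_file} -pcpdir {pcp_dir}"
    )
```

Each generated command could then be executed with subprocess, or written to a shell script.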
As for dumping all of your data into 3 big NetCDF files, if they were
written following the NetCDF CF convention, then MET *should* be able
to handle them. But you'd still need to run the Grid-Stat tool once
for each output time.
Another approach would be...
(1) Keep Mod5 and Mod15 in GRIB1 format.
(2) For each evaluation time, run Mod5 and Mod15 through pcp_combine
to compute 24-hour accumulation intervals, as needed.
(3) Reformat your observations from binary into either GRIB or a
NetCDF format that MET can read (i.e., CF-compliant, or make it look
like the NetCDF output of pcp_combine).
(4) Run Grid-Stat for each evaluation time and, in the config file,
use the "regrid" option to place your forecast/obs data on a common
evaluation grid.
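For step (4), the regrid block in the Grid-Stat config file looks roughly like this. This is a sketch based on the default GridStatConfig; verify the entries against the default config shipped with your MET version:

```
regrid = {
   to_grid    = OBS;      // regrid the forecast to the observation grid
   method     = BUDGET;   // budget interpolation, suitable for precip
   width      = 2;
   vld_thresh = 0.5;
}
```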
Hope that helps.
Thanks,
John Halley Gotway
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #79806] Conceptual doubts
From: Roberto Garcia (INPE)
Time: Mon Mar 13 10:14:19 2017
Many thanks for the clarification, John. It will be very useful.
Another question: you said "But you'd still need to run the Grid-Stat
tool once for each output time." Does that mean that, if I have a
90-day evaluation period, I have to run the Grid-Stat tool 90 times?
Best regards,
JR Garcia
------------------------------------------------
Subject: Conceptual doubts
From: John Halley Gotway
Time: Mon Mar 13 10:58:01 2017
Yes, the Grid-Stat tool is designed to be run once per evaluation
time. Obviously you wouldn't do that by hand, but you can run a script
to loop through the 90 evaluation times.
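Such a driver script might look like the following sketch. The directories, file-name patterns, and config file name are placeholders; the "grid_stat fcst_file obs_file config_file" usage follows the MET User's Guide:

```python
from datetime import datetime, timedelta

# Hypothetical locations; adjust to your own layout.
fcst_dir = "/data/mod5/apcp24"
obs_dir = "/data/obs/apcp24"
config = "GridStatConfig_APCP24"
start = datetime(2017, 1, 1, 12)  # first 24-h valid time

commands = []
for day in range(90):
    valid = start + timedelta(days=day)
    fcst = f"{fcst_dir}/mod5_APCP_24_{valid:%Y%m%d_%H}.nc"
    obs = f"{obs_dir}/obs_APCP_24_{valid:%Y%m%d_%H}.nc"
    # One Grid-Stat run per evaluation time.
    commands.append(f"grid_stat {fcst} {obs} {config} -outdir grid_stat_out")

# To actually run them: subprocess.run(cmd.split(), check=True) for each cmd.
```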
There is another tool you may find useful. Grid-Stat is run once per
output time and computes statistics by aggregating points over one or
more spatial areas (i.e., masks). Series-Analysis, instead of
aggregating points spatially, aggregates them in time. You could run
Series-Analysis once, passing it lists of 90 forecast files and 90
observation files. In the configuration file, you select your desired
output statistics. And the output of Series-Analysis is a NetCDF file
containing maps of statistics. For example, at each grid point, you
might compute ETS of precip > 0.
This tool is useful in quantifying how your model performance varies
over your domain.
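As a sketch, the 90 files per series can be written to ASCII file lists and handed to Series-Analysis. All paths and the config file name below are hypothetical; the "-fcst ... -obs ... -out ... -config ..." usage follows the MET User's Guide:

```python
from datetime import datetime, timedelta

start = datetime(2017, 1, 1, 12)  # hypothetical first 24-h valid time
fcst_files = []
obs_files = []
for day in range(90):
    valid = start + timedelta(days=day)
    fcst_files.append(f"/data/mod5/apcp24/mod5_APCP_24_{valid:%Y%m%d_%H}.nc")
    obs_files.append(f"/data/obs/apcp24/obs_APCP_24_{valid:%Y%m%d_%H}.nc")

# One file path per line; Series-Analysis reads the whole series at once.
with open("fcst_file_list", "w") as f:
    f.write("\n".join(fcst_files) + "\n")
with open("obs_file_list", "w") as f:
    f.write("\n".join(obs_files) + "\n")

cmd = ("series_analysis -fcst fcst_file_list -obs obs_file_list "
       "-out series_ets.nc -config SeriesAnalysisConfig")
```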
Thanks,
John
------------------------------------------------
Subject: Re: [rt.rap.ucar.edu #79806] Conceptual doubts
From: Roberto Garcia (INPE)
Time: Mon Mar 13 11:03:10 2017
Ok John, thank you very much.
I'll try both.
Best regards,
JR Garcia
------------------------------------------------
More information about the Met_help
mailing list